id (int64, 39–79M) | url (string, 32–168 chars) | text (string, 7–145k chars) | source (string, 2–105 chars) | categories (list, 1–6 items) | token_count (int64, 3–32.2k) | subcategories (list, 0–27 items) |
|---|---|---|---|---|---|---|
1,561,792 | https://en.wikipedia.org/wiki/Curie%E2%80%93Weiss%20law | In magnetism, the Curie–Weiss law describes the magnetic susceptibility of a ferromagnet in the paramagnetic region above the Curie temperature:

$$\chi = \frac{C}{T - T_C}$$

where C is a material-specific Curie constant, T is the absolute temperature, and TC is the Curie temperature, both measured in kelvin. The law predicts a singularity in the susceptibility at T = TC. Below this temperature, the ferromagnet has a spontaneous magnetization. The law is named after Pierre Curie and Pierre Weiss.
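As a quick numerical illustration of the law, the sketch below evaluates χ(T) for a few temperatures above the Curie point. The Curie constant and Curie temperature used here are illustrative placeholders (TC = 1043 K is roughly iron's Curie temperature), not fitted material data.

```python
# Minimal sketch of the Curie-Weiss law: chi = C / (T - T_C).
# C and T_C below are illustrative values, not data for a real material.

def curie_weiss_susceptibility(T, C=1.0, T_C=1043.0):
    """Magnetic susceptibility in the paramagnetic region (T > T_C)."""
    if T <= T_C:
        raise ValueError("the Curie-Weiss law only applies above T_C")
    return C / (T - T_C)

for T in (1100.0, 1200.0, 1500.0):
    print(f"T = {T:7.1f} K  ->  chi = {curie_weiss_susceptibility(T):.4f}")
```

Note how χ grows without bound as T approaches TC from above, reflecting the predicted singularity.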
Background
A magnetic moment which is present even in the absence of the external magnetic field is called spontaneous magnetization. Materials with this property are known as ferromagnets, such as iron, nickel, and magnetite. However, when these materials are heated up, at a certain temperature they lose their spontaneous magnetization, and become paramagnetic. This threshold temperature below which a material is ferromagnetic is called the Curie temperature and is different for each material.
The Curie–Weiss law describes the changes in a material's magnetic susceptibility, χ, near its Curie temperature. The magnetic susceptibility is the ratio between the material's magnetization and the applied magnetic field.
Limitations
In many materials, the Curie–Weiss law fails to describe the susceptibility in the immediate vicinity of the Curie point, since it is based on a mean-field approximation. Instead, there is a critical behavior of the form

$$\chi \propto \frac{1}{(T - T_C)^{\gamma}}$$

with the critical exponent γ. However, at temperatures T ≫ TC the expression of the Curie–Weiss law still holds true, but with TC replaced by a temperature Θ that is somewhat higher than the actual Curie temperature. Some authors call Θ the Weiss constant to distinguish it from the temperature of the actual Curie point.
Classical approaches to magnetic susceptibility and Bohr–van Leeuwen theorem
According to the Bohr–van Leeuwen theorem, when statistical mechanics and classical mechanics are applied consistently, the thermal average of the magnetization is always zero. Magnetism therefore cannot be explained without quantum mechanics; that is, it cannot be explained without taking into account that matter consists of atoms. Some semi-classical approaches, using a simple atom model, are listed next, as they are easy to understand and relate to even though they are not perfectly correct.
The magnetic moment of a free atom is due to the orbital angular momentum and spin of its electrons and nucleus. When the atoms are such that their shells are completely filled, they do not have any net magnetic dipole moment in the absence of an external magnetic field. When present, such a field distorts the trajectories (a classical concept) of the electrons so that the applied field is opposed, as predicted by Lenz's law. In other words, the net magnetic dipole induced by the external field is in the opposite direction, and such materials are repelled by it. These are called diamagnetic materials.
Sometimes an atom has a net magnetic dipole moment even in the absence of an external magnetic field. The contributions of the individual electrons and nucleus to the total angular momentum do not cancel each other. This happens when the shells of the atoms are not fully filled up (Hund's Rule). A collection of such atoms however, may not have any net magnetic moment as these dipoles are not aligned. An external magnetic field may serve to align them to some extent and develop a net magnetic moment per volume. Such alignment is temperature dependent as thermal agitation acts to disorient the dipoles. Such materials are called paramagnetic.
In some materials, the atoms (with net magnetic dipole moments) can interact with each other to align themselves even in the absence of any external magnetic field when the thermal agitation is low enough. Alignment could be parallel (ferromagnetism) or anti-parallel. In the case of anti-parallel, the dipole moments may or may not cancel each other (antiferromagnetism, ferrimagnetism).
Density matrix approach to magnetic susceptibility
We take a very simple situation in which each atom can be approximated as a two-state system. The thermal energy is so low that the atom is in the ground state. In this ground state, the atom is assumed to have no net orbital angular momentum but only one unpaired electron to give it a spin of one half. In the presence of an external magnetic field, the ground state will split into two states having an energy difference proportional to the applied field. The spin of the unpaired electron is parallel to the field in the higher energy state and anti-parallel in the lower one.
A density matrix, ρ, is a matrix that describes a quantum system in a mixed state, a statistical ensemble of several quantum states (here several similar 2-state atoms). This should be contrasted with a single state vector that describes a quantum system in a pure state. The expectation value of a measurement, A, over the ensemble is ⟨A⟩ = Tr(ρA). In terms of a complete set of states |ψi⟩ with statistical weights pi, one can write

$$\rho = \sum_i p_i\, |\psi_i\rangle \langle\psi_i| .$$
Von Neumann's equation tells us how the density matrix evolves with time:

$$i\hbar \frac{d\rho}{dt} = [H, \rho]$$

In equilibrium, dρ/dt = 0, one has [H, ρ] = 0, and the allowed density matrices are functions of H, ρ = f(H). The canonical ensemble has

$$\rho = \frac{e^{-\beta H}}{Z}, \qquad Z = \operatorname{Tr}\, e^{-\beta H}, \qquad \beta = \frac{1}{k_B T}.$$
For the 2-state system, we can write the Hamiltonian as

$$H = -\frac{\gamma \hbar B}{2}\, \sigma_3 .$$

Here γ is the gyromagnetic ratio. Hence ρ = e^(−βH)/Z with Z = 2 cosh(βγħB/2), and the thermal average of the atomic magnetic moment μ = γħσ3/2 is

$$\langle \mu \rangle = \frac{\gamma\hbar}{2} \tanh\!\left(\frac{\beta\gamma\hbar B}{2}\right).$$

From which, for n atoms per unit volume and weak fields (βγħB ≪ 1), the magnetization is M = n⟨μ⟩ ≈ nγ²ħ²B/(4kBT), recovering Curie's law χ ∝ 1/T.
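The same computation can be done numerically from the density matrix itself. The sketch below is a toy check, assuming an electron-like gyromagnetic ratio and room temperature as illustrative inputs; it builds ρ = e^(−βH)/Z with standard numpy/scipy calls and compares Tr(ρμ) against the closed-form tanh expression above.

```python
# Sketch: thermal average of a spin-1/2 magnetic moment from the
# canonical density matrix rho = exp(-beta * H) / Z.
import numpy as np
from scipy.linalg import expm

hbar = 1.054571817e-34    # J s
k_B = 1.380649e-23        # J / K
gamma = 1.76085963e11     # rad s^-1 T^-1 (electron gyromagnetic ratio)
B, T = 1.0, 300.0         # illustrative field (tesla) and temperature (kelvin)

sigma3 = np.array([[1.0, 0.0], [0.0, -1.0]])
H = -gamma * hbar * B * sigma3 / 2          # two-level Zeeman Hamiltonian
beta = 1.0 / (k_B * T)

rho = expm(-beta * H)
rho /= np.trace(rho)                        # divide by Z = Tr exp(-beta H)

mu_op = gamma * hbar * sigma3 / 2           # magnetic moment operator
mu_density_matrix = np.trace(rho @ mu_op).real
mu_closed_form = gamma * hbar / 2 * np.tanh(beta * gamma * hbar * B / 2)
print(mu_density_matrix, mu_closed_form)    # the two values agree
```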
Explanation of para and diamagnetism using perturbation theory
In the presence of a uniform external magnetic field B along the z-direction, the Hamiltonian of the atom changes by

$$\Delta H = \alpha B \left(L_z + 2 S_z\right) + \beta B^2 \sum_i \left(x_i^2 + y_i^2\right),$$

where α and β are positive real numbers which are independent of which atom we are looking at but depend on the mass and the charge of the electron. The index i corresponds to individual electrons of the atom.
We apply second-order perturbation theory to this situation. This is justified by the fact that even for the highest presently attainable field strengths, the shifts in the energy levels due to ΔH are quite small with respect to atomic excitation energies. Degeneracy of the original Hamiltonian is handled by choosing a basis which diagonalizes ΔH in the degenerate subspaces. Let {|n⟩} be such a basis for the states of the atom (rather, of the electrons in the atom). Let ΔEn be the change in energy of |n⟩. So we get

$$\Delta E_n = \langle n|\Delta H|n\rangle + \sum_{m \neq n} \frac{|\langle m|\Delta H|n\rangle|^2}{E_n - E_m}.$$
In our case we can ignore terms of order B³ and higher. We get

$$\Delta E_n = \alpha B\, \langle n|L_z + 2S_z|n\rangle + \alpha^2 B^2 \sum_{m \neq n} \frac{|\langle m|L_z + 2S_z|n\rangle|^2}{E_n - E_m} + \beta B^2 \left\langle n\left|\sum_i \left(x_i^2 + y_i^2\right)\right|n\right\rangle.$$
In the case of a diamagnetic material, the first two terms are absent, as such atoms do not have any angular momentum in their ground state. In the case of a paramagnetic material, all three terms contribute.
Adding spin–spin interaction in the Hamiltonian: Ising model
So far, we have assumed that the atoms do not interact with each other. Even though this is a reasonable assumption in the case of diamagnetic and paramagnetic substances, this assumption fails in the case of ferromagnetism, where the spins of the atoms try to align with each other to the extent permitted by the thermal agitation. In this case, we have to consider the Hamiltonian of the ensemble of atoms. Such a Hamiltonian will contain all the terms described above for individual atoms and terms corresponding to the interaction among pairs of atoms. The Ising model is one of the simplest approximations of such pairwise interaction:

$$H_{\text{spin}} = -\frac{1}{2} \sum_{i,j} J(\mathbf{R}_i - \mathbf{R}_j)\, \sigma_i \sigma_j$$

Here the two atoms of a pair are at Ri and Rj. Their interaction J is determined by their distance vector Ri − Rj. In order to simplify the calculation, it is often assumed that interaction happens between neighboring atoms only and J is a constant. The effect of such interaction is often approximated as a mean field and, in our case, the Weiss field.
Modification of Curie's law due to Weiss field
The Curie–Weiss law is an adapted version of Curie's law, which for a paramagnetic material may be written in SI units as follows, assuming χ ≪ 1 so that B ≈ μ0H:

$$\chi = \frac{M}{H} = \frac{\mu_0 M}{B} = \frac{C}{T}$$

Here μ0 is the permeability of free space; M the magnetization (magnetic moment per unit volume), B is the magnetic field, and C the material-specific Curie constant:

$$C = \frac{\mu_0 \mu_B^2}{3 k_B}\, n g^2 J(J+1),$$

where kB is the Boltzmann constant, n the number of magnetic atoms (or molecules) per unit volume, g the Landé g-factor, μB the Bohr magneton, and J the angular momentum quantum number.

For the Curie–Weiss law the total magnetic field is B + λM, where λ is the Weiss molecular field constant, and then

$$\chi = \frac{\mu_0 M}{B + \lambda M} = \frac{C}{T}$$

which can be rearranged to get

$$\chi = \frac{C}{T - \frac{C\lambda}{\mu_0}}$$

which is the Curie–Weiss law

$$\chi = \frac{C}{T - T_C},$$

where the Curie temperature is

$$T_C = \frac{C\lambda}{\mu_0}.$$
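The rearrangement above is easy to check numerically. The sketch below computes the Curie constant from the expression given and the resulting Curie temperature; the number density, g-factor, J, and Weiss constant λ are made-up illustrative values, not measured parameters of any material.

```python
# Sketch: Curie constant C and Curie temperature T_C = C * lambda / mu_0.
import math

mu_0 = 4 * math.pi * 1e-7    # vacuum permeability, T m / A
mu_B = 9.2740100783e-24      # Bohr magneton, J / T
k_B = 1.380649e-23           # Boltzmann constant, J / K

n = 8.5e28                   # magnetic atoms per m^3 (illustrative)
g, J = 2.0, 0.5              # Lande g-factor and angular momentum quantum number
lam = 2.0e-4                 # Weiss molecular field constant (illustrative)

C = mu_0 * mu_B**2 / (3 * k_B) * n * g**2 * J * (J + 1)
T_C = C * lam / mu_0
print(f"C = {C:.3f} K, T_C = {T_C:.1f} K")
```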
See also
Curie's law
Paramagnetism
Pierre Curie
Pierre-Ernest Weiss
Exchange interaction
Notes
References
External links
Magnetism: Models and Mechanisms in E. Pavarini, E. Koch, and U. Schollwöck: Emergent Phenomena in Correlated Matter, Jülich 2013,
Magnetic ordering
Pierre Curie | Curie–Weiss law | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,778 | [
"Magnetic ordering",
"Condensed matter physics",
"Electric and magnetic fields in matter",
"Materials science"
] |
1,561,945 | https://en.wikipedia.org/wiki/Heesch%27s%20problem | In geometry, the Heesch number of a shape is the maximum number of layers of copies of the same shape that can surround it with no overlaps and no gaps. Heesch's problem is the problem of determining the set of numbers that can be Heesch numbers. Both are named for geometer Heinrich Heesch, who found a tile with Heesch number 1 (the union of a square, equilateral triangle, and 30-60-90 right triangle) and proposed the more general problem.
For example, a square may be surrounded by infinitely many layers of congruent squares in the square tiling, while a circle cannot be surrounded by even a single layer of congruent circles without leaving some gaps. The Heesch number of the square is infinite and the Heesch number of the circle is zero. In more complicated examples, such as the one shown in the illustration, a polygonal tile can be surrounded by several layers, but not by infinitely many; the maximum number of layers is the tile's Heesch number.
Formal definitions
A tessellation of the plane is a partition of the plane into smaller regions called tiles. The zeroth corona of a tile is defined as the tile itself, and for k > 0 the kth corona is the set of tiles sharing a boundary point with the (k − 1)th corona. The Heesch number of a figure S is the maximum value k such that there exists a tiling of the plane, and a tile t within that tiling, for which all tiles in the zeroth through kth coronas of t are congruent to S. In some work on this problem this definition is modified to additionally require that the union of the zeroth through kth coronas of t be a simply connected region.
If there is no upper bound on the number of layers by which a tile may be surrounded, its Heesch number is said to be infinite. In this case, an argument based on Kőnig's lemma can be used to show that there exists a tessellation of the whole plane by congruent copies of the tile.
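The corona definition is easy to make concrete for the square tiling, where the tiles are the unit squares of the integer grid and two tiles share a boundary point exactly when their coordinates differ by at most one in each direction. The sketch below is a toy corona generator for this one tiling, not a general Heesch-number solver:

```python
# Sketch: coronas of the central tile in the square tiling.
# Tiles are unit squares indexed by integer (x, y); two squares share a
# boundary point iff their indices differ by at most 1 in each coordinate.

def coronas(k):
    seen = {(0, 0)}                 # zeroth corona: the tile itself
    current = {(0, 0)}
    layers = [current]
    for _ in range(k):
        nxt = {(x + dx, y + dy)
               for (x, y) in current
               for dx in (-1, 0, 1)
               for dy in (-1, 0, 1)} - seen
        seen |= nxt
        current = nxt
        layers.append(current)
    return layers

for i, layer in enumerate(coronas(3)):
    print(f"corona {i}: {len(layer)} tiles")   # 1, 8, 16, 24, ...
```

Since this process never gets stuck, the square can be surrounded by arbitrarily many layers, which is exactly the statement that its Heesch number is infinite.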
Example
Consider the non-convex polygon P shown in the figure to the right, which is formed from a regular hexagon by adding projections on two of its sides and matching indentations on three sides. The figure shows a tessellation consisting of 61 copies of P, one large infinite region, and four small diamond-shaped polygons within the fourth layer. The first through fourth coronas of the central polygon consist entirely of congruent copies of P, so its Heesch number is at least four. One cannot rearrange the copies of the polygon in this figure to avoid creating the small diamond-shaped polygons, because the 61 copies of P have too many indentations relative to the number of projections that could fill them. By formalizing this argument, one can prove that the Heesch number of P is exactly four. According to the modified definition that requires that coronas be simply connected, the Heesch number is three. This example was discovered by Robert Ammann.
Known results
It is unknown whether all positive integers can be Heesch numbers. The first examples of polygons with Heesch number 2 were provided by Anne Fontaine, who showed that infinitely many polyominoes have this property. Casey Mann has constructed a family of tiles, each with Heesch number 5. Mann's tiles have Heesch number 5 even with the restricted definition in which each corona must be simply connected. In 2020, Bojan Bašić found a figure with Heesch number 6, the highest finite Heesch number known to date.
For the corresponding problem in the hyperbolic plane, the Heesch number may be arbitrarily large.
References
Sources
Further reading
External links
Numberphile video about Heesch Numbers.
Tessellation
Unsolved problems in geometry | Heesch's problem | [
"Physics",
"Mathematics"
] | 791 | [
"Geometry problems",
"Unsolved problems in mathematics",
"Tessellation",
"Euclidean plane geometry",
"Unsolved problems in geometry",
"Planes (geometry)",
"Mathematical problems",
"Symmetry"
] |
1,561,997 | https://en.wikipedia.org/wiki/Contorsion%20tensor | The contorsion tensor in differential geometry is the difference between a connection with and without torsion in it. It commonly appears in the study of spin connections. Thus, for example, a vielbein together with a spin connection, when subject to the condition of vanishing torsion, gives a description of Einstein gravity. For supersymmetry, the same constraint, of vanishing torsion, gives (the field equations of) eleven-dimensional supergravity. That is, the contorsion tensor, along with the connection, becomes one of the dynamical objects of the theory, demoting the metric to a secondary, derived role.
The elimination of torsion in a connection is referred to as the absorption of torsion, and is one of the steps of Cartan's equivalence method for establishing the equivalence of geometric structures.
Definition in metric geometry
In metric geometry, the contorsion tensor expresses the difference between a metric-compatible affine connection, with Christoffel symbols $\Gamma^{a}{}_{bc}$, and the unique torsion-free Levi-Civita connection for the same metric.

The contorsion tensor is defined in terms of the torsion tensor $T^{a}{}_{bc}$ as (up to a sign, see below)

$$K_{abc} = \frac{1}{2}\left(T_{abc} + T_{bca} - T_{cab}\right),$$

where the indices are being raised and lowered with respect to the metric:

$$T_{abc} = g_{ad}\, T^{d}{}_{bc} .$$

The reason for the non-obvious sum in the definition of the contorsion tensor is due to the sum-sum difference that enforces metric compatibility. The contorsion tensor is antisymmetric in the first two indices, whilst the torsion tensor itself is antisymmetric in its last two indices; this is shown below.

The full metric-compatible affine connection can be written as

$$\Gamma^{a}{}_{bc} = \mathring{\Gamma}^{a}{}_{bc} + K^{a}{}_{bc},$$

where $\mathring{\Gamma}^{a}{}_{bc}$ is the torsion-free Levi-Civita connection:

$$\mathring{\Gamma}^{a}{}_{bc} = \frac{1}{2}\, g^{ad} \left(\partial_b g_{dc} + \partial_c g_{db} - \partial_d g_{bc}\right).$$
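The antisymmetry claims can be spot-checked numerically. The sketch below assumes a flat (identity) metric, so raising and lowering indices is trivial; it builds a random tensor antisymmetric in its last two indices, forms the combination above, and verifies antisymmetry in the first two indices:

```python
# Sketch: check that K_abc = (T_abc + T_bca - T_cab) / 2 is antisymmetric
# in (a, b) whenever T_abc is antisymmetric in (b, c). Identity metric assumed.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4, 4))
T = A - np.transpose(A, (0, 2, 1))           # enforce T_abc = -T_acb

# np.transpose(T, (2, 0, 1))[a, b, c] == T[b, c, a]  (the T_bca term)
# np.transpose(T, (1, 2, 0))[a, b, c] == T[c, a, b]  (the T_cab term)
K = 0.5 * (T + np.transpose(T, (2, 0, 1)) - np.transpose(T, (1, 2, 0)))

print(np.allclose(K, -np.transpose(K, (1, 0, 2))))   # True: K_abc = -K_bac
```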
Definition in affine geometry
In affine geometry, one does not have a metric nor a metric connection, and so one is not free to raise and lower indices on demand. One can still achieve a similar effect by making use of the solder form, allowing the bundle to be related to what is happening on its base space. This is an explicitly geometric viewpoint, with tensors now being geometric objects in the vertical and horizontal bundles of a fiber bundle, instead of being indexed algebraic objects defined only on the base space. In this case, one may construct a contorsion tensor, living as a one-form on the tangent bundle.
Recall that the torsion of a connection $\omega$ can be expressed as

$$\Theta_\omega = d\theta + \omega \wedge \theta,$$

where $\theta$ is the solder form (tautological one-form). The subscript $\omega$ serves only as a reminder that this torsion tensor was obtained from the connection.
By analogy to the lowering of the index on the torsion tensor in the section above, one can perform a similar operation with the solder form, and construct a tensor by combining scalar products of the solder form with the torsion in the same sum-sum-difference pattern:

$$\Sigma(X, Y, Z) = \langle \theta(X), \Theta_\omega(Y, Z)\rangle + \langle \theta(Y), \Theta_\omega(X, Z)\rangle - \langle \theta(Z), \Theta_\omega(X, Y)\rangle$$

Here $\langle\cdot,\cdot\rangle$ is the scalar product. This tensor can be re-expressed in terms of a one-form $\sigma$ contracted with the solder form. The quantity $\sigma$ is the contorsion form and is exactly what is needed to add to an arbitrary connection to get the torsion-free Levi-Civita connection. That is, given an Ehresmann connection $\omega$, there is another connection $\omega + \sigma$ that is torsion-free.
The vanishing of the torsion is then equivalent to having

$$\Theta_{\omega + \sigma} = 0,$$

or, written out,

$$d\theta + (\omega + \sigma) \wedge \theta = 0.$$
This can be viewed as a field equation relating the dynamics of the connection to that of the contorsion tensor.
Derivation
One way to quickly derive a metric compatible affine connection is to repeat the sum-sum difference idea used in the derivation of the Levi–Civita connection but not take torsion to be zero. Below is a derivation.
Convention for the derivation (we choose to define the connection coefficients this way; the motivation is that of connection one-forms in gauge theory, with the differentiation index placed last):

$$\nabla_c V^{a} = \partial_c V^{a} + \Gamma^{a}{}_{bc}\, V^{b}$$
We begin with the metric-compatible condition:

$$\nabla_c\, g_{ab} = \partial_c\, g_{ab} - \Gamma^{d}{}_{ac}\, g_{db} - \Gamma^{d}{}_{bc}\, g_{ad} = 0$$
Now we use the sum-sum difference (cycle the indices on the condition):

$$\partial_a g_{bc} + \partial_b g_{ca} - \partial_c g_{ab} = \Gamma_{cba} + \Gamma_{bca} + \Gamma_{acb} + \Gamma_{cab} - \Gamma_{bac} - \Gamma_{abc}$$
We now use the following torsion tensor definition (for a holonomic frame) to rewrite the connection:

$$T^{a}{}_{bc} = \Gamma^{a}{}_{bc} - \Gamma^{a}{}_{cb}$$

Note that this definition of torsion has the opposite sign as the usual definition when using the above convention for the lower index ordering of the connection coefficients, i.e. it has the opposite sign as the coordinate-free definition in the section on affine geometry above. Rectifying this inconsistency (which seems to be common in the literature) would result in a contorsion tensor with the opposite sign.
Substituting the torsion tensor definition into what we have, and combining like terms, yields

$$\partial_a g_{bc} + \partial_b g_{ca} - \partial_c g_{ab} = 2\Gamma_{cab} + T_{cba} + T_{bca} - T_{abc}$$
The torsion terms combine to make an object that transforms tensorially. Since these terms combine together in a metric-compatible fashion, they are given a name, the contorsion tensor, which determines the skew-symmetric part of a metric-compatible affine connection. We will define it here with the motivation that it match the indices of the left-hand side of the equation above. Cleaning up by using the antisymmetry of the torsion tensor yields what we will define to be the contorsion tensor:

$$K_{cab} = \frac{1}{2}\left(T_{abc} - T_{bca} - T_{cba}\right)$$
Substituting this back into our expression, we have

$$\partial_a g_{bc} + \partial_b g_{ca} - \partial_c g_{ab} = 2\Gamma_{cab} - 2K_{cab}.$$

Now isolate the connection coefficients, and group the torsion terms together:

$$\Gamma_{cab} = \frac{1}{2}\left(\partial_a g_{bc} + \partial_b g_{ca} - \partial_c g_{ab}\right) + K_{cab}$$
Recall that the first term with the partial derivatives is the Levi-Civita connection expression used often by relativists. Following suit, define the torsion-free Levi-Civita connection:

$$\mathring{\Gamma}_{cab} = \frac{1}{2}\left(\partial_a g_{bc} + \partial_b g_{ca} - \partial_c g_{ab}\right)$$

Then we have that the full metric-compatible affine connection can now be written as:

$$\Gamma_{cab} = \mathring{\Gamma}_{cab} + K_{cab}.$$
Relationship to teleparallelism
In the theory of teleparallelism, one encounters a connection, the Weitzenböck connection, which is flat (vanishing Riemann curvature) but has a non-vanishing torsion. The flatness is exactly what allows parallel frame fields to be constructed. These notions can be extended to supermanifolds.
See also
Belinfante–Rosenfeld stress–energy tensor
References
Tensors
Riemannian geometry
Connection (mathematics) | Contorsion tensor | [
"Engineering"
] | 1,160 | [
"Tensors"
] |
1,562,127 | https://en.wikipedia.org/wiki/Shekel%20function | The Shekel function or also Shekel's foxholes is a multidimensional, multimodal, continuous, deterministic function commonly used as a test function for testing optimization techniques.
The mathematical form of a function in n dimensions with m maxima is:

$$f(\vec{x}) = \sum_{i=1}^{m} \frac{1}{c_i + \sum_{j=1}^{n} (x_j - a_{ij})^2}$$

or, similarly,

$$f(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{m} \left( c_i + \sum_{j=1}^{n} (x_j - a_{ij})^2 \right)^{-1}$$
Global minima
Numerically certified global minima and the corresponding solutions were obtained using interval methods.
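A direct implementation is short. In the sketch below, the parameter matrix a (row i holds the location of the ith maximum) and the vector c are small made-up values, not the classical four-dimensional benchmark constants; optimization benchmarks typically minimize the negated function.

```python
# Sketch of the Shekel function:
#   f(x) = sum_i 1 / (c_i + sum_j (x_j - a_ij)^2)
import numpy as np

def shekel(x, a, c):
    """x: (n,) point; a: (m, n) maxima locations; c: (m,) shape constants."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(1.0 / (c + np.sum((x - a) ** 2, axis=1))))

a = np.array([[4.0, 4.0],
              [1.0, 3.0],
              [8.0, 1.0]])        # m = 3 maxima in n = 2 dimensions
c = np.array([0.1, 0.2, 0.2])

print(shekel([4.0, 4.0], a, c))  # at a maximum location: large, roughly 1/c_0
```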
See also
Test functions for optimization
Test functions for optimization
Functions and mappings
References
Further reading
Shekel, J. 1971. "Test Functions for Multimodal Search Techniques." Fifth Annual Princeton Conference on Information Science and Systems. | Shekel function | [
"Mathematics"
] | 126 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical analysis stubs",
"Applied mathematics",
"Mathematical objects",
"Applied mathematics stubs",
"Mathematical relations"
] |
1,562,440 | https://en.wikipedia.org/wiki/Acetyl%20chloride | Acetyl chloride (CH3COCl) is an acyl chloride derived from acetic acid (CH3COOH). It belongs to the class of organic compounds called acid halides. It is a colorless, corrosive, volatile liquid. Its formula is commonly abbreviated to AcCl.
Synthesis
On an industrial scale, the reaction of acetic anhydride with hydrogen chloride produces a mixture of acetyl chloride and acetic acid:

(CH3CO)2O + HCl → CH3COCl + CH3COOH
Laboratory routes
Acetyl chloride was first prepared in 1852 by French chemist Charles Gerhardt by treating potassium acetate with phosphoryl chloride.
Acetyl chloride is produced in the laboratory by the reaction of acetic acid with chlorodehydrating agents such as phosphorus trichloride (PCl3), phosphorus pentachloride (PCl5), sulfuryl chloride (SO2Cl2), phosgene, or thionyl chloride (SOCl2). However, these methods usually give acetyl chloride contaminated by phosphorus or sulfur impurities, which may interfere with the organic reactions.
Other methods
When heated, a mixture of dichloroacetyl chloride and acetic acid gives acetyl chloride. It can also be synthesized from the catalytic carbonylation of methyl chloride.
Occurrence
Acetyl chloride is not expected to exist in nature, because contact with water would hydrolyze it into acetic acid and hydrogen chloride. In fact, if handled in open air it releases white "smoke" resulting from hydrolysis due to the moisture in the air. The smoke is actually small droplets of hydrochloric acid and acetic acid formed by hydrolysis.
Uses
Acetyl chloride is used for acetylation reactions, i.e., the introduction of an acetyl group. Acetyl is an acyl group having the formula CH3CO–. For further information on the types of chemical reactions compounds such as acetyl chloride can undergo, see acyl halide. Two major classes of acetylations include esterification and the Friedel-Crafts reaction.
Acetic acid esters and amide
Acetyl chloride is a reagent for the preparation of esters and amides of acetic acid, used in the derivatization of alcohols and amines. One class of acetylation reactions is esterification, for example the reaction with ethanol to produce ethyl acetate and hydrogen chloride:

CH3COCl + CH3CH2OH → CH3COOCH2CH3 + HCl
Frequently such acylations are carried out in the presence of a base such as pyridine, triethylamine, or DMAP, which act as catalysts to help promote the reaction and as bases neutralize the resulting HCl. Such reactions will often proceed via ketene.
Friedel-Crafts acetylations
A second major class of acetylation reactions are the Friedel-Crafts reactions.
See also
Acetic acid
Acetyl bromide
Acetyl fluoride
Acetyl iodide
References
External links
Acyl chlorides
Acetylating agents
Organic compounds with 2 carbon atoms
Acetyl compounds | Acetyl chloride | [
"Chemistry"
] | 609 | [
"Organic compounds",
"Reagents for organic chemistry",
"Acetylating agents",
"Organic compounds with 2 carbon atoms"
] |
1,562,509 | https://en.wikipedia.org/wiki/Voltage%20regulator%20module | A voltage regulator module (VRM), sometimes called processor power module (PPM), is a buck converter that provides the microprocessor and chipset the appropriate supply voltage, converting +3.3 V, +5 V, or +12 V to the lower voltages required by the devices, allowing devices with different supply voltages to be mounted on the same motherboard. On personal computer (PC) systems, the VRM is typically made up of power MOSFET devices.
Overview
Most voltage regulator module implementations are soldered onto the motherboard. Some processors, such as Intel Haswell and Ice Lake CPUs, feature some voltage regulation components on the same CPU package, reducing the VRM design requirements of the motherboard; such a design brings a certain level of simplification to complex voltage regulation involving numerous CPU supply voltages and dynamic powering up and down of various areas of a CPU. A voltage regulator integrated on-package or on-die is usually referred to as a fully integrated voltage regulator (FIVR) or simply an integrated voltage regulator (IVR).
Most modern CPUs require less than 1.5 V, as CPU designers tend to use lower CPU core voltages; lower voltages help in reducing CPU power dissipation, which is often specified through thermal design power (TDP) that serves as the nominal value for designing CPU cooling systems.
Some voltage regulators provide a fixed supply voltage to the processor, but most of them sense the required supply voltage from the processor, essentially acting as a continuously-variable adjustable regulator. In particular, VRMs that are soldered to the motherboard are supposed to do the sensing, according to the Intel specification.
Modern video cards also use a VRM due to higher power and current requirements. These VRMs may generate a significant amount of heat and require heat sinks separate from the GPU.
Voltage identification
The correct supply voltage and current is communicated by the microprocessor to the VRM at startup via a number of bits called VID (voltage identification definition). In particular, the VRM initially provides a standard supply voltage to the VID logic, which is the part of the processor whose only aim is to then send the VID to the VRM. When the VRM has received the VID identifying the required supply voltage, it starts acting as a voltage regulator, providing the required constant voltage and current supply to the processor.
Instead of having a power supply unit generate some fixed voltage, the CPU uses a small set of digital signals, the VID lines, to instruct an on-board power converter of the desired voltage level. The switch-mode buck converter then adjusts its output accordingly. The flexibility so obtained makes it possible to use the same power supply unit for CPUs with different nominal supply voltages and to reduce power consumption during idle periods by lowering the supply voltage.
For example, a unit with 5-bit VID would output one of at most 32 (2^5) distinct output voltages. These voltages are usually (but not always) evenly spaced within a given range. Some of the code words may be reserved for special functions such as shutting down the unit, hence a 5-bit VID unit may have fewer than 32 output voltage levels. How the numerical codes map to supply voltages is typically specified in tables provided by component manufacturers. Since 2008, VID comes in 5-, 6- and 8-bit varieties and is mostly applied to power modules outputting voltages in the low range required by processors.
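Decoding a VID is just a table lookup or a linear mapping. The sketch below shows a hypothetical 5-bit decoder with evenly spaced levels and one reserved shutdown code; the mapping is invented for illustration and does not reproduce any real Intel or AMD VID table.

```python
# Sketch: decoding a 5-bit VID code into a requested output voltage.
# Hypothetical mapping: 0b11111 reserved for "shut down"; the remaining
# 31 codes evenly spaced from 0.800 V in 25 mV steps.
V_MIN, V_STEP = 0.800, 0.025
SHUTDOWN_CODE = 0b11111

def decode_vid(code: int):
    """Return the requested voltage in volts, or None for the shutdown code."""
    if not 0 <= code <= 0b11111:
        raise ValueError("VID code must fit in 5 bits")
    if code == SHUTDOWN_CODE:
        return None
    return V_MIN + code * V_STEP

for code in (0b00000, 0b01010, 0b11110, 0b11111):
    print(f"VID {code:05b} -> {decode_vid(code)}")
```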
VRM and overclocking
The VRMs are essential for overclocking. The quality of a VRM directly impacts the motherboard’s overclocking potential. The same overclocked processor can exhibit noticeable performance differences when paired with different VRMs. The reason for this is that a steady power supply is needed for successful overclocking. When a chip is pushed past its factory settings, that increases the power draw, so the VRM needs to match its output accordingly.
See also
Switched-mode power supply (SMPS) applications
Pulse-width modulation
References
External links
"Microprocessor Power Management"
Module
Digital electronics
MOSFETs | Voltage regulator module | [
"Physics",
"Engineering"
] | 828 | [
"Physical quantities",
"Digital electronics",
"Voltage regulation",
"Electronic engineering",
"Voltage"
] |
1,563,701 | https://en.wikipedia.org/wiki/Stag%20hunt | In game theory, the stag hunt, sometimes referred to as the assurance game, trust dilemma or common interest game, describes a conflict between safety and social cooperation. The stag hunt problem originated with philosopher Jean-Jacques Rousseau in his Discourse on Inequality. In the most common account of this dilemma, which is quite different from Rousseau's, two hunters must decide separately, and without the other knowing, whether to hunt a stag or a hare. However, both hunters know the only way to successfully hunt a stag is with the other's help. One hunter can catch a hare alone with less effort and less time, but it is worth far less than a stag and has much less meat. But both hunters would be better off if both choose the more ambitious and more rewarding goal of getting the stag, giving up some autonomy in exchange for the other hunter's cooperation and added might. This situation is often seen as a useful analogy for many kinds of social cooperation, such as international agreements on climate change.
The stag hunt differs from the prisoner's dilemma in that there are two pure-strategy Nash equilibria: one where both players cooperate, and one where both players defect. In the prisoner's dilemma, despite the fact that both players cooperating is Pareto efficient, the only pure Nash equilibrium is when both players choose to defect.
An example of the payoff matrix for the stag hunt is pictured in Figure 2.
Formal definition
Formally, a stag hunt is a game with two pure strategy Nash equilibria—one that is risk dominant and another that is payoff dominant. The payoff matrix in Figure 1 illustrates a generic stag hunt, where a > b ≥ d > c.
In addition to the pure strategy Nash equilibria there is one mixed strategy Nash equilibrium. This equilibrium depends on the payoffs, but the risk dominance condition places a bound on the mixed strategy Nash equilibrium. No payoffs (that satisfy the above conditions including risk dominance) can generate a mixed strategy equilibrium where Stag is played with a probability higher than one half. The best response correspondences are pictured here.
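Both the pure and the mixed equilibria are easy to compute directly. In the sketch below the payoffs are an arbitrary example satisfying a > b ≥ d > c (row player's payoffs; the game is symmetric), and the mixed-equilibrium stag probability comes from the usual indifference condition:

```python
# Sketch: equilibria of a generic symmetric stag hunt.
# Row payoffs: u(Stag,Stag)=a, u(Stag,Hare)=c, u(Hare,Stag)=b, u(Hare,Hare)=d,
# with a > b >= d > c. The numbers below are an arbitrary example.
a, b, c, d = 4.0, 3.0, 1.0, 2.0

# Pure-strategy equilibria: (Stag, Stag) and (Hare, Hare).
assert a > b   # deviating from (Stag, Stag) to Hare is not profitable
assert d > c   # deviating from (Hare, Hare) to Stag is not profitable

# Mixed equilibrium: each player hunts stag with probability p such that the
# opponent is indifferent:  p*a + (1-p)*c == p*b + (1-p)*d.
p = (d - c) / ((a - b) + (d - c))
print(f"stag probability in the mixed equilibrium: p = {p}")  # 0.5 here
```

For this particular example the mixed equilibrium sits exactly at the one-half mark quoted above.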
The stag hunt and social cooperation
Although most authors focus on the prisoner's dilemma as the game that best represents the problem of social cooperation, some authors believe that the stag hunt represents an equally (or more) interesting context in which to study cooperation and its problems (for an overview see ).
There is a substantial relationship between the stag hunt and the prisoner's dilemma. In biology many circumstances that have been described as prisoner's dilemma might also be interpreted as a stag hunt, depending on how fitness is calculated.
It is also the case that some human interactions that seem like prisoner's dilemmas may in fact be stag hunts. For example, suppose we have a prisoner's dilemma as pictured in Figure 3. The payoff matrix would need adjusting if players who defect against cooperators might be punished for their defection. For instance, if the expected punishment is −2, then the imposition of this punishment turns the above prisoner's dilemma into the stag hunt given at the introduction.
Examples of the stag hunt
The original stag hunt dilemma is as follows: a group of hunters have tracked a large stag, and found it to follow a certain path. If all the hunters work together, they can kill the stag and all eat. If they are discovered, or do not cooperate, the stag will flee, and all will go hungry.
The hunters hide and wait along a path. An hour goes by, with no sign of the stag. Two, three, four hours pass, with no trace. A day passes. The stag may not pass every day, but the hunters are reasonably certain that it will come. However, a hare is seen by all hunters moving along the path.
If a hunter leaps out and kills the hare, he will eat, but the trap laid for the stag will be wasted and the other hunters will starve. There is no certainty that the stag will arrive; the hare is present. The dilemma is that if one hunter waits, he risks one of his fellows killing the hare for himself, sacrificing everyone else. This makes the risk twofold; the risk that the stag does not appear, and the risk that another hunter takes the kill.
In addition to the example suggested by Rousseau, David Hume provides a series of examples that are stag hunts. One example addresses two individuals who must row a boat. If both choose to row they can successfully move the boat. However, if one doesn't, the other wastes his effort. Hume's second example involves two neighbors wishing to drain a meadow. If they both work to drain it they will be successful, but if either fails to do his part the meadow will not be drained.
Several animal behaviors have been described as stag hunts. One is the coordination of slime molds. In times of stress, individual unicellular protists will aggregate to form one large body. Here if they all act together they can successfully reproduce, but success depends on the cooperation of many individual protozoa. Another example is the hunting practices of orcas (known as carousel feeding). Orcas cooperatively corral large schools of fish to the surface and stun them by hitting them with their tails. Since this requires that the fish have no way to escape, it requires the cooperation of many orcas.
Author James Cambias describes a solution to the game as the basis for an extraterrestrial civilization in his 2014 science fiction book A Darkling Sea. Carol M. Rose argues that the stag hunt theory is useful in 'law and humanities' theory. In international law, countries are the participants in a stag hunt. They can, for example, work together to improve good corporate governance.
A stag hunt with pre-play communication
Robert Aumann proposed: "Let us now change the scenario by permitting pre-play communication. On the face of it, it seems that the players can then 'agree' to play (c,c); though the agreement is not enforceable, it removes each player's doubt about the other one playing c". Aumann concluded that in this game "agreement has no effect, one way or the other." It is his argument: "The information that such an agreement conveys is not that the players will keep it (since it is not binding), but that each wants the other to keep it." In this game "each player always prefers the other to play c, no matter what he himself plays. Therefore, an agreement to play (c,c) conveys no information about what the players will do, and cannot be considered self-enforcing." Weiss and Agassi wrote about this argument: "This we deem somewhat incorrect since it is an oversight of the agreement that may change the mutual expectations of players that the result of the game depends on... Aumann’s assertion that there is no a priori reason to expect agreement to lead to cooperation requires completion; at times, but only at times, there is a posteriori reason for that... How a given player will behave in a given game, thus, depends on the culture within which the game takes place".
See also
Common knowledge (logic)
Discourse on Inequality
Mutual knowledge
Pluralistic ignorance
Prisoner's dilemma
Social contract
Christmas truce
Explanatory footnotes
References
Notes
Bibliography
External links
The stag hunt at GameTheory.net
Non-cooperative games
Evolutionary game theory
Social science experiments | Stag hunt | [
"Mathematics"
] | 1,552 | [
"Game theory",
"Non-cooperative games",
"Evolutionary game theory"
] |
1,564,226 | https://en.wikipedia.org/wiki/Aufbau%20principle | In atomic physics and quantum chemistry, the Aufbau principle (from the German Aufbauprinzip, "building-up principle"), also called the Aufbau rule, states that in the ground state of an atom or ion, electrons first fill subshells of the lowest available energy, then fill subshells of higher energy. For example, the 1s subshell is filled before the 2s subshell is occupied. In this way, the electrons of an atom or ion form the most stable electron configuration possible. An example is the configuration 1s2 2s2 2p6 3s2 3p3 for the phosphorus atom, meaning that the 1s subshell has 2 electrons, the 2s subshell has 2 electrons, the 2p subshell has 6 electrons, and so on.
The configuration is often abbreviated by writing only the valence electrons explicitly, while the core electrons are replaced by the symbol for the last previous noble gas in the periodic table, placed in square brackets. For phosphorus, the last previous noble gas is neon, so the configuration is abbreviated to [Ne] 3s2 3p3, where [Ne] signifies the core electrons whose configuration in phosphorus is identical to that of neon.
Electron behavior is elaborated by other principles of atomic physics, such as Hund's rule and the Pauli exclusion principle. Hund's rule asserts that if multiple orbitals of the same energy are available, electrons will occupy different orbitals singly and with the same spin before any are occupied doubly. If double occupation does occur, the Pauli exclusion principle requires that electrons that occupy the same orbital must have different spins (+1/2 and −1/2).
Passing from one element to another of the next higher atomic number, one proton and one electron are added each time to the neutral atom.
The maximum number of electrons in any shell is 2n2, where n is the principal quantum number.
The maximum number of electrons in a subshell is equal to 2(2ℓ + 1), where the azimuthal quantum number ℓ is equal to 0, 1, 2, and 3 for s, p, d, and f subshells, so that the maximum numbers of electrons are 2, 6, 10, and 14 respectively. In the ground state, the electronic configuration can be built up by placing electrons in the lowest available subshell until the total number of electrons added is equal to the atomic number. Thus subshells are filled in the order of increasing energy, using two general rules to help predict electronic configurations:
Electrons are assigned to subshells in order of increasing value of n + ℓ.
For subshells with the same value of n + ℓ, electrons are assigned first to the subshell with lower n.
A version of the aufbau principle known as the nuclear shell model is used to predict the configuration of protons and neutrons in an atomic nucleus.
Madelung energy ordering rule
In neutral atoms, the approximate order in which subshells are filled is given by the n + ℓ rule, also known as the:
Madelung rule (after Erwin Madelung)
Janet rule (after Charles Janet)
Klechkowsky rule (after Vsevolod Klechkovsky)
Wiswesser's rule (after William Wiswesser)
Moeller's rubric
aufbau (building-up) rule or
diagonal rule
Here n represents the principal quantum number and ℓ the azimuthal quantum number; the values ℓ = 0, 1, 2, 3 correspond to the s, p, d, and f subshells, respectively. Subshells with a lower n + ℓ value are filled before those with higher n + ℓ values. In the many cases of equal n + ℓ values, the subshell with a lower n value is filled first. The subshell ordering by this rule is 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, 8s, 5g, ... For example, thallium (Z = 81) has the ground-state configuration 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2 4f14 5d10 6p1, or in condensed form, [Xe] 6s2 4f14 5d10 6p1.
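This ordering is short to generate programmatically: enumerate (n, ℓ) pairs and sort by n + ℓ, breaking ties by n. The sketch below reproduces the sequence quoted above.

```python
# Sketch: generate the Madelung (n + l) subshell filling order.
L_LABELS = "spdfghik"   # subshell letters for l = 0, 1, 2, ...

def madelung_order(n_max):
    shells = [(n, l) for n in range(1, n_max + 1) for l in range(n)]
    shells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))   # by n + l, then n
    return [f"{n}{L_LABELS[l]}" for n, l in shells]

print(", ".join(madelung_order(8)))
# begins: 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, 8s, 5g, ...
```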
Other authors write the subshells outside of the noble gas core in order of increasing n, or if equal, in order of increasing n + ℓ, such as Tl (Z = 81) [Xe] 4f14 5d10 6s2 6p1. They do so to emphasize that if this atom is ionized, electrons leave approximately in the order 6p, 6s, 5d, 4f, etc. On a related note, writing configurations in this way emphasizes the outermost electrons and their involvement in chemical bonding.
In general, subshells with the same n + ℓ value have similar energies, but the s-orbitals (with ℓ = 0) are exceptional: their energy levels are appreciably far from those of their n + ℓ group and are closer to those of the next n + ℓ group. This is why the periodic table is usually drawn to begin with the s-block elements.
The Madelung energy ordering rule applies only to neutral atoms in their ground state. There are twenty elements (eleven in the d-block and nine in the f-block) for which the Madelung rule predicts an electron configuration that differs from that determined experimentally, although the Madelung-predicted electron configurations are at least close to the ground state even in those cases.
One inorganic chemistry textbook describes the Madelung rule as essentially an approximate empirical rule although with some theoretical justification, based on the Thomas–Fermi model of the atom as a many-electron quantum-mechanical system.
Exceptions in the d-block
The valence d-subshell "borrows" one electron (in the case of palladium two electrons) from the valence s-subshell.
For example, in copper 29Cu, according to the Madelung rule, the 4s subshell (n + ℓ = 4 + 0 = 4) is occupied before the 3d subshell (n + ℓ = 3 + 2 = 5). The rule then predicts the electron configuration 1s2 2s2 2p6 3s2 3p6 4s2 3d9, abbreviated [Ar] 4s2 3d9, where [Ar] denotes the configuration of argon, the preceding noble gas. However, the measured electron configuration of the copper atom is [Ar] 4s1 3d10. By filling the 3d subshell, copper can be in a lower energy state.
A special exception is lawrencium 103Lr, where the 6d electron predicted by the Madelung rule is replaced by a 7p electron: the rule predicts [Rn] 5f14 7s2 6d1, but the measured configuration is [Rn] 5f14 7s2 7p1.
Exceptions in the f-block
The valence d-subshell often "borrows" one electron (in the case of thorium two electrons) from the valence f-subshell. For example, in uranium 92U, according to the Madelung rule, the 5f subshell (n + ℓ = 5 + 3 = 8) is occupied before the 6d subshell (n + ℓ = 6 + 2 = 8). The rule then predicts the electron configuration [Rn] 7s2 5f4, where [Rn] denotes the configuration of radon, the preceding noble gas. However, the measured electron configuration of the uranium atom is [Rn] 7s2 5f3 6d1.
All these exceptions are not very relevant for chemistry, as the energy differences are quite small and the presence of a nearby atom can change the preferred configuration. The periodic table ignores them and follows idealised configurations. They occur as the result of interelectronic repulsion effects; when atoms are positively ionised, most of the anomalies vanish.
The above exceptions are predicted to be the only ones until element 120, where the 8s shell is completed. Element 121, starting the g-block, should be an exception in which the expected 5g electron is transferred to 8p (similarly to lawrencium). After this, sources do not agree on the predicted configurations, but due to very strong relativistic effects there are not expected to be many more elements that show the expected configuration from Madelung's rule beyond 120. The general idea that after the two 8s elements, there come regions of chemical activity of 5g, followed by 6f, followed by 7d, and then 8p, does however mostly seem to hold true, except that relativity "splits" the 8p shell into a stabilized part (8p1/2, which acts like an extra covering shell together with 8s and is slowly drowned into the core across the 5g and 6f series) and a destabilized part (8p3/2, which has nearly the same energy as 9p1/2), and that the 8s shell gets replaced by the 9s shell as the covering s-shell for the 7d elements.
History
The aufbau principle in the new quantum theory
The principle takes its name from the German Aufbauprinzip ("building-up principle"), rather than being named for a scientist. It was formulated by Niels Bohr in the early 1920s. This was an early application of quantum mechanics to the properties of electrons and explained chemical properties in physical terms. Each added electron is subject to the electric field created by the positive charge of the atomic nucleus and the negative charge of other electrons that are bound to the nucleus. Although in hydrogen there is no energy difference between subshells with the same principal quantum number n, this is not true for the outer electrons of other atoms.
In the old quantum theory prior to quantum mechanics, electrons were supposed to occupy classical elliptical orbits. The orbits with the highest angular momentum are "circular orbits" outside the inner electrons, but orbits with low angular momentum (s- and p-subshell) have high subshell eccentricity, so that they get closer to the nucleus and feel on average a less strongly screened nuclear charge.
Wolfgang Pauli's model of the atom, including the effects of electron spin, provided a more complete explanation of the empirical aufbau rules.
The n + ℓ energy ordering rule
A periodic table in which each row corresponds to one value of n + ℓ (where the values of n and ℓ correspond to the principal and azimuthal quantum numbers respectively) was suggested by Charles Janet in 1928, and in 1930 he made explicit the quantum basis of this pattern, based on knowledge of atomic ground states determined by the analysis of atomic spectra. This table came to be referred to as the left-step table. Janet "adjusted" some of the actual n + ℓ values of the elements, since they did not accord with his energy ordering rule, and he considered that the discrepancies involved must have arisen from measurement errors. As it happens, the actual values were correct and the n + ℓ energy ordering rule turned out to be an approximation rather than a perfect fit, although for all elements that are exceptions the regularised configuration is a low-energy excited state, well within reach of chemical bond energies.
In 1936, the German physicist Erwin Madelung proposed this as an empirical rule for the order of filling atomic subshells, and most English-language sources therefore refer to the Madelung rule. Madelung may have been aware of this pattern as early as 1926. The Russian-American engineer Vladimir Karapetoff was the first to publish the rule in 1930, though Janet also published an illustration of it the same year.
In 1945, American chemist William Wiswesser proposed that the subshells are filled in order of increasing values of the function

$$W(n, \ell) = n + \ell - \frac{\ell}{\ell + 1}.$$

This formula correctly predicts both the first and second parts of the Madelung rule (the second part being that, for two subshells with the same value of n + ℓ, the one with the smaller value of n fills first). Wiswesser argued for this formula based on the pattern of both angular and radial nodes, the concept now known as orbital penetration, and the influence of the core electrons on the valence orbitals.
In 1961 the Russian agricultural chemist V.M. Klechkowski proposed a theoretical explanation for the importance of the sum n + ℓ, based on the Thomas–Fermi model of the atom. Many French- and Russian-language sources therefore refer to the Klechkowski rule.
The full Madelung rule was derived from a similar potential in 1971 by Yury N. Demkov and Valentin N. Ostrovsky. They considered a potential U(r) characterized by two constant parameters, v and R; this potential approaches a Coulomb potential for small r. When v satisfies a quantization condition labeled by the value of n + ℓ, the zero-energy solutions to the Schrödinger equation for this potential can be described analytically with Gegenbauer polynomials. As v passes through each of these values, a manifold containing all states with that value of n + ℓ arises at zero energy and then becomes bound, recovering the Madelung order. The application of perturbation theory shows that states with smaller n have lower energy, and that the s-orbitals (with ℓ = 0) have their energies approaching the next n + ℓ group.
In recent years it has been noted that the order of filling subshells in neutral atoms does not always correspond to the order of adding or removing electrons for a given atom. For example, in the fourth row of the periodic table, the Madelung rule indicates that the 4s subshell is occupied before the 3d. Therefore, the neutral atom ground state configuration for K is [Ar] 4s1, Ca is [Ar] 4s2, Sc is [Ar] 4s2 3d1 and so on. However, if a scandium atom is ionized by removing electrons (only), the configurations differ: Sc is [Ar] 4s2 3d1, Sc+ is [Ar] 4s1 3d1, and Sc2+ is [Ar] 3d1. The subshell energies and their order depend on the nuclear charge; 4s is lower than 3d as per the Madelung rule in K with 19 protons, but 3d is lower in Sc2+ with 21 protons. In addition to there being ample experimental evidence to support this view, it makes the explanation of the order of ionization of electrons in this and other transition metals more intelligible, given that 4s electrons are invariably preferentially ionized. Generally the Madelung rule should only be used for neutral atoms; however, even for neutral atoms there are exceptions in the d-block and f-block (as shown above).
See also
Ionization energy
References
Further reading
Image: Understanding order of shell filling
Boeyens, J. C. A.: Chemistry from First Principles. Berlin: Springer Science 2008,
External links
Electron Configurations, the Aufbau Principle, Degenerate Orbitals, and Hund's Rule from Purdue University
Electron states
Foundational quantum physics
Chemical bonding | Aufbau principle | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,977 | [
"Electron",
"Foundational quantum physics",
"Quantum mechanics",
"Condensed matter physics",
"nan",
"Chemical bonding",
"Electron states"
] |
1,564,394 | https://en.wikipedia.org/wiki/Electromagnetic%20shielding | In electrical engineering, electromagnetic shielding is the practice of reducing or redirecting the electromagnetic field (EMF) in a space with barriers made of conductive or magnetic materials. It is typically applied to enclosures, for isolating electrical devices from their surroundings, and to cables, for isolating wires from the environment through which the cable runs (see shielded cable). Electromagnetic shielding that blocks radio frequency (RF) electromagnetic radiation is also known as RF shielding.
EMF shielding serves to minimize electromagnetic interference. The shielding can reduce the coupling of radio waves, electromagnetic fields, and electrostatic fields. A conductive enclosure used to block electrostatic fields is also known as a Faraday cage. The amount of reduction depends very much upon the material used, its thickness, the size of the shielded volume and the frequency of the fields of interest and the size, shape and orientation of holes in a shield to an incident electromagnetic field.
Materials used
Typical materials used for electromagnetic shielding include thin layer of metal, sheet metal, metal screen, and metal foam. Common sheet metals for shielding include copper, brass, nickel, silver, steel, and tin. Shielding effectiveness, that is, how well a shield reflects or absorbs/suppresses electromagnetic radiation, is affected by the physical properties of the metal. These may include conductivity, solderability, permeability, thickness, and weight. A metal's properties are an important consideration in material selection. For example, electrically dominant waves are reflected by highly conductive metals like copper, silver, and brass, while magnetically dominant waves are absorbed/suppressed by a less conductive metal such as steel or stainless steel. Further, any holes in the shield or mesh must be significantly smaller than the wavelength of the radiation that is being kept out, or the enclosure will not effectively approximate an unbroken conducting surface.
Another commonly used shielding method, especially with electronic goods housed in plastic enclosures, is to coat the inside of the enclosure with a metallic ink or similar material. The ink consists of a carrier material loaded with a suitable metal, typically copper or nickel, in the form of very small particulates. It is sprayed on to the enclosure and, once dry, produces a continuous conductive layer of metal, which can be electrically connected to the chassis ground of the equipment, thus providing effective shielding.
Electromagnetic shielding is the process of lowering the electromagnetic field in an area by barricading it with conductive or magnetic material. Copper is used for radio frequency (RF) shielding because it absorbs radio and other electromagnetic waves. Properly designed and constructed RF shielding enclosures satisfy most RF shielding needs, from computer and electrical switching rooms to hospital CAT-scan and MRI facilities.
EMI (electromagnetic interference) shielding is of great research interest and several new types of nanocomposites made of ferrites, polymers, and 2D materials are being developed to obtain more efficient RF/microwave-absorbing materials (MAMs). EMI shielding is often achieved by electroless plating of copper as most popular plastics are non-conductive or by special conductive paint.
Example of applications
One example is a shielded cable, which has electromagnetic shielding in the form of a wire mesh surrounding an inner core conductor. The shielding impedes the escape of any signal from the core conductor, and also prevents signals from being added to the core conductor.
Some cables have two separate coaxial screens, one connected at both ends, the other at one end only, to maximize shielding of both electromagnetic and electrostatic fields.
The door of a microwave oven has a screen built into the window. From the perspective of microwaves (with wavelengths of 12 cm) this screen finishes a Faraday cage formed by the oven's metal housing. Visible light, with wavelengths ranging between 400 nm and 700 nm, passes easily through the screen holes.
RF shielding is also used to prevent access to data stored on RFID chips embedded in various devices, such as biometric passports.
NATO specifies electromagnetic shielding for computers and keyboards to prevent passive monitoring of keyboard emissions that would allow passwords to be captured; consumer keyboards do not offer this protection primarily because of the prohibitive cost.
RF shielding is also used to protect medical and laboratory equipment to provide protection against interfering signals, including AM, FM, TV, emergency services, dispatch, pagers, ESMR, cellular, and PCS. It can also be used to protect the equipment at the AM, FM or TV broadcast facilities.
Another example of the practical use of electromagnetic shielding would be defense applications. As technology improves, so does the susceptibility to various types of nefarious electromagnetic interference. The idea of encasing a cable inside a grounded conductive barrier can provide mitigation to these risks.
How it works
Electromagnetic radiation consists of coupled electric and magnetic fields. The electric field produces forces on the charge carriers (i.e., electrons) within the conductor. As soon as an electric field is applied to the surface of an ideal conductor, it induces a current that causes displacement of charge inside the conductor that cancels the applied field inside, at which point the current stops. See Faraday cage for more explanation.
Similarly, varying magnetic fields generate eddy currents that act to cancel the applied magnetic field. (The conductor does not respond to static magnetic fields unless the conductor is moving relative to the magnetic field.) The result is that electromagnetic radiation is reflected from the surface of the conductor: internal fields stay inside, and external fields stay outside.
Several factors serve to limit the shielding capability of real RF shields. One is that, due to the electrical resistance of the conductor, the excited field does not completely cancel the incident field. Also, most conductors exhibit a ferromagnetic response to low-frequency magnetic fields, so that such fields are not fully attenuated by the conductor. Any holes in the shield force current to flow around them, so that fields passing through the holes do not excite opposing electromagnetic fields. These effects reduce the field-reflecting capability of the shield.
In the case of high-frequency electromagnetic radiation, the above-mentioned adjustments take a non-negligible amount of time, yet any such radiation energy, as far as it is not reflected, is absorbed by the skin (unless it is extremely thin), so in this case there is no electromagnetic field inside either. This is one aspect of a greater phenomenon called the skin effect. A measure of the depth to which radiation can penetrate the shield is the so-called skin depth.
Magnetic shielding
Equipment sometimes requires isolation from external magnetic fields. For static or slowly varying magnetic fields (below about 100 kHz) the Faraday shielding described above is ineffective. In these cases shields made of high magnetic permeability metal alloys can be used, such as sheets of permalloy and mu-metal or with nanocrystalline grain structure ferromagnetic metal coatings. These materials do not block the magnetic field, as with electric shielding, but rather draw the field into themselves, providing a path for the magnetic field lines around the shielded volume. The best shape for magnetic shields is thus a closed container surrounding the shielded volume. The effectiveness of this type of shielding depends on the material's permeability, which generally drops off at both very low magnetic field strengths and high field strengths where the material becomes saturated. Therefore, to achieve low residual fields, magnetic shields often consist of several enclosures, one inside the other, each of which successively reduces the field inside it. Entry holes within shielding surfaces may degrade their performance significantly.
Because of the above limitations of passive shielding, an alternative used with static or low-frequency fields is active shielding, in which a field created by electromagnets cancels the ambient field within a volume. Solenoids and Helmholtz coils are types of coils that can be used for this purpose, as well as more complex wire patterns designed using methods adapted from those used in coil design for magnetic resonance imaging. Active shields may also be designed accounting for the electromagnetic coupling with passive shields, referred to as hybrid shielding, so that there is broadband shielding from the passive shield and additional cancellation of specific components using the active system.
Additionally, superconducting materials can expel magnetic fields via the Meissner effect.
Mathematical model
Suppose that we have a spherical shell of a (linear and isotropic) diamagnetic material with relative permeability $\mu_r$, with inner radius $a$ and outer radius $b$. We then put this object in a constant magnetic field:
$\mathbf{H}_0 = H_0\hat{\mathbf{z}} = H_0\cos\theta\,\hat{\mathbf{r}} - H_0\sin\theta\,\hat{\boldsymbol{\theta}}$
Since there are no currents in this problem except for possible bound currents on the boundaries of the diamagnetic material, we can define a magnetic scalar potential that satisfies Laplace's equation:
$\nabla^2 \Phi_M = 0,$
where
$\mathbf{H} = -\nabla \Phi_M.$
In this particular problem there is azimuthal symmetry, so we can write down that the solution to Laplace's equation in spherical coordinates is:
$\Phi_M = \sum_{\ell=0}^{\infty}\left(A_\ell r^\ell + \frac{B_\ell}{r^{\ell+1}}\right)P_\ell(\cos\theta).$
After matching the boundary conditions
$\left(\mathbf{H}_2 - \mathbf{H}_1\right)\times\hat{\mathbf{n}} = 0, \qquad \left(\mathbf{B}_2 - \mathbf{B}_1\right)\cdot\hat{\mathbf{n}} = 0$
at the boundaries (where $\hat{\mathbf{n}}$ is a unit vector that is normal to the surface pointing from side 1 to side 2), we find that the magnetic field inside the cavity in the spherical shell is:
$\mathbf{H}_{\text{in}} = \eta\,\mathbf{H}_0,$
where $\eta$ is an attenuation coefficient that depends on the thickness of the diamagnetic material and the magnetic permeability of the material:
$\eta = \frac{9\mu_r}{(2\mu_r + 1)(\mu_r + 2) - 2\left(\frac{a}{b}\right)^3(\mu_r - 1)^2}.$
This coefficient describes the effectiveness of this material in shielding the external magnetic field from the cavity that it surrounds. Notice that this coefficient appropriately goes to 1 (no shielding) in the limit that $\mu_r \to 1$. In the limit that $\mu_r \to \infty$ this coefficient goes to 0 (perfect shielding). When $\mu_r \gg 1$, the attenuation coefficient takes on the simpler form:
$\eta = \frac{9}{2\mu_r\left(1 - \frac{a^3}{b^3}\right)},$
which shows that the magnetic field decreases like $\mu_r^{-1}$.
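A short numerical check of the attenuation coefficient derived above; the permeability and shell dimensions below are illustrative values for a mu-metal-like shell, not data for any particular product.

```python
def attenuation(mu_r: float, a: float, b: float) -> float:
    """eta = H_inside / H_applied for a spherical shell:
    inner radius a, outer radius b, relative permeability mu_r."""
    ratio = (a / b) ** 3
    return 9 * mu_r / ((2 * mu_r + 1) * (mu_r + 2) - 2 * ratio * (mu_r - 1) ** 2)

# A 1 mm thick shell with a 5 cm inner radius and mu_r = 80,000
# (a value often quoted for mu-metal):
eta = attenuation(8e4, a=0.050, b=0.051)
print(f"interior field is {eta:.1e} of the applied field")  # ~1e-3

# In the high-permeability limit the simpler form agrees:
approx = 9 / (2 * 8e4 * (1 - (0.050 / 0.051) ** 3))
print(f"high-mu approximation: {approx:.1e}")
```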
See also
Electromagnetic interference
Electromagnetic radiation and health
Radiation
Ionising radiation protection
Mu-metal
MRI RF shielding
Permalloy
Electric field screening
Faraday cage
Anechoic chamber
Plasma window
References
External links
All about Mu Metal Permalloy material
Mu Metal Shieldings Frequently asked questions (FAQ by MARCHANDISE, Germany)
Clemson Vehicular Electronics Laboratory: Shielding Effectiveness Calculator
Shielding Issues for Medical Products (PDF) — ETS-Lindgren Paper
Practical Electromagnetic Shielding Tutorial
Simulation of Electromagnetic Shielding in the COMSOL Multiphysics Environment
Magnetoencephalography
Radio electronics
Electromagnetic radiation
Electromagnetic compatibility | Electromagnetic shielding | [
"Physics",
"Engineering"
] | 2,061 | [
"Electromagnetic compatibility",
"Physical phenomena",
"Radio electronics",
"Electromagnetic radiation",
"Radiation",
"Electrical engineering"
] |
1,564,401 | https://en.wikipedia.org/wiki/Multiple%20drug%20resistance | Multiple drug resistance (MDR), multidrug resistance or multiresistance is antimicrobial resistance shown by a species of microorganism to at least one antimicrobial drug in three or more antimicrobial categories. Antimicrobial categories are classifications of antimicrobial agents based on their mode of action and specific to target organisms. The MDR types most threatening to public health are MDR bacteria that resist multiple antibiotics; other types include MDR viruses, fungi, and parasites (resistant to multiple antifungal, antiviral, and antiparasitic drugs of a wide chemical variety).
Recognizing different degrees of MDR in bacteria, the terms extensively drug-resistant (XDR) and pandrug-resistant (PDR) have been introduced. Extensively drug-resistant (XDR) is the non-susceptibility of a bacterial species to at least one agent in all but two or fewer antimicrobial categories. Within XDR, pandrug-resistant (PDR) is the non-susceptibility of bacteria to all antimicrobial agents in all antimicrobial categories. The definitions were published in 2011 in the journal Clinical Microbiology and Infection and are openly accessible.
Common multidrug-resistant organisms (MDROs)
Common multidrug-resistant organisms, typically bacteria, include:
Vancomycin-Resistant Enterococci (VRE)
Methicillin-resistant Staphylococcus aureus (MRSA)
Extended-spectrum β-lactamase (ESBLs) producing Gram-negative bacteria
Klebsiella pneumoniae carbapenemase (KPC) producing Gram-negatives
Multidrug-resistant Gram-negative rods (MDR GNR, MDRGN bacteria) such as Enterobacter species, E. coli, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa
Multi-drug-resistant tuberculosis
Overlapping with MDRGN, a group of Gram-positive and Gram-negative bacteria of particular recent importance have been dubbed as the ESKAPE group (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa and Enterobacter species).
Bacterial resistance to antibiotics
Various microorganisms have survived for thousands of years by their ability to adapt to antimicrobial agents. They do so via spontaneous mutation or by DNA transfer. This process enables some bacteria to oppose the action of certain antibiotics, rendering the antibiotics ineffective. These microorganisms employ several mechanisms in attaining multi-drug resistance:
No longer relying on a glycoprotein cell wall
Enzymatic deactivation of antibiotics
Decreased cell wall permeability to antibiotics
Altered target sites of antibiotic
Efflux mechanisms to remove antibiotics
Increased mutation rate as a stress response
Many different bacteria now exhibit multi-drug resistance, including staphylococci, enterococci, gonococci, streptococci, salmonella, as well as numerous other Gram-negative bacteria and Mycobacterium tuberculosis. Antibiotic-resistant bacteria are able to transfer copies of DNA that code for a mechanism of resistance to other bacteria, even ones only distantly related to them, which are then also able to pass on the resistance genes, resulting in generations of antibiotic-resistant bacteria. This initial transfer of DNA is called horizontal gene transfer.
Bacterial resistance to bacteriophages
Phage-resistant bacterial variants have been observed in human studies. As with antibiotics, phage resistance can be acquired by horizontal transfer, for example through plasmid acquisition.
Antifungal resistance
Yeasts such as Candida species can become resistant under long-term treatment with azole preparations, requiring treatment with a different drug class.
Lomentospora prolificans infections are often fatal because of their resistance to multiple antifungal agents.
Antiviral resistance
HIV is the prime example of MDR against antivirals, as it mutates rapidly under monotherapy.
Influenza viruses have become increasingly MDR; first to amantadines, then to neuraminidase inhibitors such as oseltamivir (in 2008–2009, 98.5% of influenza A isolates tested were resistant), especially in people with weak immune systems. Cytomegalovirus can become resistant to ganciclovir and foscarnet under treatment, especially in immunosuppressed patients. Herpes simplex virus rarely becomes resistant to acyclovir preparations, mostly in the form of cross-resistance to famciclovir and valacyclovir, usually in immunosuppressed patients.
Antiparasitic resistance
The prime example of MDR against antiparasitic drugs is malaria. Plasmodium vivax became resistant to chloroquine and sulfadoxine-pyrimethamine a few decades ago, and as of 2012 artemisinin-resistant Plasmodium falciparum has emerged in western Cambodia and western Thailand.
Toxoplasma gondii can also become resistant to artemisinin, as well as atovaquone and sulfadiazine, but is not usually MDR.
Anthelmintic resistance is mainly reported in the veterinary literature, for example in connection with the practice of livestock drenching, and has recently been a focus of FDA regulation.
Preventing the emergence of antimicrobial resistance
To limit the development of antimicrobial resistance, it has been suggested to:
Use the appropriate antimicrobial for an infection; e.g. no antibiotics for viral infections
Identify the causative organism whenever possible
Select an antimicrobial which targets the specific organism, rather than relying on a broad-spectrum antimicrobial
Complete an appropriate duration of antimicrobial treatment (not too short and not too long)
Use the correct dose for eradication; subtherapeutic dosing is associated with resistance, as demonstrated in food animals.
More thorough education of, and by, prescribers on the global implications of their actions
Vaccination to prevent infections that would otherwise require antimicrobials, for instance the pneumococcus vaccine or flu vaccine
The medical community relies on education of its prescribers, and on self-regulation in the form of appeals to voluntary antimicrobial stewardship, which at hospitals may take the form of an antimicrobial stewardship program. It has been argued that, depending on the cultural context, government can aid in educating the public on the importance of restrictive use of antibiotics in human clinical use; however, unlike narcotics, antibiotic use is not regulated anywhere in the world at this time. Antibiotic use has been restricted or regulated for treating animals raised for human consumption with success, in Denmark for example.
Infection prevention is the most efficient strategy of prevention of an infection with a MDR organism within a hospital, because there are few alternatives to antibiotics in the case of an extensively resistant or panresistant infection; if an infection is localized, removal or excision can be attempted (with MDR-TB the lung for example), but in the case of a systemic infection only generic measures like boosting the immune system with immunoglobulins may be possible. The use of bacteriophages (viruses which kill bacteria) is a developing area of possible therapeutic treatments.
It is necessary to develop new antibiotics over time, since the selection of resistant bacteria cannot be prevented completely. With every application of a specific antibiotic, the survival of the few bacteria that already carry a resistance gene against the substance is promoted, and the resistant bacterial population grows. The resistance gene thus becomes more widely distributed in the organism and the environment, and a higher percentage of bacteria no longer respond to therapy with that specific antibiotic. In addition to developing new antibiotics, entirely new strategies must be implemented in order to keep the public safe from the event of total resistance. New strategies are being tested, such as UV light treatments and bacteriophage utilization; however, more resources must be dedicated to this cause.
See also
Drug resistance
MDRGN bacteria
Xenobiotic metabolism
NDM1 enzymatic resistance
Herbicide resistance
P-glycoprotein
References
Further reading
External links
BURDEN of Resistance and Disease in European Nations - An EU project to estimate the financial burden of antibiotic resistance in European Hospitals
European Centre of Disease Prevention and Control and (ECDC): Multidrug-resistant, extensively drug-resistant and pandrug-resistant bacteria: An international expert proposal for interim standard definitions for acquired resistance Disease Programmes Unit
State of Connecticut Department of Public Health MDRO information MultidrugResistant Organisms MDROs What Are They
Antimicrobial resistance
Drug resistance
Bacteria | Multiple drug resistance | [
"Chemistry",
"Biology"
] | 1,799 | [
"Pharmacology",
"Prokaryotes",
"Drug resistance",
"Bacteria",
"Microorganisms"
] |
1,565,482 | https://en.wikipedia.org/wiki/Vortex%20ring%20state | The vortex ring state (VRS) is a dangerous aerodynamic condition that may arise in helicopter flight, when a vortex ring system engulfs the rotor, causing severe loss of lift. Often the term settling with power is used as a synonym, e.g., in Australia, the UK, and the US, but not in Canada, which uses the latter term for a different phenomenon.
A vortex ring state sets in when the airflow around a helicopter's main rotor assumes a rotationally symmetrical form over the tips of the blades, supported by a laminar flow over the blade tips and a countering upflow of air outside and away from the rotor. In this condition, the rotor falls into a new topological state of the surrounding flow field, induced by its own downwash, and suddenly loses lift. Since vortex rings are a surprisingly stable fluid-dynamical phenomenon (a form of topological soliton), the best way to recover from them is to steer laterally clear of them, in order to re-establish lift, and to break them up using maximum engine power, in order to establish turbulence.
This is also why the condition is often mistaken for "settling with insufficient power": high-powered maneuvers can both induce a vortex ring state in free air and then, at low altitude during landing, possibly break it. If sufficient power is not available to keep the partially stalled rotor generating enough lift, the aircraft will not be able to stay aloft until the vortex ring state dissipates, and will crash.
This condition also occurs with tiltrotors, and it was responsible for an accident involving a V-22 Osprey in 2000. Vortex ring state caused the loss of a heavily modified MH-60 helicopter during Operation Neptune Spear, the 2011 raid in which Osama bin Laden was killed.
Description
Because the blades are rotating about a central axis, the speed of each airfoil is lowest at the point where it connects to the hub-and-grip assembly. This fundamental physical reality means that the innermost portion of each blade has an inherent vulnerability to stalling.
In forward flight with translational lift, there is no upward flow (upflow) of air in the hub area. As forward airspeed decreases and vertical descent rates increase, an upflow begins simply because there are no airfoil surfaces in the area of the hub, mast and blade-grip assembly.
Then, as the volume of upflow increases in the central region (i.e. between the hub and the innermost edges of the airfoils), the induced flow (air pulled or "induced" downwards through the rotor system) of the inner blade sections is overcome. This causes the innermost portions of the blades to begin to stall.
As the inner blade sections stall, a second set of vortices, similar to the rotor-tip vortices, begins to form in and around the center of the rotor system. This, combined with the outer set of vortices, results in severe loss of lift. The failure of a helicopter pilot to recognize and react to the condition can lead to high descent rates and catastrophic ground impact.
Occurrence
A helicopter normally encounters this condition when attempting to hover out-of-ground-effect (OGE) without maintaining precise altitude control, and while making downwind or steep, powered approaches when the airspeed is below Effective Translational Lift (ETL).
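Momentum theory gives a rough sense of the descent rates at which VRS becomes a concern: the induced velocity in hover is $v_h = \sqrt{T/(2\rho A)}$, and VRS is commonly said to become a risk when the descent rate at low airspeed lies within very roughly 0.5 to 1.5 times $v_h$. The Python sketch below uses this rule of thumb; the helicopter mass, rotor radius, and the risk band itself are illustrative assumptions, not operating limits for any aircraft.

```python
import math

def hover_induced_velocity(mass_kg: float, rotor_radius_m: float,
                           rho: float = 1.225) -> float:
    """Momentum-theory induced velocity in hover: v_h = sqrt(T / (2*rho*A))."""
    thrust_n = mass_kg * 9.81
    disc_area = math.pi * rotor_radius_m ** 2
    return math.sqrt(thrust_n / (2 * rho * disc_area))

# Illustrative light helicopter: 1000 kg gross mass, 5 m rotor radius.
v_h = hover_induced_velocity(1000.0, 5.0)
lo, hi = 0.5 * v_h, 1.5 * v_h            # commonly quoted (approximate) band
FPM = 196.85                              # m/s -> ft/min
print(f"v_h ~ {v_h:.1f} m/s; elevated VRS risk roughly between "
      f"{lo * FPM:.0f} and {hi * FPM:.0f} ft/min descent at low airspeed")
```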
Detection and correction
The signs of VRS are a vibration in the main rotor system followed by an increasing sink rate and possibly a decrease of cyclic authority.
In single rotor helicopters, the vortex ring state is traditionally corrected by slightly lowering the collective to regain cyclic authority and using the cyclic control to apply lateral motion, often pitching the nose down to establish forward flight. In tandem-rotor helicopters, recovery is accomplished through lateral cyclic or pedal input or both. The aircraft will fly out of the vortex ring into "clean air", and will be able to regain lift.
Another correction, now widely known as the Vuichard Recovery Technique after gaining recent popularity, was taught by Claude Vuichard, a Federal Office for Civil Aviation (FOCA) inspector in Switzerland. This technique uses a combination of all three controls together to reduce altitude loss and recover more quickly: apply cyclic in the direction of tail rotor thrust, increase the collective to climb power, and coordinate with the power pedal to maintain heading (cross controls). Recovery is complete when the rotor disc reaches the upwind part of the vortex.
Powering out of vortex ring state
It is possible to power out of vortex ring state, but this requires having about twice the power it takes to hover. Only one full-scale helicopter, the Sikorsky S-64 Skycrane, is documented as being able to do this, when unladen.
Pilot or operator reaction
Helicopter pilots are most commonly taught to avoid VRS by monitoring their rates of descent at lower airspeeds. When encountering VRS, pilots are taught to apply forward cyclic to fly out of the condition and/or to lower collective pitch. While transitioning to forward or lateral flight will alleviate the condition by itself, lowering the collective to reduce the power demand decreases the size of the vortices and reduces the amount of time required to be free of the condition. However, since the condition often occurs near the ground, lowering the collective may not be an option; a loss of altitude will occur in proportion to the rate of descent developed before beginning the recovery. In some cases, vortex ring state is encountered and allowed to advance to the point that the pilot may severely lose cyclic authority due to the disrupted airflow. In these cases, the pilot's only recourse may be to enter an autorotation to break the rotor system free of its vortex ring state.
Tandem rotor helicopters
In a tandem rotor helicopter, forward cyclic will not arrest the rate of descent caused by VRS. In such a helicopter, which utilizes differential collective pitch in order to gain airspeed, lateral cyclic inputs must be made accompanied by pedal inputs in order to slide horizontally out of the vortex ring state's disturbed air.
Radio control multirotors
Radio-controlled multirotors (common on drones) are subject to normal rotorcraft aerodynamics, including vortex ring state. Frame design, size and power affect the likelihood of entering the state and recovering from it. Multirotors that do not have altitude hold are also more likely to succumb to operator error, where the pilot drops the craft too fast, resulting in the upwash at the rotor hubs that can lead to vortex ring state. Those that are equipped with that feature, on the other hand, tend to control their descent automatically and can usually (but not always) escape the dangerous condition.
See also
References
External links
Vortex ring state FAA Helicopter Flying Handbook
Free-Vortex Wake Calculations of Helicopter Rotors and Tilt-Rotors Operating-In and Transitioning Through the Vortex Ring State
Dispelling the Myth of the MV-22 Archive
Vortex Ring on SKYbrary
Vuichard Recovery Technique - How to escape a Vortex Ring State - Video showing recovery technique, and visualisation using water spray.
Helicopter aerodynamics
Aviation risks
Vortices | Vortex ring state | [
"Chemistry",
"Mathematics"
] | 1,463 | [
"Dynamical systems",
"Vortices",
"Fluid dynamics"
] |
1,565,926 | https://en.wikipedia.org/wiki/Estimation%20theory | Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.
In estimation theory, two approaches are generally considered:
The probabilistic approach (described in this article) assumes that the measured data is random with probability distribution dependent on the parameters of interest
The set-membership approach assumes that the measured data vector belongs to a set which depends on the parameter vector.
Examples
For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters. Alternatively, it is desired to estimate the probability of a voter voting for a particular candidate, based on some demographic features, such as age.
Or, for example, in radar the aim is to find the range of objects (airplanes, boats, etc.) by analyzing the two-way transit timing of received echoes of transmitted pulses. Since the reflected pulses are unavoidably embedded in electrical noise, their measured values are randomly distributed, so that the transit time must be estimated.
As another example, in electrical communication theory, the measurements which contain information regarding the parameters of interest are often associated with a noisy signal.
Basics
For a given model, several statistical "ingredients" are needed so the estimator can be implemented. The first is a statistical sample – a set of data points taken from a random vector (RV) of size N. Put into a vector,
$\mathbf{x} = \begin{bmatrix} x[0] & x[1] & \cdots & x[N-1] \end{bmatrix}^T.$
Secondly, there are M parameters
$\boldsymbol{\theta} = \begin{bmatrix} \theta_1 & \theta_2 & \cdots & \theta_M \end{bmatrix}^T,$
whose values are to be estimated. Third, the continuous probability density function (pdf) or its discrete counterpart, the probability mass function (pmf), of the underlying distribution that generated the data must be stated conditional on the values of the parameters:
$p(\mathbf{x} \mid \boldsymbol{\theta}).$
It is also possible for the parameters themselves to have a probability distribution (e.g., Bayesian statistics). It is then necessary to define the Bayesian probability
$\pi(\boldsymbol{\theta}).$
After the model is formed, the goal is to estimate the parameters, with the estimates commonly denoted $\hat{\boldsymbol{\theta}}$, where the "hat" indicates the estimate.
One common estimator is the minimum mean squared error (MMSE) estimator, which utilizes the error between the estimated parameters and the actual value of the parameters
$\mathbf{e} = \hat{\boldsymbol{\theta}} - \boldsymbol{\theta}$
as the basis for optimality. This error term is then squared and the expected value of this squared value is minimized for the MMSE estimator.
Estimators
Commonly used estimators (estimation methods) and topics related to them include:
Maximum likelihood estimators
Bayes estimators
Method of moments estimators
Cramér–Rao bound
Least squares
Minimum mean squared error (MMSE), also known as Bayes least squared error (BLSE)
Maximum a posteriori (MAP)
Minimum variance unbiased estimator (MVUE)
Nonlinear system identification
Best linear unbiased estimator (BLUE)
Unbiased estimators — see estimator bias.
Particle filter
Markov chain Monte Carlo (MCMC)
Kalman filter, and its various derivatives
Wiener filter
Examples
Unknown constant in additive white Gaussian noise
Consider a received discrete signal, $x[n]$, of $N$ independent samples that consists of an unknown constant $A$ with additive white Gaussian noise (AWGN) $w[n]$ with zero mean and known variance $\sigma^2$ (i.e., $w[n] \sim \mathcal{N}(0, \sigma^2)$).
Since the variance is known, the only unknown parameter is $A$.
The model for the signal is then
$x[n] = A + w[n], \qquad n = 0, 1, \dots, N-1.$
Two possible (of many) estimators for the parameter $A$ are:
$\hat{A}_1 = x[0]$
$\hat{A}_2 = \frac{1}{N} \sum_{n=0}^{N-1} x[n],$
which is the sample mean.
Both of these estimators have a mean of $A$, which can be shown through taking the expected value of each estimator
$\mathrm{E}\left[\hat{A}_1\right] = \mathrm{E}\left[x[0]\right] = A$
and
$\mathrm{E}\left[\hat{A}_2\right] = \mathrm{E}\left[\frac{1}{N} \sum_{n=0}^{N-1} x[n]\right] = \frac{1}{N} \sum_{n=0}^{N-1} \mathrm{E}\left[x[n]\right] = \frac{N A}{N} = A.$
At this point, these two estimators would appear to perform the same.
However, the difference between them becomes apparent when comparing the variances.
$\mathrm{var}\left(\hat{A}_1\right) = \mathrm{var}\left(x[0]\right) = \sigma^2$
and
$\mathrm{var}\left(\hat{A}_2\right) = \mathrm{var}\left(\frac{1}{N} \sum_{n=0}^{N-1} x[n]\right) \overset{\text{independence}}{=} \frac{1}{N^2} \sum_{n=0}^{N-1} \mathrm{var}\left(x[n]\right) = \frac{1}{N^2} N \sigma^2 = \frac{\sigma^2}{N}.$
It would seem that the sample mean is a better estimator since its variance is lower for every N > 1.
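A quick Monte Carlo check of these variances; this is a hedged illustration rather than part of the derivation, and the numerical values of A, sigma, and N below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A, sigma, N, trials = 3.0, 2.0, 25, 100_000

x = A + sigma * rng.standard_normal((trials, N))  # each row: x[n] = A + w[n]

a_hat_1 = x[:, 0]           # single-sample estimator
a_hat_2 = x.mean(axis=1)    # sample mean

print(f"A_1: mean {a_hat_1.mean():.3f}, var {a_hat_1.var():.3f}")  # ~A, ~sigma^2
print(f"A_2: mean {a_hat_2.mean():.3f}, var {a_hat_2.var():.3f}")  # ~A, ~sigma^2/N
print(f"sigma^2/N = {sigma**2 / N:.3f}  (also the CRLB, derived below)")
```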
Maximum likelihood
Continuing the example using the maximum likelihood estimator, the probability density function (pdf) of the noise for one sample $w[n]$ is
$p(w[n]) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left(-\frac{w[n]^2}{2 \sigma^2}\right)$
and the probability of $x[n]$ becomes ($x[n]$ can be thought of as being distributed $\mathcal{N}(A, \sigma^2)$)
$p(x[n]; A) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left(-\frac{(x[n] - A)^2}{2 \sigma^2}\right).$
By independence, the probability of $\mathbf{x}$ becomes
$p(\mathbf{x}; A) = \prod_{n=0}^{N-1} p(x[n]; A) = \frac{1}{\left(2 \pi \sigma^2\right)^{N/2}} \exp\left(-\frac{1}{2 \sigma^2} \sum_{n=0}^{N-1} (x[n] - A)^2\right).$
Taking the natural logarithm of the pdf
$\ln p(\mathbf{x}; A) = -\frac{N}{2} \ln\left(2 \pi \sigma^2\right) - \frac{1}{2 \sigma^2} \sum_{n=0}^{N-1} (x[n] - A)^2,$
the maximum likelihood estimator is
$\hat{A} = \arg\max_{A} \ln p(\mathbf{x}; A).$
Taking the first derivative of the log-likelihood function
$\frac{\partial}{\partial A} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2} \sum_{n=0}^{N-1} (x[n] - A) = \frac{1}{\sigma^2} \left(\sum_{n=0}^{N-1} x[n] - N A\right)$
and setting it to zero gives
$0 = \frac{1}{\sigma^2} \left(\sum_{n=0}^{N-1} x[n] - N A\right).$
This results in the maximum likelihood estimator
$\hat{A} = \frac{1}{N} \sum_{n=0}^{N-1} x[n],$
which is simply the sample mean.
From this example, it was found that the sample mean is the maximum likelihood estimator for $N$ samples of a fixed, unknown parameter $A$ corrupted by AWGN.
Cramér–Rao lower bound
To find the Cramér–Rao lower bound (CRLB) of the sample mean estimator, it is first necessary to find the Fisher information number
$\mathcal{I}(A) = \mathrm{E}\left[\left(\frac{\partial}{\partial A} \ln p(\mathbf{x}; A)\right)^2\right] = -\mathrm{E}\left[\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A)\right]$
and copying from above
$\frac{\partial}{\partial A} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2} \left(\sum_{n=0}^{N-1} x[n] - N A\right).$
Taking the second derivative
$\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A) = -\frac{N}{\sigma^2}$
and finding the negative expected value is trivial since it is now a deterministic constant:
$-\mathrm{E}\left[\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A)\right] = \frac{N}{\sigma^2}.$
Finally, putting the Fisher information into
$\mathrm{var}\left(\hat{A}\right) \ge \frac{1}{\mathcal{I}(A)}$
results in
$\mathrm{var}\left(\hat{A}\right) \ge \frac{\sigma^2}{N}.$
Comparing this to the variance of the sample mean (determined previously) shows that the sample mean is equal to the Cramér–Rao lower bound for all values of $N$ and $\sigma^2$.
In other words, the sample mean is the (necessarily unique) efficient estimator, and thus also the minimum variance unbiased estimator (MVUE), in addition to being the maximum likelihood estimator.
Maximum of a uniform distribution
One of the simplest non-trivial examples of estimation is the estimation of the maximum of a uniform distribution. It is used as a hands-on classroom exercise and to illustrate basic principles of estimation theory. Further, in the case of estimation based on a single sample, it demonstrates philosophical issues and possible misunderstandings in the use of maximum likelihood estimators and likelihood functions.
Given a discrete uniform distribution $1, 2, \dots, N$ with unknown maximum $N$, the UMVU estimator for the maximum is given by
$\hat{N} = \frac{k+1}{k} m - 1 = m + \frac{m}{k} - 1,$
where m is the sample maximum and k is the sample size, sampling without replacement. This problem is commonly known as the German tank problem, due to application of maximum estimation to estimates of German tank production during World War II.
The formula may be understood intuitively as "the sample maximum plus the average gap between observations in the sample",
the gap being added to compensate for the negative bias of the sample maximum as an estimator for the population maximum.
This has a variance of
$\frac{1}{k} \frac{(N - k)(N + 1)}{k + 2} \approx \frac{N^2}{k^2} \quad \text{for small samples } k \ll N,$
so a standard deviation of approximately $N/k$, the (population) average size of a gap between samples; compare $\frac{m}{k}$ above. This can be seen as a very simple case of maximum spacing estimation.
The sample maximum is the maximum likelihood estimator for the population maximum, but, as discussed above, it is biased.
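A simulation makes the bias of the sample maximum, and the correction applied by the UMVU estimator, easy to see; the population size and sample size below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N_true, k, trials = 1000, 10, 20_000   # population maximum, sample size

maxima = np.array([
    (rng.choice(N_true, size=k, replace=False) + 1).max()  # sample from 1..N
    for _ in range(trials)
])

mle = maxima                       # sample maximum (biased low)
umvu = maxima + maxima / k - 1     # sample maximum plus average gap

print(f"E[MLE]  ~ {mle.mean():.0f}   (well below {N_true})")
print(f"E[UMVU] ~ {umvu.mean():.0f}  (close to {N_true})")
print(f"std(UMVU) ~ {umvu.std():.0f} vs N/k = {N_true / k:.0f}")
```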
Applications
Numerous fields require the use of estimation theory.
Some of these fields include:
Interpretation of scientific experiments
Signal processing
Clinical trials
Opinion polls
Quality control
Telecommunications
Project management
Software engineering
Control theory (in particular Adaptive control)
Network intrusion detection system
Orbit determination
Measured data are likely to be subject to noise or uncertainty and it is through statistical probability that optimal solutions are sought to extract as much information from the data as possible.
See also
Best linear unbiased estimator (BLUE)
Completeness (statistics)
Detection theory
Efficiency (statistics)
Expectation-maximization algorithm (EM algorithm)
Fermi problem
Grey box model
Information theory
Least-squares spectral analysis
Matched filter
Maximum entropy spectral estimation
Nuisance parameter
Parametric equation
Pareto principle
Rule of three (statistics)
State estimator
Statistical signal processing
Sufficiency (statistics)
Notes
References
Citations
Sources
External links
Signal processing
Mathematical and quantitative methods (economics) | Estimation theory | [
"Technology",
"Engineering"
] | 1,522 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
13,430,116 | https://en.wikipedia.org/wiki/Microsoft%20SQL%20Server%20Master%20Data%20Services | Microsoft SQL Server Master Data Services (MDS) is a Master Data Management (MDM) product from Microsoft that ships as a part of the Microsoft SQL Server relational database management system. Master data management (MDM) allows an organization to discover and define non-transactional lists of data, and compile maintainable, reliable master lists. Master Data Services first shipped with Microsoft SQL Server 2008 R2. Microsoft SQL Server 2016 introduced enhancements to Master Data Services, such as improved performance and security, and the ability to clear transaction logs, create custom indexes, share entity data between different models, and support for many-to-many relationships.
Overview
In Master Data Services, the model is the highest level container in the structure of your master data. You create a model to manage groups of similar data. A model contains one or more entities, and entities contain members that are the data records. An entity is similar to a table.
Like other MDM products, Master Data Services aims to create a centralized data source and keep it synchronized, and thus reduce redundancies, across the applications which process the data.
Sharing the architectural core with Stratature +EDM, Master Data Services uses a Microsoft SQL Server database as the physical data store. It is a part of the Master Data Hub, which uses the database to store and manage data entities. It is a database with the software to validate and manage the data, and keep it synchronized with the systems that use the data. The master data hub has to extract the data from the source system, validate, sanitize and shape the data, remove duplicates, and update the hub repositories, as well as synchronize the external sources. The entity schemas, attributes, data hierarchies, validation rules and access control information are specified as metadata to the Master Data Services runtime. Master Data Services does not impose any limitation on the data model. Master Data Services also allows custom Business rules, used for validating and sanitizing the data entering the data hub, to be defined, which is then run against the data matching the specified criteria. All changes made to the data are validated against the rules, and a log of the transaction is stored persistently. Violations are logged separately, and optionally the owner is notified, automatically. All the data entities can be versioned.
Master Data Services allows the master data to be categorized by hierarchical relationships, such as employee data are a subtype of organization data. Hierarchies are generated by relating data attributes. Data can be automatically categorized using rules, and the categories are introspected programmatically. Master Data Services can also expose the data as Microsoft SQL Server views, which can be pulled by any SQL-compatible client. It uses a role-based access control system to restrict access to the data. The views are generated dynamically, so they contain the latest data entities in the master hub. It can also push out the data by writing to some external journals. Master Data Services also includes a web-based UI for viewing and managing the data. It uses ASP.NET in the back-end. The Silverlight front-end was replaced with HTML5 in SQL Server 2019.
Master Data Services provides a Web service interface to expose the data, as well as an API, which internally uses the exposed web services, exposing the feature set, programmatically, to access and manipulate the data. It also integrates with Active Directory for authentication purposes. Unlike +EDM, Master Data Services supports Unicode characters, as well as support multilingual user interfaces.
SQL Server 2016 introduced a significant performance increase in Master Data Services over previous versions.
Terminology
Model is the highest level of an MDS instance. It is the primary container for specific groupings of master data. In many ways it is very similar to the idea of a database.
Entities are containers created within a model. Entities provide a home for members, and are in many ways analogous to database tables. (e.g. Customer)
Members are analogous to the records in a database table (Entity) e.g. Will Smith. Members are contained within entities. Each member is made up of two or more attributes.
Attributes are analogous to the columns within a database table (Entity) e.g. Surname. Attributes exist within entities and help describe members (the records within the table). Name and Code attributes are created by default for each entity and serve to describe and uniquely identify leaf members. Attributes can be related to other attributes from other entities which are called 'domain-based' attributes. This is similar to the concept of a foreign key.
Other attributes, however, will be of type 'free-form' (most common) or 'file'.
Attribute Groups are explicitly defined collections of particular attributes. Say you have an entity "customer" that has 50 attributes — too much information for many of your users. Attribute groups enable the creation of custom sets of hand-picked attributes that are relevant for specific audiences. (e.g. "customer - delivery details" that would include just their name and last known delivery address). This is very similar to a database view.
Hierarchies organize members into either Derived or Explicit hierarchical structures. Derived hierarchies, as the name suggests, are derived by the MDS engine based on the relationships that exist between attributes. Explicit hierarchies are created by hand using both leaf and consolidated members.
Business Rules can be created and applied against model data to ensure that custom business logic is adhered to. In order to be committed into the system data must pass all business rule validations applied to them. e.g. Within the Customer Entity you may want to create a business rule that ensures all members of the 'Country' Attribute contain either the text "USA" or "Canada". The Business Rule once created and ran will then verify all the data is correct before it accepts it into the approved model.
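The logic a business rule expresses can be illustrated outside MDS. The Python sketch below reproduces the 'Country' check from the example as a stand-alone validation; MDS itself defines rules declaratively through its UI and API rather than in code, and the member data here is invented for illustration.

```python
ALLOWED_COUNTRIES = {"USA", "Canada"}

def validate_member(member: dict) -> list[str]:
    """Return the business-rule violations for a single entity member."""
    issues = []
    if member.get("Country") not in ALLOWED_COUNTRIES:
        issues.append(f"{member['Code']}: Country must be 'USA' or 'Canada'")
    return issues

members = [
    {"Code": "C001", "Name": "Will Smith", "Country": "USA"},
    {"Code": "C002", "Name": "Jane Doe",   "Country": "France"},
]

for m in members:
    for issue in validate_member(m):
        print(issue)   # -> C002: Country must be 'USA' or 'Canada'
```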
Versions provide system owners / administrators with the ability to Open, Lock or Commit a particular version of a model and the data contained within it at a particular point in time. As the content within a model varies, grows or shrinks over time versions provide a way of managing metadata so that subscribing systems can access to the correct content.
References
External links
Microsoft SQL Server 2016 Master Data Services
What's New in Master Data Services (MDS)
Data management
SQL Server Master Data Services
2010 software | Microsoft SQL Server Master Data Services | [
"Technology"
] | 1,313 | [
"Data management",
"Data"
] |
13,431,536 | https://en.wikipedia.org/wiki/Material%20point%20method | The material point method (MPM) is a numerical technique used to simulate the behavior of solids, liquids, gases, and any other continuum material. In particular, it is a robust spatial discretization method for simulating multi-phase (solid-fluid-gas) interactions. In the MPM, a continuum body is described by a number of small Lagrangian elements referred to as 'material points'. These material points are surrounded by a background mesh/grid that is used to calculate terms such as the deformation gradient. Unlike other mesh-based methods like the finite element method, finite volume method or finite difference method, the MPM is not a mesh-based method and is instead categorized as a meshless/meshfree or continuum-based particle method, examples of which are smoothed particle hydrodynamics and peridynamics. Despite the presence of a background mesh, the MPM does not encounter the drawbacks of mesh-based methods (high deformation tangling, advection errors etc.) which makes it a promising and powerful tool in computational mechanics.
The MPM was originally proposed, as an extension of a similar method known as FLIP (itself a further extension of a method called PIC) to computational solid dynamics, in the early 1990s by Professors Deborah L. Sulsky, Zhen Chen and Howard L. Schreyer at the University of New Mexico. After this initial development, the MPM has been further developed both in national laboratories and at the University of New Mexico, Oregon State University, the University of Utah and elsewhere across the US and the world. Recently the number of institutions researching the MPM has been growing, with added popularity and awareness coming from various sources such as the MPM's use in the Disney film Frozen.
The algorithm
An MPM simulation consists of the following stages:
(Prior to the time integration phase)
Initialization of grid and material points.
A geometry is discretized into a collection of material points, each with its own material properties and initial conditions (velocity, stress, temperature, etc.)
The grid, used only to provide a place for gradient calculations, is normally made to cover an area large enough to contain the expected extent of the computational domain needed for the simulation.
(During the time integration phase - explicit formulation)
Material point quantities are extrapolated to grid nodes.
Material point mass ($m_p$), momentum ($m_p \mathbf{v}_p$), stress ($\boldsymbol{\sigma}_p$), and external forces ($\mathbf{b}_p$) are extrapolated to the nodes at the corners of the cells within which the material points reside. This is most commonly done using standard linear shape functions ($N_i(\mathbf{x}_p)$), the same as used in FEM.
The grid uses the material point values to create the masses ($m_i$), velocities ($\mathbf{v}_i$), and internal and external force vectors ($\mathbf{f}_i^{\text{int}}$, $\mathbf{f}_i^{\text{ext}}$) for the nodes:
$m_i = \sum_p N_i(\mathbf{x}_p)\, m_p, \qquad \mathbf{v}_i = \frac{\sum_p N_i(\mathbf{x}_p)\, m_p \mathbf{v}_p}{m_i},$
$\mathbf{f}_i^{\text{int}} = -\sum_p V_p\, \boldsymbol{\sigma}_p \nabla N_i(\mathbf{x}_p), \qquad \mathbf{f}_i^{\text{ext}} = \sum_p N_i(\mathbf{x}_p)\, m_p \mathbf{b}_p.$
Equations of motion are solved on the grid.
Newton's 2nd law is solved to obtain the nodal acceleration
$\mathbf{a}_i = \frac{\mathbf{f}_i^{\text{int}} + \mathbf{f}_i^{\text{ext}}}{m_i}.$
New nodal velocities are found:
$\mathbf{v}_i^{t + \Delta t} = \mathbf{v}_i^{t} + \mathbf{a}_i\, \Delta t.$
Derivative terms are extrapolated back to material points
Material point acceleration ($\mathbf{a}_p$) and the deformation gradient ($\mathbf{F}_p$) (or strain rate ($\dot{\boldsymbol{\varepsilon}}_p$), depending on the strain theory used) are extrapolated from the surrounding nodes using the same shape functions as before ($N_i(\mathbf{x}_p)$).
Variables on the material points: positions, velocities, strains, stresses etc. are then updated with these rates depending on integration scheme of choice and a suitable constitutive model.
Resetting of grid.
Now that the material points are fully updated at the next time step, the grid is reset to allow for the next time step to begin.
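The stages above can be condensed into a few dozen lines for a one-dimensional, linear-elastic bar. The sketch below is a minimal single-step illustration with linear shape functions; the material constants and particle layout are arbitrary, and no boundary conditions or stability checks are included.

```python
import numpy as np

CELL, N_NODES = 0.1, 11                  # grid spacing and node count
nodes = np.arange(N_NODES) * CELL
E, rho0, dt = 1e7, 1000.0, 1e-5          # Young's modulus, density, time step

# Material points: position, mass, volume, velocity, axial stress.
xp = np.array([0.25, 0.35, 0.45])
mp = np.full(xp.size, rho0 * CELL)
Vp = np.full(xp.size, CELL)
vp = np.full(xp.size, 0.1)
sp = np.zeros(xp.size)

def shape(x, xn):
    """Linear (tent) shape function of the node at xn, evaluated at x."""
    return max(0.0, 1.0 - abs(x - xn) / CELL)

def dshape(x, xn):
    return -np.sign(x - xn) / CELL if 0.0 < abs(x - xn) < CELL else 0.0

# Stage 1: extrapolate mass, momentum, and internal force to the nodes.
m_i = np.zeros(N_NODES); p_i = np.zeros(N_NODES); f_i = np.zeros(N_NODES)
for p in range(xp.size):
    for i in range(N_NODES):
        m_i[i] += shape(xp[p], nodes[i]) * mp[p]
        p_i[i] += shape(xp[p], nodes[i]) * mp[p] * vp[p]
        f_i[i] -= Vp[p] * sp[p] * dshape(xp[p], nodes[i])

# Stage 2: solve Newton's second law on the grid (only on active nodes).
active = m_i > 1e-12
a_i = np.where(active, f_i / np.where(active, m_i, 1.0), 0.0)
v_i = np.where(active, p_i / np.where(active, m_i, 1.0) + a_i * dt, 0.0)

# Stage 3: extrapolate rates back and update the material points.
for p in range(xp.size):
    a_p = sum(shape(xp[p], nodes[i]) * a_i[i] for i in range(N_NODES))
    v_grid = sum(shape(xp[p], nodes[i]) * v_i[i] for i in range(N_NODES))
    grad_v = sum(dshape(xp[p], nodes[i]) * v_i[i] for i in range(N_NODES))
    vp[p] += a_p * dt
    xp[p] += v_grid * dt
    sp[p] += E * grad_v * dt             # linear-elastic stress update

# Stage 4: the grid is now reset; only the material points carry state.
print("updated positions:", xp, "\nupdated stresses:", sp)
```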
History of PIC/MPM
The PIC was originally conceived to solve problems in fluid dynamics, and developed by Harlow at Los Alamos National Laboratory in 1957. One of the first PIC codes was the Fluid-Implicit Particle (FLIP) program, which was created by Brackbill in 1986 and has been constantly in development ever since. Until the 1990s, the PIC method was used principally in fluid dynamics.
Motivated by the need to better simulate penetration problems in solid dynamics, Sulsky, Chen and Schreyer started in 1993 to reformulate the PIC and develop the MPM, with funding from Sandia National Laboratories. The original MPM was then further extended by Bardenhagen et al. to include frictional contact, which enabled the simulation of granular flow, and by Nairn to include explicit cracks and crack propagation (known as CRAMP).
Recently, an MPM implementation based on a micro-polar Cosserat continuum has been used to simulate high-shear granular flow, such as silo discharge. MPM's uses were further extended into Geotechnical engineering with the recent development of a quasi-static, implicit MPM solver which provides numerically stable analyses of large-deformation problems in Soil mechanics.
Annual workshops on the use of MPM are held at various locations in the United States. The Fifth MPM Workshop was held at Oregon State University, in Corvallis, OR, on April 2 and 3, 2009.
Applications of PIC/MPM
The uses of the PIC or MPM method can be divided into two broad categories: firstly, there are many applications involving fluid dynamics, plasma physics, magnetohydrodynamics, and multiphase applications. The second category of applications comprises problems in solid mechanics.
Fluid dynamics and multiphase simulations
The PIC method has been used to simulate a wide range of fluid-solid interactions, including sea ice dynamics, penetration of biological soft tissues, fragmentation of gas-filled canisters, dispersion of atmospheric pollutants, multiscale simulations coupling molecular dynamics with MPM, and fluid-membrane interactions. In addition, the PIC-based FLIP code has been applied in magnetohydrodynamics and plasma processing tools, and simulations in astrophysics and free-surface flow.
As a result of a joint effort between UCLA's mathematics department and Walt Disney Animation Studios, MPM was successfully used to simulate snow in the 2013 animated film Frozen.
Solid mechanics
MPM has also been used extensively in solid mechanics, to simulate impact, penetration, collision and rebound, as well as crack propagation. MPM has also become a widely used method within the field of soil mechanics: it has been used to simulate granular flow, quickness test of sensitive clays, landslides, silo discharge, pile driving, fall-cone test, bucket filling, and material failure; and to model soil stress distribution, compaction, and hardening. It is now being used in wood mechanics problems such as simulations of transverse compression on the cellular level including cell wall contact. The work also received the George Marra Award for paper of the year from the Society of Wood Science and Technology.
Classification of PIC/MPM codes
MPM in the context of numerical methods
One subset of numerical methods is meshfree methods, which are defined as methods for which "a predefined mesh is not necessary, at least in field variable interpolation". Ideally, a meshfree method does not make use of a mesh "throughout the process of solving the problem governed by partial differential equations, on a given arbitrary domain, subject to all kinds of boundary conditions," although existing methods are not ideal and fail in at least one of these respects. Meshless methods, which are also sometimes called particle methods, share a "common feature that the history of state variables is traced at points (particles) which are not connected with any element mesh, the distortion of which is a source of numerical difficulties." As can be seen by these varying interpretations, some scientists consider MPM to be a meshless method, while others do not. All agree, however, that MPM is a particle method.
The Arbitrary Lagrangian Eulerian (ALE) methods form another subset of numerical methods which includes MPM. Purely Lagrangian methods employ a framework in which a space is discretised into initial subvolumes, whose flowpaths are then charted over time. Purely Eulerian methods, on the other hand, employ a framework in which the motion of material is described relative to a mesh that remains fixed in space throughout the calculation. As the name indicates, ALE methods combine Lagrangian and Eulerian frames of reference.
Subclassification of MPM/PIC
PIC methods may be based on either the strong form collocation or a weak form discretisation of the underlying partial differential equation (PDE). Those based on the strong form are properly referred to as finite-volume PIC methods. Those based on the weak form discretisation of PDEs may be called either PIC or MPM.
MPM solvers can model problems in one, two, or three spatial dimensions, and can also model axisymmetric problems. MPM can be implemented to solve either quasi-static or dynamic equations of motion, depending on the type of problem that is to be modeled. Several versions of MPM include the Generalized Interpolation Material Point Method, the Convected Particle Domain Interpolation Method, and the Convected Particle Least Squares Interpolation Method.
The time-integration used for MPM may be either explicit or implicit. The advantage to implicit integration is guaranteed stability, even for large timesteps. On the other hand, explicit integration runs much faster and is easier to implement.
Advantages
Compared to FEM
Unlike FEM, MPM does not require periodical remeshing steps and remapping of state variables, and is therefore better suited to the modeling of large material deformations. In MPM, particles and not the mesh points store all the information on the state of the calculation. Therefore, no numerical error results from the mesh returning to its original position after each calculation cycle, and no remeshing algorithm is required.
The particle basis of MPM allows it to treat crack propagation and other discontinuities better than FEM, which is known to impose the mesh orientation on crack propagation in a material. Also, particle methods are better at handling history-dependent constitutive models.
Compared to pure particle methods
Because in MPM nodes remain fixed on a regular grid, the calculation of gradients is trivial.
In simulations with two or more phases it is rather easy to detect contact between entities, as particles can interact via the grid with other particles in the same body, with other solid bodies, and with fluids.
Disadvantages of MPM
MPM is more expensive in terms of storage than other methods, as MPM makes use of mesh as well as particle data. MPM is more computationally expensive than FEM, as the grid must be reset at the end of each MPM calculation step and reinitialised at the beginning of the following step. Spurious oscillation may occur as particles cross the boundaries of the mesh in MPM, although this effect can be minimized by using generalized interpolation methods (GIMP). In MPM as in FEM, the size and orientation of the mesh can impact the results of a calculation: for example, in MPM, strain localisation is known to be particularly sensitive to mesh refinement.
One stability problem in MPM that does not occur in FEM is the cell-crossing errors and null-space errors because the number of integration points (material points) does not remain constant in a cell.
Notes
External links
Center for Simulation of Accidental Fires and Explosions – MPM code available
NairnMPM – open source
MPM3D - open source (MPM3D-F90) and free trial version (MPM3D)
Taichi - Physically Based Computer Graphics Library – open source MPM code available
Anura3D open source – software for geotechnical problems and soil-water-structure interactions by Anura3D MPM Research Community
Numerical analysis
Numerical differential equations
Computational fluid dynamics
Computational mathematics
Simulation | Material point method | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,397 | [
"Computational fluid dynamics",
"Applied mathematics",
"Computational mathematics",
"Computational physics",
"Mathematical relations",
"Numerical analysis",
"Approximations",
"Fluid dynamics"
] |
8,752,642 | https://en.wikipedia.org/wiki/Nuclear%20structure | Understanding the structure of the atomic nucleus is one of the central challenges in nuclear physics.
Models
The cluster model
The cluster model describes the nucleus as a molecule-like collection of proton-neutron groups (e.g., alpha particles) with one or more valence neutrons occupying molecular orbitals.
The liquid drop model
The liquid drop model is one of the first models of nuclear structure, proposed by Carl Friedrich von Weizsäcker in 1935. It describes the nucleus as a semiclassical fluid made up of neutrons and protons, with an internal repulsive electrostatic force proportional to the number of protons. The quantum mechanical nature of these particles appears via the Pauli exclusion principle, which states that no two nucleons of the same kind can be at the same state. Thus the fluid is actually what is known as a Fermi liquid.
In this model, the binding energy of a nucleus with $Z$ protons and $N$ neutrons is given by
$E_B = a_V A - a_S A^{2/3} - a_C \frac{Z(Z-1)}{A^{1/3}} - a_A \frac{(N-Z)^2}{A} + \delta(A, Z),$
where $A = Z + N$ is the total number of nucleons (mass number). The terms proportional to $A$ and $A^{2/3}$ represent the volume and surface energy of the liquid drop, the term proportional to $Z(Z-1)/A^{1/3}$ represents the electrostatic energy, the term proportional to $(N-Z)^2/A$ represents the Pauli exclusion principle, and the last term $\delta(A,Z)$ is the pairing term, which lowers the energy for even numbers of protons or neutrons.
The coefficients $a_V$, $a_S$, $a_C$, $a_A$ and the strength of the pairing term may be estimated theoretically, or fit to data.
This simple model reproduces the main features of the binding energy of nuclei.
The assumption of nucleus as a drop of Fermi liquid is still widely used in the form of Finite Range Droplet Model (FRDM), due to the possible good reproduction of nuclear binding energy on the whole chart, with the necessary accuracy for predictions of unknown nuclei.
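The formula is easy to evaluate numerically. The sketch below uses one common set of fitted coefficients (values close to those tabulated by Rohlf); other fits in the literature differ by a few percent.

```python
# Semi-empirical (liquid-drop) binding energy in MeV; coefficients are one
# common fit and are given here only for illustration.
A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(Z: int, N: int) -> float:
    A = Z + N
    pairing = A_P / A**0.5
    if Z % 2 != N % 2:          # odd A: no pairing contribution
        delta = 0.0
    elif Z % 2 == 0:            # even-even: more bound
        delta = +pairing
    else:                       # odd-odd: less bound
        delta = -pairing
    return (A_V * A - A_S * A**(2/3) - A_C * Z * (Z - 1) / A**(1/3)
            - A_A * (N - Z)**2 / A + delta)

for name, Z, N in (("Fe-56", 26, 30), ("U-238", 92, 146)):
    B = binding_energy(Z, N)
    print(f"{name}: B = {B:.0f} MeV ({B / (Z + N):.2f} MeV per nucleon)")
```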
The shell model
The expression "shell model" is ambiguous in that it refers to two different items. It was previously used to describe the existence of nucleon shells according to an approach closer to what is now called mean field theory.
Nowadays, it refers to a formalism analogous to the configuration interaction formalism used in quantum chemistry.
Introduction to the shell concept
Systematic measurements of the binding energy of atomic nuclei show systematic deviations with respect to those estimated from the liquid drop model. In particular, some nuclei having certain values for the number of protons and/or neutrons are bound more tightly together than predicted by the liquid drop model. These nuclei are called singly/doubly magic. This observation led scientists to assume the existence of a shell structure of nucleons (protons and neutrons) within the nucleus, like that of electrons within atoms.
Indeed, nucleons are quantum objects. Strictly speaking, one should not speak of energies of individual nucleons, because they are all correlated with each other. However, as an approximation one may envision an average nucleus, within which nucleons propagate individually. Owing to their quantum character, they may only occupy discrete energy levels. These levels are by no means uniformly distributed; some intervals of energy are crowded, and some are empty, generating a gap in possible energies. A shell is such a set of levels separated from the other ones by a wide empty gap.
The energy levels are found by solving the Schrödinger equation for a single nucleon moving in the average potential generated by all other nucleons. Each level may be occupied by a nucleon, or empty. Some levels accommodate several different quantum states with the same energy; they are said to be degenerate. This occurs in particular if the average nucleus exhibits a certain symmetry, like a spherical shape.
The concept of shells allows one to understand why some nuclei are bound more tightly than others. This is because two nucleons of the same kind cannot be in the same state (Pauli exclusion principle). Werner Heisenberg extended the Pauli exclusion principle to nucleons via the introduction of the isospin concept. Nucleons are thought to be composed of two kinds of particle, the neutron and the proton, that differ through their intrinsic property, associated with their isospin quantum number. This concept enables the explanation of the bound state of deuterium, in which the proton and neutron can couple their spin and isospin in two different manners. So the lowest-energy state of the nucleus is one where nucleons fill all energy levels from the bottom up to some level. Nuclei with an odd number of either protons or neutrons are less bound than nuclei with even numbers. A nucleus with full shells is exceptionally stable, as will be explained.
As with electrons in the electron shell model, protons in the outermost shell are relatively loosely bound to the nucleus if there are only few protons in that shell, because they are farthest from the center of the nucleus. Therefore, nuclei which have a full outer proton shell will be more tightly bound and have a higher binding energy than other nuclei with a similar total number of protons. This is also true for neutrons.
Furthermore, the energy needed to excite the nucleus (i.e. moving a nucleon to a higher, previously unoccupied level) is exceptionally high in such nuclei. Whenever this unoccupied level is the next after a full shell, the only way to excite the nucleus is to raise one nucleon across the gap, thus spending a large amount of energy. Otherwise, if the highest occupied energy level lies in a partly filled shell, much less energy is required to raise a nucleon to a higher state in the same shell.
Some evolution of the shell structure observed in stable nuclei is expected away from the valley of stability. For example, observations of unstable isotopes have shown shifting and even a reordering of the single particle levels of which the shell structure is composed. This is sometimes observed as the creation of an island of inversion or in the reduction of excitation energy gaps above the traditional magic numbers.
Basic hypotheses
Some basic hypotheses are made in order to give a precise conceptual framework to the shell model:
The atomic nucleus is a quantum n-body system.
The internal motion of nucleons within the nucleus is non-relativistic, and their behavior is governed by the Schrödinger equation.
Nucleons are considered to be pointlike, without any internal structure.
Brief description of the formalism
The general process used in the shell model calculations is the following. First a Hamiltonian for the nucleus is defined. Usually, for computational practicality, only one- and two-body terms are taken into account in this definition. The interaction is an effective theory: it contains free parameters which have to be fitted with experimental data.
The next step consists in defining a basis of single-particle states, i.e. a set of wavefunctions describing all possible nucleon states. Most of the time, this basis is obtained via a Hartree–Fock computation. With this set of one-particle states, Slater determinants are built, that is, wavefunctions for Z proton variables or N neutron variables, which are antisymmetrized products of single-particle wavefunctions (antisymmetrized meaning that under exchange of variables for any pair of nucleons, the wavefunction only changes sign).
In principle, the number of quantum states available for a single nucleon at a finite energy is finite, say $n$. The number of nucleons in the nucleus must be smaller than the number of available states, otherwise the nucleus cannot hold all of its nucleons. There are thus several ways to choose $Z$ (or $N$) states among the $n$ possible. In combinatorial mathematics, the number of choices of $Z$ objects among $n$ is the binomial coefficient $\binom{n}{Z}$. If $n$ is much larger than $Z$ (or $N$), this increases roughly like $n^Z$. Practically, this number becomes so large that every computation is impossible for $A = N + Z$ larger than 8.
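A few lines of Python show how quickly this combinatorial growth gets out of hand; the state counts chosen below are illustrative.

```python
from math import comb

# Number of Slater determinants when choosing Z nucleons among n
# single-particle states (illustrative values of n and Z).
for n, Z in ((20, 4), (40, 8), (80, 16)):
    print(f"n = {n:3d}, Z = {Z:2d}: C(n, Z) = {comb(n, Z):,}")
# n = 20, Z = 4 : 4,845
# n = 40, Z = 8 : 76,904,685
# n = 80, Z = 16: of order 10^16 -- direct diagonalization is hopeless
```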
To obviate this difficulty, the space of possible single-particle states is divided into core and valence, by analogy with chemistry (see core electron and valence electron). The core is a set of single-particles which are assumed to be inactive, in the sense that they are the well bound lowest-energy states, and that there is no need to reexamine their situation. They do not appear in the Slater determinants, contrary to the states in the valence space, which is the space of all single-particle states not in the core, but possibly to be considered in the choice of the build of the (Z-) N-body wavefunction. The set of all possible Slater determinants in the valence space defines a basis for (Z-) N-body states.
The last step consists in computing the matrix of the Hamiltonian within this basis, and diagonalizing it. In spite of the reduction of the dimension of the basis owing to the fixation of the core, the matrices to be diagonalized easily reach dimensions of the order of $10^9$, and demand specific diagonalization techniques.
The shell model calculations give in general an excellent fit with experimental data. They depend however strongly on two main factors:
The way to divide the single-particle space into core and valence.
The effective nucleon–nucleon interaction.
Mean field theories
The independent-particle model (IPM)
The interaction between nucleons, which is a consequence of strong interactions and binds the nucleons within the nucleus, exhibits the peculiar behaviour of having a finite range: it vanishes when the distance between two nucleons becomes too large; it is attractive at medium range, and repulsive at very small range. This last property correlates with the Pauli exclusion principle according to which two fermions (nucleons are fermions) cannot be in the same quantum state. This results in a very large mean free path predicted for a nucleon within the nucleus.
The main idea of the Independent Particle approach is that a nucleon moves inside a certain potential well (which keeps it bound to the nucleus) independently from the other nucleons. This amounts to replacing an N-body problem (N particles interacting) by N single-body problems. This essential simplification of the problem is the cornerstone of mean field theories. These are also widely used in atomic physics, where electrons move in a mean field due to the central nucleus and the electron cloud itself.
The independent particle model and mean field theories (we shall see that there exist several variants) have a great success in describing the properties of the nucleus starting from an effective interaction or an effective potential, thus are a basic part of atomic nucleus theory. One should also notice that they are modular enough, in that it is quite easy to extend the model to introduce effects such as nuclear pairing, or collective motions of the nucleon like rotation, or vibration, adding the corresponding energy terms in the formalism. This implies that in many representations, the mean field is only a starting point for a more complete description which introduces correlations reproducing properties like collective excitations and nucleon transfer.
Nuclear potential and effective interaction
A large part of the practical difficulties met in mean field theories is the definition (or calculation) of the potential of the mean field itself. One can very roughly distinguish between two approaches:
The phenomenological approach is a parameterization of the nuclear potential by an appropriate mathematical function. Historically, this procedure was applied with the greatest success by Sven Gösta Nilsson, who used as a potential a (deformed) harmonic oscillator potential. The most recent parameterizations are based on more realistic functions, which account more accurately for scattering experiments, for example. In particular the form known as the Woods–Saxon potential can be mentioned.
The self-consistent or Hartree–Fock approach aims to deduce the nuclear potential mathematically from an effective nucleon–nucleon interaction. This technique implies solving the Schrödinger equation in an iterative fashion, starting from an ansatz wavefunction and improving it variationally, since the potential there depends on the wavefunctions to be determined. The latter are written as Slater determinants.
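As a concrete example of the phenomenological approach, the Woods–Saxon form is V(r) = −V₀ / (1 + exp((r − R)/a)), with nuclear radius R = r₀A^(1/3). The sketch below evaluates it in Python; the parameter values (V₀ ≈ 50 MeV, r₀ ≈ 1.25 fm, a ≈ 0.5 fm) are typical textbook numbers, not a fitted parameterization.

```python
import numpy as np

def woods_saxon(r_fm, A, V0=50.0, r0=1.25, a=0.5):
    """Woods-Saxon central potential in MeV.

    r_fm : radial distance from the nuclear centre (fm)
    A    : mass number of the nucleus
    V0   : potential depth (MeV); typical textbook value
    r0,a : radius and surface-diffuseness parameters (fm)
    """
    R = r0 * A ** (1.0 / 3.0)  # nuclear radius grows as A^(1/3)
    return -V0 / (1.0 + np.exp((r_fm - R) / a))

r = np.linspace(0.0, 12.0, 7)
print(woods_saxon(r, A=208))  # lead-208: ~-50 MeV inside, rising to 0 outside
```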
In the case of the Hartree–Fock approaches, the difficulty is not to find the mathematical function that best describes the nuclear potential, but that which best describes the nucleon–nucleon interaction. Indeed, in contrast with atomic physics, where the interaction is known (it is the Coulomb interaction), the nucleon–nucleon interaction within the nucleus is not known analytically.
There are two main reasons for this fact. First, the strong interaction acts essentially among the quarks forming the nucleons. The nucleon–nucleon interaction in vacuum is a mere consequence of the quark–quark interaction. While the latter is well understood in the framework of the Standard Model at high energies, where asymptotic freedom holds, it is much more complicated at low energies due to color confinement. Thus there is yet no fundamental theory allowing one to deduce the nucleon–nucleon interaction from the quark–quark interaction. Furthermore, even if this problem were solved, there would remain a large difference between the ideal (and conceptually simpler) case of two nucleons interacting in vacuum and that of these nucleons interacting in nuclear matter. To go further, it was necessary to invent the concept of effective interaction. The latter is basically a mathematical function with several arbitrary parameters, which are adjusted to agree with experimental data.
Most modern interactions are zero-range, so they act only when the two nucleons are in contact, as introduced by Tony Skyrme. In a seminal paper by Dominique Vautherin and David M. Brink, it was demonstrated that a density-dependent Skyrme force can reproduce basic properties of atomic nuclei. Another commonly used interaction is the finite-range Gogny force.
The self-consistent approaches of the Hartree–Fock type
In the Hartree–Fock approach of the n-body problem, the starting point is a Hamiltonian containing n kinetic-energy terms and potential terms. As mentioned before, one of the mean field theory hypotheses is that only the two-body interaction is to be taken into account. The potential term of the Hamiltonian represents all possible two-body interactions in the set of n fermions. This is the first hypothesis.
The second step consists in assuming that the wavefunction of the system can be written as a Slater determinant of one-particle spin-orbitals. This statement is the mathematical translation of the independent-particle model. This is the second hypothesis.
It now remains to determine the components of this Slater determinant, that is, the individual wavefunctions of the nucleons. To this end, it is assumed that the total wavefunction (the Slater determinant) is such that the energy is minimal. This is the third hypothesis.
Technically, it means that one must compute the mean value of the (known) two-body Hamiltonian on the (unknown) Slater determinant, and impose that its mathematical variation vanishes. This leads to a set of equations where the unknowns are the individual wavefunctions: the Hartree–Fock equations. Solving these equations gives the wavefunctions and individual energy levels of nucleons, and so the total energy of the nucleus and its wavefunction.
This short account of the Hartree–Fock method explains why it is also called the variational approach. At the beginning of the calculation, the total energy is a "function of the individual wavefunctions" (a so-called functional), and the choice of these wavefunctions is then optimized so that the functional has a minimum – hopefully absolute, and not only local. To be more precise, it should be mentioned that the energy is a functional of the density, defined as the sum of the individual squared wavefunctions. Related functional-of-the-density approaches are also used in atomic physics and condensed matter physics in the form of density functional theory (DFT).
The process of solving the Hartree–Fock equations can only be iterative, since they are in fact a Schrödinger equation in which the potential depends on the density, that is, precisely on the wavefunctions to be determined. Practically, the algorithm is started with a set of roughly reasonable individual wavefunctions (in general the eigenfunctions of a harmonic oscillator). These allow one to compute the density, and from it the Hartree–Fock potential. Once this is done, the Schrödinger equation is solved anew, and so on. The calculation stops – convergence is reached – when the difference between the wavefunctions, or energy levels, of two successive iterations is less than a fixed value. Then the mean field potential is completely determined, and the Hartree–Fock equations become standard Schrödinger equations. The corresponding Hamiltonian is then called the Hartree–Fock Hamiltonian.
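The self-consistent loop just described can be condensed into a few lines. The sketch below iterates a toy one-dimensional mean field on a grid: solve the single-particle Schrödinger equation, build the density from the occupied orbitals, rebuild the potential from the density, and repeat until the density stops changing. The grid, the confining potential, and the density-contact coupling g are illustrative assumptions; a real nuclear Hartree–Fock calculation uses an effective interaction such as a Skyrme force and includes exchange terms.

```python
import numpy as np

# Schematic self-consistent-field loop on a 1D grid (hbar^2/2m set to 1).
n, L, n_part, g = 200, 10.0, 4, 0.5   # grid points, box size, particles, coupling
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# Kinetic operator -d^2/dx^2 by second-order finite differences.
T = (np.diag(np.full(n, 2.0)) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx**2
v_ext = 0.5 * x**2                      # external confining potential
rho = np.zeros(n)                       # empty starting density: the first pass
                                        # then yields harmonic-oscillator orbitals

for it in range(200):
    v_mf = v_ext + g * rho              # mean field depends on the density
    eps, phi = np.linalg.eigh(T + np.diag(v_mf))
    phi /= np.sqrt(dx)                  # normalize orbitals on the grid
    rho_new = (phi[:, :n_part] ** 2).sum(axis=1)
    if np.max(np.abs(rho_new - rho)) < 1e-10:   # convergence criterion
        break
    rho = 0.5 * rho + 0.5 * rho_new     # damped update stabilizes the iteration
print(it, eps[:n_part])                 # iterations used, occupied energy levels
```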
The relativistic mean field approaches
First developed in the 1970s with the work of John Dirk Walecka on quantum hadrodynamics, the relativistic models of the nucleus were refined towards the end of the 1980s by P. Ring and coworkers. The starting point of these approaches is relativistic quantum field theory. In this context, the nucleon interactions occur via the exchange of virtual particles called mesons. The idea is, in a first step, to build a Lagrangian containing these interaction terms. Second, by an application of the least action principle, one gets a set of equations of motion. The real particles (here the nucleons) obey the Dirac equation, whilst the virtual ones (here the mesons) obey the Klein–Gordon equations.
In view of the non-perturbative nature of the strong interaction, and also since the exact potential form of this interaction between groups of nucleons is relatively poorly known, the use of such an approach in the case of atomic nuclei requires drastic approximations. The main simplification consists in replacing, in the equations, all field terms (which are operators in the mathematical sense) by their mean values (which are functions). In this way, one gets a system of coupled integro-differential equations, which can be solved numerically, if not analytically.
The interacting boson model
The interacting boson model (IBM) is a model in nuclear physics in which nucleons are represented as pairs, each of them acting as a boson particle, with integral spin of 0, 2 or 4. This makes calculations feasible for larger nuclei.
There are several branches of this model; in one of them (IBM-1) one groups all types of nucleons in pairs, while in others (for instance, IBM-2) protons and neutrons are paired separately.
Spontaneous breaking of symmetry in nuclear physics
One of the focal points of all physics is symmetry. The nucleon–nucleon interaction and all effective interactions used in practice have certain symmetries. They are invariant under translation (changing the frame of reference so that directions are not altered), under rotation (turning the frame of reference around some axis), and under parity (reversing the sense of the axes), in the sense that the interaction does not change under any of these operations. Nevertheless, in the Hartree–Fock approach, solutions which are not invariant under such a symmetry can appear. One speaks then of spontaneous symmetry breaking.
Qualitatively, these spontaneous symmetry breakings can be explained in the following way: in the mean field theory, the nucleus is described as a set of independent particles, and most additional correlations among nucleons which do not enter the mean field are neglected. They can nevertheless reappear through a breaking of the symmetry of the mean field Hamiltonian, which is only approximate. If the density used to start the iterations of the Hartree–Fock process breaks certain symmetries, the final Hartree–Fock Hamiltonian may break these symmetries, if it is advantageous from the point of view of the total energy to keep them broken.
It may also converge towards a symmetric solution. In any case, if the final solution breaks the symmetry, for example, the rotational symmetry, so that the nucleus appears not to be spherical, but elliptic, all configurations deduced from this deformed nucleus by a rotation are just as good solutions for the Hartree–Fock problem. The ground state of the nucleus is then degenerate.
A similar phenomenon happens with the nuclear pairing, which violates the conservation of the number of baryons (see below).
Extensions of the mean field theories
Nuclear pairing phenomenon
The most common extension of mean field theory is nuclear pairing. Nuclei with an even number of nucleons are systematically more bound than those with an odd number. This implies that each nucleon binds with another one to form a pair; consequently, the system cannot be described as independent particles subjected to a common mean field. When the nucleus has an even number of protons and neutrons, each one of them finds a partner. To excite such a system, one must supply at least enough energy to break a pair. Conversely, in the case of an odd number of protons or neutrons, there exists an unpaired nucleon, which needs less energy to be excited.
This phenomenon is closely analogous to type 1 superconductivity in solid state physics. The first theoretical description of nuclear pairing was proposed at the end of the 1950s by Aage Bohr, Ben Mottelson, and David Pines (work which contributed to Bohr and Mottelson receiving the Nobel Prize in Physics in 1975). It was close to the BCS theory of Bardeen, Cooper and Schrieffer, which accounts for superconductivity in metals. Theoretically, the pairing phenomenon as described by BCS theory combines with the mean field theory: nucleons are both subject to the mean field potential and to the pairing interaction.
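The central quantity of such a BCS treatment is the pairing gap Δ, obtained from the gap equation Δ = (G/2) Σₖ Δ / √((εₖ − λ)² + Δ²) for levels εₖ around the chemical potential λ. The fixed-point iteration below solves it for an equally spaced toy spectrum; the level spacing and the pairing strength G are illustrative numbers, not values fitted to any nucleus.

```python
import numpy as np

# Toy BCS gap equation, solved by fixed-point iteration:
#   Delta = (G/2) * sum_k Delta / sqrt((eps_k - lam)^2 + Delta^2)
eps = np.linspace(-10.0, 10.0, 40)   # single-particle levels (MeV)
lam = 0.0                            # chemical potential at mid-spectrum
G = 0.4                              # pairing-force strength (MeV), assumed

delta = 1.0                          # starting guess for the gap (MeV)
for _ in range(1000):
    E_qp = np.sqrt((eps - lam) ** 2 + delta ** 2)   # quasiparticle energies
    delta_new = 0.5 * G * np.sum(delta / E_qp)
    if abs(delta_new - delta) < 1e-12:
        break
    delta = delta_new
print(f"pairing gap Delta = {delta:.3f} MeV")
```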
The Hartree–Fock–Bogolyubov (HFB) method is a more sophisticated approach, enabling one to consider the pairing and mean field interactions consistently on equal footing. HFB is now the de facto standard in the mean field treatment of nuclear systems.
Symmetry restoration
A peculiarity of mean field methods is the calculation of nuclear properties via explicit symmetry breaking. The calculation of the mean field with self-consistent methods (e.g. Hartree–Fock) breaks rotational symmetry, and the calculation of pairing properties breaks particle-number conservation.
Several techniques for symmetry restoration by projecting on good quantum numbers have been developed.
Particle vibration coupling
Mean field methods (possibly with symmetry restoration) are a good approximation for the ground state of the system, even though they postulate a system of independent particles. Higher-order corrections account for the fact that the particles interact with each other by means of correlations. These correlations can be introduced by taking into account the coupling of independent-particle degrees of freedom to the low-energy collective excitations of systems with even numbers of protons and neutrons.
In this way, excited states can be reproduced by means of the random phase approximation (RPA), and corrections to the ground state can also be calculated consistently (e.g. by means of nuclear field theory).
See also
Nuclear magnetic moment
CHARISSA, a nuclear structure research collaboration
Further reading
General audience
James M. Cork; Radioactivité & physique nucléaire, Dunod (1949).
Introductory texts
Luc Valentin; Le monde subatomique - Des quarks aux centrales nucléaires, Hermann (1986).
Luc Valentin; Noyaux et particules - Modèles et symétries, Hermann (1997).
David Halliday; Introductory Nuclear Physics, Wiley & Sons (1957).
Kenneth Krane; Introductory Nuclear Physics, Wiley & Sons (1987).
Carlos Bertulani; Nuclear Physics in a Nutshell, Princeton University Press (2007).
Fundamental texts
Peter E. Hodgson; Nuclear Reactions and Nuclear Structure, Oxford University Press (1971).
Irving Kaplan; Nuclear Physics, the Addison-Wesley Series in Nuclear Science & Engineering, Addison-Wesley (1956); 2nd edition (1962).
A. Bohr & B. Mottelson; Nuclear Structure, 2 vol., Benjamin (1969–1975). Volume 1: Single Particle Motion; Volume 2: Nuclear Deformations. Reissued by World Scientific Publishing Company (1998).
P. Ring & P. Schuck; The Nuclear Many-Body Problem, Springer Verlag (1980).
A. de Shalit & H. Feshbach; Theoretical Nuclear Physics, 2 vol., John Wiley & Sons (1974). Volume 1: Nuclear Structure; Volume 2: Nuclear Reactions.
References
External links
English
Institut de Physique Nucléaire (IPN), France
Facility for Antiproton and Ion Research (FAIR), Germany
Gesellschaft für Schwerionenforschung (GSI), Germany
Joint Institute for Nuclear Research (JINR), Russia
Argonne National Laboratory (ANL), USA
Riken, Japan
National Superconducting Cyclotron Laboratory, Michigan State University, USA
Facility for Rare Isotope Beams, Michigan State University, USA
French
Institut de Physique Nucléaire (IPN), France
Centre de Spectrométrie Nucléaire et de Spectrométrie de Masse (CSNSM), France
Service de Physique Nucléaire CEA/DAM, France
Institut National de Physique Nucléaire et de Physique des Particules (In2p3), France
Grand Accélérateur National d'Ions Lourds (GANIL), France
Commissariat à l'Energie Atomique (CEA), France
Centre Européen de Recherches Nucléaires, Suisse
The LIVEChart of Nuclides - IAEA
Nuclear physics
Quantum mechanics | Nuclear structure | [
"Physics"
] | 5,378 | [
"Theoretical physics",
"Quantum mechanics",
"Nuclear physics"
] |
8,753,939 | https://en.wikipedia.org/wiki/GF%20method | The GF method, sometimes referred to as FG method, is a classical mechanical method introduced by Edgar Bright Wilson to obtain certain internal coordinates for a vibrating semi-rigid molecule, the so-called normal coordinates Qk. Normal coordinates decouple the classical vibrational motions of the molecule and thus give an easy route to obtaining vibrational amplitudes of the atoms as a function of time. In Wilson's GF method it is assumed that the molecular kinetic energy consists only of harmonic vibrations of the atoms, i.e., overall rotational and translational energy is ignored. Normal coordinates appear also in a quantum mechanical description of the vibrational motions of the molecule and the Coriolis coupling between rotations and vibrations.
It follows from application of the Eckart conditions that the matrix G⁻¹ gives the kinetic energy in terms of arbitrary linear internal coordinates, while F represents the (harmonic) potential energy in terms of these coordinates. The GF method gives the linear transformation from general internal coordinates to the special set of normal coordinates.
The GF method
A non-linear molecule consisting of N atoms has 3N − 6 internal degrees of freedom, because positioning a molecule in three-dimensional space requires three degrees of freedom, and the description of its orientation in space requires another three degrees of freedom. These degrees of freedom must be subtracted from the 3N degrees of freedom of a system of N particles.
The interaction among atoms in a molecule is described by a potential energy surface (PES), which is a function of 3N − 6 coordinates. The internal degrees of freedom s1, ..., s3N−6 describing the PES in an optimal way are often non-linear; they are for instance valence coordinates, such as bending and torsion angles and bond stretches. It is possible to write the quantum mechanical kinetic energy operator for such curvilinear coordinates, but it is hard to formulate a general theory applicable to any molecule. This is why Wilson linearized the internal coordinates by assuming small displacements. The linearized version of the internal coordinate st is denoted by St.
The PES V can be Taylor expanded around its minimum in terms of the St. The third term (the Hessian of V), evaluated at the minimum, is the force constant matrix F. In the harmonic approximation the Taylor series is terminated after this term. The second term, containing first derivatives, is zero because it is evaluated at the minimum of V. The first term can be included in the zero of energy.
Thus,

$$2V \approx \sum_{s,t=1}^{3N-6} F_{st}\, S_s\, S_t .$$
The classical vibrational kinetic energy has the form:

$$2T = \sum_{s,t=1}^{3N-6} g_{st}(\mathbf{s})\, \dot{s}_s\, \dot{s}_t ,$$

where $g_{st}$ is an element of the metric tensor of the internal (curvilinear) coordinates. The dots indicate time derivatives. Mixed terms generally present in curvilinear coordinates are not present here, because only linear coordinate transformations are used. Evaluation of the metric tensor g at the minimum s⁰ of V gives the positive definite and symmetric matrix G = g(s⁰)⁻¹.
One can solve the two matrix problems

$$\mathbf{L}^{\mathrm{T}}\, \mathbf{F}\, \mathbf{L} = \boldsymbol{\Phi} \quad\text{and}\quad \mathbf{L}^{\mathrm{T}}\, \mathbf{G}^{-1}\, \mathbf{L} = \mathbf{E}$$

simultaneously, since they are equivalent to the generalized eigenvalue problem

$$\mathbf{G}\,\mathbf{F}\,\mathbf{L} = \mathbf{L}\,\boldsymbol{\Phi} ,$$

where $\boldsymbol{\Phi} = \operatorname{diag}(f_1,\ldots,f_{3N-6})$ with $f_i$ equal to $4\pi^2 \nu_i^2$ ($\nu_i$ is the frequency of normal mode $i$), and $\mathbf{E}$ is the unit matrix. The matrix $\mathbf{L}^{-1}$ contains the normal coordinates $Q_k$ in its rows:

$$Q_k = \sum_{t=1}^{3N-6} \big(\mathbf{L}^{-1}\big)_{kt}\, S_t , \qquad k = 1,\ldots,3N-6 .$$

Because of the form of the generalized eigenvalue problem, the method is called the GF method, often with the name of its originator attached to it: Wilson's GF method. By transposing both sides of the equation and using the fact that both G and F are symmetric matrices, as are diagonal matrices, one can recast this equation into a very similar one for FG. This is why the method is also referred to as Wilson's FG method.
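Numerically, the frequencies follow from the eigenvalues of the matrix product GF. The sketch below checks this on the simplest possible case, a diatomic molecule, where both matrices are 1 × 1: G = 1/m₁ + 1/m₂ and F = k. The force constant and masses are rough HCl-like values chosen for illustration.

```python
import numpy as np

# Wilson GF method: the eigenvalues of G @ F are f_i = 4 pi^2 nu_i^2.
u = 1.66053906660e-27        # atomic mass unit (kg)
c = 2.99792458e10            # speed of light (cm/s), to report wavenumbers

m1, m2 = 1.008 * u, 34.97 * u   # H and Cl masses
k = 516.0                       # bond-stretch force constant (N/m), assumed

G = np.array([[1.0 / m1 + 1.0 / m2]])   # G matrix for a single bond stretch
F = np.array([[k]])                      # force-constant matrix

f = np.linalg.eigvals(G @ F).real        # f = 4 pi^2 nu^2
nu = np.sqrt(f) / (2.0 * np.pi)          # harmonic frequency (Hz)
print(nu / c)                            # ~2990 cm^-1, near HCl's stretch mode
```

For a polyatomic molecule the same two lines apply unchanged, with G and F of dimension (3N − 6) × (3N − 6).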
We introduce the vectors

$$\mathbf{s} = \operatorname{col}(S_1,\ldots,S_{3N-6}) \quad\text{and}\quad \mathbf{Q} = \operatorname{col}(Q_1,\ldots,Q_{3N-6}),$$

which satisfy the relation

$$\mathbf{s} = \mathbf{L}\,\mathbf{Q} .$$

Upon use of the results of the generalized eigenvalue equation, the energy E = T + V (in the harmonic approximation) of the molecule becomes:

$$2E = \dot{\mathbf{s}}^{\mathrm{T}}\,\mathbf{G}^{-1}\,\dot{\mathbf{s}} + \mathbf{s}^{\mathrm{T}}\,\mathbf{F}\,\mathbf{s} = \dot{\mathbf{Q}}^{\mathrm{T}}\dot{\mathbf{Q}} + \mathbf{Q}^{\mathrm{T}}\boldsymbol{\Phi}\,\mathbf{Q} = \sum_{t=1}^{3N-6} \big( \dot{Q}_t^2 + f_t\, Q_t^2 \big).$$

The Lagrangian L = T − V is

$$L = \tfrac{1}{2} \sum_{t=1}^{3N-6} \big( \dot{Q}_t^2 - f_t\, Q_t^2 \big).$$

The corresponding Lagrange equations are identical to the Newton equations

$$\ddot{Q}_t + f_t\, Q_t = 0$$

for a set of uncoupled harmonic oscillators. These ordinary second-order differential equations are easily solved, yielding $Q_t$ as a function of time; see the article on harmonic oscillators.
Normal coordinates in terms of Cartesian displacement coordinates
Often the normal coordinates are expressed as linear combinations of Cartesian displacement coordinates.
Let $\mathbf{R}_A$ be the position vector of nucleus A and $\mathbf{R}_A^0$ the corresponding equilibrium position. Then

$$\mathbf{x}_A \equiv \mathbf{R}_A - \mathbf{R}_A^0$$

is by definition the Cartesian displacement coordinate of nucleus A.
Wilson's linearizing of the internal curvilinear coordinates $q_t$ expresses the coordinate $S_t$ in terms of the displacement coordinates

$$S_t = \sum_{A=1}^{N} \sum_{i=1}^{3} s^t_{Ai}\, x_{Ai} = \sum_{A=1}^{N} \mathbf{s}^t_A \cdot \mathbf{x}_A , \qquad t = 1,\ldots,3N-6 ,$$

where $\mathbf{s}^t_A$ is known as a Wilson s-vector. If we put the $s^t_{Ai}$ into a (3N − 6) × 3N matrix B, this equation becomes in matrix language

$$\mathbf{S} = \mathbf{B}\, \mathbf{x} .$$
The actual form of the matrix elements of B can be fairly complicated.
Especially for a torsion angle, which involves 4 atoms, it requires tedious vector algebra to derive the corresponding values of the $\mathbf{s}^t_A$. For more details on this method, known as the Wilson s-vector method, see the book by Wilson et al., or the article molecular vibration. Now,

$$\mathbf{L}\,\mathbf{Q} = \mathbf{S} = \mathbf{B}\,\mathbf{x} ,$$

which can be inverted and put in summation language:

$$Q_k = \sum_{t=1}^{3N-6} \big(\mathbf{L}^{-1}\big)_{kt}\, S_t = \sum_{i=1}^{3N} D_{ki}\, x_i , \qquad k = 1,\ldots,3N-6 , \quad\text{with}\quad \mathbf{D} \equiv \mathbf{L}^{-1}\mathbf{B} .$$

Here D is a (3N − 6) × 3N matrix, which is given by (i) the linearization of the internal coordinates s (an algebraic process) and (ii) the solution of Wilson's GF equations (a numerical process).
Matrices involved in the analysis
There are several related coordinate systems commonly used in the GF matrix analysis. These quantities are related by a variety of matrices. For clarity, we provide the coordinate systems and their interrelations here.
The relevant coordinates are:
Cartesian coordinates for each atom
Internal coordinates for each atom
Mass-weighted Cartesian coordinates
Normal coordinates
These different coordinate systems are related to one another by:
$\mathbf{S} = \mathbf{B}\,\mathbf{x}$, i.e. the matrix B transforms the Cartesian coordinates to the (linearized) internal coordinates.
$\mathbf{q} = \mathbf{M}^{1/2}\,\mathbf{x}$, i.e. the mass matrix M transforms the Cartesian coordinates to the mass-weighted Cartesian coordinates.
A further matrix transforms the normal coordinates to the mass-weighted coordinates.
$\mathbf{S} = \mathbf{L}\,\mathbf{Q}$, i.e. the matrix L transforms the normal coordinates to the (linearized) internal coordinates.
Note the useful relationship $\mathbf{G} = \mathbf{L}\,\mathbf{L}^{\mathrm{T}}$, which follows from $\mathbf{L}^{\mathrm{T}}\,\mathbf{G}^{-1}\,\mathbf{L} = \mathbf{E}$.
These matrices allow one to construct the G matrix quite simply as

$$\mathbf{G} = \mathbf{B}\,\mathbf{M}^{-1}\,\mathbf{B}^{\mathrm{T}} .$$
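The construction G = B M⁻¹ Bᵀ is easy to verify in code. The minimal sketch below does it for a diatomic stretch along the x axis, whose B-matrix row is (−1, 0, 0, 1, 0, 0); the masses are arbitrary illustrative values, and the result reproduces the expected 1/m₁ + 1/m₂.

```python
import numpy as np

# Build the Wilson G matrix as G = B M^{-1} B^T for a single bond stretch.
# Two atoms on the x axis; the stretch coordinate changes by (-1, +1) times
# the x displacements of atoms 1 and 2, so B has one 1 x 6 row.
m1, m2 = 1.0, 19.0                        # masses in amu (illustrative)

B = np.array([[-1.0, 0.0, 0.0, 1.0, 0.0, 0.0]])
Minv = np.diag([1.0 / m1] * 3 + [1.0 / m2] * 3)   # diagonal inverse-mass matrix

G = B @ Minv @ B.T
print(G)   # [[1/m1 + 1/m2]], the reduced-mass result expected for a stretch
```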
Relation to Eckart conditions
From the invariance of the internal coordinates St under overall rotation and translation of the molecule, the same follows for the linearized coordinates $\mathbf{s}^t_A$. It can be shown that this implies that the following 6 conditions are satisfied by the internal coordinates:

$$\sum_{A=1}^{N} \mathbf{s}^t_A = 0 \quad\text{(translational invariance)} \qquad\text{and}\qquad \sum_{A=1}^{N} \mathbf{R}^0_A \times \mathbf{s}^t_A = 0 \quad\text{(rotational invariance)}, \qquad t = 1,\ldots,3N-6 .$$

These conditions follow from the Eckart conditions that hold for the displacement vectors:

$$\sum_{A=1}^{N} M_A\, \mathbf{x}_A = 0 \qquad\text{and}\qquad \sum_{A=1}^{N} M_A\, \mathbf{R}^0_A \times \mathbf{x}_A = 0 .$$
References
Further references
Spectroscopy
Molecular physics
Quantum chemistry | GF method | [
"Physics",
"Chemistry"
] | 1,367 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Quantum chemistry",
"Instrumental analysis",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"nan",
"Atomic",
"Spectroscopy",
" and optical physics"
] |
8,757,878 | https://en.wikipedia.org/wiki/Medical%20Products%20Agency%20%28Sweden%29 | The Medical Products Agency (MPA; ) is the government agency in Sweden responsible for regulation and surveillance of the development, manufacturing and sale of medicinal drugs, medical devices and cosmetics.
Its task is also to ensure that both patients and healthcare professionals have access to safe and effective medicinal products and that these are used in a rational and cost-effective manner.
The Swedish Medical Products Agency is one of the leading regulatory authorities in the EU. During the last five years, the Swedish MPA has been among the top three agencies in Europe in terms of the number of approval processes managed for central (i.e. European) approvals of medicines. The Swedish MPA also has strong representation in more than 110 working groups and committees within the scope of the Heads of Medicines Agencies (HMA) and the European Medicines Agency (EMA) for the regulation of medical products in Europe.
The Medical Products Agency is a government body under the aegis of the Swedish Ministry of Health and Social Affairs. Its operations are largely financed through fees. Approximately 750 people work at the agency; most are pharmacists and doctors.
General directors
1990–1999: Kjell Strandberg
1999–2008: Gunnar Alván
2008–2014: Christina Rångemark Åkerman
2014–2020: Catarina Andersson Forsman
2020–2021: Joakim Brandberg (acting)
2021– : Björn Eriksson
Criticism
In 2016, the Swedish National Audit Office published an audit report examining how the state (the government, the Medical Products Agency, the National Board of Health and Welfare and the Swedish Agency for Medical and Social Evaluation) handles the pharmaceutical industry's influence over state drug control and knowledge management. In its review, the National Audit Office sharply criticizes the Medical Products Agency for shortcomings on several points. However, the Medical Products Agency has pointed out that several of the conclusions were based on claims that lack objective support. The government also rejected large parts of the National Audit Office's criticism.
See also
European Medicines Agency
References
External links
The Swedish Medical Products Agency, MPA
The Innovation Office at the MPA
National agencies for drug regulation
Government agencies of Sweden
Medical and health organizations based in Sweden | Medical Products Agency (Sweden) | [
"Chemistry"
] | 434 | [
"National agencies for drug regulation",
"Drug safety"
] |
8,758,154 | https://en.wikipedia.org/wiki/Berberine | Berberine is a quaternary ammonium salt from the protoberberine group of benzylisoquinoline alkaloids, occurring naturally as a secondary metabolite in some plants including species of Berberis, from which its name is derived.
Due to their yellow pigmentation, raw Berberis materials were once commonly used to dye wool, leather, and wood. Under ultraviolet light, berberine shows a strong yellow fluorescence, making it useful in histology for staining heparin in mast cells. As a natural dye, berberine has a color index of 75160.
Research
Studies on the pharmacological effects of berberine, including its potential use as a medicine, are preliminary basic research: some studies are conducted on cell cultures or animal models, whereas clinical trials investigating the use of berberine in humans are limited. A 2023 review study stated that berberine may improve lipid concentrations. High-quality, large clinical studies are needed to properly evaluate the effectiveness and safety of berberine in various health conditions, because existing studies are insufficient to draw reliable conclusions.
Berberine supplements are widely available in the U.S. but have not been approved by the U.S. Food and Drug Administration (FDA) for any specific medical use. Researchers publicly warn that studies linking berberine to supposed health benefits are limited. Furthermore, the quality of berberine supplements can vary between brands. A study conducted in 2017 found that out of 15 different products sold in the U.S., only six contained at least 90% of the specified amount of berberine.
Drug interactions
Berberine is known to inhibit the activity of CYP3A4, an important enzyme involved in drug metabolism and clearance of endogenous substances, including steroid hormones such as cortisol, progesterone and testosterone. Several studies have demonstrated that berberine can increase the concentrations of cyclosporine in renal transplant patients and midazolam in healthy adult volunteers, confirming its inhibitory effect on CYP3A4.
Biological sources
Berberis vulgaris (barberry)
Berberis aristata (tree turmeric)
Berberis thunbergii
Fibraurea tinctoria
Mahonia aquifolium (Oregon grape)
Hydrastis canadensis (goldenseal)
Xanthorhiza simplicissima (yellowroot)
Phellodendron amurense (Amur cork tree)
Coptis chinensis (Chinese goldthread)
Tinospora cordifolia
Argemone mexicana (prickly poppy)
Eschscholzia californica (California poppy)
Berberine is usually found in the roots, rhizomes, stems, and bark.
Biosynthesis
The alkaloid berberine has a tetracyclic skeleton derived from a benzyltetrahydroisoquinoline system with the incorporation of an extra carbon atom as a bridge. Formation of the berberine bridge is rationalized as an oxidative process in which the N-methyl group, supplied by S-adenosyl methionine (SAM), is oxidized to an iminium ion, and a cyclization to the aromatic ring occurs by virtue of the phenolic group.
Reticuline is the immediate precursor of protoberberine alkaloids in plants. Berberine is an alkaloid derived from tyrosine. L-DOPA and 4-hydroxypyruvic acid both come from L-tyrosine. Although two tyrosine molecules are used in the biosynthetic pathway, only the phenethylamine fragment of the tetrahydroisoquinoline ring system is formed via DOPA; the remaining carbon atoms come from tyrosine via 4-hydroxyphenylacetaldehyde.
References
Aromatase inhibitors
Benzodioxoles
Benzylisoquinoline alkaloids
CYP2D6 inhibitors
CYP3A4 inhibitors
DNA-binding substances
Hypolipidemic agents
M2 receptor agonists
Nitrogen heterocycles
Quaternary ammonium compounds
Traditional Chinese medicine | Berberine | [
"Biology"
] | 845 | [
"Genetics techniques",
"DNA-binding substances"
] |
8,758,178 | https://en.wikipedia.org/wiki/FlyBase | FlyBase is an online bioinformatics database and the primary repository of genetic and molecular data for the insect family Drosophilidae. For the most extensively studied species and model organism, Drosophila melanogaster, a wide range of data are presented in different formats.
Information in FlyBase originates from a variety of sources ranging from large-scale genome projects to the primary research literature. These data types include mutant phenotypes; molecular characterization of mutant alleles; and other deviations, cytological maps, wild-type expression patterns, anatomical images, transgenic constructs and insertions, sequence-level gene models, and molecular classification of gene product functions. Query tools allow navigation of FlyBase through DNA or protein sequence, by gene or mutant name, or through terms from the several ontologies used to capture functional, phenotypic, and anatomical data. The database offers several different query tools in order to provide efficient access to the data available and facilitate the discovery of significant relationships within the database. Links between FlyBase and external databases, such as BDGP or modENCODE, provide opportunities for further exploration into other model organism databases and other resources of biological and molecular information. The FlyBase project is carried out by a consortium of Drosophila researchers and computer scientists at Harvard University and Indiana University in the United States, and University of Cambridge in the United Kingdom.
FlyBase is one of the organizations contributing to the Generic Model Organism Database (GMOD).
The FlyBase home page has requested a website access fee of US$150.00 per person per year, stating that "The NHGRI has reduced the funding of FlyBase by 50%".
Background
Drosophila melanogaster has been an experimental organism since the early 1900s and has since been placed at the forefront of many areas of research. As this field of research spread and became global, researchers working on the same problems needed a way to communicate and monitor progress in the field. This niche was initially filled by community newsletters such as the Drosophila Information Service (DIS), which dates back to 1934, when the field was starting to spread from Thomas Hunt Morgan's lab. Material in these pages presented regular 'catalogs' of mutations and bibliographies of the Drosophila literature. As computer infrastructure developed in the '80s and '90s, these newsletters gave way and merged with internet mailing lists, and these eventually became online resources and data. In 1992, data on the genetics and genomics of D. melanogaster and related species became electronically available over the Internet through the FlyBase, BDGP (Berkeley Drosophila Genome Project) and EDGP (European Drosophila Genome Project) informatics groups. These groups recognized that most genome project and community data types overlapped. They decided it would be of value to present the scientific community with an integrated view of the data. In October 1992, the National Center for Human Genome Research of the NIH funded the FlyBase project with the objective of designing, building and releasing a database of genetic and molecular information concerning Drosophila melanogaster. FlyBase also receives support from the Medical Research Council, London. In 1998, the FlyBase consortium integrated the information into a single Drosophila genomics server. The FlyBase project has since been carried out by a consortium of Drosophila researchers and computer scientists at Harvard University, University of Cambridge (UK), Indiana University and the University of New Mexico.
Contents
FlyBase contains a complete annotation of the Drosophila melanogaster genome that is updated several times per year. It also included a searchable bibliography of research on Drosophila genetics in the last century. Information on current researchers, and a partial pedigree of relationships between current researchers, was searchable, based on registration of the participating scientist. The site also provides a large database of images illustrating the full genome, and several movies detailing embryogenesis (ImageBrowser). The two major tributaries to the database are the large multispecies data sets deposited by the Drosophila 12 Genomes Consortium (Clark et al. 2007) and Crosby et al. 2007.
Search Strategies: Gene reports for genes from all twelve sequenced Drosophila genomes are available in FlyBase. There are four main ways these data can be browsed: precomputed files, BLAST, Gbrowse, and gene report pages. Gbrowse and precomputed files are for genome-wide analysis, bioinformatics, and comparative genomics. BLAST and gene report pages are for a specific gene, protein, or region across the species.
When looking for cytology there are two main tools available. Use Cytosearch when looking for cytologically mapped genes or deficiencies that have not been molecularly mapped to the sequence. Use Gbrowse when looking for molecularly mapped sequences, insertions, or Affymetrix probes.
There are two main query tools in FlyBase. The first main query tool is called Jump to Gene (J2G). This is found in the top right of the blue navigation bar on every page of FlyBase. This tool is useful when you know exactly what you are looking for and want to go to the report page with that data. The second main query tool is called QuickSearch. This is located on the FlyBase homepage. This tool is most useful when you want to look up something quickly that you may only know a little about. Searching can be performed within D. melanogaster only or within all species. Data other than genes can be searched using the ‘data class’ menu.
Related research
The following provides two examples of research that is related to or uses FlyBase:
The first is a study of expressed genes from alate (meaning "having wings") Toxoptera citricida, more commonly known as the brown citrus aphid. The brown citrus aphid is considered the primary vector of citrus tristeza virus, a severe pathogen which causes losses to citrus industries worldwide. The winged form of this aphid can fly long distances with the wind, enabling them to spread the citrus tristeza virus in citrus growing regions. To better understand the biology of the brown citrus aphid and the emergence of genes expressed during wing development, researchers undertook a large-scale 5′ end sequencing project of cDNA clones from winged aphids. Similar large-scale expressed sequence tag (EST) sequencing projects from other insects have provided a vehicle for answering biological questions relating to development and physiology. Although there is a growing database in GenBank of ESTs from insects, most are from Drosophila melanogaster, with relatively few specifically derived from aphids. The researchers were able to provide a large data set of ESTs from the alate (winged) brown citrus aphid and have begun to analyze this valuable resource. They were able to do this with the help of information on Drosophila melanogaster in FlyBase. Putative sequence identity was determined using BLAST searches. Sequence matches with E-values ≤ 10⁻¹⁰ were considered significant and were categorized according to the Gene Ontology (GO) classification system based on annotation of the 5 'best hit' matches in BLASTX searches. All D. melanogaster matches were cataloged using FlyBase. Nearly all of these 'best hit' matches were characterized with respect to the functionally annotated genes in D. melanogaster using FlyBase. Genetic information is crucial to advancing the understanding of aphid biology, and will play a major role in the development of future non-chemical, gene-based control strategies against these insect pests.
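The screening step described here is easy to automate. The sketch below filters tabular BLAST output at the E ≤ 10⁻¹⁰ threshold and tallies the most frequent best hits; the file name "hits.tsv" and its column layout (the standard BLAST tabular format, with the E-value in column 11) are assumptions for illustration.

```python
import csv
from collections import Counter

# Sketch of the screening step described above: keep BLASTX hits with
# E-value <= 1e-10 and keep the single best hit per EST query.
# "hits.tsv" and its column layout (standard BLAST tabular output: query,
# subject, %identity, ..., evalue in column 11, bitscore) are assumptions.
best_hits = {}
with open("hits.tsv", newline="") as fh:
    for row in csv.reader(fh, delimiter="\t"):
        query, subject, evalue = row[0], row[1], float(row[10])
        if evalue > 1e-10:               # the study's significance threshold
            continue
        if query not in best_hits or evalue < best_hits[query][1]:
            best_hits[query] = (subject, evalue)

# Tally which D. melanogaster genes are hit most often by the aphid ESTs.
counts = Counter(subject for subject, _ in best_hits.values())
print(counts.most_common(10))
```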
Enhancing Drosophila Gene Ontology Annotation: What gene products do and where they do it are important questions for biologists. The Gene Ontology project was established 13 years ago in order to summarize this data consistently across different databases by using a common set of defined vocabulary terms. They also encode relationships between terms. The Gene Ontology Project is a major bioinformatics initiative with the aim of standardizing the representation of gene and gene product attributes across species and databases. The project also provides gene product annotation data from GO consortium members. FlyBase was one of the three founding members of the Gene Ontology Consortium. GO annotation comprises at least three components: a GO term that describes molecular function, biological role, or subcellular location; an "evidence code" that describes the type of analysis used to support the GO term; and an attribution to a specific reference. GO annotation is useful for both small-scale and large-scale analyses. It can provide a first indication of the nature of a gene product and, in conjunction with evidence codes, point directly to papers with pertinent experimental data. The current priorities for annotation are: homologs of human disease genes, genes that are highly conserved across species, genes involved in biochemical/signaling pathways, and topical genes shown to be of significant interest in recent publications. FlyBase has been contributing GO annotations to the project since it started in August 2006. GO annotations appear on the Gene Report page in FlyBase. GO data are searchable in FlyBase using both TermLink and QueryBuilder. The GO is dynamic and can change on a daily basis, for example the addition of new terms. To keep up, FlyBase loads a new version of the GO every one or two releases of FlyBase. The GO annotation set is submitted to the GOC at the same time as a new version of FlyBase is released.
See also
List of Drosophila databases
Model Organism Databases
WormBase
Xenbase
Notes and references
External links
Official Site
Drosophila melanogaster genetics
Insect developmental biology
Model organism databases | FlyBase | [
"Biology"
] | 2,008 | [
"Model organism databases",
"Model organisms"
] |
11,894,762 | https://en.wikipedia.org/wiki/Polyad | In mathematics, polyad is a concept of category theory introduced by Jean Bénabou in generalising monads. A polyad in a bicategory D is a bicategory morphism Φ from a locally punctual bicategory C to D, . (A bicategory C is called locally punctual if all hom-categories C(X,Y) consist of one object and one morphism only.) Monads are polyads where C has only one object.
Notes
Bibliography
Category theory | Polyad | [
"Mathematics"
] | 113 | [
"Functions and mappings",
"Mathematical structures",
"Category theory stubs",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory"
] |
11,895,109 | https://en.wikipedia.org/wiki/MicroMegas%20detector | The MicroMegas detector (Micro-Mesh Gaseous Structure) is a gaseous particle detector and an advancement of the wire chamber. Invented in 1996 by Georges Charpak and Ioannis Giomataris, Micromegas detectors are mainly used in experimental physics, in particular in particle physics, nuclear physics and astrophysics for the detection of ionizing particles.
Micromegas detectors are used to detect passing charged particles and obtain properties such as position, arrival time and momentum. The advantage of the Micromegas technology is a high gain of 10⁴ while operating with small response times, of the order of 100 ns. This is realized by dividing the gas chamber with a microscopic mesh, which makes the Micromegas detector a micropattern gaseous detector. In order to minimize the perturbation of the impinging particle, the detector is just a few millimeters thick.
Working principle
Ionization and charge amplification
While passing through the detector, a particle ionizes the gas, resulting in an electron/ion pair. Due to an electric field of the order of 400 V/cm, the pair does not recombine: the electron drifts toward the amplification electrode (the mesh) and the ion toward the cathode. Close to the mesh, the electron is accelerated by an intense electric field, typically of the order of 40 kV/cm in the amplification gap. This creates more electron/ion pairs, resulting in an electron avalanche. A gain of the order of 10⁴ creates a sufficiently large signal to be read out by the intended electrode. The readout electrode is usually segmented into strips and pixels in order to reconstruct the position of the impinging particle. The amplitude and the shape of the signal allow users to obtain information about the arrival time and energy of the impinging particle.
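In a uniform amplification gap the avalanche gain grows exponentially with the gap thickness, G = exp(α·d), where α is the first Townsend coefficient of the gas. The sketch below inverts this relation for a 128 μm gap (a common Micromegas choice); the numbers are illustrative, since α really depends on the gas mixture, pressure, and field.

```python
import math

# Avalanche gain in a uniform amplification gap: G = exp(alpha * d), where
# alpha is the first Townsend coefficient (ionizations per unit drift length).
# The 128 um gap is a common Micromegas choice; alpha is solved here from the
# quoted gain of ~1e4 rather than taken from gas data.
d = 128e-6                       # amplification-gap thickness (m)
gain_target = 1e4

alpha = math.log(gain_target) / d
print(f"required Townsend coefficient: {alpha:.2e} ionizations/m")
print(f"gain check: {math.exp(alpha * d):.0f}")
```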
Analog signal of a Micromegas
The signal is induced by the movement of charges in the volume between the micro-mesh and the readout electrode, called the amplification gap. The roughly 100 ns long signal consists of a fast electron peak and a slow ion tail. Since the electron mobility in the gas is over 1000 times higher than the ion mobility, the electron signal is registered much faster than the ionic signal. The electron signal allows a precise measurement of the arrival time, while the ionic signal is necessary to reconstruct the energy of the particle.
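The split between the fast electron peak and the slow ion tail follows directly from the drift times across the gap. The estimate below uses order-of-magnitude assumptions for the electron drift velocity and ion mobility in a typical gas mixture; it reproduces the ~ns electron peak and ~100 ns ion tail described above.

```python
# Rough duration of the two signal components in the amplification gap:
# each charge species induces current for the time it takes to cross the gap.
# The drift velocity and ion mobility are order-of-magnitude assumptions for
# a typical gas mixture, not measured values.
d = 128e-6          # amplification-gap thickness (m)
E = 4.0e6           # amplification field (V/m), i.e. 40 kV/cm
v_electron = 1e5    # electron drift velocity at high field (m/s), assumed
mu_ion = 2e-4       # ion mobility (m^2 V^-1 s^-1), assumed

t_electron = d / v_electron      # duration of the fast electron peak
t_ion = d / (mu_ion * E)         # duration of the slow ion tail
print(f"electron peak ~ {t_electron * 1e9:.1f} ns, ion tail ~ {t_ion * 1e9:.0f} ns")
```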
History
First concept at the Hadron Blind Detector
In 1991, to improve the detection of hadrons in the Hadron Blind Detector experiment, I. Giomataris and G. Charpak reduced the amplification gap of a parallel-plate spark chamber in order to shorten the response time. A 1 mm amplification gap prototype was built for the HBD experiment, but the gain was not uniform enough to be used in the experiment: the millimeter gap was not controlled well enough and created large gain fluctuations. Nevertheless, the benefits of a reduced amplification gap had been demonstrated, and the Micromegas concept was born in October 1992, shortly before the announcement of the attribution of the Nobel Prize to Georges Charpak for the invention of the wire chambers. Georges Charpak used to say that this detector and some other new concepts belonging to the family of micro-pattern gaseous detectors (MPGDs) would revolutionize nuclear and particle physics just as his detector had done.
The Micromegas technology research and development
Starting in 1992 at CEA Saclay and CERN, the Micromegas technology has been developed to provide more stable, reliable, precise and faster detectors. In 2001, twelve large Micromegas detectors of 40 x 40 cm2 were used for the first time in a large scale experiment at COMPASS situated on the Super Proton Synchrotron accelerator at CERN.
Another example of the development of the Micromegas detectors is the invention of the “bulk” technology. The “bulk” technology consists of the integration of the micro-mesh with the printed circuit board carrying the readout electrodes in order to build a monolithic detector. Such a detector is very robust and can be produced via an industrial process (a successful implementation was demonstrated by 3M in 2006) allowing public applications. For instance, by modifying the micro-mesh in order to make it photo-sensitive to UV light, Micromegas detectors can be used to detect forest fires. A photo-sensitive Micromegas is also used for fast-timing applications. The PICOSEC-Micromegas uses a Cherenkov radiator and a photocathode in front of the gaseous volume and a time resolution of 24 ps is measured with minimum ionizing particles.
Micromegas detectors in experimental physics
Micromegas detectors are used in several experiments :
Hadronic physics: COMPASS, NA48, and projects for the ILC-TPC and CLAS12 at J-lab are under active study
Particle physics: T2K, CAST, HELAZ, IAXO
Neutron physics: nTOF, ESS nBLM
Micromegas detectors will be used in the ATLAS experiment as part of the planned upgrade of its muon spectrometer.
See also
Gaseous ionization detector
Micropattern gaseous detector
Gas electron multiplier
Notes and references
Particle detectors | MicroMegas detector | [
"Technology",
"Engineering"
] | 1,067 | [
"Particle detectors",
"Measuring instruments"
] |
11,895,648 | https://en.wikipedia.org/wiki/QMAP | QMAP was a balloon experiment to measure the anisotropy of the cosmic microwave background (CMB). It flew twice in 1996, and was used with an interlocking scan of the skies to produce CMB maps at angular scales between 0.7° and 9°.
The gondola was later used for ground-based observations in the MAT/TOCO experiment.
See also
Cosmic microwave background experiments
Observational cosmology
References
Physics experiments
Cosmic microwave background experiments
Balloon-borne telescopes | QMAP | [
"Physics"
] | 103 | [
"Experimental physics",
"Physics experiments"
] |
11,896,846 | https://en.wikipedia.org/wiki/TRPML | TRPML (transient receptor potential cation channel, mucolipin subfamily) comprises a group of three evolutionarily related proteins that belongs to the large family of transient receptor potential ion channels. The three proteins TRPML1, TRPML2 and TRPML3 are encoded by the mucolipin-1 (MCOLN1), mucolipin-2 (MCOLN2) and mucolipin-3 (MCOLN3) genes, respectively.
The three members of the TRPML ("ML" for mucolipin) sub-family are not extremely well characterized. TRPML1 is known to be localized in late endosomes. This subunit also contains a lipase domain between its S1 and S2 segments. While the function of this domain is unknown, it has been proposed to be involved in channel regulation. Physiological studies have described TRPML1 channels as proton leak channels in lysosomes responsible for preventing these organelles from becoming too acidic. TRPML2 and TRPML3 are more poorly characterized than TRPML1.
Deficiencies can lead to enlarged vesicles.
Genes
(TRPML1)
(TRPML2)
(TRPML3)
References
External links
Membrane proteins
Ion channels | TRPML | [
"Chemistry",
"Biology"
] | 267 | [
"Neurochemistry",
"Ion channels",
"Protein classification",
"Membrane proteins"
] |
11,896,851 | https://en.wikipedia.org/wiki/TRPP | TRPP (transient receptor potential polycystic) is a family of transient receptor potential ion channels which when mutated can cause polycystic kidney disease.
Subcategories
TRPP subunits can be divided into two subcategories depending on structural similarity.
Polycystic Kidney Disease 1 (PKD1)-Like Group
The first group, polycystic kidney disease 1 (PKD1)-like, contains polycystin-1 (previously known as TRPP1), PKDREJ, PKD1L1, PKD1L2, and PKD1L3. Polycystin-1 contains numerous N-terminal adhesive domains that are important for cell-cell contact. This group of subunits also contains a large extracellular domain with numerous polycystin motifs. These motifs are of unknown function and are located between the S6 and S7 segments. The large intracellular C-terminal segment of polycystin-1 seems to interact with TRPP2 to act as a signaling complex.
Polycystic Kidney Disease 2 (PKD2)-Like Group
This group of TRPP members (previously known as TRPP2-like) comprises: TRPP1 (previously known as TRPP2 or PKD2), TRPP2 (previously known as TRPP3 or PKDL2), and TRPP3 (previously known as TRPP5 or polycystin-L2). Unlike the previous group, whose members contain 11 membrane-spanning segments, this group resembles other TRP channels, having 6 membrane-spanning segments with intracellular N- and C-termini. All of the members of this group contain a coiled coil region in their C-terminus involved in the interaction with the polycystin-1 group. TRPP1 and TRPP3 form constitutively active cation-selective ion channels that are permeable to calcium. TRPP2 has also been implicated in sour taste perception. Coupling of PKD1 and TRPP1 recruits TRPP1 to the membrane. Here, its activity is decreased and it suppresses the activation of G proteins by PKD1.
Genes
Group 1: polycystic kidney disease 1 (PKD1) like proteins
PKD1
Group 2: polycystic kidney disease 2 (PKD2) like proteins
TRPP1
TRPP2
TRPP3
See also
Polycystic kidney disease
References
External links
Membrane proteins
Ion channels | TRPP | [
"Chemistry",
"Biology"
] | 516 | [
"Neurochemistry",
"Ion channels",
"Protein classification",
"Membrane proteins"
] |
11,896,859 | https://en.wikipedia.org/wiki/TRPA%20%28ion%20channel%29 | TRPA is a family of transient receptor potential ion channels. The TRPA family is made up of 7 subfamilies: TRPA1, TRPA- or TRPA1-like, TRPA5, painless, pyrexia, waterwitch, and HsTRPA. TRPA1 is the only subfamily widely expressed across animals, while the other subfamilies (collectively referred to as the basal clade) are largely absent in deuterostomes (and in the case of HsTRPA, only expressed in hymenopteran insects).
TRPA1s have been the most extensively studied subfamily; they typically contain 14 N-terminal ankyrin repeats and are believed to function as mechanical stress, temperature, and chemical sensors. TRPA1 is known to be activated by compounds such as isothiocyanates (the pungent chemicals in substances such as mustard oil and wasabi) and Michael acceptors (e.g. cinnamaldehyde). These compounds are capable of forming covalent chemical bonds with the protein's cysteines. Non-covalent activators of TRPA1 also exist, such as methyl salicylate, menthol, and the synthetic compound PF-4840154.
The thermal sensitivity of TRPAs varies by species. For example, TRPA1 functions as a high-temperature sensor in insects and snakes, but as a cold sensor in mammals. The basal TRPAs have evolved some degree of thermal sensitivity as well: painless and pyrexia function in high-temperature sensing in Drosophila melanogaster, and the honey bee HsTRPA underwent neofunctionalization following its divergence from waterwitch, gaining function as a high-temperature sensor.
TRPA1's promiscuity with respect to sensory modality has been the source of controversy, particularly when considering its ability to detect cold. More recent work has alternatively (or additionally) proposed that reactive oxygen species activate TRPA1 across species.
References
External links
Membrane proteins
Ion channels | TRPA (ion channel) | [
"Chemistry",
"Biology"
] | 426 | [
"Neurochemistry",
"Ion channels",
"Protein classification",
"Membrane proteins"
] |
11,897,205 | https://en.wikipedia.org/wiki/Polycystin%201 | Polycystin 1 (PC1) is a protein that in humans is encoded by the PKD1 gene. Mutations of PKD1 are associated with most cases of autosomal dominant polycystic kidney disease, a severe hereditary disorder of the kidneys characterised by the development of renal cysts and severe kidney dysfunction.
Protein structure and function
PC1 is a membrane-bound protein 4303 amino acids in length expressed largely upon the primary cilium, as well as apical membranes, adherens junctions, and desmosomes. It has 11 transmembrane domains, a large extracellular N-terminal domain, and a short (about 200 amino acid) cytoplasmic C-terminal domain. This intracellular domain contains a coiled-coil domain through which PC1 interacts with polycystin 2 (PC2), a membrane-bound Ca2+-permeable ion channel.
PC1 has been proposed to act as a G protein–coupled receptor. The C-terminal domain may be cleaved in a number of different ways. In one instance, a ~35 kDa portion of the tail has been found to accumulate in the cell nucleus in response to decreased fluid flow in the mouse kidney. In another instance, a 15 kDa fragment may be yielded, interacting with transcriptional activator and co-activator STAT6 and p100, or components of the canonical Wnt signaling pathway in an inhibitory manner.
The structure of the human PKD1-PKD2 complex has been solved by cryo-electron microscopy, which showed a 1:3 ratio of PKD1 and PKD2 in the structure. PKD1 consists of a voltage-gated ion channel fold that interacts with PKD2.
PC1 mediates mechanosensation of fluid flow by the primary cilium in the renal epithelium and of mechanical deformation of articular cartilage.
Gene
Splice variants encoding different isoforms have been noted for PKD1. The gene is closely linked to six pseudogenes in a known duplicated region on chromosome 16p.
References
External links
GeneReviews/NIH/NCBI/UW entry on Polycystic Kidney Disease, Autosomal Dominant
EF-hand-containing proteins
Ion channels | Polycystin 1 | [
"Chemistry"
] | 480 | [
"Neurochemistry",
"Ion channels"
] |
11,897,423 | https://en.wikipedia.org/wiki/TRPM6 | TRPM6 is a transient receptor potential ion channel associated with hypomagnesemia with secondary hypocalcemia.
See also
TRPM
Ruthenium red
References
Further reading
External links
Ion channels | TRPM6 | [
"Chemistry"
] | 43 | [
"Neurochemistry",
"Ion channels"
] |
11,897,510 | https://en.wikipedia.org/wiki/TRPC6 | Transient receptor potential cation channel, subfamily C, member 6 or Transient receptor potential canonical 6, also known as TRPC6, is a protein encoded in the human by the TRPC6 gene. TRPC6 is a transient receptor potential channel of the classical TRPC subfamily.
TRPC6 channels are nonselective cation channels that respond directly to diacylglycerol (DAG), a product of phospholipase C activity. This activation leads to cellular depolarization and calcium influx.
Unlike the closely related TRPC3 channels, TRPC6 channels possess the distinctive ability to transport heavy metal ions. TRPC6 channels facilitate the transport of zinc ions, promoting their accumulation inside cells.
In addition, despite their non-selectiveness, TRPC6 exhibits a strong preference for calcium ions, with a permeability ratio of calcium to sodium (PCa/PNa) of roughly six. This selectivity is significantly higher compared to TRPC3, which displays a weaker preference for calcium, with a PCa/PNa ratio of only 1.1.
Function
TRPC6 channels are widely distributed in the human body and are emerging as crucial regulators of several key physiological functions:
In blood vessels
Small arteries and arterioles exhibit a self-regulatory mechanism called myogenic tone, enabling them to maintain relatively stable blood flow despite fluctuating intravascular pressures. When intravascular pressure within a small artery or arteriole increases, the vessel walls automatically constrict. This narrowing reduces blood flow, effectively counteracting the rising pressure and stabilizing overall flow. Conversely, if blood pressure suddenly drops, vasodilation occurs to allow more blood flow and compensate for the decrease.
TRPC6 channels are present both in endothelial and smooth muscle cells, and their function is similar to that of α‑adrenoreceptors; both are involved in vasoconstriction. However, TRPC6-mediated vasoconstriction is mechanosensitive (i.e. activated by mechanical stimulation), and these channels are involved in the maintenance of the myogenic tone of blood vessels and the autoregulation of blood flow.
When intravascular blood pressure rises, this causes stretching of the walls of blood vessels. This mechanical stretch activates the TRPC6 channel. Once activated, TRPC6 allows Ca2+ to enter the smooth muscle cells. This increase in intracellular Ca2+ triggers a chain reaction leading to vasoconstriction.
In the kidneys
TRPC6 channels are extensively present throughout the kidney, both in the tubular segments and the glomeruli. Within the glomeruli, expression of TRPC6 is primarily concentrated in podocytes. Despite being extensively expressed throughout the kidneys and despite the established link between TRPC6 over-activation and kidney pathologies, the physiological roles of this channel in healthy kidney function remain less understood. Podocytes normally display minimal baseline activity of TRPC6 channels and TRPC6 knockout mice have not shown any evident changes in glomerular structure or filtration.
Nevertheless, it has been hypothesized that the function of TRPC6 channels in podocytes resembles their function in smooth muscles of blood vessels.
Glomerular capillaries operate under significantly higher pressure than most other capillary beds. When podocytes are stretched by glomerular capillary pressure, mechanosensitive TRPC6 channels trigger a surge in Ca2+ influx into podocytes, causing them to contract. This podocyte contraction exerts a force that opposes capillary wall overstretching and distention, that would otherwise lead to protein leakage.
However, in order to control the degree of podocyte contraction and maintain blood vessel patency, the influx of Ca2+ mediated by TRPC6 channels is accompanied by an increase in the activity of big potassium (BK) channels, leading to the efflux of K+. BK channel activation and the resultant K+ efflux mitigate and counteract the depolarization induced by TRPC6 activation, potentially serving as a protective mechanism through regulation of membrane depolarization and limiting podocyte contraction.
In the central nervous system
Research of learning and memory mechanisms suggests that a continuous increase in the strength of synaptic transmission is necessary to achieve long-term modification of neural network properties and memory storage. TRPC6 appears to be essential for the formation of an excitatory synapse; overexpressing TRPC6 greatly increased dendritic spine density and the level of synapsin I and PSD-95 cluster, known as the pre- and postsynaptic markers.
TRPC6 has also been proven to participate in neuroprotection and its neuroprotective effect could be explained due to the antagonism of extrasynaptic NMDA receptor (NMDAR)-mediated intracellular calcium overload. TRPC6 activates calcineurin, which impedes the NMDAR activity.
Hyperactivation of NMDAR is a critical event in glutamate-driven excitotoxicity that causes a rapid increase in intracellular calcium concentration. Such rapid increases in cytoplasmic calcium concentrations may activate and over-stimulate a variety of proteases, kinases, endonucleases, etc. This downstream neurotoxic cascade may trigger severe damage to neuronal functioning. Hyperactivation of NMDAR is frequently observed during brain ischemia and late stage Alzheimer's disease.
Clinical significance
Since TRPC6 channels play a multifaceted role by participating in various signaling pathways, these channels are emerging as key players in the pathogenesis of a wide range of diseases including:
Kidney diseases
Disorders of the nervous system
Cancers
Cardiovascular diseases
Pulmonary diseases
Interactions
TRPC6 has been shown to interact with:
FYN,
TRPC2, and
TRPC3.
Ligands
Two of the primary active constituents responsible for the antidepressant and anxiolytic benefits of Hypericum perforatum, also known as St. John's Wort, are hyperforin and adhyperforin. These compounds are inhibitors of the reuptake of serotonin, norepinephrine, dopamine, γ-aminobutyric acid, and glutamate, and they are reported to exert these effects by binding to and activating TRPC6. Recent results with hyperforin have cast doubt on these findings, as similar currents are seen upon hyperforin treatment regardless of the presence of TRPC6.
References
Further reading
External links
Membrane proteins
Ion channels | TRPC6 | [
"Chemistry",
"Biology"
] | 1,361 | [
"Neurochemistry",
"Ion channels",
"Protein classification",
"Membrane proteins"
] |
11,898,194 | https://en.wikipedia.org/wiki/Lense%E2%80%93Thirring%20precession | In general relativity, Lense–Thirring precession or the Lense–Thirring effect (; named after Josef Lense and Hans Thirring) is a relativistic correction to the precession of a gyroscope near a large rotating mass such as the Earth. It is a gravitomagnetic frame-dragging effect. It is a prediction of general relativity consisting of secular precessions of the longitude of the ascending node and the argument of pericenter of a test particle freely orbiting a central spinning mass endowed with angular momentum .
The difference between de Sitter precession and the Lense–Thirring effect is that the de Sitter effect is due simply to the presence of a central mass, whereas the Lense–Thirring effect is due to the rotation of the central mass. The total precession is calculated by combining the de Sitter precession with the Lense–Thirring precession.
According to a 2007 historical analysis by Herbert Pfister, the effect should be renamed the Einstein–Thirring–Lense effect.
Lense–Thirring metric
The gravitational field of a spinning spherical body of constant density was studied by Lense and Thirring in 1918, in the weak-field approximation. They obtained the metric
where the symbols represent:
the metric,
the flat-space line element in three dimensions,
the "radial" position of the observer,
the speed of light,
the gravitational constant,
the completely antisymmetric Levi-Civita symbol,
the mass of the rotating body,
the angular momentum of the rotating body,
the energy–momentum tensor.
The above is the weak-field approximation of the full solution of the Einstein equations for a rotating body, known as the Kerr metric, which, due to the difficulty of its solution, was not obtained until 1965.
Coriolis term
The frame-dragging effect can be demonstrated in several ways. One way is to solve for geodesics; these will then exhibit a Coriolis force-like term, except that, in this case (unlike the standard Coriolis force), the force is not fictional, but is due to frame dragging induced by the rotating body. So, for example, an (instantaneously) radially infalling geodesic at the equator will satisfy the equation
where
is the time,
is the azimuthal angle (longitudinal angle),
is the magnitude of the angular momentum of the spinning massive body.
The above can be compared to the standard equation for motion subject to the Coriolis force:
where ω is the angular velocity of the rotating coordinate system. Note that, in either case, if the observer is not in radial motion, i.e. if dr/dt = 0, there is no effect on the observer.
Precession
The frame-dragging effect will cause a gyroscope to precess. The rate of precession is given by
where:
is the angular velocity of the precession, a vector, and one of its components,
the angular momentum of the spinning body, as before,
the ordinary flat-metric inner product of the position and the angular momentum.
That is, if the gyroscope's angular momentum relative to the fixed stars is , then it precesses as
The rate of precession is given by
where is the Christoffel symbol for the above metric. Gravitation by Misner, Thorne, and Wheeler provides hints on how to most easily calculate this.
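A compact weak-field form of this gyroscope precession rate that is commonly quoted in the literature (sign and factor-of-two conventions differ between authors, so this should be read as indicative rather than as this article's exact expression) is:

```latex
\vec{\Omega} \;=\; \frac{G}{c^{2}r^{3}}\left[\,3\,(\vec{S}\cdot\hat{r})\,\hat{r}-\vec{S}\,\right]
```

where S is the spin angular momentum of the central body and r̂ is the unit vector from it to the gyroscope.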
Gravitoelectromagnetic analysis
It is popular in some circles to use the gravitoelectromagnetic approach to the linearized field equations. The reason for this popularity should be immediately evident below, by contrasting it to the difficulties of working with the equations above. The linearized metric can be read off from the Lense–Thirring metric given above. In this approach, one writes the linearized metric in terms of the gravitomagnetic potentials φ and A as
and
where
φ is the gravito-electric potential, and
A is the gravitomagnetic potential. Here r is the 3D spatial coordinate of the observer, and S is the angular momentum of the rotating body, exactly as defined above. The corresponding fields are
for the gravitoelectric field, and
is the gravitomagnetic field. It is then a matter of substitution and rearranging to obtain
as the gravitomagnetic field. Note that it is half the Lense–Thirring precession frequency. In this context, Lense–Thirring precession can essentially be viewed as a form of Larmor precession. The factor of 1/2 suggests that the correct gravitomagnetic analog of the g-factor is two. This factor of two can be explained completely analogously to the electron's g-factor by taking into account relativistic calculations.
The gravitomagnetic analog of the Lorentz force in the non-relativistic limit is given by
where is the mass of a test particle moving with velocity . This can be used in a straightforward way to compute the classical motion of bodies in the gravitomagnetic field. For example, a radially infalling body will have a velocity ; direct substitution yields the Coriolis term given in a previous section.
Example: Foucault's pendulum
To get a sense of the magnitude of the effect, the above can be used to compute the rate of precession of Foucault's pendulum, located at the surface of the Earth.
For a solid ball of uniform density, such as the Earth, of radius R, the moment of inertia is given by I = (2/5)MR2, so that the absolute value of the angular momentum is S = (2/5)MR2ω, with ω the angular speed of the spinning ball.
The direction of the spin of the Earth may be taken as the z axis, whereas the axis of the pendulum is perpendicular to the Earth's surface, in the radial direction. Thus, the projection of the Earth's spin direction onto the local vertical is sin λ, where λ is the latitude. Similarly, the location of the observer is at the Earth's surface, at radius R. This leaves the rate of precession as
As an example the latitude of the city of Nijmegen in the Netherlands is used for reference. This latitude gives a value for the Lense–Thirring precession
At this rate a Foucault pendulum would have to oscillate for more than 16000 years to precess 1 degree. Despite being quite small, it is still two orders of magnitude larger than Thomas precession for such a pendulum.
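The following sketch reproduces this order of magnitude numerically. It assumes the weak-field formula Ω = (G/(c²r³))[3(S·r̂)r̂ − S] projected onto the local vertical (giving a factor 2 sin λ), rounded physical constants, and a latitude of about 51.84° N for Nijmegen; conventions for signs and prefactors vary between references, so this is an estimate rather than the article's exact figure.

```python
# Order-of-magnitude check of the Lense-Thirring precession of a
# Foucault pendulum at the surface of the Earth.
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
M = 5.972e24    # Earth mass, kg
R = 6.371e6     # Earth radius, m
w = 7.292e-5    # Earth rotation rate, rad/s

S = 0.4 * M * R**2 * w          # spin angular momentum, I = (2/5) M R^2
lat = math.radians(51.84)       # Nijmegen, approx. 51.84 deg N (assumed)

# vertical component: Omega . rhat = (G S / (c^2 R^3)) * 2 sin(lat)
omega_lt = (G * S / (c**2 * R**3)) * 2.0 * math.sin(lat)   # rad/s

arcsec_per_year = omega_lt * 3.156e7 * 206265.0
print(f"{arcsec_per_year:.3f} arcsec/yr")
print(f"~{3600.0 / arcsec_per_year:.0f} years to precess one degree")
```

With these values the result is roughly 0.2 arcseconds per year, i.e. on the order of 17000 years per degree, consistent with the figure quoted above.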
The above does not include the de Sitter precession; it would need to be added to get the total relativistic precessions on Earth.
Experimental verification
The Lense–Thirring effect, and the effect of frame dragging in general, continues to be studied experimentally. There are two basic settings for experimental tests: direct observation via satellites and spacecraft orbiting Earth, Mars or Jupiter, and indirect observation by measuring astrophysical phenomena, such as accretion disks surrounding black holes and neutron stars, or astrophysical jets from the same.
The Juno spacecraft's suite of science instruments will primarily characterize and explore the three-dimensional structure of Jupiter's polar magnetosphere, auroras and mass composition.
As Juno is a polar-orbit mission, it will be possible to measure the orbital frame-dragging, known also as Lense–Thirring precession, caused by the angular momentum of Jupiter.
Results from astrophysical settings are presented after the following section.
Astrophysical setting
A star orbiting a spinning supermassive black hole experiences Lense–Thirring precession, causing its orbital line of nodes to precess at a rate
where
a and e are the semimajor axis and eccentricity of the orbit,
M is the mass of the black hole,
χ is the dimensionless spin parameter (0 < χ < 1).
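One commonly used form of this nodal precession rate, assuming the black hole's angular momentum is written as J = χGM²/c (prefactor conventions vary between references), is:

```latex
\dot{\Omega}_{\mathrm{LT}} \;=\; \frac{2\,G^{2}M^{2}\chi}{c^{3}\,a^{3}\left(1-e^{2}\right)^{3/2}}
```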
The precessing stars also exert a torque back on the black hole, causing its spin axis to precess, at a rate
where
Lj is the angular momentum of the jth star,
aj and ej are its semimajor axis and eccentricity.
A gaseous accretion disk that is tilted with respect to a spinning black hole will experience Lense–Thirring precession, at a rate given by the above equation, after setting the eccentricity e to zero and identifying a with the disk radius. Because the precession rate varies with distance from the black hole, the disk will "wrap up", until viscosity forces the gas into a new plane, aligned with the black hole's spin axis.
Astrophysical tests
The orientation of an astrophysical jet can be used as evidence to deduce the orientation of an accretion disk; a rapidly changing jet orientation suggests a reorientation of the accretion disk, as described above. Exactly such a change was observed in 2019 with the black hole X-ray binary in V404 Cygni.
Pulsars emit rapidly repeating radio pulses with extremely high regularity, which can be measured with microsecond precision over time spans of years and even decades. A 2020 study reports the observation of a pulsar in a tight orbit with a white dwarf, to sub-millisecond precision over two decades. The precise determination allows the change of orbital parameters to be studied; these confirm the operation of the Lense–Thirring effect in this astrophysical setting.
It may be possible to detect the Lense–Thirring effect by long-term measurement of the orbit of the S2 star around the supermassive black hole in the center of the Milky Way, using the GRAVITY instrument of the Very Large Telescope. The star orbits with a period of 16 years, and it should be possible to constrain the angular momentum of the black hole by observing the star over two to three periods (32 to 48 years).
See also
Gravity Probe B
References
External links
(German) explanation of Thirring–Lense effect; has pictures for the satellite example.
Precession
General relativity | Lense–Thirring precession | [
"Physics"
] | 2,079 | [
"Physical quantities",
"General relativity",
"Precession",
"Theory of relativity",
"Wikipedia categories named after physical quantities"
] |
11,902,016 | https://en.wikipedia.org/wiki/Stem%20Cell%20Research%20Enhancement%20Act | Stem Cell Research Enhancement Act was the name of two similar bills that both passed through the United States House of Representatives and Senate, but were both vetoed by President George W. Bush and were not enacted into law.
Stem Cell Research Enhancement Act of 2005
The Stem Cell Research Enhancement Act of 2005 () was the first bill ever vetoed by United States President George W. Bush, more than five years after his inauguration. The bill, which passed both houses of Congress, but by less than the two-thirds majority needed to override the veto, would have allowed federal funding of stem cell research on new lines of stem cells derived from discarded human embryos created for fertility treatments.
The bill passed the House of Representatives by a vote of 238 to 194 on May 24, 2005, then passed the Senate by a vote of 63 to 37 on July 18, 2006. President Bush vetoed the bill on July 19, 2006. The House of Representatives then failed to override the veto (235 to 193) on July 19, 2006.
Stem Cell Research Enhancement Act of 2007
The Stem Cell Research Enhancement Act of 2007 (), was proposed federal legislation that would have amended the Public Health Service Act to provide for human embryonic stem cell research. It was similar in content to the vetoed Stem Cell Research Enhancement Act of 2005.
The bill passed the Senate on April 11, 2007, by a vote of 63–34, then passed the House on June 7, 2007, by a vote of 247–176. President Bush vetoed the bill on June 19, 2007, and an override was not attempted.
Stem Cell Research Enhancement Act of 2009
The bill was re-introduced in the 111th Congress. It was introduced in the House by Representative Diana DeGette (D-CO) on February 4, 2009. A Senate version was introduced by Tom Harkin (D-IA) on February 26, 2009. The House bill had 113 co-sponsors and the Senate 10 co-sponsors, as of November 20, 2009.
Legislative history
References
External links
How your senator voted, "U.S. Senate Roll Call Votes," from www.senate.gov, recorded on July 18, 2006, accessed on October 31, 2006.
How your congressman voted, "FINAL VOTE RESULTS FOR ROLL CALL 388," from clerk.house.gov, recorded on July 19, 2006, accessed on October 31, 2006.
Text of the 2007 Bill
S. 5: Stem Cell Research Enhancement Act of 2007 at GovTrack.us
World Stem Cell Policies
Stem cell research pros and cons, Information and resource for stem cell research
Proposed legislation of the 109th United States Congress
Proposed legislation of the 110th United States Congress
Proposed legislation of the 111th United States Congress
Stem cell research
Medical law | Stem Cell Research Enhancement Act | [
"Chemistry",
"Biology"
] | 574 | [
"Translational medicine",
"Tissue engineering",
"Stem cell research"
] |
11,904,061 | https://en.wikipedia.org/wiki/Metamorphic%20reaction | A metamorphic reaction is a chemical reaction that takes place during the geological process of metamorphism wherein one assemblage of minerals is transformed into a second assemblage which is stable under the new temperature/pressure conditions resulting in the final stable state of the observed metamorphic rock.
Examples include the production of talc under varied metamorphic conditions:
serpentine + carbon dioxide → talc + magnesite + water
chlorite + quartz → kyanite + talc + water
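Written with idealized end-member formulas (serpentine taken as Mg3Si2O5(OH)4 and chlorite as clinochlore, Mg5Al2Si3O10(OH)8, which is an assumption since natural compositions vary), balanced forms of these two reactions are:

```latex
2\,\mathrm{Mg_3Si_2O_5(OH)_4} + 3\,\mathrm{CO_2} \longrightarrow \mathrm{Mg_3Si_4O_{10}(OH)_2} + 3\,\mathrm{MgCO_3} + 3\,\mathrm{H_2O}
```

```latex
3\,\mathrm{Mg_5Al_2Si_3O_{10}(OH)_8} + 14\,\mathrm{SiO_2} \longrightarrow 3\,\mathrm{Al_2SiO_5} + 5\,\mathrm{Mg_3Si_4O_{10}(OH)_2} + 7\,\mathrm{H_2O}
```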
Polymorphic transformations
Exsolution reactions
Devolatilization reactions
Continuous reactions
Ion exchange reactions
Oxidation/reduction reactions
Reactions involving dissolved species
Chemographics
Petrogenetic grids
Schreinemaker's method
Reaction mechanisms
See also
Index mineral
Notes
Metamorphic petrology
Geochemical processes
Reaction mechanisms | Metamorphic reaction | [
"Chemistry"
] | 158 | [
"Reaction mechanisms",
"Chemical kinetics",
"Geochemical processes",
"Physical organic chemistry"
] |
11,904,093 | https://en.wikipedia.org/wiki/Ramsbottom%20carbon%20residue | Ramsbottom carbon residue (RCR) is well known in the petroleum industry as a method to calculate the carbon residue of a fuel. The carbon residue value is considered by some to give an approximate indication of the combustibility and deposit forming tendencies of the fuel.
The carbon residue of a fuel
The Ramsbottom test is used to measure carbon residues of an oil. In brief, the carbon residue of a fuel is the tendency to form carbon deposits under high temperature conditions in an inert atmosphere. This is an important value for the crude oil refinery, and usually one of the measurements in a crude oil assay. Carbon residue is an important measurement for the feed to the refinery process fluid catalytic cracking and delayed coking.
Calculation methods
There are three methods to calculate this carbon residue. It may be expressed as Ramsbottom carbon residue (RCR), Conradson carbon residue (CCR) or micro carbon residue (MCR). Numerically, the CCR value is the same as that of MCR.
Sometimes the carbon residue value can be listed as residual carbon content, RCC, which is normally the same as MCR/CCR.
For the test, 4 grams of the sample are put into a weighed glass bulb. The sample in the bulb is heated in a bath at 553 °C for 20 minutes. After cooling, the bulb is weighed again and the mass of the carbonaceous residue is determined from the difference; the result is reported as a percentage of the original sample mass.
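The arithmetic of the result is simple; the sketch below illustrates it with invented masses (the variable names and values are illustrative, not reference data):

```python
# Minimal sketch of the Ramsbottom carbon residue arithmetic.
bulb_empty_g = 25.000    # weighed glass bulb before the test
sample_g = 4.000         # oil sample charged into the bulb
bulb_after_g = 25.052    # bulb plus carbonaceous residue after heating

residue_g = bulb_after_g - bulb_empty_g
rcr_percent = 100.0 * residue_g / sample_g
print(f"Ramsbottom carbon residue: {rcr_percent:.2f} % (m/m)")
```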
See also
Conradson carbon residue
Cracking (chemistry)
Crude oil assay
Micro carbon residue
Oil refinery
Petroleum
References
Petroleum technology | Ramsbottom carbon residue | [
"Chemistry",
"Engineering"
] | 307 | [
"Petroleum",
"Petroleum engineering",
"Petroleum technology",
"Petroleum stubs"
] |
335,094 | https://en.wikipedia.org/wiki/Reducing%20atmosphere | A reducing atmosphere is an atmosphere in which oxidation is prevented by absence of oxygen and other oxidizing gases or vapours, and which may contain actively reductant gases such as hydrogen, carbon monoxide, methane and hydrogen sulfide that would be readily oxidized to remove any free oxygen. Although Early Earth had a reducing prebiotic atmosphere prior to the Proterozoic eon, starting at about 2.5 billion years ago in the late Neoarchaean period, the Earth's atmosphere experienced a significant rise in oxygen and transitioned to an oxidizing atmosphere with a surplus of molecular oxygen (dioxygen, O2) as the primary oxidizing agent.
Foundry operations
The principal mission of an iron foundry is the conversion of iron oxides (purified iron ores) to iron metal. This reduction is usually effected using a reducing atmosphere consisting of some mixture of natural gas, hydrogen (H2), and carbon monoxide. The byproduct is carbon dioxide.
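For example, with hematite as a representative iron oxide, the overall reductions by carbon monoxide and by hydrogen can be written as:

```latex
\mathrm{Fe_2O_3} + 3\,\mathrm{CO} \longrightarrow 2\,\mathrm{Fe} + 3\,\mathrm{CO_2}
\qquad
\mathrm{Fe_2O_3} + 3\,\mathrm{H_2} \longrightarrow 2\,\mathrm{Fe} + 3\,\mathrm{H_2O}
```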
Metal processing
In metal processing, a reducing atmosphere is used in annealing ovens for relaxation of metal stresses without corroding the metal. A non-oxidizing gas, usually nitrogen or argon, is typically used as a carrier gas so that diluted amounts of reducing gases may be used. Typically, this is achieved by using the combustion products of fuels and tailoring the CO:CO2 ratio. However, other common reducing atmospheres in the metal processing industries consist of dissociated ammonia, vacuum, and direct mixing of appropriately pure gases of N2, Ar, and H2.
A reducing atmosphere is also used to produce specific effects on ceramic wares being fired. A reduction atmosphere is produced in a fuel fired kiln by reducing the draft and depriving the kiln of oxygen. This diminished level of oxygen causes incomplete combustion of the fuel and raises the level of carbon inside the kiln. At high temperatures the carbon will bond with and remove the oxygen in the metal oxides used as colorants in the glazes. This loss of oxygen results in a change in the color of the glazes because it allows the metals in the glaze to be seen in an unoxidized form. A reduction atmosphere can also affect the color of the clay body. If iron is present in the clay body, as it is in most stoneware, then it will be affected by the reduction atmosphere as well.
In most commercial incinerators, exactly the same conditions are created to encourage the release of carbon-bearing fumes. These fumes are then oxidized in reburn tunnels where oxygen is injected progressively. The exothermic oxidation reaction maintains the temperature of the reburn tunnels. This system allows lower temperatures to be employed in the incinerator section, where the solids are volumetrically reduced.
Origin of life
The atmosphere of Early Earth is widely speculated to have been reducing. The Miller–Urey experiment, related to some hypotheses for the origin of life, entailed reactions in a reducing atmosphere composed of a mixed atmosphere of methane, ammonia and hydrogen sulfide. Some hypotheses for the origin of life invoke a reducing atmosphere consisting of hydrogen cyanide (HCN). Experiments show that HCN can polymerize in the presence of ammonia to give a variety of products including amino acids. The same principle applies to Mars, Venus and Titan.
Cyanobacteria are suspected to be the first photoautotrophs that evolved oxygenic photosynthesis, which over the latter half of the Archean eon eventually depleted all reductants in the Earth's oceans, terrestrial surface and atmosphere, gradually increasing the oxygen concentration in the atmosphere, changing it to what is known as an oxidizing atmosphere. This rising oxygen initially led to a 300 million-year-long ice age that devastated the then-mostly anaerobe-dominated biosphere, forcing the surviving anaerobic colonies to evolve into symbiotic microbial mats with the newly evolved aerobes. Some aerobic bacteria eventually became endosymbionts within other anaerobes (likely archaea), and the resultant symbiogenesis led to the evolution of a completely new lineage of life — the eukaryotes, who took advantage of mitochondrial aerobic respiration to power their cellular activities, allowing life to thrive and evolve into ever more complex forms. The increased oxygen in the atmosphere also eventually created the ozone layer, which shielded away harmful ionizing ultraviolet radiation that otherwise would have photodissociated away surface water and rendered life impossible on land and the ocean surface.
In contrast to the hypothesized early reducing atmosphere, evidence exists that Hadean atmospheric oxygen levels were similar to those of today. These results suggest that prebiotic building blocks were delivered from elsewhere in the galaxy. The results however do not run contrary to existing theories on life's journey from anaerobic to aerobic organisms. They quantify the nature of gas molecules containing carbon, hydrogen, and sulphur in the earliest atmosphere, but they shed no light on the much later rise of free oxygen in the air.
See also
Notes
Metallurgy
Planetary science
Pottery
Redox | Reducing atmosphere | [
"Chemistry",
"Materials_science",
"Astronomy",
"Engineering"
] | 1,067 | [
"Redox",
"Metallurgy",
"Materials science",
"Reducing agents",
"Electrochemistry",
"nan",
"Planetary science",
"Astronomical sub-disciplines"
] |
335,109 | https://en.wikipedia.org/wiki/Compressive%20strength | In mechanics, compressive strength (or compression strength) is the capacity of a material or structure to withstand loads tending to reduce size (compression). It is opposed to tensile strength which withstands loads tending to elongate, resisting tension (being pulled apart). In the study of strength of materials, compressive strength, tensile strength, and shear strength can be analyzed independently.
Some materials fracture at their compressive strength limit; others deform irreversibly, so a given amount of deformation may be considered as the limit for compressive load. Compressive strength is a key value for design of structures.
Compressive strength is often measured on a universal testing machine. Measurements of compressive strength are affected by the specific test method and conditions of measurement. Compressive strengths are usually reported in relationship to a specific technical standard.
Introduction
When a specimen of material is loaded in such a way that it extends it is said to be in tension. On the other hand, if the material compresses and shortens it is said to be in compression.
On an atomic level, molecules or atoms are forced together when in compression, whereas they are pulled apart when in tension. Since atoms in solids always try to find an equilibrium position, and distance between other atoms, forces arise throughout the entire material which oppose both tension or compression. The phenomena prevailing on an atomic level are therefore similar.
The "strain" is the relative change in length under applied stress; positive strain characterizes an object under tension load which tends to lengthen it, and a compressive stress that shortens an object gives negative strain. Tension tends to pull small sideways deflections back into alignment, while compression tends to amplify such deflection into buckling.
Compressive strength is measured on materials, components, and structures.
The ultimate compressive strength of a material is the maximum uniaxial compressive stress that it can withstand before complete failure. This value is typically determined through a compressive test conducted using a universal testing machine. During the test, a steadily increasing uniaxial compressive load is applied to the test specimen until it fails. The specimen, often cylindrical in shape, experiences both axial shortening and lateral expansion under the load. As the load increases, the machine records the corresponding deformation, plotting a stress-strain curve that would look similar to the following:
The compressive strength of the material corresponds to the stress at the red point shown on the curve. In a compression test, there is a linear region where the material follows Hooke's law. Hence, for this region, σ = Eε, where, this time, E refers to the Young's modulus for compression. In this region, the material deforms elastically and returns to its original length when the stress is removed.
This linear region terminates at what is known as the yield point. Above this point the material behaves plastically and will not return to its original length once the load is removed.
There is a difference between the engineering stress and the true stress. By its basic definition the uniaxial stress is given by:
σ = F/A
where F is the load applied [N] and A is the area [m2].
As stated, the area of the specimen varies on compression. In reality therefore the area is some function of the applied load, i.e. A = f(F). Indeed, stress is defined as the force divided by the area at the start of the experiment. This is known as the engineering stress, and is defined by
σe = F/A0
where A0 is the original specimen area [m2].
Correspondingly, the engineering strain is defined by
e = (l − l0)/l0
where l is the current specimen length [m] and l0 is the original specimen length [m]. True strain, also known as logarithmic strain or natural strain, provides a more accurate measure of large deformations, such as in materials like ductile metals. The compressive strength therefore corresponds to the point on the engineering stress–strain curve defined by
σe* = F*/A0 and e* = (l* − l0)/l0
where F* is the load applied just before crushing and l* is the specimen length just before crushing.
Deviation of engineering stress from true stress
When a uniaxial compressive load is applied to an object it will become shorter and spread laterally so its original cross sectional area (A0) increases to the loaded area (A). Thus the true stress (σ′) deviates from the engineering stress (σe). Tests that measure the engineering stress at the point of failure in a material are often sufficient for many routine applications, such as quality control in concrete production. However, determining the true stress in materials under compressive loads is important for research focused on the properties of new materials and their processing.
The geometry of test specimens and friction can significantly influence the results of compressive stress tests. Friction at the contact points between the testing machine and the specimen can restrict the lateral expansion at its ends (also known as 'barreling') leading to non-uniform stress distribution. This is discussed in section on contact with friction.
Frictionless contact
With a compressive load on a test specimen it will become shorter and spread laterally so its cross sectional area increases and the true compressive stress is
σ′ = F/A
and the engineering stress is
σe = F/A0
The cross sectional area (A) and consequently the stress (σ′) are uniform along the length of the specimen because there are no external lateral constraints. This condition represents an ideal test condition. For all practical purposes the volume of a high bulk modulus material (e.g. solid metals) is not changed by uniaxial compression. So
A0 l0 = A l
Using the strain equation from above,
A = A0 l0 / l = A0 / (1 + e)
and
σ′ = σe (1 + e)
Note that compressive strain is negative, so the true stress (σ′) is less than the engineering stress (σe). The true strain (ln(l/l0)) can be used in these formulas instead of the engineering strain (e) when the deformation is large.
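A minimal numerical sketch of this relation follows; the load, area, and lengths are illustrative values only, and volume conservation is assumed as above:

```python
# Engineering vs. true compressive stress for a constant-volume
# specimen (A0 * l0 = A * l). Strain is negative in compression,
# so the true stress magnitude is smaller than the engineering value.
def true_stress(force_n: float, a0_m2: float, l0_m: float, l_m: float) -> float:
    e = (l_m - l0_m) / l0_m        # engineering strain (negative here)
    sigma_e = force_n / a0_m2      # engineering stress
    return sigma_e * (1.0 + e)     # true stress under volume conservation

# 50 kN on a 1 cm^2 specimen compressed from 50 mm to 45 mm:
print(true_stress(50e3, 1e-4, 0.05, 0.045))   # Pa
```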
Contact with friction
As the load is applied, friction at the interface between the specimen and the test machine restricts the lateral expansion at its ends. This has two effects:
It can cause non-uniform stress distribution across the specimen, with higher stress at the centre and lower stress at the edges, which affects the accuracy of the result.
It causes a barreling effect (bulging at the centre) in ductile materials. This changes the specimen's geometry and affects its load-bearing capacity, leading to a higher apparent compressive strength.
Various methods can be used to reduce the friction according to the application:
Applying a suitable lubricant, such as MoS2, oil or grease; however, care must be taken not to affect the material properties with the lubricant used.
Use of PTFE or other low-friction sheets between the test machine and specimen.
A spherical or self-aligning test fixture, which can minimize friction by applying the load more evenly across the specimen's surface.
Three methods can be used to compensate for the effects of friction on the test result:
Correction formulas
Geometric extrapolation
Finite element analysis
Correction formulas
Round test specimens made from ductile materials with a high bulk modulus, such as metals, tend to form a barrel shape under axial compressive loading due to frictional contact at the ends. For this case the equivalent true compressive stress for this condition can be calculated usingwhere
is the loaded length of the test specimen,
is the loaded diameter of the test specimen at its ends, and
is the maximum loaded diameter of the test specimen.
Note that if there is frictionless contact between the ends of the specimen and the test machine, the bulge radius becomes infinite and the maximum diameter equals the end diameter. In this case, the formulas yield the same result as the frictionless-contact analysis, because the cross-sectional area then changes uniformly along the specimen.
The parameters () obtained from a test result can be used with these formulas to calculate the equivalent true stress at failure.
The graph of specimen shape effect shows how the ratio of true stress to engineering stress (σ′/σe) varies with the aspect ratio of the test specimen. The curves were calculated using the formulas provided above, based on the specific values presented in the table for specimen shape effect calculations. For the curves where end restraint is applied to the specimens, they are assumed to be fully laterally restrained, meaning that the coefficient of friction at the contact points between the specimen and the testing machine is greater than or equal to one (μ ⩾ 1). As shown in the graph, as the relative length of the specimen increases, the ratio of true to engineering stress (σ′/σe) approaches the value corresponding to frictionless contact between the specimen and the machine, which is the ideal test condition.
Geometric extrapolation
As shown in the section on correction formulas, as the length of test specimens is increased and their aspect ratio approaches zero, the compressive stresses (σ) approach the true value (σ′). However, conducting tests with excessively long specimens is impractical, as they would fail by buckling before reaching the material's true compressive strength. To overcome this, a series of tests can be conducted using specimens with varying aspect ratios, and the true compressive strength can then be determined through extrapolation.
Finite element analysis
Comparison of compressive and tensile strengths
Concrete and ceramics typically have much higher compressive strengths than tensile strengths. Composite materials, such as glass fiber epoxy matrix composite, tend to have higher tensile strengths than compressive strengths. Metals are difficult to test to failure in tension vs compression. In compression, metals fail by buckling, crumbling, or 45° shear, which is much different (though at higher stresses) from tension, which fails from defects or necking down.
Compressive failure modes
If the ratio of the length to the effective radius of the material loaded in compression (slenderness ratio) is too high, it is likely that the material will fail under buckling. Otherwise, if the material is ductile, yielding usually occurs, displaying the barreling effect discussed above. A brittle material in compression typically will fail by axial splitting, shear fracture, or ductile failure depending on the level of constraint in the direction perpendicular to the direction of loading. If there is no constraint (also called confining pressure), the brittle material is likely to fail by axial splitting. Moderate confining pressure often results in shear fracture, while high confining pressure often leads to ductile failure, even in brittle materials.
Axial splitting relieves elastic energy in brittle material by releasing strain energy in the directions perpendicular to the applied compressive stress. As defined by a material's Poisson ratio, a material compressed elastically in one direction will strain in the other two directions. During axial splitting a crack may release that tensile strain by forming a new surface parallel to the applied load. The material then proceeds to separate in two or more pieces. Hence the axial splitting occurs most often when there is no confining pressure, i.e. a lesser compressive load on axes perpendicular to the main applied load. The material now split into micro columns will feel different frictional forces either due to inhomogeneity of interfaces on the free end or stress shielding. In the case of stress shielding, inhomogeneity in the materials can lead to different Young's moduli. This will in turn cause the stress to be disproportionately distributed, leading to a difference in frictional forces. In either case this will cause the material sections to begin bending and lead to ultimate failure.
Microcracking
Microcracks are a leading cause of failure under compression for brittle and quasi-brittle materials. Sliding along crack tips leads to tensile forces along the tip of the crack. Microcracks tend to form around any pre-existing crack tips. In all cases it is the overall global compressive stress interacting with local microstructural anomalies that creates local areas of tension. Microcracks can stem from a few factors.
Porosity is the controlling factor for compressive strength in many materials. Microcracks can form around pores, until they reach approximately the same size as their parent pores (a).
Stiff inclusions within a material such as a precipitate can cause localized areas of tension. (b) When inclusions are grouped up or larger, this effect can be amplified.
Even without pores or stiff inclusions, a material can develop microcracks between weak inclined (relative to applied stress) interfaces. These interfaces can slip and create a secondary crack. These secondary cracks can continue opening, as the slip of the original interfaces keeps opening the secondary crack (c). The slipping of interfaces alone is not solely responsible for secondary crack growth as inhomogeneities in the material's Young's modulus can lead to an increase in effective misfit strain. Cracks that grow this way are known as wingtip microcracks.
The growth of microcracks is not the growth of the original crack or imperfection. The cracks that nucleate do so perpendicular to the original crack and are known as secondary cracks. The figure below emphasizes this point for wingtip cracks.
These secondary cracks can grow to as long as 10-15 times the length of the original cracks in simple (uniaxial) compression. However, if a transverse compressive load is applied. The growth is limited to a few integer multiples of the original crack's length.
Shear bands
If the sample size is large enough that the worst defect's secondary cracks cannot grow large enough to break the sample, other defects within the sample will begin to grow secondary cracks as well. This will occur homogeneously over the entire sample. These micro-cracks form an echelon that can create an "intrinsic" fracture behavior, the nucleus of a shear fault instability.
Eventually this leads the material to deform non-homogeneously. That is, the strain in the material will no longer vary linearly with the load, creating localized shear bands on which the material will fail according to deformation theory. "The onset of localized banding does not necessarily constitute final failure of a material element, but it presumably is at least the beginning of the primary failure process under compressive loading."
Typical values
Compressive strength of concrete
For designers, compressive strength is one of the most important engineering properties of concrete. It is standard industrial practice that the compressive strength of a given concrete mix is classified by grade. Cubic or cylindrical samples of concrete are tested under a compression testing machine to measure this value. Test requirements vary by country based on their differing design codes. Use of a Compressometer is common. As per Indian codes, compressive strength of concrete is defined as:
The compressive strength of concrete is given in terms of the characteristic compressive strength of 150 mm size cubes tested after 28 days (fck). In field, compressive strength tests are also conducted at interim duration i.e. after 7 days to verify the anticipated compressive strength expected after 28 days. The same is done to be forewarned of an event of failure and take necessary precautions. The characteristic strength is defined as the strength of the concrete below which not more than 5% of the test results are expected to fall.
For design purposes, this compressive strength value is restricted by dividing with a factor of safety, whose value depends on the design philosophy used.
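As a sketch of how these quantities relate, the snippet below computes a characteristic strength from cube test data and applies a safety factor. It assumes normally distributed test results, for which the 5% fractile lies 1.645 standard deviations below the mean, and uses a partial safety factor of 1.5 as a typical limit-state value; both assumptions vary between design codes, and the test data are invented.

```python
# Characteristic and design compressive strength from cube test data.
import statistics

cube_strengths_mpa = [31.2, 29.8, 33.5, 30.9, 32.1, 28.7]  # illustrative data

mean = statistics.mean(cube_strengths_mpa)
std = statistics.stdev(cube_strengths_mpa)

f_ck = mean - 1.645 * std   # characteristic strength: 5% fractile (normal assumption)
f_cd = f_ck / 1.5           # design strength after a typical partial safety factor

print(f"mean = {mean:.1f} MPa, f_ck = {f_ck:.1f} MPa, f_cd = {f_cd:.1f} MPa")
```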
The construction industry is often involved in a wide array of testing. In addition to simple compression testing, testing standards such as ASTM C39, ASTM C109, ASTM C469, ASTM C1609 are among the test methods that can be followed to measure the mechanical properties of concrete. When measuring the compressive strength and other material properties of concrete, testing equipment that can be manually controlled or servo-controlled may be selected depending on the procedure followed. Certain test methods specify or limit the loading rate to a certain value or a range, whereas other methods request data based on test procedures run at very low rates.
Ultra-high performance concrete (UHPC) is defined as having a compressive strength over 150 MPa.
See also
Buff strength
Container compression test
Crashworthiness
Deformation (engineering)
Schmidt hammer, for measuring compressive strength of materials
Plane strain compression test
References
Mikell P. Groover, Fundamentals of Modern Manufacturing, John Wiley & Sons, 2002 U.S.A,
Callister W.D. Jr., Materials Science & Engineering an Introduction, John Wiley & Sons, 2003 U.S.A,
Materials science
Product testing | Compressive strength | [
"Physics",
"Materials_science",
"Engineering"
] | 3,312 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
335,612 | https://en.wikipedia.org/wiki/Nuclear%20Overhauser%20effect | The nuclear Overhauser effect (NOE) is the transfer of nuclear spin polarization from one population of spin-active nuclei (e.g. 1H, 13C, 15N etc.) to another via cross-relaxation. A phenomenological definition of the NOE in nuclear magnetic resonance spectroscopy (NMR) is the change in the integrated intensity (positive or negative) of one NMR resonance that occurs when another is saturated by irradiation with an RF field. The change in resonance intensity of a nucleus is a consequence of the nucleus being close in space to those directly affected by the RF perturbation.
The NOE is particularly important in the assignment of NMR resonances, and the elucidation and confirmation of the structures or configurations of organic and biological molecules. The 1H two-dimensional NOE spectroscopy (NOESY) experiment and its extensions are important tools to identify stereochemistry of proteins and other biomolecules in solution, whereas in solid form crystal X-ray diffraction is typically used to identify stereochemistry. The heteronuclear NOE is particularly important in 13C NMR spectroscopy to identify carbons bonded to protons, to provide polarization enhancements to such carbons to increase signal-to-noise, and to ascertain the extent to which the relaxation of these carbons is controlled by the dipole-dipole relaxation mechanism.
History
The NOE developed from the theoretical work of American physicist Albert Overhauser who in 1953 proposed that nuclear spin polarization could be enhanced by the microwave irradiation of the conduction electrons in certain metals. The electron-nuclear enhancement predicted by Overhauser was experimentally demonstrated in 7Li metal by T. R. Carver and C. P. Slichter also in 1953. A general theoretical basis and experimental observation of an Overhauser effect involving only nuclear spins in the HF molecule was published by Ionel Solomon in 1955. Another early experimental observation of the NOE was used by Kaiser in 1963 to show how the NOE may be used to determine the relative signs of scalar coupling constants, and to assign spectral lines in NMR spectra to transitions between energy levels. In this study, the resonance of one population of protons (1H) in an organic molecule was enhanced when a second distinct population of protons in the same organic molecule was saturated by RF irradiation. The application of the NOE was used by Anet and Bourn in 1965 to confirm the assignments of the NMR resonances for β,β-dimethylacrylic acid and dimethyl formamide, thereby showing that conformation and configuration information about organic molecules in solution can be obtained. Bell and Saunders reported direct correlation between NOE enhancements and internuclear distances in 1970 while quantitative measurements of internuclear distances in molecules with three or more spins was reported by Schirmer et al.
Richard R. Ernst was awarded the 1991 Nobel Prize in Chemistry for developing Fourier transform and two-dimensional NMR spectroscopy, which was soon adapted to the measurement of the NOE, particularly in large biological molecules. In 2002, Kurt Wuthrich won the Nobel Prize in Chemistry for the development of nuclear magnetic resonance spectroscopy for determining the three-dimensional structure of biological macromolecules in solution, demonstrating how the 2D NOE method (NOESY) can be used to constrain the three-dimensional structures of large biological macromolecules. Professor Anil Kumar was the first to apply the two-dimensional Nuclear Overhauser Effect (2D-NOE now known as NOESY) experiment to a biomolecule, which opened the field for the determination of three-dimensional structures of biomolecules in solution by NMR spectroscopy.
Relaxation
The NOE and nuclear spin-lattice relaxation are closely related phenomena. For a single spin-½ nucleus in a magnetic field there are two energy levels that are often labeled α and β, which correspond to the two possible spin quantum states, +½ and −½, respectively. At thermal equilibrium, the population of the two energy levels is determined by the Boltzmann distribution with spin populations given by Pα and Pβ. If the spin populations are perturbed by an appropriate RF field at the transition energy frequency, the spin populations return to thermal equilibrium by a process called spin-lattice relaxation. The rate of transitions from α to β is proportional to the population of state α, Pα, and is a first order process with rate constant W. The condition where the spin populations are equalized by continuous RF irradiation (Pα = Pβ) is called saturation and the resonance disappears since transition probabilities depend on the population difference between the energy levels.
In the simplest case where the NOE is relevant, the resonances of two spin-½ nuclei, I and S, are chemically shifted but not J-coupled. The energy diagram for such a system has four energy levels that depend on the spin-states of I and S corresponding to αα, αβ, βα, and ββ, respectively. The Ws are the probabilities per unit time that a transition will occur between the four energy levels, or in other terms the rate at which the corresponding spin flips occur. There are two single quantum transitions, W1I, corresponding to αα ➞ βα and αβ ➞ ββ; W1S, corresponding to αα ➞ αβ and βα ➞ ββ; a zero quantum transition, W0, corresponding to βα ➞ αβ, and a double quantum transition, W2, corresponding to αα ➞ ββ.
While rf irradiation can only induce single-quantum transitions (due to so-called quantum mechanical selection rules) giving rise to observable spectral lines, dipolar relaxation may take place through any of the pathways. The dipolar mechanism is the only common relaxation mechanism that can cause transitions in which more than one spin flips. Specifically, the dipolar relaxation mechanism gives rise to transitions between the αα and ββ states (W2) and between the αβ and the βα states (W0).
Expressed in terms of their bulk NMR magnetizations, the experimentally observed steady-state NOE for nucleus I when the resonance of nucleus S is saturated is defined by the expression:
ηI(S) = (⟨Iz⟩ − I0)/I0
where I0 is the magnetization (resonance intensity) of nucleus I at thermal equilibrium. An analytical expression for the NOE can be obtained by considering all the relaxation pathways and applying the Solomon equations to obtain
ηI(S) = (γS/γI)(σIS/ρI)
where
ρI = W0 + 2W1I + W2 and σIS = W2 − W0.
ρI is the total longitudinal dipolar relaxation rate (1/T1) of spin I due to the presence of spin S, σIS is referred to as the cross-relaxation rate, and γI and γS are the magnetogyric ratios characteristic of the I and S nuclei, respectively.
Saturation of the degenerate W1S transitions disturbs the equilibrium populations so that Pαα = Pαβ and Pβα = Pββ. The system's relaxation pathways, however, remain active and act to re-establish an equilibrium, except that the W1S transitions are irrelevant because the population differences across these transitions are fixed by the RF irradiation, while the population difference across the W1I transitions does not change from its equilibrium value. This means that if only the single quantum transitions were active as relaxation pathways, saturating the S resonance would not affect the intensity of the I resonance. Therefore to observe an NOE on the resonance intensity of I, the contribution of W0 and W2 must be important. These pathways, known as cross-relaxation pathways, only make a significant contribution to the spin-lattice relaxation when the relaxation is dominated by dipole-dipole or scalar coupling interactions, but the scalar interaction is rarely important and is assumed to be negligible. In the homonuclear case where γI = γS, if W2 is the dominant relaxation pathway, then saturating S increases the intensity of the I resonance and the NOE is positive, whereas if W0 is the dominant relaxation pathway, saturating S decreases the intensity of the I resonance and the NOE is negative.
Molecular motion
Whether the NOE is positive or negative depends sensitively on the degree of rotational molecular motion. The three dipolar relaxation pathways contribute to differing extents to the spin-lattice relaxation depending on a number of factors. A key one is that the balance between W2, W1 and W0 depends crucially on the molecular rotational correlation time, τc, the time it takes a molecule to rotate one radian. NMR theory shows that the transition probabilities are related to τc and the Larmor precession frequencies, ω, by the relations:
where r is the distance separating the two spin-½ nuclei.
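The functional forms of these rates, with the numerical and physical-constant prefactors of the dipolar interaction omitted, are the standard spectral-density expressions:

```latex
W_0 \propto \frac{1}{r^{6}}\,\frac{\tau_c}{1+(\omega_I-\omega_S)^2\tau_c^2},\qquad
W_{1I} \propto \frac{1}{r^{6}}\,\frac{\tau_c}{1+\omega_I^2\tau_c^2},\qquad
W_2 \propto \frac{1}{r^{6}}\,\frac{\tau_c}{1+(\omega_I+\omega_S)^2\tau_c^2}
```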
For relaxation to occur, the frequency of molecular tumbling must match the Larmor frequency of the nucleus. In mobile solvents, molecular tumbling motion is much faster than the Larmor precession; this is the so-called extreme-narrowing limit, where ω0τc ≪ 1. Under these conditions the double-quantum relaxation W2 is more effective than W1 or W0, because τc and 2ω0 match better than τc and ω1. When W2 is the dominant relaxation process, a positive NOE results.
This expression shows that for the homonuclear case where I = S, most notably for 1H NMR, the maximum NOE that can be observed is 1/2, irrespective of the proximity of the nuclei. In the heteronuclear case where I ≠ S, the maximum NOE is given by (1/2)(γS/γI), which, when observing heteronuclei under conditions of broadband proton decoupling, can produce major sensitivity improvements. The most important example in organic chemistry is observation of 13C while decoupling 1H, which also saturates the 1H resonances. The value of γS/γI is close to 4, which gives a maximum NOE enhancement of 200%, yielding resonances 3 times as strong as they would be without NOE. In many cases, carbon atoms have an attached proton, which causes the relaxation to be dominated by dipolar relaxation and the NOE to be near maximum. For non-protonated carbon atoms the NOE enhancement is small, while for carbons that relax by relaxation mechanisms other than dipole-dipole interactions the NOE enhancement can be significantly reduced. This is one motivation for using deuteriated solvents (e.g. CDCl3) in 13C NMR. Since deuterium relaxes by the quadrupolar mechanism, there are no cross-relaxation pathways and the NOE is non-existent. Another important case is 15N, an example where the value of its magnetogyric ratio is negative. Often 15N resonances are reduced, or the NOE may actually null out the resonance when 1H nuclei are decoupled. It is usually advantageous to take such spectra with pulse techniques that involve polarization transfer from protons to the 15N to minimize the negative NOE.
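These maximum enhancements follow directly from tabulated gyromagnetic ratios; a minimal numerical check (rounded literature values, in units of 10⁷ rad s⁻¹ T⁻¹):

```python
# Maximum steady-state NOE enhancement, eta_max = gamma_S / (2 * gamma_I),
# for observation of nucleus I while saturating S = 1H.
gamma_1h, gamma_13c, gamma_15n = 26.752, 6.728, -2.713  # 1e7 rad s^-1 T^-1

print(f"13C observe, 1H saturate: eta_max = {gamma_1h / (2 * gamma_13c):+.2f}")
print(f"15N observe, 1H saturate: eta_max = {gamma_1h / (2 * gamma_15n):+.2f}")
# 13C: ~ +1.99, i.e. the ~200% enhancement quoted above;
# 15N: ~ -4.93, a negative NOE that can reduce or null resonances.
```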
Structure elucidation
While the relationship of the steady-state NOE to internuclear distance is complex, depending on relaxation rates and molecular motion, in many instances for small rapidly tumbling molecules in the extreme-narrowing limit, the semiquantitative nature of positive NOE's is useful for many structural applications often in combination with the measurement of J-coupling constants. For example, NOE enhancements can be used to confirm NMR resonance assignments, distinguish between structural isomers, identify aromatic ring substitution patterns and aliphatic substituent configurations, and determine conformational preferences.
Nevertheless, the inter-atomic distances derived from the observed NOE can often help to confirm the three-dimensional structure of a molecule. In this application, the NOE differs from the application of J-coupling in that the NOE occurs through space, not through chemical bonds. Thus, atoms that are in close proximity to each other can give a NOE, whereas spin coupling is observed only when the atoms are connected by 2–3 chemical bonds. However, the relation ηIS(max) = γS/2γI obscures how the NOE is related to internuclear distances because it applies only for the idealized case where the relaxation is 100% dominated by dipole-dipole interactions between two nuclei I and S. In practice, the value of ρI contains contributions from other competing mechanisms, which serve only to reduce the influence of W0 and W2 by increasing W1. Sometimes, for example, relaxation due to electron-nuclear interactions with dissolved oxygen or paramagnetic metal ion impurities in the solvent can prohibit the observation of weak NOE enhancements. The observed NOE in the presence of other relaxation mechanisms is given by
ηIS = (γS/γI) σIS/(ρIS + ρ⋇)
where ρ⋇ is the additional contribution to the total relaxation rate from relaxation mechanisms not involving cross relaxation. Using the same idealized two-spin model for dipolar relaxation in the extreme narrowing limit, the purely dipolar rates satisfy σIS = ρIS/2, with ρIS proportional to τc/rIS6.
It is easy to show that
ηIS = (γS/2γI) [1 + (rIS6/kτc)ρ⋇]−1
where k collects the physical constants of the dipolar interaction.
Thus, the two-spin steady-state NOE depends on internuclear distance only when there is a contribution from external relaxation. Bell and Saunders showed that following strict assumptions ρ⋇/τc is nearly constant for similar molecules in the extreme narrowing limit. Therefore, taking ratio's of steady-state NOE values can give relative values for the internuclear distance r. While the steady-state experiment is useful in many cases, it can only provide information on relative internuclear distances. On the other hand, the initial rate at which the NOE grows is proportional to rIS−6, which provides other more sophisticated alternatives for obtaining structural information via transient experiments such as 2D-NOESY.
Two-dimensional NMR
The motivations for using two-dimensional NMR for measuring NOE's are similar as for other 2-D methods. The maximum resolution is improved by spreading the affected resonances over two dimensions, therefore more peaks are resolved, larger molecules can be observed and more NOE's can be observed in a single measurement. More importantly, when the molecular motion is in the intermediate or slow motional regimes when the NOE is either zero or negative, the steady-state NOE experiment fails to give results that can be related to internuclear distances.
Nuclear Overhauser Effect Spectroscopy (NOESY) is a 2D NMR spectroscopic method used to identify nuclear spins undergoing cross-relaxation and to measure their cross-relaxation rates. Since 1H dipole-dipole couplings provide the primary means of cross-relaxation for organic molecules in solution, spins undergoing cross-relaxation are those close to one another in space. Therefore, the cross peaks of a NOESY spectrum indicate which protons are close to each other in space. In this respect, the NOESY experiment differs from the COSY experiment that relies on J-coupling to provide spin-spin correlation, and whose cross peaks indicate which 1H's are close to which other 1H's through the chemical bonds of the molecule.
The basic NOESY sequence consists of three 90° pulses. The first pulse creates transverse spin magnetization. The spins precess during the evolution time t1, which is incremented during the course of the 2D experiment. The second pulse produces longitudinal magnetization equal to the transverse magnetization component orthogonal to the pulse direction. Thus, the idea is to produce an initial condition for the mixing period τm. During the NOE mixing time, magnetization transfer via cross-relaxation can take place. For the basic NOESY experiment, τm is kept constant throughout the 2D experiment, but chosen for the optimum cross-relaxation rate and build-up of the NOE. The third pulse creates transverse magnetization from the remaining longitudinal magnetization. Data acquisition begins immediately following the third pulse and the transverse magnetization is observed as a function of the pulse delay time t2. The NOESY spectrum is generated by a 2D Fourier transform with respect to t1 and t2. A series of experiments are carried out with increasing mixing times, and the increase in NOE enhancement is followed. The closest protons show the most rapid build-up rates of the NOE.
Inter-proton distances can be determined from unambiguously assigned, well-resolved, high signal-to-noise NOESY spectra by analysis of cross peak intensities. These may be obtained by volume integration and can be converted into estimates of interproton distances. The distance between two atoms i and j can be calculated from the cross-peak volumes Vij and a scaling constant c,
rij = c Vij−1/6
where c can be determined based on measurements of known fixed distances. The range of distances can be reported based on known distances and volumes in the spectrum, which gives a mean and a standard deviation, a measurement of multiple regions in the NOESY spectrum showing no peaks, i.e. noise, and a measurement error. An error parameter is set so that all known distances fall within the resulting bounds, which yields corresponding lower and upper distance bounds for each NOESY cross-peak volume.
Such fixed distances depend on the system studied. For example, locked nucleic acids have many atoms whose distance varies very little in the sugar, which allows estimation of the glycosidic torsion angles, which allowed NMR to benchmark LNA molecular dynamics predictions. RNAs, however, have sugars that are much more conformationally flexible, and require wider estimations of low and high bounds.
In protein structural characterization, NOEs are used to create constraints on intramolecular distances. In this method, each proton pair is considered in isolation and NOESY cross peak intensities are compared with a reference cross peak from a proton pair of fixed distance, such as a geminal methylene proton pair or aromatic ring protons. This simple approach is reasonably insensitive to the effects of spin diffusion or non-uniform correlation times and can usually lead to definition of the global fold of the protein, provided a sufficiently large number of NOEs have been identified. NOESY cross peaks can be classified as strong, medium or weak and can be translated into upper distance restraints of around 2.5, 3.5 and 5.0 Å, respectively. Such constraints can then be used in molecular mechanics optimizations to provide a picture of the solution state conformation of the protein. Full structure determination relies on a variety of NMR experiments and optimization methods utilizing both chemical shift and NOESY constraints.
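A minimal sketch of this calibration logic follows, assuming the rij = c·Vij−1/6 relation above; the reference distance, peak volumes, and class thresholds are invented illustrative numbers, not measured data:

```python
# NOESY distance calibration via the sixth-root relation r = c * V**(-1/6).
def scaling_constant(r_ref_angstrom: float, v_ref: float) -> float:
    """Calibrate c from a proton pair of fixed, known distance
    (e.g. a geminal methylene pair)."""
    return r_ref_angstrom * v_ref ** (1.0 / 6.0)

def distance(c: float, volume: float) -> float:
    """Estimate an inter-proton distance from a cross-peak volume."""
    return c * volume ** (-1.0 / 6.0)

def upper_restraint(volume: float, strong: float, medium: float) -> float:
    """Map a peak volume to a strong/medium/weak upper distance bound (A)."""
    if volume >= strong:
        return 2.5
    if volume >= medium:
        return 3.5
    return 5.0

c = scaling_constant(1.8, 1.0e6)         # 1.8 A geminal reference pair
print(f"{distance(c, 2.5e5):.2f} A")      # weaker peak -> longer distance
print(upper_restraint(2.5e5, 8e5, 2e5))   # -> 3.5 A (medium) upper bound
```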
Heteronuclear NOE
Some experimental methods
Some examples of one and two-dimensional NMR experimental techniques exploiting the NOE include:
NOESY, Nuclear Overhauser effect Spectroscopy
HOESY, Heteronuclear Overhauser effect spectroscopy
ROESY, Rotational frame nuclear Overhauser effect spectroscopy
TRNOE, Transferred nuclear Overhauser effect
DPFGSE-NOE, Double pulsed field gradient spin echo NOE experiment
NOESY is used for the determination of the relative orientations of atoms in a molecule, for example a protein or other large biological molecule, producing a three-dimensional structure. HOESY measures NOESY-type cross-correlation between atoms of different elements. ROESY involves spin-locking the magnetization to prevent it from going to zero, and is applied to molecules for which regular NOESY is not applicable. TRNOE measures the NOE between two different molecules interacting in the same solution, as in a ligand binding to a protein. DPFGSE-NOE is a transient experiment that allows for suppression of strong signals and thus detection of very small NOEs.
Examples of nuclear Overhauser effect
The figure (top) displays how Nuclear Overhauser Effect Spectroscopy can elucidate the structure of a switchable compound. In this example, the proton designated as {H} shows two different sets of NOEs depending on the isomerization state (cis or trans) of the switchable azo groups. In the trans state, proton {H} is far from the phenyl group, showing blue-coloured NOEs; the cis state holds proton {H} in the vicinity of the phenyl group, resulting in the emergence of new NOEs (shown in red).
Another example (bottom) where the NOE is useful for assigning resonances and determining configuration is polysaccharides. For instance, complex glucans possess a multitude of overlapping signals, especially in a proton spectrum. Therefore, it is advantageous to utilize 2D NMR experiments, including NOESY, for the assignment of signals. See, for example, NOE of carbohydrates.
See also
Dynamic nuclear polarization
Magnetization
Nuclear magnetic resonance
Nuclear magnetic resonance spectroscopy
Nuclear magnetic resonance spectroscopy of proteins
Proton nuclear magnetic resonance
Spin polarization
Two-dimensional nuclear magnetic resonance spectroscopy
References
External links
Hans J. Reich: The Nuclear Overhauser Effect
Eugene E. Kwan: Lecture12: The Nuclear Overhauser Effect
Williams, Martin and Rovnyak Vol 2: R. R. Gil and A. Navarro-Vázquez: Chapter 1 Application of the Nuclear Overhauser Effect to the Structural Elucidation of Natural Products
James Keeler: 8 Relaxation
YouTube: James Keeler, Lecture 10, Relaxation II. 2013 Cambridge lecture on NOE
Nuclear magnetic resonance spectroscopy
Nuclear magnetic resonance
Chemical physics | Nuclear Overhauser effect | [
"Physics",
"Chemistry"
] | 4,326 | [
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Nuclear magnetic resonance",
"Nuclear magnetic resonance spectroscopy",
"nan",
"Nuclear physics",
"Spectroscopy",
"Chemical physics"
] |
336,138 | https://en.wikipedia.org/wiki/Amphidromic%20point | An amphidromic point, also called a tidal node, is a geographical location where there is little or no difference in sea height between high tide and low tide; it has zero tidal amplitude for one harmonic constituent of the tide. The tidal range (the peak-to-peak amplitude, or the height difference between high tide and low tide) for that harmonic constituent increases with distance from this point, though not uniformly. As such, the concept of amphidromic points is crucial to understanding tidal behaviour. The term derives from the Greek words amphi ("around") and dromos ("running"), referring to the rotary tides which circulate around amphidromic points. It was first discovered by William Whewell, who extrapolated the cotidal lines from the coast of the North Sea and found that the lines must meet at some point.
Amphidromic points occur because interference within oceanic basins, seas and bays, combined with the Coriolis effect, creates a wave pattern — called an amphidromic system — which rotates around the amphidromic point. At the amphidromic points of the dominant tidal constituent, there is almost no vertical change in sea level from tidal action; that is, there is little or no difference between high tide and low tide at these locations. There can still be tidal currents since the water levels on either side of the amphidromic point are not the same. A separate amphidromic system is created by each periodic tidal component.
In most locations the "principal lunar semi-diurnal", known as M2, is the largest tidal constituent. Cotidal lines connect points which reach high tide at the same time and low tide at the same time. In Figure 1, the low tide lags or leads by 1 hr 2 min from its neighboring lines. Where the lines meet are amphidromes, and the tide rotates around them; for example, along the Chilean coast, and from southern Mexico to Peru, the tide propagates southward, while from Baja California to Alaska the tide propagates northward.
Formation of amphidromic points
Tides are generated as a result of gravitational attraction by the Sun and Moon. This gravitational attraction results in a tidal force that acts on the ocean. The ocean reacts to this external forcing by generating, in particular relevant for describing tidal behaviour, Kelvin waves and Poincaré waves (also known as Sverdrup waves). These tidal waves can be considered wide, relative to the Rossby radius of deformation (~3000 km in the open ocean), and shallow, as the water depth (D, on average ~4 kilometre deep) in the ocean is much smaller (i.e. D/λ <1/20) than the wavelength (λ) which is in the order of thousands of kilometres.
In real oceans, the tides cannot endlessly propagate as progressive waves. The waves reflect due to changes in water depth (for example when entering shelf seas) and at coastal boundaries. The result is a reflected wave that propagates in the opposite direction to the incident wave. The combination of the reflected wave and the incident wave is the total wave. Due to resonance between the reflected and the incident wave, the amplitude of the total wave can either be suppressed or amplified. The points at which the two waves amplify each other are known as antinodes and the points at which the two waves cancel each other out are known as nodes. Figure 2 shows a λ/4 resonator. The first node is located at λ/4 of the total wave, followed by the next node recurring λ/2 farther, at 3λ/4.
A long, progressive wave travelling in a channel on a rotating Earth behaves differently from a wave travelling along a non-rotating channel. Due to the Coriolis force, the water in the ocean is deflected towards the right in the northern hemisphere and conversely in the southern hemisphere. This sideways component of the flow due to the Coriolis force causes a build-up of water that results in a pressure gradient. The resulting slope develops until it is in equilibrium with the Coriolis force, resulting in geostrophic balance. As a result of this geostrophic balance, Kelvin waves (originally described by Lord Kelvin) and Poincaré waves are generated. The amplitude of a Kelvin wave is highest near the coast and, when considering a wave in the northern hemisphere, decreases farther away from its right-hand coastal boundary. The propagation of Kelvin waves is always alongshore, and their amplitude falls off offshore on the scale of the Rossby radius of deformation. In contrast, Poincaré waves are able to propagate both alongshore as a free wave with a propagating wave pattern and cross-shore as a trapped wave with a standing wave pattern.
Infinitely long channel
In an infinitely long channel, which can be viewed upon as a simplified approximation of the Atlantic Ocean and Pacific Ocean, the tide propagates as an incident and a reflective Kelvin wave. The amplitude of the waves decreases further away from the coast and at certain points in the middle of the basin, the amplitude of the total wave becomes zero. Moreover, the phase of the tide seems to rotate around these points of zero amplitude. These points are called amphidromic points. The sense of rotation of the wave around the amphidromic point is in the direction of the Coriolis force; anticlockwise in the northern hemisphere and clockwise in the southern hemisphere.
Semi-enclosed basin
In a semi-enclosed basin, such as the North Sea, Kelvin waves, though being the dominant tidal wave propagating in alongshore direction, are not able to propagate cross shore as they rely on the presence of lateral boundaries or the equator. As such, the tidal waves observed cross-shore are predominantly Poincaré waves. The tides observed in a semi-enclosed basin are therefore chiefly the summation of the incident Kelvin wave, reflected Kelvin wave and cross-shore standing Poincaré wave. An animation of the tidal amplitude, tidal currents and its amphidromic behaviour is shown in Animation 2.
Position of amphidromic points
Figure 2 shows that the first node of the total wave is located at λ/4, with recurring nodes at intervals of λ/2. In an idealized situation, amphidromic points can be found at the position of these nodes of the total tidal wave. When neglecting friction, the position of the amphidromic points would be in the middle of the basin, as the initial amplitude and the amplitude decay of the incident wave and the reflected wave are equal; this can be seen in Animations 1 and 2. However, tidal waves in the ocean are subject to friction from the seabed and from interaction with coastal boundaries. Moreover, variation in water depth influences the spacing between amphidromic points.
Firstly, the distance between amphidromic points is dependent on the water depth, through the wavelength of the tidal wave:
$$\lambda = T\sqrt{gD}$$
where $g$ is the gravitational acceleration, $D$ is the water depth and $T$ is the period of the wave.
Locations with shallower water have their amphidromic points closer to each other, as the interval (λ/2) between the nodes decreases. Secondly, energy losses due to friction in shallow seas and coastal boundaries result in additional adjustments of the tidal pattern. Tidal waves are not perfectly reflected, resulting in energy loss which causes a smaller reflected wave compared to the incoming wave. Consequently, on the northern hemisphere, the amphidromic point will be displaced from the centre line of the channel towards the left of the direction of the incident wave.
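For example, the following short Python sketch evaluates the tidal wavelength λ = T√(gD) and the resulting λ/2 node spacing for the M2 constituent (period ≈ 12.42 h) at a few representative depths; the depth values are illustrative choices.

import math

def tidal_wavelength(depth_m, period_s):
    # Shallow-water tidal wavelength: lambda = T * sqrt(g * D).
    g = 9.81  # gravitational acceleration, m/s^2
    return period_s * math.sqrt(g * depth_m)

T_M2 = 12.42 * 3600            # M2 period in seconds
for depth in (20, 200, 4000):  # shelf sea vs open ocean depths, m
    lam = tidal_wavelength(depth, T_M2)
    print(f"D = {depth:5d} m: lambda = {lam/1e3:7.0f} km, node spacing = {lam/2e3:6.0f} km")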
The degree of displacement on the northern hemisphere for the first amphidrome is given by:
$$\gamma = \frac{\sqrt{gD}}{2f}\,\ln\alpha$$
where $\gamma$ is the displacement of the amphidrome from the centre of the channel ($\gamma = 0$ at the centre), $g$ is the gravitational acceleration, $D$ is the water depth, $f$ is the Coriolis frequency and $\alpha$ is the ratio between the amplitudes of the reflected wave and the incident wave. Because the reflected wave is smaller than the incident wave, $\alpha$ will be smaller than 1 and $\ln\alpha$ will be negative. Hence the amphidromic displacement $\gamma$ is to the left of the incident wave on the northern hemisphere.
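A short numerical sketch of this displacement follows; the depth, latitude and reflection coefficient are hypothetical, shelf-sea-like values chosen only to show the order of magnitude.

import math

def amphidrome_displacement(depth_m, alpha, latitude_deg):
    # gamma = sqrt(g*D) * ln(alpha) / (2*f); a negative value means a shift
    # to the left of the incident wave's direction (northern hemisphere).
    g = 9.81
    omega = 7.2921e-5  # Earth's rotation rate, rad/s
    f = 2 * omega * math.sin(math.radians(latitude_deg))  # Coriolis frequency
    return math.sqrt(g * depth_m) * math.log(alpha) / (2 * f)

# Illustrative case: D = 50 m, 55% of the incident amplitude reflected, 55°N.
print(amphidrome_displacement(50, alpha=0.55, latitude_deg=55) / 1e3, "km")  # about -55 km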
Furthermore, a study has shown that there is a pattern of amphidrome movement related to spring-neap cycles in the Irish Sea. The maximum displacement of the amphidrome from the centre coincides with spring tides, whereas the minimum occurs at neaps. During spring tides, more energy is absorbed from the tidal wave compared to neap tides. As a result, the reflection coefficient α is smaller and the displacement of the amphidromic point from the centre is larger. Similar amphidromic movement is expected in other seas where energy dissipation due to friction is high.
It can occur that the amphidromic point moves inland of the coastal boundary. In this case, the amplitude and the phase of the tidal wave will still rotate around an inland point, which is called a virtual or degenerate amphidrome.
Amphidromic points and sea level rise
The position of amphidromic points and their movement predominantly depends on the wavelength of the tidal wave and friction. As a result of enhanced greenhouse gas emissions, the oceans in the world are becoming subject to sea-level rise. As the water depth increases, the wavelength of the tidal wave will increase. Consequently, the position of the amphidromic points located at λ/4 in semi-enclosed systems will move further away from the cross-shore coastal boundary. Furthermore, amphidromic points will move further away from each other as the λ/2 interval increases. This effect will be more pronounced in shallow seas and coastal regions, as the relative water depth increase due to sea-level rise will be larger, when compared to the open ocean. Moreover, the amount of sea-level rise differs per region. Some regions will be subject to a higher rate of sea-level rise than other regions, and nearby amphidromic points will be more susceptible to changing location. Lastly, sea-level rise results in less bottom friction and therefore less energy dissipation. This causes the amphidromic points to move further away from the coastal boundaries and more towards the centre of its channel or basin.
In the M2 tidal constituent
Based on Figure 1, there are the following clockwise and anticlockwise amphidromic points:
Clockwise amphidromic points
north of the Seychelles
near Enderby Land
off Perth
east of New Guinea
south of Easter Island
west of the Galapagos Islands
north of Queen Maud Land
Counterclockwise amphidromic points
near Sri Lanka
north of New Guinea
at Tahiti
between Mexico and Hawaii
near the Leeward Islands
east of Newfoundland
midway between Rio de Janeiro and Angola
east of Iceland
Outside Eigersund in southwestern Norway
The islands of Madagascar and New Zealand are amphidromic points in the sense that the tide goes around them in about 12 and a half hours, but the amplitude of the tides on their coasts is in some places large.
See also
Kelvin wave
Tides
Theory of tides
References and notes
Wave mechanics
Tides | Amphidromic point | [
"Physics"
] | 2,269 | [
"Wave mechanics",
"Waves",
"Physical phenomena",
"Classical mechanics"
] |
336,254 | https://en.wikipedia.org/wiki/De%20Sitter%20universe | A de Sitter universe is a cosmological solution to the Einstein field equations of general relativity, named after Willem de Sitter. It models the universe as spatially flat and neglects ordinary matter, so the dynamics of the universe are dominated by the cosmological constant, thought to correspond to dark energy in our universe or the inflaton field in the early universe. According to the models of inflation and current observations of the accelerating universe, the concordance models of physical cosmology are converging on a consistent model where our universe was best described as a de Sitter universe at about a time after the fiducial Big Bang singularity, and far into the future.
Mathematical expression
A de Sitter universe has no ordinary matter content but has a positive cosmological constant ($\Lambda$) that sets the expansion rate, $H$. A larger cosmological constant leads to a larger expansion rate:
$$H \propto \sqrt{\Lambda}$$
where the constants of proportionality depend on conventions.
It is common to describe a patch of this solution as an expanding universe of the FLRW form where the scale factor is given by
$$a(t) = e^{Ht}$$
where the constant $H$ is the Hubble expansion rate and $t$ is time. As in all FLRW spaces, $a(t)$, the scale factor, describes the expansion of physical spatial distances.
Unique to universes described by the FLRW metric, a de Sitter universe has a Hubble Law that is not only consistent through all space, but also through all time (since the deceleration parameter is $q = -1$), thus satisfying the perfect cosmological principle that assumes isotropy and homogeneity throughout space and time. There are ways to cast de Sitter space with static coordinates (see de Sitter space), so unlike other FLRW models, de Sitter space can be thought of as a static solution to Einstein's equations even though the geodesics followed by observers necessarily diverge as expected from the expansion of physical spatial dimensions. As a model for the universe, de Sitter's solution was not considered viable for the observed universe until models for inflation and dark energy were developed. Before then, it was assumed that the Big Bang implied only an acceptance of the weaker cosmological principle, which holds that isotropy and homogeneity apply spatially but not temporally.
Relative expansion
The exponential expansion of the scale factor means that the physical distance between any two non-accelerating observers will eventually be growing faster than the speed of light. At this point those two observers will no longer be able to make contact. Therefore, any observer in a de Sitter universe would have cosmological horizons beyond which that observer can never see nor learn any information. If our universe is approaching a de Sitter universe then eventually we will not be able to observe any galaxies other than our own Milky Way (and any others in the gravitationally bound Local Group, assuming they were to somehow survive to that time without merging).
Role in the Benchmark Model
The Benchmark Model is a model consisting of a universe made of three components – radiation, ordinary matter, and dark energy – that fit current data about the history of the universe. These components make different contributions to the expansion of the universe as time elapses. Specifically, when the universe is radiation dominated, the expansion factor scales as $a(t) \propto t^{1/2}$, and when the universe is matter dominated, $a(t) \propto t^{2/3}$. Since both of these grow slower than the exponential, in the future the scale factor will be dominated by the exponential factor representing the pure de Sitter universe. The point at which this starts to occur is known as the matter–lambda equivalence point and the modern-day universe is believed to be relatively close to this point.
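To make the comparison concrete, the following minimal Python sketch tabulates the three growth laws; the Hubble rate and the time units are arbitrary, purely illustrative choices.

import numpy as np

# Growth of the scale factor for each component, normalized at t = 1:
# radiation a ~ t**0.5, matter a ~ t**(2/3), cosmological constant a ~ exp(H*t).
H = 1.0                        # Hubble rate in arbitrary inverse-time units
t = np.linspace(1.0, 5.0, 5)   # times in the same arbitrary units

for name, a in [("radiation", t**0.5),
                ("matter",    t**(2/3)),
                ("lambda",    np.exp(H * (t - t[0])))]:
    print(f"{name:9s}", np.round(a, 2))
# The exponential term eventually dominates both power laws, which is why
# the far future approaches a de Sitter universe.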
See also
Cosmic inflation
De Sitter space – for more mathematical properties
Deceleration parameter
Causal patch
Lambda-CDM model
References
Physical cosmology
Exact solutions in general relativity
Inflation (cosmology)
| De Sitter universe | [
"Physics",
"Astronomy",
"Mathematics"
] | 793 | [
"Exact solutions in general relativity",
"Theoretical physics",
"Mathematical objects",
"Astrophysics",
"Equations",
"Physical cosmology",
"Astronomical sub-disciplines"
] |
336,271 | https://en.wikipedia.org/wiki/Approximation | An approximation is anything that is intentionally similar but not exactly equal to something else.
Etymology and usage
The word approximation is derived from Latin approximatus, from proximus meaning very near and the prefix ad- (ad- before p becomes ap- by assimilation) meaning to. Words like approximate, approximately and approximation are used especially in technical or scientific contexts. In everyday English, words such as roughly or around are used with a similar meaning. It is often found abbreviated as approx.
The term can be applied to various properties (e.g., value, quantity, image, description) that are nearly, but not exactly correct; similar, but not exactly the same (e.g., the approximate time was 10 o'clock).
Although approximation is most often applied to numbers, it is also frequently applied to such things as mathematical functions, shapes, and physical laws.
In science, approximation can refer to using a simpler process or model when the correct model is difficult to use. An approximate model is used to make calculations easier. Approximations might also be used if incomplete information prevents use of exact representations.
The type of approximation used depends on the available information, the degree of accuracy required, the sensitivity of the problem to this data, and the savings (usually in time and effort) that can be achieved by approximation.
Mathematics
Approximation theory is a branch of mathematics, and a quantitative part of functional analysis. Diophantine approximation deals with approximations of real numbers by rational numbers.
Approximation usually occurs when an exact form or an exact numerical number is unknown or difficult to obtain. However some known form may exist and may be able to represent the real form so that no significant deviation can be found. For example, 1.5 × 10⁶ means that the true value of something being measured is 1,500,000 to the nearest hundred thousand (so the actual value is somewhere between 1,450,000 and 1,550,000); this is in contrast to the notation 1.500 × 10⁶, which means that the true value is 1,500,000 to the nearest thousand (implying that the true value is somewhere between 1,499,500 and 1,500,500).
Numerical approximations sometimes result from using a small number of significant digits. Calculations are likely to involve rounding errors and other approximation errors. Log tables, slide rules and calculators produce approximate answers to all but the simplest calculations. The results of computer calculations are normally an approximation expressed in a limited number of significant digits, although they can be programmed to produce more precise results. Approximation can occur when a decimal number cannot be expressed in a finite number of binary digits.
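For example, in Python the double-precision value stored for 0.1 can be inspected exactly, showing the binary approximation at work:

from decimal import Decimal
from fractions import Fraction

# 0.1 has no finite binary expansion, so the stored double is an approximation:
print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625
print(Fraction(0.1))  # 3602879701896397/36028797018963968, the exact stored value

# 0.5 = 2**-1 is exactly representable, so no approximation occurs:
print(Decimal(0.5))   # 0.5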
Related to approximation of functions is the asymptotic value of a function, i.e. the value as one or more of a function's parameters becomes arbitrarily large. For example, the sum $\tfrac{k}{2}+\tfrac{k}{4}+\tfrac{k}{8}+\cdots+\tfrac{k}{2^n}$ is asymptotically equal to $k$, since the partial sum equals $k(1-2^{-n})$, which approaches $k$ as $n$ grows. No consistent notation is used throughout mathematics and some texts use ≈ to mean approximately equal and ~ to mean asymptotically equal whereas other texts use the symbols the other way around.
Typography
The approximately equals sign, ≈, was introduced by British mathematician Alfred Greenhill in 1892, in his book Applications of Elliptic Functions.
LaTeX symbols
Symbols used in LaTeX markup.
≈ (\approx), usually to indicate approximation between numbers, like $\pi \approx 3.14159$.
≉ (\not\approx), usually to indicate that numbers are not approximately equal ($1 \not\approx 2$).
≃ (\simeq), usually to indicate asymptotic equivalence between functions, like $f(n) \simeq 3n^2$.
So writing $\pi \simeq 3.14$ would be wrong under this definition, despite wide use.
∼ (\sim), usually to indicate proportionality between functions; the same $f$ of the line above would be $f(n) \sim n^2$.
≅ (\cong), usually to indicate congruence between figures, like $\triangle ABC \cong \triangle A'B'C'$.
≂ (\eqsim), usually to indicate that two quantities are equal up to constants.
⪅ (\lessapprox) and ⪆ (\gtrapprox), usually to indicate that either the inequality holds or the two values are approximately equal.
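A minimal LaTeX fragment exercising these commands (note that \lessapprox, \gtrapprox and \eqsim require the amssymb package):

\documentclass{article}
\usepackage{amssymb} % provides \lessapprox, \gtrapprox, \eqsim
\begin{document}
$\pi \approx 3.14159$, $1 \not\approx 2$,          % approximate (in)equality
$n^2 + n \simeq n^2$, $f(n) \sim n^2$,             % asymptotic equivalence, proportionality
$\triangle ABC \cong \triangle A'B'C'$,            % congruence
$x \eqsim y$, $a \lessapprox b$, $c \gtrapprox d$. % equality up to constants; mixed relations
\end{document}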
Unicode
Symbols used to denote items that are approximately equal are wavy or dotted equals signs.
Science
Approximation arises naturally in scientific experiments. The predictions of a scientific theory can differ from actual measurements. This can be because there are factors in the real situation that are not included in the theory. For example, simple calculations may not include the effect of air resistance. Under these circumstances, the theory is an approximation to reality. Differences may also arise because of limitations in the measuring technique. In this case, the measurement is an approximation to the actual value.
The history of science shows that earlier theories and laws can be approximations to some deeper set of laws. Under the correspondence principle, a new scientific theory should reproduce the results of older, well-established, theories in those domains where the old theories work. The old theory becomes an approximation to the new theory.
Some problems in physics are too complex to solve by direct analysis, or progress could be limited by available analytical tools. Thus, even when the exact representation is known, an approximation may yield a sufficiently accurate solution while reducing the complexity of the problem significantly. Physicists often approximate the shape of the Earth as a sphere even though more accurate representations are possible, because many physical characteristics (e.g., gravity) are much easier to calculate for a sphere than for other shapes.
Approximation is also used to analyze the motion of several planets orbiting a star. This is extremely difficult due to the complex interactions of the planets' gravitational effects on each other. An approximate solution is effected by performing iterations. In the first iteration, the planets' gravitational interactions are ignored, and the star is assumed to be fixed. If a more precise solution is desired, another iteration is then performed, using the positions and motions of the planets as identified in the first iteration, but adding a first-order gravity interaction from each planet on the others. This process may be repeated until a satisfactorily precise solution is obtained.
The use of perturbations to correct for the errors can yield more accurate solutions. Simulations of the motions of the planets and the star also yields more accurate solutions.
The most common versions of philosophy of science accept that empirical measurements are always approximations — they do not perfectly represent what is being measured.
Law
Within the European Union (EU), "approximation" refers to a process through which EU legislation is implemented and incorporated within Member States' national laws, despite variations in the existing legal framework in each country. Approximation is required as part of the pre-accession process for new member states, and as a continuing process when required by an EU Directive. Approximation is a key word generally employed within the title of a directive, for example the Trade Marks Directive of 16 December 2015 serves "to approximate the laws of the Member States relating to trade marks". The European Commission describes approximation of law as "a unique obligation of membership in the European Union".
See also
Double tilde (disambiguation) – various meanings of ~~ or ≈
References
External links
Numerical analysis
Equivalence (mathematics)
Comparison (mathematical) | Approximation | [
"Mathematics"
] | 1,444 | [
"Computational mathematics",
"Arithmetic",
"Mathematical relations",
"Comparison (mathematical)",
"Numerical analysis",
"Approximations"
] |
336,940 | https://en.wikipedia.org/wiki/Girsanov%20theorem | In probability theory, Girsanov's theorem or the Cameron-Martin-Girsanov theorem explains how stochastic processes change under changes in measure. The theorem is especially important in the theory of financial mathematics as it explains how to convert from the physical measure, which describes the probability that an underlying instrument (such as a share price or interest rate) will take a particular value or values, to the risk-neutral measure which is a very useful tool for evaluating the value of derivatives on the underlying.
History
Results of this type were first proved by Cameron-Martin in the 1940s and by Igor Girsanov in 1960. They have been subsequently extended to more general classes of process culminating in the general form of Lenglart (1977).
Significance
Girsanov's theorem is important in the general theory of stochastic processes since it enables the key result that if Q is a measure that is absolutely continuous with respect to P then every P-semimartingale is a Q-semimartingale.
Statement of theorem
We state the theorem first for the special case when the underlying stochastic process is a Wiener process. This special case is sufficient for risk-neutral pricing in the Black–Scholes model.
Let $\{W_t\}$ be a Wiener process on the Wiener probability space $(\Omega, \mathcal{F}, P)$. Let $\{X_t\}$ be a measurable process adapted to the natural filtration of the Wiener process $\{\mathcal{F}^W_t\}$; we assume that the usual conditions have been satisfied.
Given an adapted process $X_t$, define
$$Z_t = \mathcal{E}(X)_t,$$
where $\mathcal{E}(X)$ is the stochastic exponential of $X$ with respect to $W$, i.e.
$$\mathcal{E}(X)_t = \exp\left(X_t - \frac{1}{2}[X]_t\right),$$
where $[X]_t$ denotes the quadratic variation of the process $X$.
If $Z_t$ is a martingale then a probability measure $Q$ can be defined on $(\Omega, \mathcal{F})$ such that the Radon–Nikodym derivative is
$$\left.\frac{dQ}{dP}\right|_{\mathcal{F}_t} = Z_t = \mathcal{E}(X)_t.$$
Then for each $t$ the measure $Q$ restricted to the unaugmented sigma fields $\mathcal{F}^W_t$ is equivalent to $P$ restricted to $\mathcal{F}^W_t$.
Furthermore, if $Y_t$ is a local martingale under $P$ then the process
$$\tilde{Y}_t = Y_t - \left[Y, X\right]_t$$
is a $Q$ local martingale on the filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, Q)$.
Corollary
If $X$ is a continuous process and $W$ is a Brownian motion under measure $P$ then
$$\tilde{W}_t = W_t - \left[W, X\right]_t$$
is a Brownian motion under $Q$.
The fact that $\tilde{W}_t$ is continuous is trivial; by Girsanov's theorem it is a $Q$ local martingale, and by computing its quadratic variation
$$\left[\tilde{W}\right]_t = \left[W\right]_t = t,$$
it follows by Lévy's characterization of Brownian motion that this is a $Q$ Brownian motion.
Comments
In many common applications, the process $X$ is defined by
$$X_t = \int_0^t Y_s\, dW_s.$$
For $X$ of this form, a sufficient condition for $\mathcal{E}(X)$ to be a martingale is Novikov's condition, which requires that
$$E_P\left[\exp\left(\frac{1}{2}\int_0^T Y_s^2\, ds\right)\right] < \infty.$$
The stochastic exponential $\mathcal{E}(X)$ is the process $Z$ which solves the stochastic differential equation
$$dZ_t = Z_t\, dX_t, \qquad Z_0 = 1.$$
The measure $Q$ constructed above is not equivalent to $P$ on $\mathcal{F}_\infty$, as this would only be the case if the Radon–Nikodym derivative were a uniformly integrable martingale, which the exponential martingale described above is not. On the other hand, as long as Novikov's condition is satisfied the measures are equivalent on $\mathcal{F}_T$.
Additionally, combining this with the above observation, in this case we see that the process
$$\tilde{W}_t = W_t - \int_0^t Y_s\, ds$$
for $t \le T$ is a $Q$ Brownian motion. This was Igor Girsanov's original formulation of the above theorem.
Application to finance
This theorem can be used to show that in the Black–Scholes model the unique risk-neutral measure, i.e. the measure in which the fair value of a derivative is the discounted expected value, $Q$, is specified by
$$\frac{dQ}{dP} = \mathcal{E}\left(-\int_0^{\cdot} \frac{\mu - r}{\sigma}\, dW_s\right)_T,$$
where $\mu$ is the drift and $\sigma$ the volatility of the stock, and $r$ is the risk-free rate.
Application to Langevin equations
Another application of this theorem, also given in the original paper of Igor Girsanov, is for stochastic differential equations. Specifically, let us consider the equation
$$dX_t = \mu(t, X_t)\, dt + dW_t,$$
where $W_t$ denotes a Brownian motion. Here $\mu$ is a fixed, deterministic function. We assume that this equation has a unique strong solution on $[0, T]$. In this case Girsanov's theorem may be used to compute functionals of $X_t$ directly in terms of a related functional for Brownian motion. More specifically, we have for any bounded functional $\Phi$ on continuous functions that
$$E[\Phi(X)] = E\left[\Phi(W)\,\exp\left(\int_0^T \mu(s, W_s)\, dW_s - \frac{1}{2}\int_0^T \mu(s, W_s)^2\, ds\right)\right].$$
This follows by applying Girsanov's theorem, and the above observation, to the martingale process
$$Z_t = \mathcal{E}\left(\int_0^{\cdot} \mu(s, W_s)\, dW_s\right)_t.$$
In particular, with the notation above, the process
$$Y_t = W_t - \int_0^t \mu(s, W_s)\, ds$$
is a $Q$ Brownian motion. Rewriting this in differential form as
$$dW_t = dY_t + \mu(t, W_t)\, dt,$$
we see that the law of $W_t$ under $Q$ solves the equation defining $X_t$, as $Y_t$ is a $Q$ Brownian motion. In particular, we see that the right-hand side may be written as $E_Q[\Phi(W)]$, where $Q$ is the measure taken with respect to the process $Y$, so the result now is just the statement of Girsanov's theorem.
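A numerical illustration of the identity above, as a minimal Monte Carlo sketch assuming the simplest case of a constant drift μ(t, x) = 0.5 and the terminal-value functional Φ(X) = X_T, for which E[Φ(X)] = 0.5·T exactly:

import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 100, 50_000
dt = T / n_steps
mu = lambda t, x: 0.5 * np.ones_like(x)  # constant drift, illustrative choice

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.hstack([np.zeros((n_paths, 1)), dW.cumsum(axis=1)])  # Brownian paths under P

# Discretized Girsanov density Z_T = exp(∫ mu dW - ½ ∫ mu² dt),
# with left-point evaluation of mu along each path.
t_grid = np.arange(n_steps) * dt
drift = mu(t_grid, W[:, :-1])
Z_T = np.exp((drift * dW).sum(axis=1) - 0.5 * (drift**2).sum(axis=1) * dt)

print(np.mean(W[:, -1] * Z_T))      # reweighted Brownian paths, ≈ 0.5
print(np.mean(0.5 * T + W[:, -1]))  # direct simulation of X_T,   ≈ 0.5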
A more general form of this application is that if both
$$dX_t = \mu(t, X_t)\, dt + dW_t, \qquad dY_t = \nu(t, Y_t)\, dt + dW_t$$
admit unique strong solutions on $[0, T]$, then for any bounded functional $\Phi$ on $C([0, T])$, we have that
$$E[\Phi(X)] = E\left[\Phi(Y)\,\exp\left(\int_0^T (\mu - \nu)(s, Y_s)\,\big(dY_s - \nu(s, Y_s)\, ds\big) - \frac{1}{2}\int_0^T (\mu - \nu)(s, Y_s)^2\, ds\right)\right].$$
See also
References
External links
Notes on Stochastic Calculus which contain a simple outline proof of Girsanov's theorem.
Stochastic processes
Mathematical theorems
Mathematical finance | Girsanov theorem | [
"Mathematics"
] | 966 | [
"Applied mathematics",
"nan",
"Mathematical problems",
"Mathematical theorems",
"Mathematical finance"
] |
337,083 | https://en.wikipedia.org/wiki/Particle%20swarm%20optimization | In computational science, particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
PSO is originally attributed to Kennedy, Eberhart and Shi and was first intended for simulating social behaviour, as a stylized representation of the movement of organisms in a bird flock or fish school. The algorithm was simplified and it was observed to be performing optimization. The book by Kennedy and Eberhart describes many philosophical aspects of PSO and swarm intelligence. An extensive survey of PSO applications is made by Poli. In 2017, a comprehensive review on theoretical and experimental works on PSO has been published by Bonyadi and Michalewicz.
PSO is a metaheuristic as it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. Also, PSO does not use the gradient of the problem being optimized, which means PSO does not require that the optimization problem be differentiable as is required by classic optimization methods such as gradient descent and quasi-Newton methods. However, metaheuristics such as PSO do not guarantee an optimal solution is ever found.
Algorithm
A basic variant of the PSO algorithm works by having a population (called a swarm) of candidate solutions (called particles). These particles are moved around in the search-space according to a few simple formulae. The movements of the particles are guided by their own best-known position in the search-space as well as the entire swarm's best-known position. When improved positions are being discovered these will then come to guide the movements of the swarm. The process is repeated and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered.
Formally, let f: ℝn → ℝ be the cost function which must be minimized. The function takes a candidate solution as an argument in the form of a vector of real numbers and produces a real number as output which indicates the objective function value of the given candidate solution. The gradient of f is not known. The goal is to find a solution a for which f(a) ≤ f(b) for all b in the search-space, which would mean a is the global minimum.
Let S be the number of particles in the swarm, each having a position xi ∈ ℝn in the search-space and a velocity vi ∈ ℝn. Let pi be the best known position of particle i and let g be the best known position of the entire swarm. A basic PSO algorithm to minimize the cost function is then:
for each particle i = 1, ..., S do
Initialize the particle's position with a uniformly distributed random vector: xi ~ U(blo, bup)
Initialize the particle's best known position to its initial position: pi ← xi
if f(pi) < f(g) then
update the swarm's best known position: g ← pi
Initialize the particle's velocity: vi ~ U(-|bup-blo|, |bup-blo|)
while a termination criterion is not met do:
for each particle i = 1, ..., S do
for each dimension d = 1, ..., n do
Pick random numbers: rp, rg ~ U(0,1)
Update the particle's velocity: vi,d ← w vi,d + φp rp (pi,d-xi,d) + φg rg (gd-xi,d)
Update the particle's position: xi ← xi + vi
if f(xi) < f(pi) then
Update the particle's best known position: pi ← xi
if f(pi) < f(g) then
Update the swarm's best known position: g ← pi
The values blo and bup represent the lower and upper boundaries of the search-space respectively. The w parameter is the inertia weight. The parameters φp and φg are often called cognitive coefficient and social coefficient.
The termination criterion can be the number of iterations performed, or a solution where the adequate objective function value is found. The parameters w, φp, and φg are selected by the practitioner and control the behaviour and efficacy of the PSO method (below).
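The pseudocode above translates almost line for line into a runnable implementation. The following minimal NumPy sketch follows the basic global-best variant described above; the parameter defaults are illustrative choices, not prescribed values.

import numpy as np

def pso(f, n_dim, b_lo, b_up, n_particles=40, n_iter=200,
        w=0.7, phi_p=1.5, phi_g=1.5, seed=0):
    # Minimize f: R^n -> R with the basic global-best PSO described above.
    rng = np.random.default_rng(seed)
    x = rng.uniform(b_lo, b_up, (n_particles, n_dim))   # positions
    span = abs(b_up - b_lo)
    v = rng.uniform(-span, span, (n_particles, n_dim))  # velocities
    p = x.copy()                                        # personal bests
    fp = np.apply_along_axis(f, 1, x)
    g = p[fp.argmin()].copy()                           # swarm best

    for _ in range(n_iter):
        rp = rng.random((n_particles, n_dim))
        rg = rng.random((n_particles, n_dim))
        v = w * v + phi_p * rp * (p - x) + phi_g * rg * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < fp
        p[improved], fp[improved] = x[improved], fx[improved]
        g = p[fp.argmin()].copy()
    return g, fp.min()

# Example: the sphere function has its global minimum f = 0 at the origin.
best_x, best_f = pso(lambda z: np.sum(z**2), n_dim=5, b_lo=-5.0, b_up=5.0)
print(best_f)  # close to 0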
Parameter selection
The choice of PSO parameters can have a large impact on optimization performance. Selecting PSO parameters that yield good performance has therefore been the subject of much research.
To prevent divergence ("explosion") the inertia weight must be smaller than 1. The two other parameters can be then derived thanks to the constriction approach, or freely selected, but the analyses suggest convergence domains to constrain them. Typical values are in $[0.4, 0.9]$ for the inertia weight, with acceleration coefficients around 1.5–2.0; the constriction-derived set $w \approx 0.7298$, $\varphi_p = \varphi_g \approx 1.4962$ is a common choice.
The PSO parameters can also be tuned by using another overlaying optimizer, a concept known as meta-optimization, or even fine-tuned during the optimization, e.g., by means of fuzzy logic.
Parameters have also been tuned for various optimization scenarios.
Neighbourhoods and topologies
The topology of the swarm defines the subset of particles with which each particle can exchange information. The basic version of the algorithm uses the global topology as the swarm communication structure. This topology allows all particles to communicate with all the other particles, so the whole swarm shares the same best position g from a single particle. However, this approach might lead the swarm to be trapped in a local minimum, thus different topologies have been used to control the flow of information among particles. For instance, in local topologies, particles only share information with a subset of particles. This subset can be a geometrical one – for example "the m nearest particles" – or, more often, a social one, i.e. a set of particles that does not depend on any distance. In such cases, the PSO variant is said to be local best (vs global best for the basic PSO).
A commonly used swarm topology is the ring, in which each particle has just two neighbours, but there are many others. The topology is not necessarily static. In fact, since the topology is related to the diversity of communication of the particles, some efforts have been done to create adaptive topologies (SPSO, APSO, stochastic star, TRIBES, Cyber Swarm, and C-PSO)
By using the ring topology, PSO can attain generation-level parallelism, significantly enhancing the evolutionary speed.
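A minimal sketch of how a ring neighbourhood replaces the single global best: each particle's social attractor becomes the best personal best among itself and its two ring neighbours. The fragment below is illustrative, assuming p and fp hold the personal-best positions and their objective values.

import numpy as np

def ring_local_bests(p, fp):
    # Local-best positions for a ring topology: particle i sees only
    # its two neighbours i-1 and i+1 (indices wrap around).
    n = len(fp)
    local_best = np.empty_like(p)
    for i in range(n):
        neighbourhood = [(i - 1) % n, i, (i + 1) % n]
        local_best[i] = p[neighbourhood[np.argmin(fp[neighbourhood])]]
    return local_best  # replaces the single global g in the velocity update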
Inner workings
There are several schools of thought as to why and how the PSO algorithm can perform optimization.
A common belief amongst researchers is that the swarm behaviour varies between exploratory behaviour, that is, searching a broader region of the search-space, and exploitative behaviour, that is, a locally oriented search so as to get closer to a (possibly local) optimum. This school of thought has been prevalent since the inception of PSO. This school of thought contends that the PSO algorithm and its parameters must be chosen so as to properly balance between exploration and exploitation to avoid premature convergence to a local optimum yet still ensure a good rate of convergence to the optimum. This belief is the precursor of many PSO variants, see below.
Another school of thought is that the behaviour of a PSO swarm is not well understood in terms of how it affects actual optimization performance, especially for higher-dimensional search-spaces and optimization problems that may be discontinuous, noisy, and time-varying. This school of thought merely tries to find PSO algorithms and parameters that cause good performance regardless of how the swarm behaviour can be interpreted in relation to e.g. exploration and exploitation. Such studies have led to the simplification of the PSO algorithm, see below.
Convergence
In relation to PSO the word convergence typically refers to two different definitions:
Convergence of the sequence of solutions (aka, stability analysis, converging) in which all particles have converged to a point in the search-space, which may or may not be the optimum,
Convergence to a local optimum where all personal bests p or, alternatively, the swarm's best known position g, approaches a local optimum of the problem, regardless of how the swarm behaves.
Convergence of the sequence of solutions has been investigated for PSO. These analyses have resulted in guidelines for selecting PSO parameters that are believed to cause convergence to a point and prevent divergence of the swarm's particles (particles do not move unboundedly and will converge to somewhere). However, the analyses were criticized by Pedersen for being oversimplified as they assume the swarm has only one particle, that it does not use stochastic variables and that the points of attraction, that is, the particle's best known position p and the swarm's best known position g, remain constant throughout the optimization process. However, it was shown that these simplifications do not affect the boundaries found by these studies for parameters where the swarm is convergent. Considerable effort has been made in recent years to weaken the modeling assumptions utilized during the stability analysis of PSO, with the most recent generalized result applying to numerous PSO variants and utilizing what was shown to be the minimal necessary modeling assumptions.
Convergence to a local optimum has been analyzed for PSO. It has been proven that PSO needs some modification to guarantee finding a local optimum.
This means that determining the convergence capabilities of different PSO algorithms and parameters still depends on empirical results. One attempt at addressing this issue is the development of an "orthogonal learning" strategy for an improved use of the information already existing in the relationship between p and g, so as to form a leading converging exemplar and to be effective with any PSO topology. The aims are to improve the performance of PSO overall, including faster global convergence, higher solution quality, and stronger robustness. However, such studies do not provide theoretical evidence to actually prove their claims.
Adaptive mechanisms
Without the need for a trade-off between convergence ('exploitation') and divergence ('exploration'), an adaptive mechanism can be introduced. Adaptive particle swarm optimization (APSO) features better search efficiency than standard PSO. APSO can perform global search over the entire search space with a higher convergence speed. It enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time, thereby improving the search effectiveness and efficiency at the same time. Also, APSO can act on the globally best particle to jump out of likely local optima. However, while APSO introduces new algorithm parameters, it does not add further design or implementation complexity.
Besides, through the utilization of a scale-adaptive fitness evaluation mechanism, PSO can efficiently address computationally expensive optimization problems.
Variants
Numerous variants of even a basic PSO algorithm are possible. For example, there are different ways to initialize the particles and velocities (e.g. start with zero velocities instead), how to dampen the velocity, only update pi and g after the entire swarm has been updated, etc. Some of these choices and their possible performance impact have been discussed in the literature.
A series of standard implementations have been created by leading researchers, "intended for use both as a baseline for performance testing of improvements to the technique, as well as to represent PSO to the wider optimization community. Having a well-known, strictly-defined standard algorithm provides a valuable point of comparison which can be used throughout the field of research to better test new advances." The latest is Standard PSO 2011 (SPSO-2011).
Hybridization
New and more sophisticated PSO variants are also continually being introduced in an attempt to improve optimization performance. There are certain trends in that research; one is to make a hybrid optimization method using PSO combined with other optimizers, e.g., combined PSO with biogeography-based optimization, and the incorporation of an effective learning method.
Alleviate premature convergence
Another research trend is to try to alleviate premature convergence (that is, optimization stagnation), e.g. by reversing or perturbing the movement of the PSO particles, another approach to deal with premature convergence is the use of multiple swarms (multi-swarm optimization). The multi-swarm approach can also be used to implement multi-objective optimization. Finally, there are developments in adapting the behavioural parameters of PSO during optimization.
Simplifications
Another school of thought is that PSO should be simplified as much as possible without impairing its performance; a general concept often referred to as Occam's razor. Simplifying PSO was originally suggested by Kennedy and has been studied more extensively, where it appeared that optimization performance was improved, and the parameters were easier to tune and they performed more consistently across different optimization problems.
Another argument in favour of simplifying PSO is that metaheuristics can only have their efficacy demonstrated empirically by doing computational experiments on a finite number of optimization problems. This means a metaheuristic such as PSO cannot be proven correct and this increases the risk of making errors in its description and implementation. A good example of this presented a promising variant of a genetic algorithm (another popular metaheuristic) but it was later found to be defective as it was strongly biased in its optimization search towards similar values for different dimensions in the search space, which happened to be the optimum of the benchmark problems considered. This bias was because of a programming error, and has now been fixed.
Bare Bones PSO
Initialization of velocities may require extra inputs. The Bare Bones PSO variant has been proposed in 2003 by James Kennedy, and does not need to use velocity at all.
In this variant of PSO one dispenses with the velocity of the particles and instead updates the positions of the particles using the following simple rule,
$$x_i \leftarrow N\!\left(\frac{p_i + g}{2},\ \lVert p_i - g \rVert\right),$$
where $x_i$ and $p_i$ are the position and the best position of the particle $i$; $g$ is the global best position; $N(\mu, \sigma)$ is the normal distribution with mean $\mu$ and standard deviation $\sigma$; and $\lVert \cdot \rVert$ signifies the norm of a vector.
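A minimal sketch of one Bare Bones update step follows. It uses Kennedy's per-dimension formulation, in which each coordinate is sampled with standard deviation $|p_{i,d} - g_d|$; the vector-norm variant in the formula above differs only in how the spread is computed.

import numpy as np

def bare_bones_step(p, g, rng):
    # One Bare Bones PSO position update: each particle is resampled from a
    # Gaussian centred midway between its personal best p_i and the global
    # best g, with per-dimension standard deviation |p_i - g|.
    mu = (p + g) / 2.0
    sigma = np.abs(p - g)
    return rng.normal(mu, sigma)  # new positions; no velocities involved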
Accelerated Particle Swarm Optimization
Another simpler variant is the accelerated particle swarm optimization (APSO), which also does not need to use velocity and can speed up the convergence in many applications. A simple demo code of APSO is available.
In this variant of PSO one dispenses with both the particle's velocity and the particle's best position. The particle position is updated according to the following rule,
$$x_i \leftarrow (1-\beta)\,x_i + \beta g + \alpha L \left(r - \tfrac{1}{2}\right),$$
where $r$ is a random uniformly distributed vector, $L$ is the typical length of the problem at hand, and $\alpha$ and $\beta$ are the parameters of the method. As a refinement of the method one can decrease $\alpha$ with each iteration, $\alpha_n = \alpha_0 \gamma^n$, where $n$ is the number of the iteration and $0 < \gamma < 1$ is the decrease control parameter.
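A sketch of this update under the form given above; the parameter values shown are illustrative assumptions, not prescribed settings.

import numpy as np

def apso_step(x, g, alpha, beta, L, rng):
    # One accelerated-PSO update: pull every particle toward the global
    # best g and add a uniform random kick scaled by the problem length L.
    r = rng.random(x.shape)  # uniform in [0, 1)
    return (1 - beta) * x + beta * g + alpha * L * (r - 0.5)

# Refinement: shrink the kick each iteration, alpha_n = alpha_0 * gamma**n,
# with e.g. alpha_0 = 0.2 and gamma = 0.97 (illustrative values).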
Multi-objective optimization
PSO has also been applied to multi-objective problems, in which the objective function comparison takes Pareto dominance into account when moving the PSO particles and non-dominated solutions are stored so as to approximate the Pareto front.
Binary, discrete, and combinatorial
As the PSO equations given above work on real numbers, a commonly used method to solve discrete problems is to map the discrete search space to a continuous domain, to apply a classical PSO, and then to demap the result. Such a mapping can be very simple (for example by just using rounded values) or more sophisticated.
However, it can be noted that the equations of movement make use of operators that perform four actions:
computing the difference of two positions. The result is a velocity (more precisely a displacement)
multiplying a velocity by a numerical coefficient
adding two velocities
applying a velocity to a position
Usually a position and a velocity are represented by n real numbers, and these operators are simply -, *, +, and again +. But all these mathematical objects can be defined in a completely different way, in order to cope with binary problems (or more generally discrete ones), or even combinatorial ones. One approach is to redefine the operators based on sets.
See also
Artificial bee colony algorithm
Bees algorithm
Derivative-free optimization
Multi-swarm optimization
Particle filter
Swarm intelligence
Fish School Search
Dispersive flies optimisation
References
External links
Particle Swarm Central is a repository for information on PSO. Several source codes are freely available.
A brief video of particle swarms optimizing three benchmark functions.
Simulation of PSO convergence in a two-dimensional space (Matlab).
Applications of PSO.
Links to PSO source code
Nature-inspired metaheuristics
Optimization algorithms and methods
Multi-agent systems | Particle swarm optimization | [
"Engineering"
] | 3,504 | [
"Artificial intelligence engineering",
"Multi-agent systems"
] |
337,196 | https://en.wikipedia.org/wiki/Neuroanatomy | Neuroanatomy is the study of the structure and organization of the nervous system. In contrast to animals with radial symmetry, whose nervous system consists of a distributed network of cells, animals with bilateral symmetry have segregated, defined nervous systems. Their neuroanatomy is therefore better understood. In vertebrates, the nervous system is segregated into the internal structure of the brain and spinal cord (together called the central nervous system, or CNS) and the series of nerves that connect the CNS to the rest of the body (known as the peripheral nervous system, or PNS). Breaking down and identifying specific parts of the nervous system has been crucial for figuring out how it operates. For example, much of what neuroscientists have learned comes from observing how damage or "lesions" to specific brain areas affects behavior or other neural functions.
For information about the composition of non-human animal nervous systems, see nervous system. For information about the typical structure of the Homo sapiens nervous system, see human brain or peripheral nervous system. This article discusses information pertinent to the study of neuroanatomy.
History
The first known written record of a study of the anatomy of the human brain is an ancient Egyptian document, the Edwin Smith Papyrus. In Ancient Greece, interest in the brain began with the work of Alcmaeon, who appeared to have dissected the eye and related the brain to vision. He also suggested that the brain, not the heart, was the organ that ruled the body (what Stoics would call the hegemonikon) and that the senses were dependent on the brain.
The debate regarding the hegemonikon persisted among ancient Greek philosophers and physicians for a very long time. Those who argued for the brain often contributed to the understanding of neuroanatomy as well. Herophilus and Erasistratus of Alexandria were perhaps the most influential with their studies involving dissecting human brains, affirming the distinction between the cerebrum and the cerebellum, and identifying the ventricles and the dura mater. The Greek physician and philosopher Galen, likewise, argued strongly for the brain as the organ responsible for sensation and voluntary motion, as evidenced by his research on the neuroanatomy of oxen, Barbary apes, and other animals.
The cultural taboo on human dissection continued for several hundred years afterward, during which no major progress was made in the understanding of the anatomy of the brain or of the nervous system. However, Pope Sixtus IV effectively revitalized the study of neuroanatomy by altering the papal policy and allowing human dissection. This resulted in a flush of new activity by artists and scientists of the Renaissance, such as Mondino de Luzzi, Berengario da Carpi, and Jacques Dubois, culminating in the work of Andreas Vesalius.
In 1664, Thomas Willis, a physician and professor at Oxford University, coined the term neurology when he published his text Cerebri Anatome, which is considered the foundation of modern neuroanatomy. The subsequent three hundred and fifty-some years have produced a great deal of documentation and study of the neural system.
Composition
At the tissue level, the nervous system is composed of neurons, glial cells, and extracellular matrix. Both neurons and glial cells come in many types (see, for example, the nervous system section of the list of distinct cell types in the adult human body). Neurons are the information-processing cells of the nervous system: they sense our environment, communicate with each other via electrical signals and chemicals called neurotransmitters which generally act across synapses (close contacts between two neurons, or between a neuron and a muscle cell; note also extrasynaptic effects are possible, as well as release of neurotransmitters into the neural extracellular space), and produce our memories, thoughts, and movements. Glial cells maintain homeostasis, produce myelin (oligodendrocytes, Schwann cells), and provide support and protection for the brain's neurons. Some glial cells (astrocytes) can even propagate intercellular calcium waves over long distances in response to stimulation, and release gliotransmitters in response to changes in calcium concentration. Wound scars in the brain largely contain astrocytes. The extracellular matrix also provides support on the molecular level for the brain's cells, carrying substances to and from the blood vessels.
At the organ level, the nervous system is composed of brain regions, such as the hippocampus in mammals or the mushroom bodies of the fruit fly. These regions are often modular and serve a particular role within the general systemic pathways of the nervous system. For example, the hippocampus is critical for forming memories in connection with many other cerebral regions. The peripheral nervous system also contains afferent or efferent nerves, which are bundles of fibers that originate from the brain and spinal cord, or from sensory or motor sorts of peripheral ganglia, and branch repeatedly to innervate every part of the body. Nerves are made primarily of the axons or dendrites of neurons (axons in case of efferent motor fibres, and dendrites in case of afferent sensory fibres of the nerves), along with a variety of membranes that wrap around and segregate them into nerve fascicles.
The vertebrate nervous system is divided into the central and peripheral nervous systems. The central nervous system (CNS) consists of the brain, retina, and spinal cord, while the peripheral nervous system (PNS) is made up of all the nerves and ganglia (packets of peripheral neurons) outside of the CNS that connect it to the rest of the body. The PNS is further subdivided into the somatic and autonomic nervous systems. The somatic nervous system is made up of "afferent" neurons, which bring sensory information from the somatic (body) sense organs to the CNS, and "efferent" neurons, which carry motor instructions out to the voluntary muscles of the body. The autonomic nervous system can work with or without the control of the CNS (that's why it is called 'autonomous'), and also has two subdivisions, called sympathetic and parasympathetic, which are important for transmitting motor orders to the body's basic internal organs, thus controlling functions such as heartbeat, breathing, digestion, and salivation. Autonomic nerves, unlike somatic nerves, contain only efferent fibers. Sensory signals coming from the viscera course into the CNS through the somatic sensory nerves (e.g., visceral pain), or through some particular cranial nerves (e.g., chemosensitive or mechanic signals).
Orientation in neuroanatomy
In anatomy in general and neuroanatomy in particular, several sets of topographic terms are used to denote orientation and location, which are generally referenced to the body or brain axis (see Anatomical terms of location). The axis of the CNS is often wrongly assumed to be more or less straight, but it actually always shows two ventral flexures (the cervical and cephalic flexures) and a dorsal flexure (the pontine flexure), all due to differential growth during embryogenesis. The pairs of terms used most commonly in neuroanatomy are:
Dorsal and ventral: Dorsal refers more or less to the top or upper side of the brain, which is symbolized by the roof plate, and ventral to the bottom or lower side, symbolized by the floor plate. These descriptors were originally used for dorsum and ventrum – back and belly – of the body; the belly of most animals is oriented towards the ground; the erect posture of humans places our ventral aspect anteriorly, and the dorsal aspect becomes posterior. The case of the head and the brain is peculiar, since the belly does not properly extend into the head, unless we assume that the mouth represents an extended belly element. Therefore, in common use, those brain parts that lie close to the base of the cranium, and through it to the mouth cavity, are called ventral – i.e., at its bottom or lower side, as defined above – whereas dorsal parts are closer to the enclosing cranial vault. Reference to the roof and floor plates of the brain is less prone to confusion, and also allows us to keep an eye on the axial flexures mentioned above. Dorsal and ventral are thus relative terms in the brain, whose exact meaning depends on the specific location.
Rostral and caudal: rostral refers in general anatomy to the front of the body (towards the nose, or rostrum in Latin), and caudal refers to the tail end of the body (towards the tail; cauda in Latin). The rostrocaudal dimension of the brain corresponds to its length axis, which runs across the cited flexures from the caudal tip of the spinal cord into a rostral end roughly at the optic chiasma. In the erect Man, the directional terms "superior" and "inferior" essentially refer to this rostrocaudal dimension, because our body and brain axes are roughly oriented vertically in the erect position. However, all vertebrates develop a very marked ventral kink in the neural tube that is still detectable in the adult central nervous system, known as the cephalic flexure. The latter bends the rostral part of the CNS at a 180-degree angle relative to the caudal part, at the transition between the forebrain (axis ending rostrally at the optic chiasma) and the brainstem and spinal cord (axis roughly vertical, but including additional minor kinks at the pontine and cervical flexures). These flexural changes in axial dimension are problematic when trying to describe relative position and sectioning planes in the brain. There is abundant literature that wrongly disregards the axial flexures and assumes a relatively straight brain axis.
Medial and lateral: medial refers to being close, or relatively closer, to the midline (the descriptor median means a position precisely at the midline). Lateral is the opposite (a position more or less separated away from the midline).
Note that such descriptors (dorsal/ventral, rostral/caudal; medial/lateral) are relative rather than absolute (e.g., a lateral structure may be said to lie medial to something else that lies even more laterally).
Commonly used terms for planes of orientation or planes of section in neuroanatomy are "sagittal", "transverse" or "coronal", and "axial" or "horizontal". Again in this case, the situation is different for swimming, creeping or quadrupedal (prone) animals than for Man, or other erect species, due to the changed position of the axis. Due to the axial brain flexures, no section plane ever achieves a complete section series in a selected plane, because some sections inevitably come out oblique or even perpendicular to it, as they pass through the flexures. Experience allows one to discern the portions that are cut as desired.
A mid-sagittal plane divides the body and brain into left and right halves; sagittal sections, in general, are parallel to this median plane, moving along the medial-lateral dimension (see the image above). The term sagittal refers etymologically to the median suture between the right and left parietal bones of the cranium, known classically as sagittal suture, because it looks roughly like an arrow by its confluence with other sutures (sagitta; arrow in Latin).
A section plane orthogonal to the axis of any elongated form is in principle held to be transverse (e.g., a transverse section of a finger or of the vertebral column); if there is no length axis, there is no way to define such sections, or there are infinite possibilities. Therefore, transverse body sections in vertebrates are parallel to the ribs, which are orthogonal to the vertebral column, which represents the body axis both in animals and man. The brain also has an intrinsic longitudinal axis – that of the primordial elongated neural tube – which becomes largely vertical with the erect posture of Man, similarly to the body axis, except at its rostral end, as commented above. This explains why transverse spinal cord sections are roughly parallel to our ribs, or to the ground. However, this is only true for the spinal cord and the brainstem, since the forebrain end of the neural axis bends crook-like during early morphogenesis into the chiasmatic hypothalamus, where it ends; the orientation of true transverse sections accordingly changes, and is no longer parallel to the ribs and ground, but perpendicular to them; lack of awareness of this morphologic brain peculiarity (present in all vertebrate brains without exceptions) has caused and still causes much erroneous thinking on forebrain parts. Acknowledging the singularity of rostral transverse sections, tradition has introduced a different descriptor for them, namely coronal sections. Coronal sections divide the forebrain from rostral (front) to caudal (back), forming a series orthogonal (transverse) to the local bent axis. The concept cannot be applied meaningfully to the brainstem and spinal cord, since there coronal section planes run parallel to the local axis, and are therefore horizontal relative to the axial dimension. In any case, the concept of 'coronal' sections is less precise than that of 'transverse', since coronal section planes are often used which are not truly orthogonal to the rostral end of the brain axis. The term is etymologically related to the coronal suture of the cranium, and thereby to the position where crowns are worn (Latin corona means crown). It is not clear what sort of crown was meant originally (maybe just a diadem), and this unfortunately leads to ambiguity in the section plane defined merely as coronal.
A coronal plane across the human head and brain is now conceived to be parallel to the face (the plane in which a king's crown sits on his head is not exactly parallel to the face, and exporting the concept to animals less frontally endowed than us is obviously even more problematic, but there is an implicit reference to the coronal suture of the cranium, which forms between the frontal and temporal/parietal bones, giving a sort of diadem configuration which is roughly parallel to the face). Coronal section planes thus essentially refer only to the head and brain, where a diadem makes sense, and not to the neck and body below.
Horizontal sections by definition are aligned (parallel) with the horizon. In swimming, creeping and quadrupedal animals the body axis itself is horizontal, and, thus, horizontal sections run along the length of the spinal cord, separating ventral from dorsal parts. Horizontal sections are orthogonal to both transverse and sagittal sections, and in theory, are parallel to the length axis. Due to the axial bend in the brain (forebrain), true horizontal sections in that region are orthogonal to coronal (transverse) sections (as is the horizon relative to the face).
According to these considerations, the three directions of space are represented precisely by the sagittal, transverse and horizontal planes, whereas coronal sections can be transverse, oblique or horizontal, depending on how they relate to the brain axis and its incurvations.
Tools
Modern developments in neuroanatomy are directly correlated to the technologies used to perform research. Therefore, it is necessary to discuss the various tools that are available. Many of the histological techniques used to study other tissues can be applied to the nervous system as well. However, there are some techniques that have been developed especially for the study of neuroanatomy.
Cell staining
In biological systems, staining is a technique used to enhance the contrast of particular features in microscopic images.
Nissl staining uses aniline basic dyes to intensely stain the acidic polyribosomes in the rough endoplasmic reticulum, which is abundant in neurons. This allows researchers to distinguish between different cell types (such as neurons and glia), and neuronal shapes and sizes, in various regions of the nervous system, revealing its cytoarchitecture.
The classic Golgi stain uses potassium dichromate and silver nitrate to selectively fill a few neural cells with a silver chromate precipitate (neurons or glia, but in principle, any cells can react similarly). This so-called silver chromate impregnation procedure stains the cell bodies and neurites (dendrites, axon) of some neurons entirely or partially in brown and black, allowing researchers to trace their paths up to their thinnest terminal branches in a slice of nervous tissue, thanks to the transparency consequent to the lack of staining in the majority of surrounding cells. More recently, Golgi-impregnated material has been adapted for electron-microscopic visualization of the unstained elements surrounding the stained processes and cell bodies, thus adding further resolving power.
Histochemistry
Histochemistry uses knowledge about the biochemical reaction properties of the chemical constituents of the brain (including notably enzymes) to apply selective methods of reaction to visualize where they occur in the brain and any functional or pathological changes. This applies importantly to molecules related to neurotransmitter production and metabolism, but applies likewise in many other directions; the resulting field is known as chemoarchitecture, or chemical neuroanatomy.
Immunocytochemistry is a special case of histochemistry that uses selective antibodies against a variety of chemical epitopes of the nervous system to selectively stain particular cell types, axonal fascicles, neuropiles, glial processes or blood vessels, or specific intracytoplasmic or intranuclear proteins and other immunogenic molecules, e.g., neurotransmitters. Immunoreacted transcription factor proteins reveal genomic readout in terms of translated protein. This immensely increases the capacity of researchers to distinguish between different cell types (such as neurons and glia) in various regions of the nervous system.
In situ hybridization uses synthetic RNA probes that attach (hybridize) selectively to complementary mRNA transcripts of DNA exons in the cytoplasm, to visualize genomic readout, that is, distinguish active gene expression, in terms of mRNA rather than protein. This allows identification histologically (in situ) of the cells involved in the production of genetically-coded molecules, which often represent differentiation or functional traits, as well as the molecular boundaries separating distinct brain domains or cell populations.
Genetically encoded markers
By expressing variable amounts of red, green, and blue fluorescent proteins in the brain, the so-called "brainbow" mutant mouse allows the combinatorial visualization of many different colors in neurons. This tags neurons with enough unique colors that they can often be distinguished from their neighbors with fluorescence microscopy, enabling researchers to map the local connections or mutual arrangement (tiling) between neurons.
Optogenetics uses transgenic constitutive and site-specific expression (normally in mice) of light-sensitive constructs that can be activated selectively by illumination with a light beam. This allows researchers to study axonal connectivity in the nervous system in a very discriminative way.
Non-invasive brain imaging
Magnetic resonance imaging has been used extensively to investigate brain structure and function non-invasively in healthy human subjects. An important example is diffusion tensor imaging, which relies on the restricted diffusion of water in tissue in order to produce axon images. In particular, water moves more quickly along the direction aligned with the axons, permitting the inference of their structure.
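To make that inference concrete: in diffusion tensor imaging, each voxel's measurements are summarized by a symmetric 3 × 3 tensor, and the eigenvector with the largest eigenvalue points along the dominant fiber direction. A minimal sketch with an invented tensor (all numerical values hypothetical):

```python
import numpy as np

# Hypothetical diffusion tensor (units of 1e-3 mm^2/s), with diffusion
# strongest along z, as in a vertically oriented axon bundle.
D = np.array([[0.3, 0.0, 0.0],
              [0.0, 0.3, 0.0],
              [0.0, 0.0, 1.7]])

# Eigenvalues are diffusivities along the principal axes; the eigenvector
# belonging to the largest eigenvalue is the inferred fiber direction.
eigvals, eigvecs = np.linalg.eigh(D)
fiber_direction = eigvecs[:, np.argmax(eigvals)]

# Fractional anisotropy (FA): 0 for isotropic diffusion, approaching 1
# when one direction dominates; a standard DTI summary statistic.
mean_d = eigvals.mean()
fa = np.sqrt(1.5 * np.sum((eigvals - mean_d) ** 2) / np.sum(eigvals ** 2))

print(fiber_direction)      # ~[0, 0, 1]
print(round(float(fa), 3))  # ~0.8, as in coherent white matter
```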
Viral-based methods
Certain viruses can replicate in brain cells and cross synapses. Thus, viruses modified to express markers (such as fluorescent proteins) can be used to trace connectivity between brain regions across multiple synapses. Two tracer viruses which replicate and spread transneuronally/transsynaptically are the Herpes simplex virus type 1 (HSV) and the Rhabdoviruses. Herpes simplex virus was used to trace the connections between the brain and the stomach, in order to examine the brain areas involved in viscero-sensory processing. Another study injected herpes simplex virus into the eye, thus allowing the visualization of the optical pathway from the retina into the visual system. An example of a tracer virus which replicates from the synapse to the soma is the pseudorabies virus. By using pseudorabies viruses with different fluorescent reporters, dual infection models can parse complex synaptic architecture.
Dye-based methods
Axonal transport methods use a variety of dyes (horseradish peroxidase variants, fluorescent or radioactive markers, lectins, dextrans) that are more or less avidly absorbed by neurons or their processes. These molecules are selectively transported anterogradely (from soma to axon terminals) or retrogradely (from axon terminals to soma), thus providing evidence of primary and collateral connections in the brain. These 'physiologic' methods (because properties of living, unlesioned cells are used) can be combined with other procedures, and have essentially superseded the earlier procedures studying degeneration of lesioned neurons or axons. Detailed synaptic connections can be determined by correlative electron microscopy.
Connectomics
Serial section electron microscopy has been extensively developed for use in studying nervous systems. For example, the first application of serial block-face scanning electron microscopy was on rodent cortical tissue. Circuit reconstruction from data produced by this high-throughput method is challenging, and the Citizen science game EyeWire has been developed to aid research in that area.
Computational neuroanatomy
Computational neuroanatomy is a field that utilizes various imaging modalities and computational techniques to model and quantify the spatiotemporal dynamics of neuroanatomical structures in both normal and clinical populations.
Model systems
Aside from the human brain, there are many other animals whose brains and nervous systems have received extensive study as model systems, including mice, zebrafish, fruit fly, and a species of roundworm called C. elegans. Each of these has its own advantages and disadvantages as a model system. For example, the C. elegans nervous system is extremely stereotyped from one individual worm to the next. This has allowed researchers using electron microscopy to map the paths and connections of all of the 302 neurons in this species. The fruit fly is widely studied in part because its genetics is very well understood and easily manipulated. The mouse is used because, as a mammal, its brain is more similar in structure to our own (e.g., it has a six-layered cortex), yet its genes can be easily modified and its reproductive cycle is relatively fast.
Caenorhabditis elegans
The brain is small and simple in some species, such as the nematode worm, where the body plan is quite simple: a tube with a hollow gut cavity running from the mouth to the anus, and a nerve cord with an enlargement (a ganglion) for each body segment, with an especially large ganglion at the front, called the brain. The nematode Caenorhabditis elegans has been studied because of its importance in genetics. In the early 1970s, Sydney Brenner chose it as a model system for studying the way that genes control development, including neuronal development. One advantage of working with this worm is that the nervous system of the hermaphrodite contains exactly 302 neurons, always in the same places, making identical synaptic connections in every worm. Brenner's team sliced worms into thousands of ultrathin sections and photographed every section under an electron microscope, then visually matched fibers from section to section, to map out every neuron and synapse in the entire body, to give a complete connectome of the nematode. Nothing approaching this level of detail is available for any other organism, and the information has been used to enable a multitude of studies that would not have been possible without it.
Drosophila melanogaster
Drosophila melanogaster is a popular experimental animal because it is easily cultured en masse from the wild, has a short generation time, and mutant animals are readily obtainable.
Arthropods have a central brain with three divisions and large optic lobes behind each eye for visual processing. The brain of a fruit fly contains several million synapses, compared to at least 100 billion in the human brain. Approximately two-thirds of the Drosophila brain is dedicated to visual processing.
Thomas Hunt Morgan started to work with Drosophila in 1906, and this work earned him the 1933 Nobel Prize in Physiology or Medicine for identifying chromosomes as the vector of inheritance for genes. Because of the large array of tools available for studying Drosophila genetics, they have been a natural subject for studying the role of genes in the nervous system. Its genome was sequenced and published in 2000. About 75% of known human disease genes have a recognizable match in the genome of fruit flies. Drosophila is being used as a genetic model for several human neurological diseases including the neurodegenerative disorders Parkinson's, Huntington's, spinocerebellar ataxia and Alzheimer's disease. In spite of the large evolutionary distance between insects and mammals, many basic aspects of Drosophila neurogenetics have turned out to be relevant to humans. For instance, the first biological clock genes were identified by examining Drosophila mutants that showed disrupted daily activity cycles.
See also
Connectogram
Outline of the human brain
Outline of brain mapping
List of regions in the human brain
Medical image computing
Neurology
Neurodiversity
Neuroscience
Computational anatomy
Citations
Sources
External links
Neuroanatomy, an annual journal of clinical neuroanatomy
Mouse, Rat, Primate and Human Brain Atlases (UCLA Center for Computational Biology)
brainmaps.org: High-Resolution Neuroanatomically-Annotated Brain Atlases
BrainInfo for Neuroanatomy
Brain Architecture Management System, several atlases of brain anatomy
White Matter Atlas, Diffusion Tensor Imaging Atlas of the Brain's White Matter Tracts
Nervous system | Neuroanatomy | [
"Biology"
] | 5,496 | [
"Organ systems",
"Nervous system"
] |
337,279 | https://en.wikipedia.org/wiki/Self-ionization%20of%20water | The self-ionization of water (also autoionization of water, autoprotolysis of water, autodissociation of water, or simply dissociation of water) is an ionization reaction in pure water or in an aqueous solution, in which a water molecule, H2O, deprotonates (loses the nucleus of one of its hydrogen atoms) to become a hydroxide ion, OH−. The hydrogen nucleus, H+, immediately protonates another water molecule to form a hydronium cation, H3O+. It is an example of autoprotolysis, and exemplifies the amphoteric nature of water.
History and notation
The self-ionization of water was first proposed in 1884 by Svante Arrhenius as part of the theory of ionic dissociation which he proposed to explain the conductivity of electrolytes including water. Arrhenius wrote the self-ionization as H2O <=> H+ + OH-. At that time, nothing was yet known of atomic structure or subatomic particles, so he had no reason to consider the formation of an H+ ion from a hydrogen atom on electrolysis as any less likely than, say, the formation of a Na+ ion from a sodium atom.
In 1923 Johannes Nicolaus Brønsted and Thomas Martin Lowry proposed that the self-ionization of water actually involves two water molecules: H2O + H2O <=> H3O+ + OH-. By this time the electron and the nucleus had been discovered and Rutherford had shown that a nucleus is very much smaller than an atom. A bare H+ ion would correspond to a proton with zero electrons. Brønsted and Lowry proposed that this ion does not exist free in solution, but always attaches itself to a water (or other solvent) molecule to form the hydronium ion H3O+ (or other protonated solvent).
Later spectroscopic evidence has shown that many protons are actually hydrated by more than one water molecule. The most descriptive notation for the hydrated ion is H+(aq), where aq (for aqueous) indicates an indefinite or variable number of water molecules. However the notations H+ and H3O+ are still also used extensively because of their historical importance. This article mostly represents the hydrated proton as H3O+, corresponding to hydration by a single water molecule.
Equilibrium constant
Chemically pure water has an electrical conductivity of 0.055 μS/cm. According to the theories of Svante Arrhenius, this must be due to the presence of ions. The ions are produced by the water self-ionization reaction, which applies to pure water and any aqueous solution:
H2O + H2O <=> H3O+ + OH−
Expressed with chemical activities a, instead of concentrations, the thermodynamic equilibrium constant for the water ionization reaction is:

Keq = a(H3O+) · a(OH−) / a(H2O)²
which is numerically equal to the more traditional thermodynamic equilibrium constant written as:

Keq = a(H+) · a(OH−) / a(H2O)
under the assumption that the sum of the chemical potentials of H+ and H2O is formally equal to the chemical potential of H3O+ at the same temperature and pressure.
Because most acid–base solutions are typically very dilute, the activity of water is generally approximated as being equal to unity, which allows the ionic product of water to be expressed as:

Kw = a(H3O+) · a(OH−)
In dilute aqueous solutions, the activities of solutes (dissolved species such as ions) are approximately equal to their concentrations. Thus, the ionization constant, dissociation constant, self-ionization constant, water ion-product constant or ionic product of water, symbolized by Kw, may be given by:

Kw = [H3O+][OH−]
where [H3O+] is the molarity (molar concentration) of hydrogen cation or hydronium ion, and [OH−] is the concentration of hydroxide ion. When the equilibrium constant is written as a product of concentrations (as opposed to activities) it is necessary to make corrections to the value of depending on ionic strength and other factors (see below).
At 24.87 °C and zero ionic strength, Kw is equal to 1.0 × 10⁻¹⁴. Note that as with all equilibrium constants, the result is dimensionless because the concentration is in fact a concentration relative to the standard state, which for H+ and OH− are both defined to be 1 molal (= 1 mol/kg) when molality is used or 1 molar (= 1 mol/L) when molar concentration is used. For many practical purposes, the molality (mol solute/kg water) and molar (mol solute/L solution) concentrations can be considered as nearly equal at ambient temperature and pressure if the solution density remains close to one (i.e., sufficiently diluted solutions and negligible effect of temperature changes). The main advantage of the molal concentration unit (mol/kg water) is to result in stable and robust concentration values which are independent of the solution density and volume changes (density depending on the water salinity (ionic strength), temperature and pressure); therefore, molality is the preferred unit used in thermodynamic calculations or in precise or less-usual conditions, e.g., for seawater with a density significantly different from that of pure water, or at elevated temperatures, like those prevailing in thermal power plants.
We can also define pKw = −log10 Kw (which is approximately 14 at 25 °C). This is analogous to the notations pH and pKa for an acid dissociation constant, where the symbol p denotes a cologarithm. The logarithmic form of the equilibrium constant equation is pKw = pH + pOH.
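A minimal numerical sketch of these relations, assuming the round value Kw = 1.0 × 10⁻¹⁴ (valid near 25 °C):

```python
import math

def p(x):
    """Cologarithm: p(x) = -log10(x)."""
    return -math.log10(x)

Kw = 1.0e-14          # ionic product of water near 25 degrees C (assumed)
pKw = p(Kw)           # = 14.0

# In pure water [H3O+] = [OH-], so pH = pOH = pKw / 2.
pH_neutral = pKw / 2  # = 7.0

# The identity pKw = pH + pOH recovers either quantity from the other;
# e.g., for a solution of pH 5.0:
pOH = pKw - 5.0       # = 9.0
print(pKw, pH_neutral, pOH)
```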
Dependence on temperature, pressure and ionic strength
The dependence of the water ionization on temperature and pressure has been investigated thoroughly. The value of pKw decreases as temperature increases from the melting point of ice to a minimum at c. 250 °C, after which it increases up to the critical point of water, c. 374 °C. It also decreases with increasing pressure.
With electrolyte solutions, the value of pKw is dependent on ionic strength of the electrolyte. Values for sodium chloride are typical for a 1:1 electrolyte. With 1:2 electrolytes, MX2, pKw decreases with increasing ionic strength.
The value of Kw is usually of interest in the liquid phase. Example values for superheated steam (gas) and supercritical water fluid are given in the table.
{| class="wikitable" style="text-align:center"
|+Comparison of pKw values for liquid water, superheated steam, and supercritical water.
|-
! !! 350 °C !! 400 °C !! 450 °C !! 500 °C !! 600 °C !! 800 °C
|-
! scope="row" |0.1 MPa
||| 47.961b || 47.873b || 47.638b || 46.384b ||40.785b
|-
! scope="row" |17 MPa
|11.920 (liquid)a || || || || ||
|-
! scope="row" |25 MPa
|11.551 (liquid)c ||16.566||18.135||18.758||19.425||20.113
|-
! scope="row" |100 MPa
|10.600 (liquid)c ||10.744||11.005||11.381||12.296||13.544
|-
! scope="row" |1000 MPa
|8.311 (liquid)c ||8.178||8.084||8.019||7.952||7.957
|}
Notes to the table. The values are for supercritical fluid except those marked: a at saturation pressure corresponding to 350 °C. b superheated steam. c compressed or subcooled liquid.
Isotope effects
Heavy water, D2O, self-ionizes less than normal water, H2O:

D2O + D2O <=> D3O+ + OD−
This is due to the equilibrium isotope effect, a quantum mechanical effect attributed to oxygen forming a slightly stronger bond to deuterium because the larger mass of deuterium results in a lower zero-point energy.
Expressed with activities a, instead of concentrations, the thermodynamic equilibrium constant for the heavy water ionization reaction is:

Keq = a(D3O+) · a(OD−) / a(D2O)²
Assuming the activity of the D2O to be 1, and assuming that the activities of the D3O+ and OD− are closely approximated by their concentrations:

Kw = [D3O+][OD−]
The following table compares the values of pKw for H2O and D2O.
{| class="wikitable" style="text-align:center"
|+pKw values for pure water
|-
! scope="row" |T/°C
|10||20|| 25||30|| 40 || 50
|-
! scope="row" |H2O
|14.535 || 14.167|| 13.997|| 13.830|| 13.535 ||13.262
|-
! scope="row" |D2O
|15.439||15.049||14.869||14.699||14.385|| 14.103
|}
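As a sketch of how these tabulated values translate into the temperature dependence of the neutral point (using the relation pH = pKw/2 for a neutral solution, discussed further below):

```python
# pKw values transcribed from the table above.
pKw_H2O = {10: 14.535, 20: 14.167, 25: 13.997, 30: 13.830, 40: 13.535, 50: 13.262}
pKw_D2O = {10: 15.439, 20: 15.049, 25: 14.869, 30: 14.699, 40: 14.385, 50: 14.103}

# In a neutral solution the two ion concentrations are equal, so the
# neutral pH (or pD) is half of pKw and shifts with temperature:
for T in sorted(pKw_H2O):
    print(f"{T:2d} C  neutral pH(H2O) = {pKw_H2O[T]/2:.3f}  "
          f"neutral pD(D2O) = {pKw_D2O[T]/2:.3f}")
# At 25 C: neutral H2O sits near pH 7.00, neutral D2O near pD 7.43.
```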
Ionization equilibria in water–heavy water mixtures
In water–heavy water mixtures, several ionization equilibria coexist, involving the species H2O, HDO, D2O, H3O+, D3O+, H2DO+, HD2O+, HO−, and DO−.
Mechanism
The rate of reaction for the ionization reaction
2 H2O → H3O+ + OH−
depends on the activation energy, ΔE‡. According to the Boltzmann distribution, the proportion of water molecules that have sufficient energy, due to thermal population, is given by

N/N₀ = exp(−ΔE‡ / kT)
where k is the Boltzmann constant and T the absolute temperature. Thus some dissociation can occur because sufficient thermal energy is available. The following sequence of events has been proposed on the basis of electric field fluctuations in liquid water. Random fluctuations in molecular motions occasionally (about once every 10 hours per water molecule) produce an electric field strong enough to break an oxygen–hydrogen bond, resulting in a hydroxide (OH−) and hydronium ion (H3O+); the hydrogen nucleus of the hydronium ion travels along water molecules by the Grotthuss mechanism and a change in the hydrogen bond network in the solvent isolates the two ions, which are stabilized by solvation. Within 1 picosecond, however, a second reorganization of the hydrogen bond network allows rapid proton transfer down the electric potential difference and subsequent recombination of the ions. This timescale is consistent with the time it takes for hydrogen bonds to reorientate themselves in water.
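As an illustrative sketch of the Boltzmann factor, one can plug in a hypothetical activation energy; the ~80 kJ/mol used below is merely the free-energy scale implied by pKw ≈ 14 at room temperature (ΔG = ln(10)·RT·pKw), not a measured barrier:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def boltzmann_fraction(dE_kJ_per_mol, T):
    """Fraction of molecules with thermal energy >= dE."""
    dE = dE_kJ_per_mol * 1e3 / N_A  # joules per molecule
    return math.exp(-dE / (k_B * T))

# Hypothetical barrier of ~80 kJ/mol at 298.15 K:
print(boltzmann_fraction(80.0, 298.15))  # ~1e-14, the order of Kw
```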
The inverse recombination reaction
H3O+ + OH− → 2 H2O
is among the fastest chemical reactions known, with a reaction rate constant on the order of 10¹¹ M⁻¹s⁻¹ at room temperature. Such a rapid rate is characteristic of a diffusion-controlled reaction, in which the rate is limited by the speed of molecular diffusion.
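A rough order-of-magnitude sketch of what such a rate constant implies for neutral water (the value 1.3 × 10¹¹ M⁻¹s⁻¹ below is an assumed, commonly quoted figure):

```python
k_recomb = 1.3e11  # M^-1 s^-1; assumed diffusion-limited rate constant
c = 1.0e-7         # [H3O+] = [OH-] in neutral water, mol/L

rate = k_recomb * c * c     # recombination rate, mol L^-1 s^-1
tau = 1.0 / (k_recomb * c)  # mean lifetime of an ion before recombining

print(f"rate = {rate:.1e} M/s, ion lifetime ~ {tau:.1e} s")
# ~1.3e-3 M/s and ~8e-5 s: an individual ion survives only tens of
# microseconds before recombining.
```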
Relationship with the neutral point of water
Water molecules dissociate into equal amounts of H3O+ and OH−, so their concentrations are almost exactly 1.0 × 10⁻⁷ mol dm⁻³ at 25 °C and 0.1 MPa. A solution in which the H3O+ and OH− concentrations equal each other is considered a neutral solution. In general, the pH of the neutral point is numerically equal to pKw/2.
Pure water is neutral, but most water samples contain impurities. If an impurity is an acid or base, this will affect the concentrations of hydronium ion and hydroxide ion. Water samples that are exposed to air will absorb some carbon dioxide to form carbonic acid (H2CO3) and the concentration of H3O+ will increase due to the reaction H2CO3 + H2O = HCO3− + H3O+. The concentration of OH− will decrease in such a way that the product [H3O+][OH−] remains constant for fixed temperature and pressure. Thus these water samples will be slightly acidic. If a pH of exactly 7.0 is required, it must be maintained with an appropriate buffer solution.
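As a worked sketch (the pH of 5.6 below is an assumed, typical value for water equilibrated with atmospheric CO2):

```python
Kw = 1.0e-14  # ionic product near 25 degrees C (assumed)
pH = 5.6      # assumed pH of CO2-equilibrated water

H3O = 10 ** (-pH)  # ~2.5e-6 M
OH = Kw / H3O      # ~4.0e-9 M, depressed below the neutral 1e-7 M

print(H3O, OH, H3O * OH)  # the product recovers Kw, as required
```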
See also
Acid–base reaction
Chemical equilibrium
Molecular autoionization (of various solvents)
Standard hydrogen electrode
References
External links
General Chemistry – Autoionization of Water
Ionization
Water chemistry
Equilibrium chemistry
Water
de:Protolyse#Autoprotolyse | Self-ionization of water | [
"Physics",
"Chemistry"
] | 2,675 | [
"Ionization",
"Acid–base chemistry",
"Physical phenomena",
"Equilibrium chemistry",
"nan"
] |
337,301 | https://en.wikipedia.org/wiki/National%20Ignition%20Facility | The National Ignition Facility (NIF) is a laser-based inertial confinement fusion (ICF) research device, located at Lawrence Livermore National Laboratory in Livermore, California, United States. NIF's mission is to achieve fusion ignition with high energy gain. It achieved the first instance of scientific breakeven controlled fusion in an experiment on December 5, 2022, with an energy gain factor of 1.5. It supports nuclear weapon maintenance and design by studying the behavior of matter under the conditions found within nuclear explosions.
NIF is the largest and most powerful ICF device built to date. The basic ICF concept is to squeeze a small amount of fuel to reach the pressure and temperature necessary for fusion. NIF hosts the world's most energetic laser. The laser indirectly heats the outer layer of a small sphere. The energy is so intense that it causes the sphere to implode, squeezing the fuel inside. The implosion reaches a peak speed of several hundred kilometers per second, raising the fuel density from about that of water to about 100 times that of lead. The delivery of energy and the adiabatic process during implosion raise the temperature of the fuel to hundreds of millions of degrees. At these temperatures, fusion processes occur in the tiny interval before the fuel explodes outward.
Construction on the NIF began in 1997. NIF was completed five years behind schedule and cost almost four times its original budget. Construction was certified complete on March 31, 2009, by the U.S. Department of Energy. The first large-scale experiments were performed in June 2009 and the first "integrated ignition experiments" (which tested the laser's power) were declared completed in October 2010.
From 2009 to 2012 experiments were conducted under the National Ignition Campaign, with the goal of reaching ignition just after the laser reached full power, some time in the second half of 2012. The campaign officially ended in September 2012, at about the conditions needed for ignition. Thereafter NIF has been used primarily for materials science and weapons research. In 2021, after improvements in fuel target design, NIF produced 70% of the energy of the laser, beating the record set in 1997 by the JET reactor at 67% and achieving a burning plasma. On December 5, 2022, after further technical improvements, NIF reached "ignition", or scientific breakeven, for the first time, achieving a 154% energy yield compared to the input laser energy. While this was scientifically a success, the experiment in practice produced less than 1% of the energy the facility used to create it: 3.15 MJ of fusion energy was yielded from 2.05 MJ of laser input, but producing that 2.05 MJ of laser light consumed about 300 MJ in the facility.
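The distinction between target gain and facility-level gain can be made concrete with the quoted figures (a simple sketch; values as reported above):

```python
E_laser = 2.05      # MJ of UV laser energy delivered to the target
E_fusion = 3.15     # MJ of fusion yield
E_facility = 300.0  # MJ drawn by the facility to fire the laser

target_gain = E_fusion / E_laser       # ~1.54: "ignition" / scientific breakeven
facility_gain = E_fusion / E_facility  # ~0.0105: about 1% at the wall plug

print(round(target_gain, 2), round(facility_gain, 4))
```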
Inertial confinement fusion basics
Inertial confinement fusion (ICF) devices use intense energy to rapidly heat the outer layers of a target in order to compress it. Nuclear fission provides the energy source for thermonuclear warheads, while sources such as laser beams and particle beams are used in non-weapon devices.
The target is a small spherical pellet containing a few milligrams of fusion fuel, typically a mix of deuterium (D) and tritium (T), as this composition has the lowest ignition temperature.
The lasers can either heat the surface of the fuel pellet directly – known as direct drive – or heat the inner surface of a hollow metal cylinder around the pellet – known as indirect drive. In the indirect drive case, the cylinder, called a hohlraum (German for 'hollow room' or 'cavity'), becomes hot enough to re-emit the energy as even higher frequency X-rays. These X-rays, which are more symmetrically distributed than the original laser light, heat the surface of pellet.
In either case, the material on the outside of the pellet is turned into a plasma, which explodes away from the surface. The rest of the pellet is driven inward on all sides, into a small volume of extremely high density. The surface explosion creates shock waves that travel inward. At the center of the fuel, a small volume is further heated and compressed. When the temperature and density are high enough, fusion reactions occur. The energy must be delivered quickly and spread extremely evenly across the target's outer surface in order to compress the fuel symmetrically.
The reactions release high-energy particles, some of which, primarily alpha particles, collide with unfused fuel and heat it further, potentially triggering additional fusion. At the same time, the fuel is also losing heat through x-ray losses and hot electrons leaving the fuel area. Thus the rate of alpha heating must be greater than the loss rate, termed bootstrapping. Given the right conditions—high enough density, temperature, and duration—bootstrapping results in a chain reaction, burning outward from the center. This is known as ignition, which fuses a significant portion of the fuel and releases large amounts of energy.
As of 1998, most ICF experiments had used laser drivers. Other drivers have been examined, such as heavy ions driven by particle accelerators.
Design
System
NIF primarily uses the indirect drive method of operation, in which the laser heats a small metal cylinder surrounding the capsule inside it, which then emits X-rays that heat the fuel pellet. Experimental systems, including the OMEGA and Nova lasers, validated this approach. The NIF's high power supports a much larger target than OMEGA or Nova; the baseline pellet design is about 2 mm in diameter. It is chilled to about 18 kelvin (−255 °C) and lined with a layer of frozen deuterium–tritium (DT) fuel. The hollow interior contains a small amount of DT gas.
In a typical experiment, the laser generates 3 MJ of infrared laser energy of a possible 4 MJ. About 1.5 MJ remains after conversion to UV, and another 15 percent is lost in the hohlraum. About 15 percent of the resulting x-rays, about 150 kJ, are absorbed by the target's outer layers. The coupling between the capsule and the x-rays is lossy, and ultimately only about 10 to 14 kJ of energy is deposited in the fuel.
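A rough ledger of this energy chain, following the approximate figures in the text (rounded; not an exact accounting):

```python
# Approximate energy budget for an indirect-drive shot, in megajoules.
E_ir = 3.0            # infrared laser energy generated
E_uv = 1.5            # remaining after conversion to ultraviolet
E_xray = E_uv * 0.85  # after ~15% hohlraum loss
E_capsule = 0.15      # ~15% of x-rays absorbed by the capsule (text figure)
E_fuel = 0.012        # ~10-14 kJ finally deposited in the fuel

for label, e in [("IR laser", E_ir), ("UV laser", E_uv),
                 ("x-rays", E_xray), ("capsule", E_capsule),
                 ("fuel", E_fuel)]:
    print(f"{label:9s} {e * 1000:7.0f} kJ")
# Each stage discards most of the energy, which is why multi-megajoule
# lasers are needed to deposit ~10 kJ in milligrams of fuel.
```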
The fuels in the center of the target are compressed to a density of about 1000 g/cm3. For comparison, lead has a density of about 11 g/cm3. The pressure is the equivalent of 300 billion atmospheres.
Before NIF was constructed, it was expected based on simulations that 10–15 MJ of fusion energy would be released, resulting in a net fusion energy gain, denoted Q, of about 5–8 (fusion energy out/UV laser energy in). Due to the design of the target chamber, the baseline design limited the maximum possible fusion energy release to 45 MJ, equivalent to about 11 kg of TNT exploding.
When NIF was built and used in 2011, the fusion energy was far lower than expected – less than 1 kJ. Performance was gradually improved until, as of 2024, the fusion energy routinely exceeded 2 MJ.
To be useful for energy production, a fusion facility must produce fusion output at least an order of magnitude more than the energy used to power the laser amplifiers – 400 MJ in the case of NIF. Commercial laser fusion systems would use much more efficient diode-pumped solid state lasers, where wall-plug efficiencies of 10 percent have been demonstrated, and efficiencies 16–18 percent were expected with advanced concepts under development in 1996.
Laser
As of 2010 NIF aimed to create a single 500 terawatt (TW) peak flash of light that reaches the target from numerous directions within a few picoseconds. The design uses 192 beamlines in a parallel system of flashlamp-pumped, neodymium-doped phosphate glass lasers.
To ensure that the output of the beamlines is uniform, the laser is amplified from a single source in the Injection Laser System (ILS). This starts with a low-power flash of 1053-nanometer (nm) infrared light generated in an ytterbium-doped optical fiber laser termed the Master Oscillator. Its light is split and directed into 48 Preamplifier Modules (PAMs). Each PAM conducts a two-stage amplification process via xenon flash lamps. The first stage is a regenerative amplifier in which the pulse circulates 30 to 60 times, increasing its energy from nanojoules to tens of millijoules. The second stage sends the light four times through a circuit containing a neodymium glass amplifier similar to (but much smaller than) the ones used in the main beamlines, boosting the millijoules to about 6 joules. According to LLNL, designing the PAMs was one of the major challenges. Subsequent improvements allowed them to surpass their initial design goals.
The main amplification takes place in a series of glass amplifiers located at one end of the beamlines. Before firing, the amplifiers are first optically pumped by a total of 7,680 flash lamps. The lamps are powered by a capacitor bank that stores 400 MJ (110 kWh). When the wavefront passes through them, the amplifiers release some of the energy stored in them into the beam. The beams are sent through the main amplifier four times, using an optical switch located in a mirrored cavity. These amplifiers boost the original 6 J to a nominal 4 MJ. Given the time scale of a few nanoseconds, the peak UV power delivered to the target reaches 500 TW.
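The quoted figures can be cross-checked with simple arithmetic (the pulse duration below is an assumed effective value, chosen only to illustrate the quoted peak power):

```python
E_bank = 400e6            # J stored in the capacitor bank
print(E_bank / 3.6e6)     # = 111.1 kWh, matching the quoted ~110 kWh

E_pulse = 1.8e6           # J of UV light delivered (nominal, from the text)
t_pulse = 3.6e-9          # s; assumed effective pulse duration
print(E_pulse / t_pulse)  # = 5e14 W = 500 TW peak power
```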
Near the center of each beamline, and taking up the majority of the total length, are spatial filters. These consist of long tubes with small telescopes at the end that focus the beam to a tiny point in the center of the tube, where a mask cuts off any stray light outside the focal point. The filters ensure that the beam image is extremely uniform. Spatial filters were a major step forward. They were introduced in the Cyclops laser, an earlier LLNL experiment.
The end-to-end length of the path the laser beam travels, including switches, is about 1,500 meters. The various optical elements in the beamlines are generally packaged into Line Replaceable Units (LRUs), standardized boxes about the size of a vending machine that can be dropped out of the beamline for replacement from below.
After amplification is complete the light is switched back into the beamline, where it runs to the far end of the building to the target chamber. The target chamber is a multi-piece steel sphere about 10 meters in diameter and weighing some 130 tonnes. Just before reaching the target chamber, the light is reflected off mirrors in the switchyard and target area in order to hit the target from different directions. Since the path length from the Master Oscillator to the target is different for each beamline, optics are used to delay the light in order to ensure that they all reach the center within a few picoseconds of each other.
One of the last steps before reaching the target chamber is to convert the infrared (IR) light at 1053 nm into the ultraviolet (UV) at 351 nm in a device known as a frequency converter. These are made of thin sheets (about 1 cm thick) cut from a single crystal of potassium dihydrogen phosphate. When the 1053 nm (IR) light passes through the first of two of these sheets, frequency addition converts a large fraction of the light into 527 nm light (green). On passing through the second sheet, frequency combination converts much of the 527 nm light and the remaining 1053 nm light into 351 nm (UV) light. Infrared (IR) light is much less effective than UV at heating the targets, because IR couples more strongly with hot electrons that absorb a considerable amount of energy and interfere with compression. The conversion process can reach peak efficiencies of about 80 percent for a laser pulse that has a flat temporal shape, but the temporal shape needed for ignition varies significantly over the duration of the pulse. The actual conversion process is about 50 percent efficient, reducing delivered energy to a nominal 1.8 MJ.
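The conversion arithmetic is simple third-harmonic generation: frequency doubling followed by sum-frequency mixing, with wavelength scaling as the inverse of frequency (a sketch of the quoted numbers):

```python
fundamental = 1053.0               # nm, infrared (1 omega)
second_harmonic = fundamental / 2  # = 526.5 nm, green (2 omega)
third_harmonic = fundamental / 3   # = 351.0 nm, ultraviolet (3 omega)
print(second_harmonic, third_harmonic)
```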
As of 2010, one important aspect of any ICF research project was ensuring that experiments could be carried out on a timely basis. Previous devices generally had to cool down for many hours to allow the flashlamps and laser glass to regain their shapes after firing (due to thermal expansion), limiting their use to one or fewer firings per day. One of the goals for NIF has been to reduce this time to less than four hours, in order to allow 700 firings a year.
Other concepts
NIF is also exploring new types of targets. Previous experiments generally used plastic ablators, typically polystyrene (CH). NIF targets are constructed by coating a plastic form with a layer of sputtered beryllium or beryllium–copper alloy, and then oxidizing the plastic out of the center. Beryllium targets offer higher implosion efficiencies from x-ray inputs.
Although NIF was primarily designed as an indirect drive device, the energy in the laser as of 2008 was high enough to be used as a direct drive system, where the laser shines directly on the target without conversion to x-rays. The power delivered by NIF UV rays was estimated to be more than enough to cause ignition, allowing fusion energy gains of about 40x, somewhat higher than the indirect drive system.
As of 2005, scaled implosions on the OMEGA laser and computer simulations showed NIF to be capable of ignition using a polar direct drive (PDD) configuration where the target was irradiated directly by the laser only from the top and bottom, without changes to the NIF beamline layout.
As of 2005, other targets, called saturn targets, were specifically designed to reduce the anisotropy and improve the implosion. They feature a small plastic ring around the "equator" of the target, which becomes a plasma when hit by the laser. Some of the laser light is refracted through this plasma back towards the equator of the target, evening out the heating. NIF ignition with gains of just over 35 times is thought to be possible with these targets, producing results almost as good as the fully symmetric direct drive approach.
History
Impetus, 1957
The history of ICF at Lawrence Livermore National Laboratory in Livermore, California, started with physicist John Nuckolls, who started considering the problem after a 1957 meeting arranged by Edward Teller there. During these meetings, the idea later known as PACER emerged. PACER envisioned the explosion of small hydrogen bombs in large caverns to generate steam that would be converted into electrical power. After identifying problems with this approach, Nuckolls wondered how small a bomb could be made that would still generate net positive power.
A typical hydrogen bomb has two parts: a plutonium-based fission bomb known as the primary, and a cylindrical arrangement of fusion fuels known as the secondary. The primary releases x-rays, which are trapped within the bomb casing. They heat and compress the secondary until it ignites. The secondary consists of lithium deuteride (LiD) fuel, which requires an external neutron source. This is normally in the form of a small plutonium "spark plug" in the center of the fuel. Nuckolls's idea was to explore how small the secondary could be made, and what effects this would have on the energy needed from the primary to cause ignition. The simplest change is to replace the LiD fuel with DT gas, removing the need for the spark plug. This allows secondaries of any size – as the secondary shrinks, so does the amount of energy needed for ignition. At the milligram level, the energy levels started to approach those available through several known devices.
By the early 1960s, Nuckolls and several other weapons designers had developed ICF's outlines. The DT fuel would be placed in a small capsule, designed to rapidly ablate when heated and thereby maximize compression and shock wave formation. This capsule would be placed within an engineered shell, the hohlraum, which acts like the bomb casing. The hohlraum did not have to be heated by x-rays; any source of energy could be used as long as it delivered enough energy to heat the hohlraum and produce x-rays. Ideally the energy source would be located some distance away, to mechanically isolate both ends of the reaction. A small atomic bomb could be used as the energy source, as in a hydrogen bomb, but ideally smaller energy sources would be used. Using computer simulations, the teams estimated that about 5 MJ of energy would be needed from the primary, generating a 1 MJ beam. To put this in perspective, a small (0.5 kt) fission primary releases 2 TJ.
ICF program, 1970s
While Nuckolls and LLNL were working on hohlraum-based concepts, UCSD physicist Keith Brueckner was independently working on direct drive. In the early 1970s, Brueckner formed KMS Fusion to commercialize this concept. This sparked an intense rivalry between KMS and the weapons labs. Formerly ignored, ICF became a hot topic and most of the labs started ICF work. LLNL decided to concentrate on glass lasers, while other facilities studied gas lasers using carbon dioxide (e.g. ANTARES, Los Alamos National Laboratory) or KrF (e.g. Nike laser, Naval Research Laboratory).
Throughout these early stages, much of the understanding of the fusion process was the result of computer simulations, primarily LASNEX. LASNEX simplified the reaction to a 2-dimensional approximation, which was all that was possible with the available computing power. LASNEX estimated that laser drivers in the kJ range could reach low gain, which was just within the state of the art. This led to the Shiva laser project which was completed in 1977. Shiva fell far short of its goals. The densities reached were thousands of times smaller than predicted. This was traced to issues with the way the laser delivered heat to the target. Most of its energy energized electrons rather than the entire fuel mass. Further experiments and simulations demonstrated that this process could be dramatically improved by using shorter wavelengths.
Further upgrades to the simulation programs, accounting for these effects, predicted that a different design would reach ignition. This system took the form of the 20-beam 200 kJ Nova laser. During the construction phase, Nuckolls found an error in his calculations, and an October 1979 review chaired by former LLNL director John S. Foster Jr. confirmed that Nova would not reach ignition. It was modified into a smaller 10-beam design that converted the light to 351 nm and increased coupling efficiency. Nova was able to deliver about 30 kJ of UV laser energy, about half of what was expected, primarily due to optical damage to the final focusing optics. Even at those levels, it was clear that the predictions for fusion production were wrong; even at the limited powers available, fusion yields were far below predictions.
Halite and Centurion, 1978
Each experiment showed that the energy needed to reach ignition continued to be underestimated. The Department of Energy (DOE) decided that direct experimentation was the best way to settle the issue, and in 1978 they started a series of underground experiments at the Nevada Test Site that used small nuclear bombs to illuminate ICF targets. The tests were known as Halite (LLNL) and Centurion (LANL).
The basic concept behind the tests had been developed in the 1960s as a way to develop anti-ballistic missile warheads. It was found that bombs that exploded outside the atmosphere gave off bursts of X-rays that could damage an enemy warhead at long range. To test the effectiveness of this system, and to develop countermeasures to protect US warheads, the Defense Atomic Support Agency (now the Defense Threat Reduction Agency) developed a system that placed the targets at the end of long tunnels behind fast-shutting doors. The doors were timed to shut in the brief period between the arrival of the X-rays and the subsequent blast. This saved the reentry vehicle (RV) from blast damage and allowed them to be inspected.
ICF tests used the same system, replacing the RVs by hohlraums. Each test simultaneously illuminated many targets, each at a different distance from the bomb, to test the effect of varying illumination. Another question was how large the fuel assembly had to be in order for the fuel to self-heat from the fusion reactions and thus reach ignition. Initial data were available by mid-1984, and the testing ceased in 1988. Ignition was achieved for the first time during these tests. The amount of energy and the size of the fuel targets needed to reach ignition was far higher than predicted. During this same period, experiments began on Nova using similar targets to understand their behavior under laser illumination, allowing direct comparison against the bomb tests.
This data suggested that about 10 MJ of X-ray energy would be needed to reach ignition, far beyond what had earlier been calculated. If those X-rays are created by beaming an IR laser to a hohlraum, as in Nova or NIF, then dramatically more laser energy would be required, on the order of 100 MJ.
This triggered a debate in the ICF community. One group suggested an attempt to build a laser of this power; Leonardo Mascheroni and Claude Phipps designed a new type of hydrogen fluoride laser, pumped by high-energy electrons, intended to reach the 100 MJ threshold. Others used the same data and new versions of their computer simulations to suggest that careful shaping of the laser pulse and more beams spread more evenly could achieve ignition with a laser of between 5 and 10 MJ (John Lindl, Development of the Indirect-Drive Approach to Inertial Confinement Fusion and the Target Physics Basis for Ignition and Gain, Physics of Plasmas Vol. 2, No. 11, November 1995, pp. 3933–4024).
These results prompted the DOE to request a custom military ICF facility named the "Laboratory Microfusion Facility" (LMF). LMF would use a driver on the order of 10 MJ, delivering fusion yields of between 100 and 1,000 MJ. A 1989–1990 review of this concept by the National Academy of Sciences suggested that LMF was too ambitious, and that fundamental physics needed to be further explored. They recommended further experiments before attempting to move to a 10 MJ system. Nevertheless, the authors noted, "Indeed, if it did turn out that a 100 MJ driver were required for ignition and gain, one would have to rethink the entire approach to, and rationale for, ICF".
Laboratory Microfusion Facility and Nova Upgrade, 1990
As of 1992, the Laboratory Microfusion Facility was estimated to cost about $1 billion. LLNL initially submitted a design with a 5 MJ 350 nm (UV) driver that would be able to reach about 200 MJ yield, which was enough to attain the majority of the LMF goals. That program was estimated to cost about $600 million in FY 1989 dollars. An additional $250 million would pay to upgrade it to a full 1,000 MJ. The total would surpass $1 billion to meet all of the goals requested by the DOE.
The NAS review led to a reevaluation of these plans, and in July 1990, LLNL responded with the Nova Upgrade, which would reuse most of Nova, along with the adjacent Shiva facility. The resulting system would be much lower power than the LMF concept, with a driver of about 1 MJ. The new design included features that advanced the state of the art in the driver section, including multi-pass in the main amplifiers, and 18 beamlines (up from 10) that were split into 288 "beamlets" as they entered the target area. The plans called for the installation of two main banks of beamlines, one in the existing Nova beamline room, and the other in the older Shiva building next door, extending through its laser bay and target area into an upgraded Nova target area. The lasers would deliver about 500 TW in a 4 ns pulse. The upgrades were expected to produce fusion yields of between 2 and 10 MJ. Initial 1992 estimates put construction costs at around $400 million, with construction taking place from 1995 to 1999.
NIF, 1994
Throughout this period, the ending of the Cold War led to dramatic changes in defense funding and priorities. The political support for nuclear weapons declined and arms agreements led to a reduction in warhead count and less design work. The US was faced with the prospect of losing a generation of nuclear weapon designers able to maintain existing stockpiles, or design new weapons. At the same time, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was signed in 1996, which would ban all criticality testing and made the development of newer generations of nuclear weapons more difficult.
Out of these changes came the Stockpile Stewardship and Management Program (SSMP), which, among other things, included funds for the development of methods to design and build nuclear weapons without having to test them explosively. In a series of meetings that started in 1995, an agreement formed between the labs to divide up SSMP efforts. An important part of this would be confirmation of computer models using low-yield ICF experiments. The Nova Upgrade was too small to use for these experiments. A redesign matured into NIF in 1994. The estimated cost of the project remained almost $1 billion, with completion in 2002.
In spite of the agreement, the large project cost combined with the ending of similar projects at other labs resulted in critical comments by scientists at other labs, Sandia National Laboratories in particular. In May 1997, Sandia fusion scientist Rick Spielman publicly stated that NIF had "virtually no internal peer review on the technical issues" and that "Livermore essentially picked the panel to review themselves". A retired Sandia manager, Bob Puerifoy, was even more blunt than Spielman: "NIF is worthless ... it can't be used to maintain the stockpile, period". Ray Kidder, one of the original developers of the ICF concept at LLNL, was also highly critical. He stated in 1997 that its primary purpose was to "recruit and maintain a staff of theorists and experimentalists" and that while some of the experimental data would prove useful for weapons design, differences in the experimental setup limit their relevance. "Some of the physics is the same; but the details, 'wherein the devil lies,' are quite different. It would therefore also be wrong to assume that NIF will be able to support for the long term a staff of weapons designers and engineers with detailed design competence comparable to that of those now working at the weapons design laboratories."
In 1997, Victor Reis, assistant secretary for Defense Programs within DOE and SSMP chief architect defended the program telling the U.S. House Armed Services Committee that NIF was "designed to produce, for the first time in a laboratory setting, conditions of temperature and density of matter close to those that occur in the detonation of nuclear weapons. The ability to study the behavior of matter and the transfer of energy and radiation under these conditions is key to understanding the basic physics of nuclear weapons and predicting their performance without underground nuclear testing." In 1998, two JASON panels, composed of scientific and technical experts, stated that NIF is the most scientifically valuable of all programs proposed for science-based stockpile stewardship.
Despite the initial criticism, Sandia, as well as Los Alamos, supported the development of many NIF technologies, and both laboratories later became partners with NIF in the National Ignition Campaign.
Construction of first unit, 1994–1998
Work on the NIF started with a single beamline demonstrator, Beamlet. Beamlet successfully operated between 1994 and 1997. It was then sent to Sandia National Laboratories as a light source in their Z machine. A full-sized demonstrator then followed, in AMPLAB, which started operations in 1997. The official groundbreaking on the main NIF site was on May 29, 1997.
At the time, the DOE was estimating that the NIF would cost approximately $1.1 billion and another $1 billion for related research, and would be complete as early as 2002. Later in 1997 the DOE approved an additional $100 million in funding and pushed the operational date back to 2004. As late as 1998 LLNL's public documents stated the overall price was $1.2 billion, with the first eight lasers coming online in 2001 and full completion in 2003.
The facility's physical scale alone made the construction project challenging. By the time the "conventional facility" (the shell for the laser) was complete in 2001, more than 210,000 cubic yards of soil had been excavated, more than 73,000 cubic yards of concrete had been poured, 7,600 tons of reinforcing steel rebar had been placed, and more than 5,000 tons of structural steel had been erected. To isolate the laser system from vibration, the foundation of each laser bay was made independent of the rest of the structure. Three-foot-thick, 420-foot-long and 80-foot-wide slabs required continuous concrete pours to achieve their specifications.
In November 1997, an El Niño storm dumped two inches of rain in two hours, flooding the NIF site with 200,000 gallons of water just three days before the scheduled foundation pour. The earth was so soaked that the framing for the retaining wall sank six inches, forcing the crew to disassemble and reassemble it. Construction was halted in December 1997, when 16,000-year-old mammoth bones were discovered. Paleontologists were called in to remove and preserve the bones, delaying construction by four days.
A variety of research and development, technology and engineering challenges arose, such as creating an optics fabrication capability to supply the laser glass for NIF's 7,500 meter-scale optics. State-of-the-art optics measurement, coating and finishing techniques were developed to withstand NIF's high-energy lasers, as were methods for amplifying the laser beams to the needed energy levels. Continuous-pour glass, rapid-growth crystals, innovative optical switches, and deformable mirrors were among the technology innovations developed for NIF.
Sandia, with extensive experience in pulsed power delivery, designed the capacitor banks used to feed the flashlamps, completing the first unit in October 1998. To everyone's surprise, the Pulsed Power Conditioning Modules (PCMs) suffered capacitor failures that led to explosions. This required a redesign of the module to contain the debris, but since the concrete had already been poured, this left the new modules so tightly packed that in-place maintenance was impossible. Another redesign followed, this time allowing the modules to be removed from the bays for servicing. Continuing problems further delayed operations, and in September 1999, an updated DOE report stated that NIF required up to $350 million more and that completion would occur only in 2006.
Re-baseline and GAO report, 1999–2000
Throughout this period the problems with NIF were not reported up the management chain. In 1999 then Secretary of Energy Bill Richardson reported to Congress that NIF was on time and budget, as project leaders had reported. In August that year it was revealed that neither claim was close to the truth. As the Government Accountability Office (GAO) would later note, "Furthermore, the Laboratory's former laser director, who oversaw NIF and all other laser activities, assured Laboratory managers, DOE, the university, and the Congress that the NIF project was adequately funded and staffed and was continuing on cost and schedule, even while he was briefed on clear and growing evidence that NIF had serious problems". A DOE Task Force reported to Richardson in January 2000 that "organizations of the NIF project failed to implement program and project management procedures and processes commensurate with a major research and development project... [and that] ...no one gets a passing grade on NIF Management: not the DOE's office of Defense Programs, not the Lawrence Livermore National Laboratory and not the University of California".
Given the budget problems, the US Congress requested an independent GAO review. They returned a critical report in August 2000 estimating that the cost was likely to be $3.9 billion, including R&D, and that the facility was unlikely to be completed anywhere near on time (GAO Report Cites New NIF Cost Estimate, FYI, American Institute of Physics, Number 101, August 30, 2000; retrieved May 7, 2008). The report blamed the overruns on management problems, and criticized the program for failing to budget money for target fabrication, including it in operational costs instead of development.
In 2000, the DOE began a comprehensive "rebaseline review" because of the technical delays and project management issues, and adjusted the schedule and budget accordingly. John Gordon, National Nuclear Security Administrator, stated "We have prepared a detailed bottom-up cost and schedule to complete the NIF project... The independent review supports our position that the NIF management team has made significant progress and resolved earlier problems". The report revised their budget estimate to $2.25 billion, not including related R&D which pushed it to $3.3 billion total, and pushed back the completion date to 2006 with the first lines coming online in 2004 (More on New NIF Cost and Schedule, FYI, American Institute of Physics, Number 65, June 15, 2000; retrieved May 7, 2008). A follow-up report the next year pushed the budget to $4.2 billion, and the completion date to 2008.
The project got a new management team in September 1999 (Campbell Investigation Triggers Livermore Management Changes, Fusion Power Report, September 1, 1999; http://www.thefreelibrary.com/Campbell+Investigation+Triggers+Livermore+Management+Changes.-a063375944, retrieved July 13, 2012), headed by George Miller, who was named acting associate director for lasers. Ed Moses, former head of the Atomic Vapor Laser Isotope Separation (AVLIS) program at LLNL, became NIF project manager. Thereafter, NIF management received many positive reviews and the project met the budgets and schedules approved by Congress. In October 2010, the project was named "Project of the Year" by the Project Management Institute, which cited NIF as a "stellar example of how properly applied project management excellence can bring together global teams to deliver a project of this scale and importance efficiently."
Tests and construction completion, 2003–2009
In May 2003, the NIF achieved "first light" on a bundle of four beams, producing a 10.4 kJ IR pulse in a single beamline. In 2005 the first eight beams produced 153 kJ of IR, eclipsing OMEGA as the planet's highest energy laser (per pulse). By January 2007 all of the line replaceable units (LRUs) in the Master Oscillator Room (MOOR) were complete and the computer room had been installed. By August 2007, 96 laser lines were completed and commissioned, and "A total infrared energy of more than 2.5 megajoules has now been fired. This is more than 40 times what the Nova laser typically operated at the time it was the world's largest laser".
In 2005, an independent review by the JASON Defense Advisory Group, while generally positive, concluded that "The scientific and technical challenges in such a complex activity suggest that success in the early attempts at ignition in 2010, while possible, is unlikely". On January 26, 2009, the final LRU was installed, unofficially completing construction. On February 26, 2009, NIF fired all 192 laser beams into the target chamber. On March 10, 2009, NIF became the first laser to break the megajoule barrier, delivering 1.1 MJ of UV light, known as 3ω (from third-harmonic generation), to the target chamber center in a shaped ignition pulse. The main laser delivered 1.952 MJ of IR.
Operations, 2009–2012
On May 29, 2009, the NIF was dedicated in a ceremony attended by thousands. The first laser shots into a hohlraum target were fired in late June.
Buildup to main experiments, 2010
On January 28, 2010, NIF reported the delivery of a 669 kJ pulse to a gold hohlraum, breaking records for laser power delivery, and analysis suggested that suspected interference by generated plasma would not be a problem in igniting a fusion reaction. Due to the size of the test hohlraums, laser/plasma interactions produced plasma-optics gratings, acting like tiny prisms, which produced symmetric X-ray drive on the capsule inside the hohlraum.
After gradually altering the wavelength of the laser, scientists compressed a spherical capsule evenly and heated it to 3.3 million kelvins (285 eV). The capsule contained cryogenically cooled gas, acting as a substitute for the deuterium and tritium fuel capsules to be used later. Plasma Physics Group Leader Siegfried Glenzer said that they could maintain the precise fuel layers needed in the lab, but not yet within the laser system.
As of January 2010, the NIF reached 1.8 megajoules. The target chamber then needed to be equipped with shields to block neutrons.
National Ignition Campaign, 2010–2012
With the main construction complete, NIF started its National Ignition Campaign (NIC) to reach ignition. At the time, articles appeared in science magazines stating that ignition was imminent. Scientific American opened a 2010 review article with the statement "Ignition is close now. Within a year or two..."
The first test was carried out on October 8, 2010, at slightly over 1 MJ. However, problems slowed the drive toward ignition-level laser energies in the 1.4–1.5 MJ range.
One problem was the potential for damage from overheating due to a greater concentration of energy on optical components. Other issues included problems layering the fuel inside the target, and minute quantities of dust on the capsule surface.
The power level continued to increase and targets became more sophisticated. Then minute amounts of water vapor appeared in the target chamber and froze to the windows on the ends of the hohlraums, causing an asymmetric implosion. This was solved by adding a second layer of glass on either end, in effect creating a storm window.
Shots were halted from February to April 2011 to conduct SSMP materials experiments. NIF was then upgraded, improving diagnostic and measurement instruments. The Advanced Radiographic Capability (ARC) system was added, which uses 4 of the NIF's 192 beams as a backlight for imaging the implosion sequence. ARC is essentially a petawatt-class laser with peak power exceeding a quadrillion (10¹⁵) watts. It is designed to produce brighter, more penetrating, higher-energy x rays. ARC became the world's highest-energy short-pulse laser, capable of creating picosecond-duration laser pulses to produce energetic x rays in the range of 50–100 keV.
NIC runs restarted in May 2011 with the goal of more precisely timing the four laser shock waves that compress the fusion target.
In January 2012, Mike Dunne, director of NIF's laser fusion energy program, predicted that ignition would be achieved at NIF by October. In the same month, the NIF fired a record high 57 shots. On March 15 NIF produced a laser pulse with 411 TW of peak power. On July 5, it produced a shorter pulse of 1.85 MJ and increased power of 500 TW.
DOE Report, July 19, 2012
NIC was periodically reviewed. The sixth review was published on July 19, 2012. The report praised the quality of the installation: lasers, optics, targets, diagnostics, and operations. However:
The integrated conclusion based on this extensive period of experimentation, however, is that considerable hurdles must be overcome to reach ignition or the goal of observing unequivocal alpha heating. Indeed the reviewers note that given the unknowns with the present 'semi-empirical' approach, the probability of ignition before the end of December is extremely low and even the goal of demonstrating unambiguous alpha heating is challenging.
Further, the report expressed deep concerns that the gaps between observed performance and simulation codes implied that the current codes were of limited utility. Specifically, they found a lack of predictive ability of the radiation drive to the capsule and inadequately modeled laser–plasma interactions. Pressure was reaching only one half to one third of that required for ignition, far below the predicted values. The memo discussed the mixing of ablator material and capsule fuel, likely due to hydrodynamic instabilities in the ablator's outer surface.
The report suggested using a thicker ablator, although this would increase its inertia. To keep the required implosion speed, they proposed that the NIF energy be increased to 2 MJ. It also questioned whether the energy was sufficient to compress a large enough capsule to avoid the mix limit and reach ignition. The report concluded that ignition within the calendar year 2012 was 'highly unlikely'.
NIC officially ended on September 30, 2012. Media reports suggested that NIF would shift its focus toward materials research.
In 2008, LLNL began the Laser Inertial Fusion Energy (LIFE) program to explore ways to use NIF technologies as the basis for a commercial power plant design. The focus was on pure fusion devices, incorporating technologies that were developed in parallel with NIF and that would greatly improve the performance of the design. In April 2014, LIFE ended.
Fuel gain breakeven, 2013
A NIF fusion shot on September 27, 2013, produced more energy than was absorbed by the deuterium–tritium fuel. This has been confused with having reached "scientific breakeven", defined as the fusion energy exceeding the laser input energy. Using this definition gives 14.4 kJ out and 1.8 MJ in, a ratio of 0.008.
Stockpile experiments, 2013–2015
In 2013, NIF shifted focus to materials and weapons research. Experiments beginning in FY 2015 used plutonium targets. Plutonium shots simulate the compression of the primary in a nuclear bomb by high explosives, which had not seen direct testing since the Comprehensive Nuclear-Test-Ban Treaty (CTBT) took effect. Plutonium use ranged from less than a milligram to 10 milligrams.
In FY 2014, NIF performed 191 shots, slightly more than one every two days. As of April 2015 NIF was on track to meet its goal of 300 laser shots in FY 2015.
Back to fusion, 2016–present
On January 28, 2016, NIF successfully executed its first gas pipe experiment intended to study the absorption of large amounts of laser light within long targets relevant to high-gain magnetized liner inertial fusion (MagLIF). In order to investigate key aspects of the propagation, stability, and efficiency of laser energy coupling at full scale for high-gain MagLIF target designs, a single quad of NIF was used to deliver 30 kJ of energy to a target during a 13 nanosecond shaped pulse. Data return was favorable.
In 2018, improvements in controlling compression asymmetry were demonstrated in a shot with an output of 1.9×10¹⁶ neutrons, resulting in 0.054 MJ of fusion energy released by a 1.5 MJ laser pulse.
Burning plasma achieved, 2021
Experiments in 2020 and 2021 yielded the world's first burning plasmas, in which most of the plasma heating came from nuclear fusion reactions. This result was followed on August 8, 2021 by the world's first ignited plasma, in which the fusion heating was sufficient to sustain the thermonuclear reaction. It produced excess neutrons consistent with a short-lived chain reaction of around 100 trillionths of a second.
The fusion energy yield of the 2021 experiment was estimated to be 70% of the laser energy incident on the plasma. This result slightly beat the former record of 67% set by the JET torus in 1997. Taking the energy efficiency of the laser itself into account, the experiment used about 477 MJ of electrical energy to get 1.8 MJ of energy into the target to create 1.3 MJ of fusion energy.
Several design changes enabled this result. The material of the capsule shell was changed to diamond to increase the absorbance of secondary x-rays created by the laser burst, thus increasing the efficacy of the collapse, and its surface was further smoothed. The size of the hole in the capsule used to inject fuel was reduced. The holes in the gold cylinder surrounding the capsule were shrunk to reduce energy loss. The laser pulse was extended.
Scientific breakeven achieved, 2022
The NIF became the first fusion experiment to achieve scientific breakeven on December 5, 2022, with an experiment producing 3.15 megajoules of energy from a 2.05 megajoule input of laser light, for an energy gain of about 1.5. Charging the laser consumed "well above 400 megajoules". In a public announcement on December 13, the Secretary of Energy Jennifer Granholm announced the facility had achieved ignition. While this was often characterized as a "net energy gain" from fusion, this was only true with respect to the energy delivered by the laser; reports sometimes omitted the hundreds of megajoules of electrical input required to charge it.
The feat required the use of a slightly thicker and smoother capsule surrounding the fuel and a 2.05 MJ laser (up from 1.9 MJ in 2021), yielding 3.15 MJ, a 54% surplus. They also redistributed the energy among the split laser beams, which produced a more symmetrical (spherical) implosion.
The NIF achieved breakeven for a second time on July 30, 2023, yielding 3.88 MJ, an 89% surplus. At least four of six shots performed after the first successful one in December 2022 achieved breakeven. These successes led the DOE to fund three additional research centers. Lawrence Livermore planned to raise laser energy to 2.2 MJ per shot through upgraded optics and lasers, reaching that level in an experiment on October 30, 2023.
Similar projects
Some similar experimental ICF projects are:
Laser Mégajoule (LMJ)
Nike laser
High Power laser Energy Research facility (HiPER)
Laboratory for Laser Energetics (LLE)
Magnetized liner inertial fusion (MagLIF)
Shenguang-II High Power Laser
In popular culture
The NIF was used as the set for the starship Enterprise's warp core in the 2013 movie Star Trek Into Darkness.
See also
Z Pulsed Power Facility
Chain reaction
HiPER
Inertial confinement fusion
ITER
Laser Mégajoule
Nuclear fusion
Nuclear reactor
Notes
References
External links
Nuclear research institutes
Lawrence Livermore National Laboratory
Laboratories in California
Research institutes in the San Francisco Bay Area
United States Department of Energy facilities
Engineering projects
Inertial confinement fusion research lasers
Nuclear stockpile stewardship
Articles containing video clips | National Ignition Facility | [
"Engineering"
] | 9,717 | [
"Nuclear research institutes",
"Nuclear organizations",
"nan"
] |
337,353 | https://en.wikipedia.org/wiki/Safety%20data%20sheet | A safety data sheet (SDS), material safety data sheet (MSDS), or product safety data sheet (PSDS) is a document that lists information relating to occupational safety and health for the use of various substances and products. SDSs are a widely used type of fact sheet used to catalogue information on chemical species including chemical compounds and chemical mixtures. SDS information may include instructions for the safe use and potential hazards associated with a particular material or product, along with spill-handling procedures. The older MSDS formats could vary from source to source within a country depending on national requirements; however, the newer SDS format is internationally standardized.
An SDS for a substance is not primarily intended for use by the general consumer, focusing instead on the hazards of working with the material in an occupational setting. There is also a duty to properly label substances on the basis of physico-chemical, health, or environmental risk. Labels often include hazard symbols such as the European Union standard symbols. The same product (e.g. paints sold under identical brand names by the same company) can have different formulations in different countries. The formulation and hazards of a product using a generic name may vary between manufacturers in the same country.
Globally Harmonized System
The Globally Harmonized System of Classification and Labelling of Chemicals contains a standard specification for safety data sheets. The SDS follows a 16-section format which is internationally agreed; for substances especially, the SDS should be accompanied by an annex containing the exposure scenarios of the particular substance. The 16 sections, for which a minimal completeness check is sketched after the list, are:
SECTION 1: Identification of the substance/mixture and of the company/undertaking
1.1. Product identifier
1.2. Relevant identified uses of the substance or mixture and uses advised against
1.3. Details of the supplier of the safety data sheet
1.4. Emergency telephone number
SECTION 2: Hazards identification
2.1. Classification of the substance or mixture
2.2. Label elements
2.3. Other hazards
SECTION 3: Composition/information on ingredients
3.1. Substances
3.2. Mixtures
SECTION 4: First aid measures
4.1. Description of first aid measures
4.2. Most important symptoms and effects, both acute and delayed
4.3. Indication of any immediate medical attention and special treatment needed
SECTION 5: Firefighting measures
5.1. Extinguishing media
5.2. Special hazards arising from the substance or mixture
5.3. Advice for firefighters
SECTION 6: Accidental release measures
6.1. Personal precautions, protective equipment and emergency procedures
6.2. Environmental precautions
6.3. Methods and material for containment and cleaning up
6.4. Reference to other sections
SECTION 7: Handling and storage
7.1. Precautions for safe handling
7.2. Conditions for safe storage, including any incompatibilities
7.3. Specific end use(s)
SECTION 8: Exposure controls/personal protection
8.1. Control parameters
8.2. Exposure controls
SECTION 9: Physical and chemical properties
9.1. Information on basic physical and chemical properties
9.2. Other information
SECTION 10: Stability and reactivity
10.1. Reactivity
10.2. Chemical stability
10.3. Possibility of hazardous reactions
10.4. Conditions to avoid
10.5. Incompatible materials
10.6. Hazardous decomposition products
SECTION 11: Toxicological information
11.1. Information on toxicological effects
SECTION 12: Ecological information
12.1. Toxicity
12.2. Persistence and degradability
12.3. Bioaccumulative potential
12.4. Mobility in soil
12.5. Results of PBT and vPvB assessment
12.6. Other adverse effects
SECTION 13: Disposal considerations
13.1. Waste treatment methods
SECTION 14: Transport information
14.1. UN number
14.2. UN proper shipping name
14.3. Transport hazard class(es)
14.4. Packing group
14.5. Environmental hazards
14.6. Special precautions for user
14.7. Transport in bulk according to Annex II of MARPOL and the IBC Code
SECTION 15: Regulatory information
15.1. Safety, health and environmental regulations/legislation specific for the substance or mixture
15.2. Chemical safety assessment
SECTION 16: Other information
16.2. Date of the latest revision of the SDS
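As a minimal completeness check, the following Python sketch assumes an SDS has already been parsed into a mapping from section number to content; the parsing step and all names here are hypothetical:

REQUIRED_SECTIONS = {
    1: "Identification of the substance/mixture and of the company/undertaking",
    2: "Hazards identification",
    3: "Composition/information on ingredients",
    4: "First aid measures",
    5: "Firefighting measures",
    6: "Accidental release measures",
    7: "Handling and storage",
    8: "Exposure controls/personal protection",
    9: "Physical and chemical properties",
    10: "Stability and reactivity",
    11: "Toxicological information",
    12: "Ecological information",
    13: "Disposal considerations",
    14: "Transport information",
    15: "Regulatory information",
    16: "Other information",
}

def missing_sections(sds):
    # Return the numbers of the GHS sections absent from a parsed SDS.
    return sorted(n for n in REQUIRED_SECTIONS if n not in sds)

print(missing_sections({1: "...", 2: "...", 3: "..."}))  # [4, 5, 6, ..., 16]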
National and international requirements
Canada
In Canada, the program known as the Workplace Hazardous Materials Information System (WHMIS) establishes the requirements for SDSs in workplaces and is administered federally by Health Canada under the Hazardous Products Act, Part II, and the Controlled Products Regulations.
European Union
Safety data sheets have been made an integral part of the system of Regulation (EC) No 1907/2006 (REACH). The original requirements of REACH for SDSs have been further adapted to take into account the rules for safety data sheets of the Global Harmonised System (GHS) and the implementation of other elements of the GHS into EU legislation that were introduced by Regulation (EC) No 1272/2008 (CLP) via an update to Annex II of REACH.
The SDS must be supplied in an official language of the Member State(s) where the substance or mixture is placed on the market, unless the Member State(s) concerned provide(s) otherwise (Article 31(5) of REACH).
The European Chemicals Agency (ECHA) has published a guidance document on the compilation of safety data sheets.
Germany
In Germany, safety data sheets must be compiled in accordance with REACH Regulation No. 1907/2006. The requirements concerning national aspects are defined in the Technical Rule for Hazardous Substances (TRGS) 220, "National aspects when compiling safety data sheets". One example of a national measure mentioned in SDS section 15 is the water hazard class (WGK), which is based on the regulations governing systems for handling substances hazardous to waters (AwSV).
The Netherlands
Dutch safety data sheets are known as veiligheidsinformatiebladen or, colloquially, Chemiekaarten. The Chemiekaarten boek is a collection of safety data sheets for the most widely used chemicals; it is commercially available, but is also made available through educational institutes, such as the web site offered by the University of Groningen.
South Africa
This section outlines the regulations governing SDSs within the South African framework. As regulations may change, the current official texts should be consulted to verify their validity.
As globalisation increased and countries engaged in cross-border trade, the quantity of hazardous material crossing international borders grew. Realising the detrimental effects of hazardous trade, the United Nations established a committee of experts specialising in the transportation of hazardous goods. The committee provides best practices governing the conveyance of hazardous materials and goods for land (including road and railway), air, and sea transportation. These best practices are constantly updated to remain current and relevant.
There are various other international bodies that provide greater detail and guidance for specific modes of transportation, such as the International Maritime Organisation (IMO) by means of the International Maritime Dangerous Goods Code, the International Civil Aviation Organisation (ICAO) via the Technical Instructions for the safe transport of dangerous goods by air, and the International Air Transport Association (IATA), which provides regulations for the transport of dangerous goods.
These guidelines prescribed by the international authorities are applicable to the South African land, sea and air transportation of hazardous materials and goods. In addition to these international best practices, South Africa has also implemented common law, which is law based on custom and practice. Common law is a vital part of maintaining public order and forms the basis of case law. Case law, applying the principles of common law, consists of the interpretations and decisions of statutes made by courts. Acts of parliament are determinations and regulations by parliament which form the foundation of statutory law. Statutory laws are published in the government gazette or on the official website. Lastly, subordinate legislation consists of the bylaws issued by local authorities and authorised by parliament.
Statutory law gives effect to the Occupational Health and Safety Act of 1993 and the National Road Traffic Act of 1996. The Occupational Health and Safety Act details the necessary provisions for the safe handling and storage of hazardous materials and goods, whilst the National Road Traffic Act details the necessary provisions for the transportation of hazardous goods.
Relevant South African legislation includes the Hazardous Chemical Agents Regulations of 2021 under the Occupational Health and Safety Act of 1993, the Hazardous Substances Act 15 of 1973, the National Road Traffic Act of 1996, and the Standards Act of 2008.
There has been selective incorporation of aspects of the Globally Harmonised System (GHS) of Classification and Labelling of Chemicals into South African legislation. At each point of the chemical value chain, there is a responsibility to manage chemicals in a safe and responsible manner; an SDS is therefore required by law. An SDS is included in the requirements of the Occupational Health and Safety Act, 1993 (Act No. 85 of 1993), Regulation 1179 dated 25 August 1995.
The categories of information supplied in the SDS are listed in SANS 11014:2010, the dangerous goods standard on classification and information. SANS 11014:2010 supersedes the first edition, SANS 11014-1:1994, and is an identical implementation of ISO 11014:2009.
United Kingdom
In the U.K., the Chemicals (Hazard Information and Packaging for Supply) Regulations 2002 - known as CHIP Regulations - impose duties upon suppliers, and importers into the EU, of hazardous materials.
NOTE: Safety data sheets (SDS) are no longer covered by the CHIP regulations. The laws that require a SDS to be provided have been transferred to the European REACH Regulations.
The Control of Substances Hazardous to Health (COSHH) Regulations govern the use of hazardous substances in the workplace in the UK and specifically require an assessment of the use of a substance. Regulation 12 requires that an employer provide employees with information, instruction and training for people exposed to hazardous substances. This duty would be nearly impossible to fulfil without the data sheet as a starting point. It is therefore important for employers to insist on receiving a data sheet from a supplier of a substance.
The duty to supply information is not confined to informing only business users of products. SDSs for retail products sold by large DIY shops are usually obtainable on those companies' web sites.
Web sites of manufacturers and large suppliers do not always include them, even if the information is obtainable from retailers, but written or telephone requests for paper copies will usually be responded to favourably.
United Nations
The United Nations (UN) defines certain details used in SDSs such as the UN numbers used to identify some hazardous materials in a standard form while in international transit.
United States
In the U.S., the Occupational Safety and Health Administration requires that SDSs be readily available to all employees for potentially harmful substances handled in the workplace under the Hazard Communication Standard. The SDS is also required to be made available to local fire departments and local and state emergency planning officials under Section 311 of the Emergency Planning and Community Right-to-Know Act. The American Chemical Society defines Chemical Abstracts Service Registry Numbers (CAS numbers) which provide a unique number for each chemical and are also used internationally in SDSs.
Reviews of material safety data sheets by the U.S. Chemical Safety and Hazard Investigation Board have detected dangerous deficiencies.
The board's Combustible Dust Hazard Study analyzed 140 data sheets of substances capable of producing combustible dusts. None of the SDSs contained all the information the board said was needed to work with the material safely, and 41 percent failed to even mention that the substance was combustible.
As part of its study of an explosion and fire that destroyed the Barton Solvents facility in Valley Center, Kansas, in 2007, the safety board reviewed 62 material safety data sheets for commonly used nonconductive flammable liquids. As in the combustible dust study, the board found all the data sheets inadequate.
In 2012, the US adopted the 16 section Safety Data Sheet to replace Material Safety Data Sheets. This became effective on 1 December 2013. These new Safety Data Sheets comply with the Globally Harmonized System of Classification and Labeling of Chemicals (GHS). By 1 June 2015, employers were required to have their workplace labeling and hazard communication programs updated as necessary – including all MSDSs replaced with SDS-formatted documents.
SDS authoring
Many companies offer the service of collecting, or writing and revising, data sheets to ensure they are up to date and available for their subscribers or users. Some jurisdictions impose an explicit duty of care that each SDS be regularly updated, usually every three to five years. However, when new information becomes available, the SDS must be revised without delay. If a full SDS is not feasible, then a reduced workplace label should be authored.
See also
Occupational exposure banding
References
Chemical safety
Documents
Environmental law
Industrial hygiene
Materials
Occupational safety and health
Regulation of chemicals in the European Union
Safety engineering
Toxicology | Safety data sheet | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,622 | [
"Systems engineering",
"Chemical accident",
"Regulation of chemicals in the European Union",
"Toxicology",
"Safety engineering",
"Regulation of chemicals",
"Materials",
"nan",
"Chemical safety",
"Matter"
] |
337,457 | https://en.wikipedia.org/wiki/Raoul%20Bott | Raoul Bott (September 24, 1923 – December 20, 2005) was a Hungarian-American mathematician known for numerous foundational contributions to geometry in its broad sense. He is best known for his Bott periodicity theorem, the Morse–Bott functions which he used in this context, and the Borel–Bott–Weil theorem.
Early life
Bott was born in Budapest, Hungary, the son of Margit Kovács and Rudolph Bott. His father was of Austrian descent, and his mother was of Hungarian Jewish descent; Bott was raised a Catholic by his mother and stepfather in Bratislava, Czechoslovakia, now the capital of Slovakia. Bott grew up in Czechoslovakia and spent his working life in the United States. His family emigrated to Canada in 1938, and subsequently he served in the Canadian Army in Europe during World War II.
Career
Bott later went to college at McGill University in Montreal, where he studied electrical engineering. He then earned a PhD in mathematics from Carnegie Mellon University in Pittsburgh in 1949. His thesis, titled Electrical Network Theory, was written under the direction of Richard Duffin. Afterward, he began teaching at the University of Michigan in Ann Arbor. Bott continued his study at the Institute for Advanced Study in Princeton. He was a professor at Harvard University from 1959 to 1999. In 2005 Bott died of cancer in San Diego.
With Richard Duffin at Carnegie Mellon, Bott studied the existence of electronic filters corresponding to given positive-real functions. In 1949 they proved a fundamental theorem of filter synthesis. Duffin and Bott extended earlier work by Otto Brune, showing that the requisite functions of complex frequency s could be realized by a passive network of inductors and capacitors. The proof relied on induction on the sum of the degrees of the polynomials in the numerator and denominator of the rational function.
In his 2000 interview with Allyn Jackson of the American Mathematical Society, he explained that he sees "networks as discrete versions of harmonic theory", so his experience with network synthesis and electronic filter topology introduced him to algebraic topology.
Bott met Arnold S. Shapiro at the IAS and they worked together.
He studied the homotopy theory of Lie groups, using methods from Morse theory, leading to the Bott periodicity theorem (1957). In the course of this work, he introduced Morse–Bott functions, an important generalization of Morse functions.
This led to his role as collaborator over many years with Michael Atiyah, initially via the part played by periodicity in K-theory. Bott made important contributions towards the index theorem, especially in formulating related fixed-point theorems, in particular the so-called 'Woods Hole fixed-point theorem', a combination of the Riemann–Roch theorem and Lefschetz fixed-point theorem (it is named after Woods Hole, Massachusetts, the site of a conference at which collective discussion formulated it). The major Atiyah–Bott papers on what is now the Atiyah–Bott fixed-point theorem were written in the years up to 1968; they collaborated further in recovering, in contemporary language, the work of Ivan Petrovsky on Petrovsky lacunas of hyperbolic partial differential equations, prompted by Lars Gårding. In the 1980s, Atiyah and Bott investigated gauge theory, using the Yang–Mills equations on a Riemann surface to obtain topological information about the moduli spaces of stable bundles on Riemann surfaces. In 1983 he spoke to the Canadian Mathematical Society in a talk he called "A topologist marvels at Physics".
He is also well known in connection with the Borel–Bott–Weil theorem on representation theory of Lie groups via holomorphic sheaves and their cohomology groups, and for work on foliations. With Shiing-Shen Chern he worked on Nevanlinna theory, studied holomorphic vector bundles over complex analytic manifolds and introduced the Bott–Chern classes, useful in the theory of Arakelov geometry and also in algebraic number theory.
He introduced Bott–Samelson varieties and the Bott residue formula for complex manifolds and the Bott cannibalistic class.
Awards
In 1964, he was awarded the Oswald Veblen Prize in Geometry by the American Mathematical Society. In 1983, he was awarded the Jeffery–Williams Prize by the Canadian Mathematical Society. In 1987, he was awarded the National Medal of Science.
In 2000, he received the Wolf Prize. In 2005, he was elected an Overseas Fellow of the Royal Society of London.
Students
Bott had 35 PhD students, including Stephen Smale, Lawrence Conlon, Daniel Quillen, Peter Landweber, Robert MacPherson, Robert W. Brooks, Robin Forman, Rama Kocherlakota, Susan Tolman, András Szenes, Kevin Corlette, and Eric Weinstein. Smale and Quillen won Fields Medals in 1966 and 1978 respectively.
Publications
1995: Collected Papers. Vol. 4. Mathematics Related to Physics. Edited by Robert MacPherson. Contemporary Mathematicians. Birkhäuser Boston, xx+485 pp.
1995: Collected Papers. Vol. 3. Foliations. Edited by Robert D. MacPherson. Contemporary Mathematicians. Birkhäuser, xxxii+610 pp.
1994: Collected Papers. Vol. 2. Differential Operators. Edited by Robert D. MacPherson. Contemporary Mathematicians. Birkhäuser, xxxiv+802 pp.
1994: Collected Papers. Vol. 1. Topology and Lie Groups. Edited by Robert D. MacPherson. Contemporary Mathematicians. Birkhäuser, xii+584 pp.
1982: (with Loring W. Tu) Differential Forms in Algebraic Topology. Graduate Texts in Mathematics #82. Springer-Verlag, New York-Berlin. xiv+331 pp.
1969: Lectures on K(X). Mathematics Lecture Note Series W. A. Benjamin, New York-Amsterdam x+203 pp.
See also
Bott–Duffin inverse
Parallelizable manifold
Thom's and Bott's proofs of the Lefschetz hyperplane theorem
References
External links
1923 births
2005 deaths
20th-century American mathematicians
21st-century American mathematicians
American people of Hungarian-Jewish descent
Hungarian Jews
20th-century Hungarian mathematicians
Topologists
Geometers
Differential geometers
Algebraic geometers
Harvard University Department of Mathematics faculty
University of Michigan faculty
McGill University Faculty of Engineering alumni
Carnegie Mellon University alumni
Foreign members of the Royal Society
National Medal of Science laureates
Wolf Prize in Mathematics laureates
Members of the French Academy of Sciences
Hungarian Roman Catholics
Hungarian emigrants to Canada
Canadian emigrants to the United States
Hungarian expatriates in Czechoslovakia | Raoul Bott | [
"Mathematics"
] | 1,387 | [
"Topologists",
"Topology",
"Geometers",
"Geometry"
] |
337,713 | https://en.wikipedia.org/wiki/Composite%20data%20type | In computer science, a composite data type or compound data type is a data type that consists of programming language scalar data types and other composite types that may be heterogeneous and hierarchical in nature. It is sometimes called a structure or by a language-specific keyword used to define one such as struct. It falls into the aggregate type classification which includes homogenous collections such as the array and list.
See also
References
Data types
Type theory
Articles with example C code
Articles with example C++ code | Composite data type | [
"Mathematics"
] | 106 | [
"Type theory",
"Mathematical logic",
"Mathematical structures",
"Mathematical objects"
] |
337,862 | https://en.wikipedia.org/wiki/Table%20%28information%29 | A table is an arrangement of information or data, typically in rows and columns, or possibly in a more complex structure. Tables are widely used in communication, research, and data analysis. Tables appear in print media, handwritten notes, computer software, architectural ornamentation, traffic signs, and many other places. The precise conventions and terminology for describing tables vary depending on the context. Further, tables differ significantly in variety, structure, flexibility, notation, representation and use. Information or data conveyed in table form is said to be in tabular format (adjective). In books and technical articles, tables are typically presented apart from the main text in numbered and captioned floating blocks.
Basic description
A table consists of an ordered arrangement of rows and columns. This is a simplified description of the most basic kind of table. Certain considerations follow from this simplified description:
the term row has several common synonyms (e.g., record, k-tuple, n-tuple, vector);
the term column has several common synonyms (e.g., field, parameter, property, attribute, stanchion);
a column is usually identified by a name;
a column name can consist of a word, phrase or a numerical index;
the intersection of a row and a column is called a cell.
The elements of a table may be grouped, segmented, or arranged in many different ways, and even nested recursively. Additionally, a table may include metadata, annotations, a header, a footer or other ancillary features.
Simple table
The following illustrates a simple table with four columns and nine rows. The first row is not counted, because it is only used to display the column names. This is called a "header row".
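A table of the described shape might look like the following (all entries are purely illustrative):

First name | Last name | Age | City
Ana        | Silva     | 33  | Lisbon
Ben        | Carter    | 27  | Leeds
Chloe      | Dubois    | 45  | Lyon
Daniel     | Evans     | 38  | Cardiff
Emma       | Fischer   | 52  | Bern
Farid      | Haddad    | 29  | Tunis
Grace      | Ito       | 61  | Osaka
Hugo       | Jansen    | 24  | Utrecht
Ines       | Kovacs    | 36  | Pécs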
Multi-dimensional table
The concept of dimension is also a part of basic terminology. Any "simple" table can be represented as a "multi-dimensional" table by normalizing the data values into ordered hierarchies. A common example of such a table is a multiplication table.
In multi-dimensional tables, each cell in the body of the table (and the value of that cell) relates to the values at the beginnings of the column (i.e. the header), the row, and other structures in more complex tables. This is an injective relation: each combination of the values of the headers row (row 0, for lack of a better term) and the headers column (column 0, for lack of a better term) is related to a unique cell in the table:
Column 1 and row 1 will only correspond to cell (1,1);
Column 1 and row 2 will only correspond to cell (2,1) etc.
The first column, called the "stub column", often presents the dimension by which the rest of the table is navigated. Tables may have three or more dimensions and can be classified by the number of dimensions. Multi-dimensional tables may have super-rows, rows that describe additional dimensions for the rows presented below them, which are usually grouped in a tree-like structure. This structure is typically presented visually with an appropriate number of white spaces in front of each stub label.
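The injective header-pair-to-cell relation can be made concrete with a minimal Python sketch, here using a multiplication table (representing the table as a dict is an illustrative choice, not the only one):

# Each (row header, column header) pair addresses exactly one cell.
table = {(r, c): r * c for r in range(1, 10) for c in range(1, 10)}
assert table[(2, 3)] == 6  # the headers 2 and 3 correspond to a unique cell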
In the literature, tables often present numerical values, cumulative statistics, categorical values, and at times parallel descriptions in the form of text. They can condense large amounts of information into a limited space and are therefore popular in the scientific literature of many fields of study.
Generic representation
As a communication tool, a table allows a form of generalization of information from an unlimited number of different social or scientific contexts. It provides a familiar way to convey information that might otherwise not be obvious or readily understood.
For example, in the following diagram, two alternate representations of the same information are presented side by side. On the left is the NFPA 704 standard "fire diamond" with example values indicated and on the right is a simple table displaying the same values, along with additional information. Both representations convey essentially the same information, but the tabular representation is arguably more comprehensible to someone who is not familiar with the NFPA 704 standard. The tabular representation may not, however, be ideal for every circumstance (for example because of space limitations, or safety reasons).
Specific uses
There are several specific situations in which tables are routinely used as a matter of custom or formal convention.
Publishing
Cross-reference (Table of contents)
Mathematics
Arithmetic (Multiplication table)
Logic (Truth table)
Natural sciences
Chemistry (Periodic table)
Oceanography (tide table)
Information technology
Software applications
Modern software applications give users the ability to generate, format, and edit tables and tabular data for a wide variety of uses, for example:
word processing applications;
spreadsheet applications;
presentation software;
tables specified in HTML or another markup language
Software development
Tables have uses in software development for both high-level specification and low-level implementation.
Usage in software specification can encompass ad hoc inclusion of simple decision tables in textual documents through to the use of tabular specification methodologies, examples of which include Software Cost Reduction and Statestep.
Proponents of tabular techniques, among whom David Parnas is prominent, emphasize their understandability, as well as the quality and cost advantages of a format allowing systematic inspection, while corresponding shortcomings experienced with a graphical notation were cited in motivating the development of at least two tabular approaches.
At a programming level, software may be implemented using constructs generally represented or understood as tabular, whether to store data (perhaps to memoize earlier results), for example, in arrays or hash tables, or control tables determining the flow of program execution in response to various events or inputs.
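As a minimal illustration of both ideas in Python (the event names and handlers are purely illustrative):

from functools import lru_cache

# Control table: a mapping from event name to handler drives the flow of execution.
handlers = {
    "start": lambda: "starting up",
    "stop": lambda: "shutting down",
}

def dispatch(event):
    # Look the event up in the table instead of branching explicitly.
    return handlers[event]()

# Memoization: a cache table keyed by the argument stores earlier results.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(dispatch("start"), fib(30))  # starting up 832040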
Databases
Database systems often store data in structures called tables; in which columns are data fields and rows represent data records.
Historical relationship to furniture
In medieval counting houses, the tables were covered with a piece of checkered cloth on which money was counted. Exchequer is an archaic term for the English institution which accounted for money owed to the monarch. The checkerboard tables of stacks of coins were thus a concrete realization of this information.
See also
Chart
Diagram
Abstract data type
Column (database)
Information graphics
Periodic table
Reference table
Row (database)
Table (database)
Table (HTML)
Tensor
Dependent and independent variables
Zebra striping
References
External links
Infographics
Data modeling | Table (information) | [
"Engineering"
] | 1,294 | [
"Data modeling",
"Data engineering"
] |
338,046 | https://en.wikipedia.org/wiki/Dihedral%20angle | A dihedral angle is the angle between two intersecting planes or half-planes. It is a plane angle formed on a third plane, perpendicular to the line of intersection between the two planes or the common edge between the two half-planes. In higher dimensions, a dihedral angle represents the angle between two hyperplanes. In chemistry, it is the clockwise angle between half-planes through two sets of three atoms, having two atoms in common.
Mathematical background
When the two intersecting planes are described in terms of Cartesian coordinates by the two equations

a1x + b1y + c1z + d1 = 0
a2x + b2y + c2z + d2 = 0

the dihedral angle φ between them is given by:

cos φ = |a1a2 + b1b2 + c1c2| / (√(a1² + b1² + c1²) √(a2² + b2² + c2²))

and satisfies 0 ≤ φ ≤ π/2. It can easily be observed that the angle is independent of d1 and d2.

Alternatively, if nA and nB are normal vectors to the planes, one has

cos φ = |nA · nB| / (|nA| |nB|)

where nA · nB is the dot product of the vectors and |nA| |nB| is the product of their lengths.

The absolute value is required in the above formulas, as the planes are not changed when changing all coefficient signs in one equation, or replacing one normal vector by its opposite.

However the absolute values can be and should be avoided when considering the dihedral angle of two half planes whose boundaries are the same line. In this case, the half planes can be described by a point P of their intersection, and three vectors b0, b1 and b2 such that P + b0, P + b1 and P + b2 belong respectively to the intersection line, the first half plane, and the second half plane. The dihedral angle of these two half planes is defined by

cos φ = ((b0 × b1) · (b0 × b2)) / (|b0 × b1| |b0 × b2|),

and satisfies 0 ≤ φ ≤ π. In this case, switching the two half-planes gives the same result, and so does replacing b0 with −b0. In chemistry (see below), we define a dihedral angle such that replacing b0 with −b0 changes the sign of the angle, which can be between −π and π.
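A minimal sketch of the normal-vector formula in Python (the planes in the example are illustrative):

import math

def plane_dihedral(n_a, n_b):
    # Angle in [0, pi/2] between two planes, given their normal vectors.
    dot = sum(a * b for a, b in zip(n_a, n_b))
    norm = math.hypot(*n_a) * math.hypot(*n_b)
    return math.acos(min(abs(dot) / norm, 1.0))

# Planes x = 0 and x + y = 0 have normals (1, 0, 0) and (1, 1, 0).
print(math.degrees(plane_dihedral((1, 0, 0), (1, 1, 0))))  # 45.0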
In polymer physics
In some scientific areas such as polymer physics, one may consider a chain of points and links between consecutive points. If the points are sequentially numbered and located at positions r1, r2, r3, etc. then bond vectors are defined by u1 = r2 − r1, u2 = r3 − r2, and ui = ri+1 − ri, more generally. This is the case for kinematic chains or amino acids in a protein structure. In these cases, one is often interested in the half-planes defined by three consecutive points, and the dihedral angle between two consecutive such half-planes. If u1, u2 and u3 are three consecutive bond vectors, the intersection of the half-planes is oriented, which allows defining a dihedral angle that belongs to the interval (−π, π]. This dihedral angle is defined by

cos φ = ((u1 × u2) · (u2 × u3)) / (|u1 × u2| |u2 × u3|)
sin φ = (u2 · ((u1 × u2) × (u2 × u3))) / (|u2| |u1 × u2| |u2 × u3|)

or, using the function atan2,

φ = atan2(u2 · ((u1 × u2) × (u2 × u3)), |u2| (u1 × u2) · (u2 × u3))

This dihedral angle does not depend on the orientation of the chain (order in which the points are considered) — reversing this ordering consists of replacing each vector by its opposite vector, and exchanging the indices 1 and 3. Both operations do not change the cosine, but change the sign of the sine. Thus, together, they do not change the angle.

A simpler formula for the same dihedral angle is the following (the proof is given below)

sin φ = (|u2| u1 · (u2 × u3)) / (|u1 × u2| |u2 × u3|)

or equivalently,

φ = atan2(|u2| u1 · (u2 × u3), (u1 × u2) · (u2 × u3))

This can be deduced from previous formulas by using the vector quadruple product formula, and the fact that a scalar triple product is zero if it contains twice the same vector:

(u1 × u2) × (u2 × u3) = ((u1 × u2) · u3) u2 − ((u1 × u2) · u2) u3 = (u1 · (u2 × u3)) u2

Given the definition of the cross product, this means that φ is the angle in the clockwise direction of the fourth atom compared to the first atom, while looking down the axis from the second atom to the third. Special cases (one may say the usual cases) are φ = π, φ = +π/3 and φ = −π/3, which are called the trans, gauche+, and gauche− conformations.
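A minimal sketch of the atan2 form in Python (the four sample points are illustrative; NumPy is used for the vector products):

import numpy as np

def dihedral(p1, p2, p3, p4):
    # Dihedral angle in (-pi, pi] defined by four sequential points,
    # via phi = atan2(|u2| u1.(u2 x u3), (u1 x u2).(u2 x u3)).
    p1, p2, p3, p4 = (np.asarray(p, dtype=float) for p in (p1, p2, p3, p4))
    u1, u2, u3 = p2 - p1, p3 - p2, p4 - p3
    y = np.linalg.norm(u2) * np.dot(u1, np.cross(u2, u3))
    x = np.dot(np.cross(u1, u2), np.cross(u2, u3))
    return np.arctan2(y, x)

# An anti (trans) arrangement of four points gives 180 degrees.
print(np.degrees(dihedral((1, 0, 0), (0, 0, 0), (0, 0, 1), (-1, 0, 1))))  # 180.0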
In stereochemistry
In stereochemistry, a torsion angle is defined as a particular example of a dihedral angle, describing the geometric relation of two parts of a molecule joined by a chemical bond. Every set of three non-colinear atoms of a molecule defines a half-plane. As explained above, when two such half-planes intersect (i.e., a set of four consecutively-bonded atoms), the angle between them is a dihedral angle. Dihedral angles are used to specify the molecular conformation. Stereochemical arrangements corresponding to angles between 0° and ±90° are called syn (s), those corresponding to angles between ±90° and 180° anti (a). Similarly, arrangements corresponding to angles between 30° and 150° or between −30° and −150° are called clinal (c) and those between 0° and ±30° or ±150° and 180° are called periplanar (p).
The two types of terms can be combined so as to define four ranges of angle; 0° to ±30° synperiplanar (sp); 30° to 90° and −30° to −90° synclinal (sc); 90° to 150° and −90° to −150° anticlinal (ac); ±150° to 180° antiperiplanar (ap). The synperiplanar conformation is also known as the syn- or cis-conformation; antiperiplanar as anti or trans; and synclinal as gauche or skew.
For example, with n-butane two planes can be specified in terms of the two central carbon atoms and either of the methyl carbon atoms. The syn-conformation shown above, with a dihedral angle of 60°, is less stable than the anti-conformation, with a dihedral angle of 180°.
For macromolecular usage the symbols T, C, G+, G−, A+ and A− are recommended (ap, sp, +sc, −sc, +ac and −ac respectively).
Proteins
A Ramachandran plot (also known as a Ramachandran diagram or a [φ,ψ] plot), originally developed in 1963 by G. N. Ramachandran, C. Ramakrishnan, and V. Sasisekharan, is a way to visualize energetically allowed regions for backbone dihedral angles ψ against φ of amino acid residues in protein structure.
In a protein chain three dihedral angles are defined:
ω (omega) is the angle in the chain Cα − C' − N − Cα,
φ (phi) is the angle in the chain C' − N − Cα − C'
ψ (psi) is the angle in the chain N − Cα − C' − N (called φ′ by Ramachandran)
The figure at right illustrates the location of each of these angles (but it does not show correctly the way they are defined).
The planarity of the peptide bond usually restricts ω to be 180° (the typical trans case) or 0° (the rare cis case). The distance between the Cα atoms in the trans and cis isomers is approximately 3.8 and 2.9 Å, respectively. The vast majority of the peptide bonds in proteins are trans, though the peptide bond to the nitrogen of proline has an increased prevalence of cis compared to other amino-acid pairs.
The side chain dihedral angles are designated with χn (chi-n). They tend to cluster near 180°, 60°, and −60°, which are called the trans, gauche−, and gauche+ conformations. The stability of certain sidechain dihedral angles is affected by the values φ and ψ. For instance, there are direct steric interactions between the Cγ of the side chain in the gauche+ rotamer and the backbone nitrogen of the next residue when ψ is near -60°. This is evident from statistical distributions in backbone-dependent rotamer libraries.
Geometry
Every polyhedron has a dihedral angle at every edge describing the relationship of the two faces that share that edge. This dihedral angle, also called the face angle, is measured as the internal angle with respect to the polyhedron. An angle of 0° means the face normal vectors are antiparallel and the faces overlap each other, which implies that it is part of a degenerate polyhedron. An angle of 180° means the faces are parallel, as in a tiling. An angle greater than 180° exists on concave portions of a polyhedron.
Every dihedral angle in an edge-transitive polyhedron has the same value. This includes the 5 Platonic solids, the 13 Catalan solids, the 4 Kepler–Poinsot polyhedra, the two quasiregular solids, and two quasiregular dual solids.
Law of cosines for dihedral angle
Given 3 faces of a polyhedron which meet at a common vertex P and have edges AP, BP and CP, the cosine of the dihedral angle between the faces containing APC and BPC is:

cos φ = (cos(∠APB) − cos(∠APC) cos(∠BPC)) / (sin(∠APC) sin(∠BPC))
This can be deduced from the spherical law of cosines, but can also be found by other means.
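A quick numerical check of this formula in Python (using the regular tetrahedron, whose dihedral angle of about 70.53° is a standard reference value):

import math

def dihedral_from_face_angles(apb, apc, bpc):
    # Dihedral angle along edge PC, from the three face angles at vertex P (radians).
    num = math.cos(apb) - math.cos(apc) * math.cos(bpc)
    den = math.sin(apc) * math.sin(bpc)
    return math.acos(num / den)

# Regular tetrahedron: all three face angles at a vertex are 60 degrees.
print(math.degrees(dihedral_from_face_angles(*[math.pi / 3] * 3)))  # ~70.53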
See also
Atropisomer
References
External links
The Dihedral Angle in Woodworking at Tips.FM
Analysis of the 5 Regular Polyhedra gives a step-by-step derivation of these exact values.
Stereochemistry
Protein structure
Euclidean solid geometry
Angle
Planes (geometry) | Dihedral angle | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,850 | [
"Geometric measurement",
"Scalar physical quantities",
"Planes (geometry)",
"Physical quantities",
"Euclidean solid geometry",
"Protein structure",
"Stereochemistry",
"Mathematical objects",
"Infinity",
"Space",
"Structural biology",
"nan",
"Spacetime",
"Wikipedia categories named after ph... |
338,129 | https://en.wikipedia.org/wiki/Plateau%27s%20problem | In mathematics, Plateau's problem is to show the existence of a minimal surface with a given boundary, a problem raised by Joseph-Louis Lagrange in 1760. However, it is named after Joseph Plateau who experimented with soap films. The problem is considered part of the calculus of variations. The existence and regularity problems are part of geometric measure theory.
History
Various specialized forms of the problem were solved, but it was only in 1930 that general solutions were found in the context of mappings (immersions) independently by Jesse Douglas and Tibor Radó. Their methods were quite different; Radó's work built on the previous work of René Garnier and held only for rectifiable simple closed curves, whereas Douglas used completely new ideas with his result holding for an arbitrary simple closed curve. Both relied on setting up minimization problems; Douglas minimized the now-named Douglas integral while Radó minimized the "energy". Douglas went on to be awarded the Fields Medal in 1936 for his efforts.
In higher dimensions
The extension of the problem to higher dimensions (that is, for k-dimensional surfaces in n-dimensional space) turns out to be much more difficult to study. Moreover, while the solutions to the original problem are always regular, it turns out that the solutions to the extended problem may have singularities if k ≤ n − 2. In the hypersurface case where k = n − 1, singularities occur only for n ≥ 8. An example of such a singular solution of the Plateau problem is the Simons cone, a cone over S³ × S³ in R⁸ that was first described by Jim Simons and was shown to be an area minimizer by Bombieri, De Giorgi and Giusti. To solve the extended problem in certain special cases, the theory of perimeters (De Giorgi) for codimension 1 and the theory of rectifiable currents (Federer and Fleming) for higher codimension have been developed. The theory guarantees existence of codimension 1 solutions that are smooth away from a closed set of Hausdorff dimension n − 8. In the case of higher codimension Almgren proved existence of solutions with singular set of dimension at most n − 2 in his regularity theorem. S. X. Chang, a student of Almgren, built upon Almgren's work to show that the singularities of 2-dimensional area minimizing integral currents (in arbitrary codimension) form a finite discrete set.
The axiomatic approach of Jenny Harrison and Harrison Pugh treats a wide variety of special cases. In particular, they solve the anisotropic Plateau problem in arbitrary dimension and codimension for any collection of rectifiable sets satisfying a combination of general homological, cohomological or homotopical spanning conditions. A different proof of Harrison-Pugh's results were obtained by Camillo De Lellis, Francesco Ghiraldin and Francesco Maggi.
Physical applications
Physical soap films are more accurately modeled by the (M, 0, δ)-minimal sets of Frederick Almgren, but the lack of a compactness theorem makes it difficult to prove the existence of an area minimizer. In this context, a persistent open question has been the existence of a least-area soap film. Ernst Robert Reifenberg solved such a "universal Plateau's problem" for boundaries which are homeomorphic to single embedded spheres.
See also
Double Bubble conjecture
Dirichlet principle
Plateau's laws
Stretched grid method
Bernstein's problem
References
Calculus of variations
Minimal surfaces
Mathematical problems | Plateau's problem | [
"Chemistry",
"Mathematics"
] | 703 | [
"Foams",
"Mathematical problems",
"Minimal surfaces"
] |
338,192 | https://en.wikipedia.org/wiki/Common-ion%20effect | In chemistry, the common-ion effect refers to the decrease in solubility of an ionic precipitate by the addition to the solution of a soluble compound with an ion in common with the precipitate. This behaviour is a consequence of Le Chatelier's principle for the equilibrium reaction of the ionic association/dissociation. The effect is commonly seen as an effect on the solubility of salts and other weak electrolytes. Adding an additional amount of one of the ions of the salt generally leads to increased precipitation of the salt, which reduces the concentration of both ions of the salt until the solubility equilibrium is reached. The effect is based on the fact that both the original salt and the other added chemical have one ion in common with each other.
Examples of the common-ion effect
Dissociation of hydrogen sulfide in presence of hydrochloric acid
Hydrogen sulfide (H2S) is a weak electrolyte. It is partially ionized when in aqueous solution, therefore there exists an equilibrium between un-ionized molecules and constituent ions in an aqueous medium as follows:
H2S ⇌ H+ + HS−
By applying the law of mass action, we have the dissociation constant Ka = [H+][HS−] / [H2S].
Hydrochloric acid (HCl) is a strong electrolyte, which nearly completely ionizes as
HCl → H+ + Cl−
If HCl is added to the H2S solution, H+ is a common ion and creates a common ion effect. Due to the increase in concentration of H+ ions from the added HCl, the equilibrium of the dissociation of H2S shifts to the left and keeps the value of Ka constant. Thus the dissociation of H2S decreases, the concentration of un-ionized H2S increases, and as a result, the concentration of sulfide ions decreases.
Solubility of barium iodate in presence of barium nitrate
Barium iodate, Ba(IO3)2, has a solubility product Ksp = [Ba2+][IO3−]2 = 1.57 x 10−9. Its solubility in pure water is 7.32 x 10−4 M. However in a solution that is 0.0200 M in barium nitrate, Ba(NO3)2, the increase in the common ion barium leads to a decrease in iodate ion concentration. The solubility is therefore reduced to 1.40 x 10−4 M, about five times smaller.
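The reduced solubility can be reproduced with a short calculation on the solubility-product expression. The Python sketch below uses only the Ksp and concentration quoted above; the fixed-point iteration is an illustrative way to solve the equation, not part of the source.

```python
# Solubility of Ba(IO3)2 with and without the common ion, using the values
# quoted above (Ksp = 1.57e-9, 0.0200 M added Ba2+). The fixed-point
# iteration is one simple way to solve Ksp = [Ba2+][IO3-]^2 for s.
Ksp = 1.57e-9
c_ba = 0.0200                    # Ba2+ from the dissolved barium nitrate, mol/L

s_pure = (Ksp / 4) ** (1 / 3)    # pure water: (s)(2s)^2 = Ksp
s = s_pure                       # starting guess for the common-ion case
for _ in range(50):
    # dissolving s mol/L of Ba(IO3)2 gives (c_ba + s) M Ba2+ and 2s M IO3-
    s = (Ksp / (c_ba + s)) ** 0.5 / 2

print(f"pure water:       {s_pure:.2e} M")   # ~7.3e-4 M
print(f"with common ion:  {s:.2e} M")        # ~1.4e-4 M, about 5x smaller
```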
Solubility effects
A practical example used very widely in areas drawing drinking water from chalk or limestone aquifers is the addition of sodium carbonate to the raw water to reduce the hardness of the water. In the water treatment process, highly soluble sodium carbonate salt is added to precipitate out sparingly soluble calcium carbonate. The very pure and finely divided precipitate of calcium carbonate that is generated is a valuable by-product used in the manufacture of toothpaste.
The salting-out process used in the manufacture of soaps benefits from the common-ion effect. Soaps are sodium salts of fatty acids. Addition of sodium chloride reduces the solubility of the soap salts. The soaps precipitate due to a combination of common-ion effect and increased ionic strength.
Sea, brackish and other waters that contain appreciable amount of sodium ions (Na+) interfere with the normal behavior of soap because of common-ion effect. In the presence of excess Na+, the solubility of soap salts is reduced, making the soap less effective.
Buffering effect
A buffer solution contains an acid and its conjugate base or a base and its conjugate acid. Addition of the conjugate ion will result in a change of pH of the buffer solution. For example, if both sodium acetate and acetic acid are dissolved in the same solution they both dissociate and ionize to produce acetate ions. Sodium acetate is a strong electrolyte, so it dissociates completely in solution. Acetic acid is a weak acid, so it only ionizes slightly. According to Le Chatelier's principle, the addition of acetate ions from sodium acetate will suppress the ionization of acetic acid and shift its equilibrium to the left. Thus the percent dissociation of the acetic acid will decrease, and the pH of the solution will increase. The ionization of an acid or a base is limited by the presence of its conjugate base or acid.
NaCH3CO2(s) → Na+(aq) + CH3CO2−(aq)
CH3CO2H(aq) ⇌ H+(aq) + CH3CO2−(aq)
This will decrease the hydronium concentration, and thus the common-ion solution will be less acidic than a solution containing only acetic acid.
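A quick calculation shows the suppression numerically. The Python sketch below assumes the usual textbook Ka of acetic acid (1.8 x 10−5) and illustrative 0.10 M concentrations; neither value comes from the text.

```python
import math

Ka = 1.8e-5          # acetic acid Ka, common textbook value (assumed)
c_acid = 0.10        # mol/L acetic acid (illustrative)
c_salt = 0.10        # mol/L sodium acetate (illustrative)

# acetic acid alone: Ka = x^2 / (c_acid - x); solve the quadratic for x = [H+]
x = (-Ka + math.sqrt(Ka**2 + 4 * Ka * c_acid)) / 2
print(f"acid alone: pH = {-math.log10(x):.2f}, {100 * x / c_acid:.2f}% ionized")

# with added acetate (the common ion): Ka ~ x * c_salt / c_acid for small x
x = Ka * c_acid / c_salt
print(f"buffer:     pH = {-math.log10(x):.2f}, {100 * x / c_acid:.3f}% ionized")
# the pH rises and the percent ionization drops, as Le Chatelier predicts
```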
Exceptions
Many transition-metal compounds violate this rule due to the formation of complex ions, a scenario not part of the equilibria that are involved in simple precipitation of salts from ionic solution. For example, copper(I) chloride is insoluble in water, but it dissolves when chloride ions are added, such as when hydrochloric acid is added. This is due to the formation of soluble CuCl2− complex ions.
Uncommon-ion effect
Sometimes adding an ion other than the ones that are part of the precipitated salt itself can increase the solubility of the salt. This "salting in" is called the "uncommon-ion effect" (also "salt effect" or the "diverse-ion effect"). It occurs because as the total ion concentration increases, inter-ion attraction within the solution can become an important factor. This alternate equilibrium makes the ions less available for the precipitation reaction. This is also called odd ion effect.
References
Equilibrium chemistry
Solutions | Common-ion effect | [
"Chemistry"
] | 1,214 | [
"Homogeneous chemical mixtures",
"Solutions",
"Equilibrium chemistry"
] |
1,005,128 | https://en.wikipedia.org/wiki/Wave%20soldering | Wave soldering is a bulk soldering process used in printed circuit board manufacturing. The circuit board is passed over a pan of molten solder in which a pump produces an upwelling of solder that looks like a standing wave. As the circuit board makes contact with this wave, the components become soldered to the board. Wave soldering is used for both through-hole printed circuit assemblies, and surface mount. In the latter case, the components are glued onto the surface of a printed circuit board (PCB) by placement equipment, before being run through the molten solder wave. Wave soldering is mainly used in soldering of through hole components.
As through-hole components have been largely replaced by surface mount components, wave soldering has been supplanted by reflow soldering methods in many large-scale electronics applications. However, there is still significant wave soldering where surface-mount technology (SMT) is not suitable (e.g., large power devices and high pin count connectors), or where simple through-hole technology prevails (certain major appliances).
Wave solder process
There are many types of wave solder machines; however, the basic components and principles of these machines are the same. The basic equipment used during the process is a conveyor that moves the PCB through the different zones, a pan of solder used in the soldering process, a pump that produces the actual wave, the sprayer for the flux and the preheating pad. The solder is usually a mixture of metals. A typical leaded solder is composed of 50% tin, 49.5% lead, and 0.5% antimony. The Restriction of Hazardous Substances Directive (RoHS) has led to an ongoing transition away from 'traditional' leaded solder in modern manufacturing in favor of lead-free alternatives. Both tin-silver-copper and tin-copper-nickel alloys are commonly used, with one common alloy (SN100C) being 99.25% tin, 0.7% copper, 0.05% nickel and <0.01% germanium.
Fluxing
Flux in the wave soldering process has a primary and a secondary objective. The primary objective is to clean the components that are to be soldered, principally any oxide layers that may have formed. There are two types of flux, corrosive and noncorrosive. Noncorrosive flux requires precleaning and is used when low acidity is required. Corrosive flux is quick and requires little precleaning, but has a higher acidity.
Preheating
Preheating helps to accelerate the soldering process and to prevent thermal shock.
Cleaning
Some types of flux, called "no-clean" fluxes, do not require cleaning; their residues are benign after the soldering process. Typically no-clean fluxes are especially sensitive to process conditions, which may make them undesirable in some applications. Other kinds of flux, however, require a cleaning stage, in which the PCB is washed with solvents and/or deionized water to remove flux residue.
Finish and quality
Quality depends on proper temperatures when heating and on properly treated surfaces.
Solder types
Different combinations of tin, lead and other metals are used to create solder. The combinations used depend on the desired properties. The most popular combinations are SAC (Tin(Sn)/Silver(Ag)/Copper(Cu)) alloys for lead-free processes and Sn63Pb37 (Sn63A) which is a eutectic alloy consisting of 63% tin and 37% lead. This latter combination is strong, has a low melting range, and melts and sets quickly (i.e., no 'plastic' range between the solid and molten states like the older 60% tin / 40% lead alloy). Higher tin compositions give the solder higher corrosion resistances, but raise the melting point. Another common composition is 11% tin, 37% lead, 42% bismuth, and 10% cadmium. This combination has a low melting point and is useful for soldering components that are sensitive to heat.
Environmental and performance requirements also factor into alloy selection. Common restrictions include restrictions on lead (Pb) when RoHS compliance is required and restrictions on pure tin (Sn) when long term reliability is a concern.
Effects of cooling rate
It is important that the PCBs be allowed to cool at a reasonable rate. If they are cooled too fast, then the PCB can become warped and the solder can be compromised. On the other hand, if the PCB is allowed to cool too slowly, then the PCB can become brittle and some components may be damaged by heat. The PCB should be cooled by either a fine water spray or air cooled to decrease the amount of damage to the board.
Thermal profiling
Thermal profiling is the act of measuring several points on a circuit board to determine the thermal excursion it takes through the soldering process.
In the electronics manufacturing industry, SPC (Statistical Process Control) helps determine if the process is in control, measured against the reflow parameters defined by the soldering technologies and component requirements.
Products like the Solderstar WaveShuttle and the Optiminer are specially developed fixtures that are passed through the process and can measure the temperature profile, along with contact times, wave parallelism and wave heights. These fixtures, combined with analysis software, allow the production engineer to establish and then control the wave solder process.
Solder wave height
The height of the solder wave is a key parameter that needs to be evaluated when setting up the wave solder process. The contact time between the solder wave and the assembly being soldered is typically set to between 2 and 4 seconds. This contact time is controlled by two parameters on the machine, conveyor speed and wave height; changes to either of these parameters will result in a change in contact time. The wave height is typically controlled by increasing or decreasing the pump speed on the machine. Changes can be evaluated and checked using a tempered glass plate; if more detailed recordings are required, fixtures are available which digitally record the contact times, height and speed. Also, some wave solder machines can give the operator a choice between a smooth laminar wave or a slightly higher-pressure 'dancer' wave.
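The relationship between these settings is a simple ratio. The Python sketch below uses illustrative numbers for contact length and conveyor speed; only the 2-4 second target window comes from the text.

```python
# Contact time = contact length / conveyor speed. The pump speed (wave
# height) sets the contact length; both numbers below are illustrative.
contact_length_mm = 50.0      # board/wave contact length, set via wave height
conveyor_speed_mm_s = 20.0    # conveyor speed

contact_time_s = contact_length_mm / conveyor_speed_mm_s
print(f"contact time: {contact_time_s:.1f} s")   # 2.5 s
assert 2.0 <= contact_time_s <= 4.0, "adjust pump speed or conveyor speed"
```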
See also
Dip soldering
Thermal profiling
Solder mask
References
Further reading
Seeling, Karl (1995). A study of lead-free alloys. AIM. Retrieved April 18, 2008.
Biocca, Peter (April 5, 2005). Lead-free wave soldering. EMSnow. Retrieved April 18, 2008.
Electronic Production Design & Test (February 13, 2015). The importance of wave height measurement in wave solder process control.
Soldering
Articles containing video clips
Printed circuit board manufacturing | Wave soldering | [
"Engineering"
] | 1,401 | [
"Electrical engineering",
"Electronic engineering",
"Printed circuit board manufacturing"
] |
1,005,852 | https://en.wikipedia.org/wiki/Artronix | Artronix Incorporated began in 1970 and has roots in a project in a computer science class at Washington University School of Medicine in St Louis. The class designed, built and tested a 12-bit minicomputer, which later evolved to become the PC12 minicomputer. The new company entered the bio-medical computing market with a set of peripherals and software for use in Radiation Treatment Planning and ultrasound scanning. Software for the PC12 was written in assembly language and FORTRAN; later software was written in MUMPS. The company was located in two buildings in the Hanley Industrial Park off South Hanley Road in Maplewood, Missouri.
The company later developed another product line of brain-scanning or computed tomography equipment based on the Lockheed SUE 16-bit minicomputer (see also Pluribus); later designs included an optional vector processor using AMD Am2900 bipolar bit-slices to speed tomographic reconstruction calculations. In contrast to earlier designs, the Artronix scanner used a fan-shaped beam with 128 detectors on a rotating gantry. The system would take 540 degrees of data (1½ rotations) to average out noise in the samples. The beam allowed 3mm slices, but several slices would routinely be mathematically combined into one image for display purposes. The first generation of scanners was a head scanner while a later generation was a torso (whole-body) scanner. The CAT-3 (computerized axial tomography) system was a success at first, but the technology surrendered ground to PET (positron emission tomography) and MRI (magnetic resonance imaging) systems. Artronix closed its doors in 1978. A video of the Artronix torso scanner operating without a shroud is available on YouTube at Commissie NVvRadiologie with narration in Dutch.
Artronix was founded by Arne Roestel. Mr. Roestel went on to found Multidata Systems International. For his leadership of Artronix, Mr. Roestel was named as the Small Businessman of the Year for Missouri in 1976 by the Small Business Administration and was hosted at a luncheon by President Gerald Ford (source: Ford Library Museum).
References
Defunct computer companies of the United States
Defunct computer hardware companies
Companies based in Missouri | Artronix | [
"Technology"
] | 483 | [
"Computing stubs",
"Computer hardware stubs"
] |
1,006,293 | https://en.wikipedia.org/wiki/Biorobotics | Biorobotics is an interdisciplinary science that combines the fields of biomedical engineering, cybernetics, and robotics to develop new technologies that integrate biology with mechanical systems to develop more efficient communication, alter genetic information, and create machines that imitate biological systems.
Cybernetics
Cybernetics focuses on the communication and system of living organisms and machines that can be applied and combined with multiple fields of study such as biology, mathematics, computer science, engineering, and much more.
This discipline falls under the branch of biorobotics because of its combined field of study between biological bodies and mechanical systems. Studying these two systems allow for advanced analysis on the functions and processes of each system as well as the interactions between them.
History
Cybernetic theory is a concept that has existed for centuries, dating back to the era of Plato, where he applied the term to refer to the "governance of people". The term cybernétique was used in the first half of the 19th century by the physicist André-Marie Ampère. The term cybernetics was popularized in the late 1940s to refer to a discipline that touched on, but was separate from, established disciplines such as electrical engineering, mathematics, and biology.
Science
Cybernetics is often misunderstood because of the breadth of disciplines it covers. In the early 20th century, it was coined as an interdisciplinary field of study that combines biology, science, network theory, and engineering. Today, it covers all scientific fields with system related processes. The goal of cybernetics is to analyze systems and processes of any system or systems in an attempt to make them more efficient and effective.
Applications
Cybernetics is used as an umbrella term so applications extend to all systems related scientific fields such as biology, mathematics, computer science, engineering, management, psychology, sociology, art, and more. Cybernetics is used amongst several fields to discover principles of systems, adaptation of organisms, information analysis and much more.
Genetic engineering
Genetic engineering is a field that uses advances in technology to modify biological organisms. Through different methods, scientists are able to alter the genetic material of microorganisms, plants and animals to provide them with desirable traits. For example, making plants grow bigger, better, and faster. Genetic engineering is included in biorobotics because it uses new technologies to alter biology and change an organism's DNA for their and society's benefit.
History
Although humans have modified genetic material of animals and plants through artificial selection for millennia (such as the genetic mutations that developed teosinte into corn and wolves into dogs), genetic engineering refers to the deliberate alteration or insertion of specific genes to an organism's DNA. The first successful case of genetic engineering occurred in 1973 when Herbert Boyer and Stanley Cohen were able to transfer a gene with antibiotic resistance to a bacterium.
Science
There are three main techniques used in genetic engineering: The plasmid method, the vector method and the biolistic method.
Plasmid method
This technique is used mainly for microorganisms such as bacteria. Through this method, DNA molecules called plasmids are extracted from bacteria and placed in a lab where restriction enzymes break them down. As the enzymes break the molecules down, some develop a rough edge that resembles that of a staircase, which is considered 'sticky' and capable of reconnecting. These 'sticky' molecules are inserted into another bacterium, where they will connect to the DNA rings with the altered genetic material.
Vector method
The vector method is considered a more precise technique than the plasmid method as it involves the transfer of a specific gene instead of a whole sequence. In the vector method, a specific gene from a DNA strand is isolated through restriction enzymes in a laboratory and is inserted into a vector. Once the vector accepts the genetic code, it is inserted into the host cell where the DNA will be transferred.
Biolistic method
The biolistic method is typically used to alter the genetic material of plants. This method embeds the desired DNA with a metallic particle such as gold or tungsten in a high speed gun. The particle is then bombarded into the plant. Due to the high velocities and the vacuum generated during bombardment, the particle is able to penetrate the cell wall and inserts the new DNA into the cell.
Applications
Genetic engineering has many uses in the fields of medicine, research and agriculture. In the medical field, genetically modified bacteria are used to produce drugs such as insulin, human growth hormones and vaccines. In research, scientists genetically modify organisms to observe physical and behavioral changes to understand the function of specific genes. In agriculture, genetic engineering is extremely important, as it is used by farmers to grow crops that are resistant to herbicides and to insects, such as Bt corn.
Bionics
Bionics is a medical engineering field and a branch of biorobotics consisting of electrical and mechanical systems that imitate biological systems, such as prosthetics and hearing aids. The name is a portmanteau of biology and electronics.
History
The history of bionics goes as far back in time as ancient Egypt. A prosthetic toe made out of wood and leather was found on the foot of a mummy. The time period of the mummy corpse was estimated to be from around the fifteenth century B.C. Bionics can also be witnessed in ancient Greece and Rome. Prosthetic legs and arms were made for amputee soldiers. In the 16th century, the French military surgeon Ambroise Paré became a pioneer in the field of bionics. He was known for making various types of upper and lower prosthetics. One of his most famous prosthetics, Le Petit Lorrain, was a mechanical hand operated by catches and springs. During the early 19th century, Alessandro Volta further progressed bionics. He set the foundation for the creation of hearing aids with his experiments. He found that electrical stimulation could restore hearing by inserting an electrical implant to the saccular nerve of a patient's ear. In 1945, the National Academy of Sciences created the Artificial Limb Program, which focused on improving prosthetics since there were a large number of World War II amputee soldiers. Since this creation, prosthetic materials, computer design methods, and surgical procedures have improved, creating modern-day bionics.
Science
Prosthetics
The important components that make up modern-day prosthetics are the pylon, the socket, and the suspension system. The pylon is the internal frame of the prosthetic that is made up of metal rods or carbon-fiber composites. The socket is the part of the prosthetic that connects the prosthetic to the person's missing limb. The socket consists of a soft liner that makes the fit comfortable, but also snug enough to stay on the limb. The suspension system is important in keeping the prosthetic on the limb. The suspension system is usually a harness system made up of straps, belts or sleeves that are used to keep the limb attached.
The operation of a prosthetic could be designed in various ways. The prosthetic could be body-powered, externally-powered, or myoelectrically powered. Body-powered prosthetics consist of cables attached to a strap or harness, which is placed on the person's functional shoulder, allowing the person to manipulate and control the prosthetic as he or she deems fit. Externally-powered prosthetics consist of motors to power the prosthetic and buttons and switches to control the prosthetic. Myoelectrically powered prosthetics are new, advanced forms of prosthetics where electrodes are placed on the muscles above the limb. The electrodes will detect the muscle contractions and send electrical signals to the prosthetic to move the prosthetic. The downside to this type of prosthetic is that if the sensors are not placed correctly on the limb then the electrical impulses will fail to move the prosthetic. TrueLimb is a specific brand of prosthetics that uses myoelectrical sensors which enable a person to have control of their bionic limb.
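The myoelectric signal chain just described (electrodes, detection of muscle contractions, command to the prosthetic) can be sketched in a few lines. The Python below is a toy illustration of the idea, not any vendor's algorithm; the sample values, window size and threshold are all made up.

```python
def emg_envelope(samples, window=8):
    """Moving average of the rectified (absolute-value) electrode signal."""
    rectified = [abs(s) for s in samples]
    return [
        sum(rectified[max(0, i - window + 1): i + 1]) / min(window, i + 1)
        for i in range(len(rectified))
    ]

def hand_commands(samples, threshold=0.5):
    """Command 'close' while the contraction envelope exceeds the threshold."""
    return ["close" if e > threshold else "open" for e in emg_envelope(samples)]

# weak activity followed by a strong contraction (illustrative values)
signal = [0.05, -0.1, 0.08, -0.06, 0.9, -1.1, 1.0, -0.95, 0.85, -1.05]
print(hand_commands(signal))   # stays open, then closes once the burst builds up
```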
Hearing aids
Four major components make up the hearing aid: the microphone, the amplifier, the receiver, and the battery. The microphone takes in outside sound, turns that sound to electrical signals, and sends those signals to the amplifier. The amplifier increases the sound and sends that sound to the receiver. The receiver changes the electrical signal back into sound and sends the sound into the ear. Hair cells in the ear will sense the vibrations from the sound, convert the vibrations into nerve signals, and send it to the brain so the sounds can become coherent to the person. The battery simply powers the hearing aid.
Applications
Cochlear Implant
Cochlear implants are a type of hearing aid for those who are deaf. Cochlear implants send electrical signals straight to the auditory nerve, the nerve responsible for sound signals, instead of just sending the signals to the ear canal like conventional hearing aids.
Bone-Anchored Hearing Aids
These hearing aids are also used for people with severe hearing loss. They attach to the bones of the middle ear to create sound vibrations in the skull and send those vibrations to the cochlea.
Artificial sensing skin
Artificial sensing-skin detects any pressure put on it and is meant for people who have lost any sense of feeling on parts of their bodies, such as diabetics with peripheral neuropathy.
Bionic eye
A bionic eye is a bioelectronic implant designed to restore vision for individuals with blindness.
Although the technology is still in development, it has enabled some legally blind individuals to distinguish letters again.
Replicating the retina, which contains millions of photoreceptors, and matching the human eye’s exceptional lensing and dynamic range capabilities pose significant challenges. Neural integration further complicates the process. Despite these difficulties, ongoing research and prototyping have led to several major achievements in recent years.
Orthopedic bionics
Orthopedic bionics consist of advanced bionic limbs that use a person's neuromuscular system to control the bionic limb. A new advancement in the comprehension of brain function has led to the development and implementation of brain-machine interfaces (BMIs). BMIs allow for the processing of neural messaging between motor regions of the brain and the muscles of a specific limb to initiate movement. BMIs contribute greatly to restoring independent movement for a person who has a bionic limb and/or an exoskeleton.
Endoscopic robotics
These robotics can remove a polyp during a colonoscopy.
Animal-robot interactions
Animal-robot interactions is a field of Biorobotics that focuses on the blending of robotic compounds with animal individuals or populations. The domain can be subdivided into two main branches, one that relates mechatronic devices with individual animals, and another one with animal populations. Both branches have a variety of applications, ranging from animal cyborgs benefiting from animals' superior motor capabilities to ethological studies around animal collective behaviour. While this representation draws a globally accurate view of the domain, some animal-robot interactions cannot be strictly classified into one or the other of these branches, or are sometimes a mixture of both. This is the case namely for ethological robots that interact on a one-to-one basis or when eusocial animals are considered as a single superorganism interacting with a single robotic device. In the latter case, the term Bio-Hybrid superorganism is used to describe the blending of a robotic device with a superorganism to enable interaction, control and thus studying of the latter superorganism.
Bio-Hybrid organisms
Mixed societies
Mixed societies blend together a set of animals (animal society) with a set of robotic devices (artificial society). Care should be taken when using the word society, as the noun could be misleading within the zoologist community involved in this domain; a more accurate word would be populations, which is also the one used for the rest of this section.
Typically, the robotic population is composed of robotic replicas of the target animal individuals aimed to integrate within the animal population. To do this, stimuli naturally perceived by the animals are emitted by the robotic individuals, and this through different communication channels: visual cues, thermal pulses, vibration signals, etc. The degree to which the robotic individuals successfully blend with the animal population is referred to as bio-acceptance, and is often key to enabling further behavioural study of the target species.
Once interactions between the animal and robot population is achieved by establishing relevant communication channels, mixed societies offer the potential for adaptive robotic behaviours driven by real-time feedback from the animal population. By responding directly to animal behaviour, the robots can dynamically adjust their actions to better integrate into the group. This capability is particularly valuable for understanding collective behaviours in animal populations. Adaptive robots can be used to implement models of specific roles or interactions within a group, enabling the testing of hypotheses about coordination, decision-making, or social organisation. This approach bridges experimental and modelling techniques, in an attempt to offer insights into the underlying mechanisms of collective behaviour.
See also
Android (robot)
Bio-inspired robotics
Molecular machine#Biological
Biological devices
Biomechatronics
Biomimetics
Cultured neural networks
Cyborg
Cylon (reimagining)
Nanobot
Nanomedicine
Plantoid
Remote control animal
Replicant
Roborat
Technorganic
References
External links
The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
The BioRobotics Lab. Robotics Institute, Carnegie Mellon University *
Bioroïdes - A timeline of the popularization of the idea (in French)
Harvard BioRobotics Laboratory, Harvard University
Locomotion in Mechanical and Biological Systems (LIMBS) Laboratory, Johns Hopkins University
BioRobotics Lab in Korea
Laboratory of Biomedical Robotics and Biomicrosystems, Italy
Tiny backpacks for cells (MIT News)
Biologically Inspired Robotics Lab, Case Western Reserve University
Bio-Robotics and Human Modeling Laboratory - Georgia Institute of Technology
Biorobotics Laboratory at École Polytechnique Fédérale de Lausanne (Switzerland)
BioRobotics Laboratory, Free University of Berlin (Germany)
Biorobotics research group, Institute of Movement Science, CNRS/Aix-Marseille University (France)
Center for Biorobotics, Tallinn University of Technology (Estonia)
Biopunk
Biotechnology
Cyberpunk
Cybernetics
Fictional technology
Postcyberpunk
Health care robotics
Science fiction themes
Robotics | Biorobotics | [
"Engineering",
"Biology"
] | 2,974 | [
"Biotechnology",
"nan",
"Robotics",
"Automation"
] |
1,006,651 | https://en.wikipedia.org/wiki/Amorphous%20carbon | Amorphous carbon is free, reactive carbon that has no crystalline structure. Amorphous carbon materials may be stabilized by terminating dangling-π bonds with hydrogen. As with other amorphous solids, some short-range order can be observed. Amorphous carbon is often abbreviated to aC for general amorphous carbon, aC:H or HAC for hydrogenated amorphous carbon, or to ta-C for tetrahedral amorphous carbon (also called diamond-like carbon).
In mineralogy
In mineralogy, amorphous carbon is the name used for coal, carbide-derived carbon, and other impure forms of carbon that are neither graphite nor diamond. In a crystallographic sense, however, the materials are not truly amorphous but rather polycrystalline materials of graphite or diamond within an amorphous carbon matrix. Commercial carbon also usually contains significant quantities of other elements, which may also form crystalline impurities.
In modern science
With the development of modern thin film deposition and growth techniques in the latter half of the 20th century, such as chemical vapour deposition, sputter deposition, and cathodic arc deposition, it became possible to fabricate truly amorphous carbon materials.
True amorphous carbon has localized π electrons (as opposed to the aromatic π bonds in graphite), and its bonds form with lengths and distances that are inconsistent with any other allotrope of carbon. It also contains a high concentration of dangling bonds; these cause deviations in interatomic spacing (as measured using diffraction) of more than 5% as well as noticeable variation in bond angle.
The properties of amorphous carbon films vary depending on the parameters used during deposition. The primary method for characterizing amorphous carbon is through the ratio of sp2 to sp3 hybridized bonds present in the material. Graphite consists purely of sp2 hybridized bonds, whereas diamond consists purely of sp3 hybridized bonds. Materials that are high in sp3 hybridized bonds are referred to as tetrahedral amorphous carbon, owing to the tetrahedral shape formed by sp3 hybridized bonds, or as diamond-like carbon (owing to the similarity of many physical properties to those of diamond).
Experimentally, sp2 to sp3 ratios can be determined by comparing the relative intensities of various spectroscopic peaks (including EELS, XPS, and Raman spectroscopy) to those expected for graphite or diamond. In theoretical works, the sp2 to sp3 ratios are often obtained by counting the number of carbon atoms with three bonded neighbors versus those with four bonded neighbors. (This technique requires deciding on a somewhat arbitrary metric for determining whether neighboring atoms are considered bonded or not, and is therefore merely used as an indication of the relative sp2-sp3 ratio.)
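In practice the counting can be done with a distance cutoff, as in the Python sketch below; the 1.85 Å cutoff and the toy geometry are illustrative choices (as the text notes, the bonding metric is somewhat arbitrary), and periodic boundary conditions are ignored.

```python
import numpy as np

def sp2_sp3_counts(positions, cutoff=1.85):
    """Count 3-coordinated (sp2-like) and 4-coordinated (sp3-like) atoms."""
    positions = np.asarray(positions, dtype=float)     # (n_atoms, 3), angstroms
    dists = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                    # exclude self-distances
    coordination = (dists < cutoff).sum(axis=1)        # bonded neighbors per atom
    return int((coordination == 3).sum()), int((coordination == 4).sum())

# toy geometry: one tetrahedrally coordinated atom and its four neighbors
cell = [(0, 0, 0), (0.89, 0.89, 0.89), (-0.89, -0.89, 0.89),
        (-0.89, 0.89, -0.89), (0.89, -0.89, -0.89)]
print(sp2_sp3_counts(cell))    # (0, 1): only the central atom is 4-coordinated
```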
Although the characterization of amorphous carbon materials by the sp2-sp3 ratio may seem to indicate a one-dimensional range of properties between graphite and diamond, this is most definitely not the case. Research is currently ongoing into ways to characterize and expand on the range of properties offered by amorphous carbon materials.
All practical forms of hydrogenated carbon (e.g. smoke, chimney soot, mined coal such as bitumen and anthracite) contain large amounts of polycyclic aromatic hydrocarbon tars, and are therefore almost certainly carcinogenic.
Q-carbon
Q-carbon, short for quenched carbon, is claimed to be a type of amorphous carbon that is ferromagnetic, electrically conductive, harder than diamond, and able to exhibit high-temperature superconductivity. A research group led by Professor Jagdish Narayan and graduate student Anagh Bhaumik at North Carolina State University announced the discovery of Q-carbon in 2015. They have published numerous papers on the synthesis and characterization of Q-carbon, but years later, there is no independent experimental confirmation of this substance and its properties.
According to the researchers, Q-carbon exhibits a random amorphous structure that is a mix of 3-way (sp2) and 4-way (sp3) bonding, rather than the uniform sp3 bonding found in diamonds. Carbon is melted using nanosecond laser pulses, then quenched rapidly to form Q-carbon, or a mixture of Q-carbon and diamond. Q-carbon can be made to take multiple forms, from nanoneedles to large-area diamond films. The researchers also reported the creation of nitrogen-vacancy nanodiamonds and Q-boron nitride (Q-BN), as well as the conversion of carbon into diamond and h-BN into c-BN at ambient temperatures and air pressures. The group obtained patents on q-materials and intended to commercialize them.
In 2018, a team at University of Texas at Austin used simulations to propose theoretical explanations of the reported properties of Q-carbon, including the record high-temperature superconductivity, ferromagnetism and hardness. However, their simulations have not been verified by other researchers.
See also
Glassy carbon
Diamond-like carbon
Carbon black
Soot
Carbon
References
Allotropes of carbon
Amorphous solids | Amorphous carbon | [
"Physics",
"Chemistry"
] | 1,071 | [
"Amorphous solids",
"Allotropes of carbon",
"Unsolved problems in physics",
"Allotropes"
] |
1,007,110 | https://en.wikipedia.org/wiki/Analytic%20proof | In mathematics, an analytic proof is a proof of a theorem in analysis that only makes use of methods from analysis, and that does not predominantly make use of algebraic or geometrical methods. The term was first used by Bernard Bolzano, who first provided a non-analytic proof of his intermediate value theorem and then, several years later, provided a proof of the theorem that was free from intuitions concerning lines crossing each other at a point, and so he felt happy calling it analytic (Bolzano 1817).
Bolzano's philosophical work encouraged a more abstract reading of when a demonstration could be regarded as analytic, where a proof is analytic if it does not go beyond its subject matter (Sebastik 2007). In proof theory, an analytic proof has come to mean a proof whose structure is simple in a special way, due to conditions on the kind of inferences that ensure none of them go beyond what is contained in the assumptions and what is demonstrated.
Structural proof theory
In proof theory, the notion of analytic proof provides the fundamental concept that brings out the similarities between a number of essentially distinct proof calculi, so defining the subfield of structural proof theory. There is no uncontroversial general definition of analytic proof, but for several proof calculi there is an accepted notion. For example:
In Gerhard Gentzen's natural deduction calculus the analytic proofs are those in normal form; that is, no formula occurrence is both the principal premise of an elimination rule and the conclusion of an introduction rule;
In Gentzen's sequent calculus the analytic proofs are those that do not use the cut rule.
However, it is possible to extend the inference rules of both calculi so that there are proofs that satisfy the condition but are not analytic. For example, a particularly tricky example of this is the analytic cut rule, used widely in the tableau method, which is a special case of the cut rule where the cut formula is a subformula of side formulae of the cut rule: a proof that contains an analytic cut is by virtue of that rule not analytic.
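For reference, the cut rule in question has the following standard sequent-calculus form (a conventional presentation, not taken from a specific source):

```latex
% The cut rule: the cut formula A appears in both premises but not in the
% conclusion. In an analytic cut, A is further required to be a subformula
% of a side formula, yet a proof using it still fails to be analytic.
\[
\frac{\Gamma \vdash \Delta, A \qquad A, \Sigma \vdash \Pi}
     {\Gamma, \Sigma \vdash \Delta, \Pi}\ (\mathrm{cut})
\]
```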
Furthermore, proof calculi that are not analogous to Gentzen's calculi have other notions of analytic proof. For example, the calculus of structures organises its inference rules into pairs, called the up fragment and the down fragment, and an analytic proof is one that only contains the down fragment.
See also
Proof-theoretic semantics
References
Bernard Bolzano (1817). Purely analytic proof of the theorem that between any two values which give results of opposite sign, there lies at least one real root of the equation. In Abhandlungen der königlichen böhmischen Gesellschaft der Wissenschaften, Vol. V, pp. 225–248.
Frank Pfenning (1984). Analytic and Non-analytic Proofs. In Proc. 7th International Conference on Automated Deduction.
Jan Šebestik (2007). Bolzano's Logic. Entry in the Stanford Encyclopedia of Philosophy.
Proof theory
Methods of proof | Analytic proof | [
"Mathematics"
] | 628 | [
"Mathematical logic",
"Methods of proof",
"Proof theory"
] |
1,007,853 | https://en.wikipedia.org/wiki/JCB%20%28heavy%20equipment%20manufacturer%29 | J.C. Bamford Excavators Limited (JCB) is a British multinational manufacturer of equipment for construction, agriculture, waste handling, and demolition. It was founded in 1945 and is based in Rocester, Staffordshire, England.
The word "JCB" is also often used colloquially as a generic description for mechanical diggers and excavators, and the word even appears in the Oxford English Dictionary, although it is still held as a trademark.
History
Joseph Cyril Bamford Excavators Ltd. was founded by Joseph Cyril Bamford in October 1945 in Uttoxeter, Staffordshire, England. He rented a lock-up garage. In it, using a welding set which he bought second-hand for £1 from English Electric, he made his first vehicle, a tipping trailer from war-surplus materials. The trailer's sides and floor were made from steel sheet that had been part of air raid shelters. On the same day as his son Anthony was born, he sold the trailer at a nearby market for £45 (plus a part-exchanged farm cart) and at once made another trailer. At one time he made vehicles in Eckersley's coal yard in Uttoxeter. The first trailer and the welding set have been preserved.
In 1948, six people were working for the company, and it made the first hydraulic tipping trailer in Europe. In 1950, it moved to an old cheese factory in Rocester, still employing six. A year later, Bamford began painting his products yellow. In 1953, he developed JCB's first backhoe loader, and the JCB logo appeared for the first time. It was designed by Derby Media and advertising designer Leslie Smith. In 1957, the firm launched the "hydra-digga", incorporating the excavator and the major loader as a single all-purpose tool useful for the agricultural and construction industries.
By 1964, JCB had sold over 3,000 3C backhoe loaders. The next year, the first 360-degree excavator was introduced, the JCB 7.
In 1975, Anthony Bamford, Bamford's son, was made Chairman of the company.
In 1978, the Loadall machine was introduced. The next year, the firm started its operation in India. In 1991, the firm entered a joint venture with Sumitomo of Japan to produce excavators, which ended in 1998. Two years later, a JCB factory was completed in Pooler near Savannah, Georgia, in the US, and in 2012 a factory was opened in Brazil.
In 2005, JCB bought a company, purchasing the German equipment firm Vibromax. In the same year, it opened a new factory in Pudong, China. Planning of a new £40M JCB Heavy Products site began following the launch of an architectural design competition in 2007 managed by RIBA Competitions, and by the next year, the firm began to move from its old site on Pinfold Street in Uttoxeter to the new site beside the A50; the Pinfold Street site was demolished in 2009. During that year, JCB announced plans to make India its largest manufacturing hub. Its factory at Ballabgarh in Haryana was to become the world's largest backhoe loader manufacturing facility. Although JCB shed 2,000 jobs during the Great Recession, in 2010 it rehired up to 200 new workers.
In 2013, JCB set up its fourth manufacturing facility in India. In 2014, it was reported that three out of every four pieces of construction equipment sold in India was a JCB, and that its Indian operations accounted for 17.5% of its total revenue. JCB-based memes have also become prevalent in India.
JCB began manufacturing 20-30 tonne excavators in Solnechnogorsky District in Russia in 2017. Due to trade sanctions imposed following the 2022 Russian invasion of Ukraine, JCB suspended its operations in Russia in March 2022.
In 2020, JCB launched www.jcbexplore.com - a website dedicated to promoting constructive play and outdoor activities for kids.
Products
Many of the vehicles produced by JCB are variants of the backhoe loader, including tracked or wheeled variants, mini and large version and other variations, such as forklift vehicles and telescopic handlers for moving materials to the upper floors of a building site. The company also produces wheeled loading shovels and articulated dump trucks.
Its JCB Fastrac range of tractors, which entered production in 1990, can drive at speeds of up to 75 km/h (40 mph) on roads and was shown on the BBC television programme Tomorrow's World, and years later as Jeremy Clarkson's tractor of choice in Top Gear. The firm makes a range of military vehicles, including the JCB HMEE. It licenses a range of rugged feature phones and smartphones designed for construction sites. The design and marketing contract was awarded to Data Select in 2010, which then lost the exclusive rights in 2013.
JCB power systems make a hydrogen combustion engine which aims to be cost effective by reusing parts from the company's Dieselmax engines.
JCB Insurance Services is a fully owned subsidiary of JCB that provides insurance for customers with funding from another fully owned subsidiary, JCB Finance.
JCB Dieselmax
In April 2006, JCB announced that they were developing a diesel-powered land speed record vehicle known as the 'JCB Dieselmax'. The car is powered by two modified JCB 444 diesel power plants, each using a two-stage turbocharger, with one engine driving the front wheels and the other the rear wheels.
On 22 August 2006 the Dieselmax, driven by Andy Green, broke the diesel engine land speed record. The following day, the record was again broken at a still higher speed.
Controversies and criticism
Violation of EU antitrust law
In December 2000, JCB was fined €39.6M by the European Commission for violating European Union antitrust law. The fine related to restrictions on sales outside allotted territories, purchases between authorised distributors, bonuses and fees which restricted out of territory sales, and occasional joint fixing of resale prices and discounts across different territories. JCB appealed the decision, with the European Court of First Instance upholding portions of the appeal and reducing the original fine by 25%. JCB appealed to the European Court of Justice but this final appeal was rejected in 2006, with the court slightly increasing the reduced fine by €864,000.
Tax avoidance
In 2017, a Reuters study of JCB group accounts found that between 2001 and 2013, the JCB group paid £577M to JCB Research, an unlimited company that does not have to file public accounts and which has only two shares, both owned by Anthony Bamford. JCB Research has been described as an obscure company, allegedly worth £27,000, but which donated £2M to the Conservative Party in the run up to the 2010 election, making it the largest donor. Ownership of the company which has never filed accounts is disputed by the Bamford brothers. According to a Guardian report, much of the Bamford money was held in shares in offshore trusts.
JCB Service, the main JCB holding company, is owned by a Dutch parent company, ‘Transmissions and engineering Netherlands BV’, which is ultimately controlled by “Bamford family interests”. According to Ethical Consumer, JCB has six subsidiaries in jurisdictions considered to be tax havens, in Singapore, the Netherlands, Hong Kong, Delaware and Switzerland.
Involvement in Israeli settlements
On 12 February 2020, the United Nations published a database of all business enterprises involved in certain specified activities related to the Israeli settlements in the Occupied Palestinian Territories, including East Jerusalem, and in the occupied Golan Heights. JCB has been listed on the database in light of its involvement in activities related to "the supply of equipment and materials facilitating the construction and the expansion of settlements and the wall, and associated infrastructures". The international community considers Israeli settlements built on land occupied by Israel to be in violation of international law.
In October 2020, the British government decided to investigate a complaint that JCB's sale of equipment to Israel did not comply with the human rights guidelines set by the Organisation for Economic Co-operation and Development. The UK National Contact Point (NCP), part of the UK's Department for International Trade, agreed to review a complaint against JCB submitted by a charity, Lawyers for Palestinian Human Rights. JCB said it had no "legal ownership" of its machinery once sold to Comasco, its sole distributor of JCB equipment in Israel.
Bailout loan
In 2020, JCB received a £600M loan in emergency financial aid from the UK government, during the coronavirus pandemic, despite its ultimate ownership being in the Netherlands and having reported a record £447M profit the previous year. Its chief executive Graeme Macdonald said: "Although not a public company, we are eligible for CCF because of our contribution to the UK economy. We don't expect to utilise it in the short-term but it gives us an insurance policy if there is further disruption from a second spike or other impact around the world."
Politics
JCB is a significant donor to the UK Conservative Party. Between 2007 and 2017, JCB and related Bamford entities donated £8.1m in cash or kind to the party. Between 2019 and 2021 JCB donated a further £2.5m.
In 2016, Anthony Bamford donated £100,000 to Vote Leave, the official pro-Brexit group, and wrote to JCB's 6,500 staff explaining why he supported the UK leaving the EU.
In October 2016, it was reported that JCB had left the CBI business lobby group in the summer of the same year due to the organisation's anti-Brexit stance. In May 2021, Anthony Bamford rejected an invitation to rejoin CBI, after previously having called it a "waste of time" that "didn’t represent my business or private companies".
References
External links
Construction equipment manufacturers of the United Kingdom
Mining equipment companies
Engine manufacturers of the United Kingdom
Manufacturing companies of England
Forklift truck manufacturers
Agricultural machinery manufacturers of the United Kingdom
Tractor manufacturers of the United Kingdom
Mobile phone manufacturers
English brands
Defence companies of the United Kingdom
Privately held companies of England
Family-owned companies of England
British companies established in 1945
Manufacturing companies established in 1945
Multinational companies headquartered in England
1945 establishments in England
Borough of East Staffordshire
Companies based in Staffordshire
Conservative Party (UK) donors
Electrical generation engine manufacturers
Automotive transmission makers | JCB (heavy equipment manufacturer) | [
"Engineering"
] | 2,164 | [
"Mining equipment",
"Mining equipment companies"
] |
1,007,903 | https://en.wikipedia.org/wiki/Generalized%20singular%20value%20decomposition | In linear algebra, the generalized singular value decomposition (GSVD) is the name of two different techniques based on the singular value decomposition (SVD). The two versions differ because one version decomposes two matrices (somewhat like the higher-order or tensor SVD) and the other version uses a set of constraints imposed on the left and right singular vectors of a single-matrix SVD.
First version: two-matrix decomposition
The generalized singular value decomposition (GSVD) is a matrix decomposition on a pair of matrices which generalizes the singular value decomposition. It was introduced by Van Loan in 1976 and later developed by Paige and Saunders, which is the version described here. In contrast to the SVD, the GSVD decomposes simultaneously a pair of matrices with the same number of columns. The SVD and the GSVD, as well as some other possible generalizations of the SVD, are extensively used in the study of the conditioning and regularization of linear systems with respect to quadratic semi-norms. In the following, let F = R or F = C.
Definition
The generalized singular value decomposition of matrices A ∈ F^{m×n} and B ∈ F^{p×n} is
A = U Σ1 [W* D, 0] Q*,
B = V Σ2 [W* D, 0] Q*,
where
U ∈ F^{m×m} is unitary,
V ∈ F^{p×p} is unitary,
W ∈ F^{k×k} is unitary,
Q ∈ F^{n×n} is unitary,
D ∈ R^{k×k} is real diagonal with positive diagonal, and contains the non-zero singular values of the stacked matrix C = [A; B] in decreasing order,
0 ∈ R^{k×(n−k)} is the zero matrix,
Σ1 ∈ R^{m×k} is real non-negative block-diagonal, where Σ1 = diag(I_A, S_A, 0_A) with I_A = I_r, S_A = diag(α_{r+1}, …, α_{r+s}), 1 > α_{r+1} ≥ ⋯ ≥ α_{r+s} > 0, and 0_A ∈ R^{(m−r−s)×(k−r−s)},
Σ2 ∈ R^{p×k} is real non-negative block-diagonal, where Σ2 = diag(0_B, S_B, I_B) with 0_B ∈ R^{(p−k+r)×r}, S_B = diag(β_{r+1}, …, β_{r+s}), 0 < β_{r+1} ≤ ⋯ ≤ β_{r+s} < 1, and I_B = I_{k−r−s},
k = rank(C),
r = k − rank(B),
s = rank(A) + rank(B) − k,
Σ1ᵀ Σ1 + Σ2ᵀ Σ2 = I_k.
We denote α_1 = ⋯ = α_r = 1, α_{r+s+1} = ⋯ = α_k = 0, β_1 = ⋯ = β_r = 0, and β_{r+s+1} = ⋯ = β_k = 1, so that Σ1ᵀ Σ1 = diag(α_1², …, α_k²) and Σ2ᵀ Σ2 = diag(β_1², …, β_k²). While Σ1 is diagonal, Σ2 is not always diagonal, because of the leading rectangular zero matrix; instead Σ2 is "bottom-right-diagonal".
Variations
There are many variations of the GSVD. These variations are related to the fact that it is always possible to multiply Q* from the left by E E* = I, where E is an arbitrary unitary matrix. We denote
[W* D, 0] Q* = [0, R] Q1*, where R ∈ F^{k×k} is upper-triangular and invertible, and Q1 ∈ F^{n×n} is unitary. Such matrices exist by RQ-decomposition.
Y = W* D. Then Y is invertible.
Here are some variations of the GSVD:
MATLAB (gsvd): A = U Σ1 X*, B = V Σ2 X*, where X* = [W* D, 0] Q*
LAPACK (LA_GGSVD): A = U Σ1 [0, R] Q1*, B = V Σ2 [0, R] Q1*
Simplified: A = U Σ1 [Y, 0] Q*, B = V Σ2 [Y, 0] Q*
Generalized singular values
A generalized singular value of A and B is a pair (a, b) such that

lim_{δ→0} det(b² A*A − a² B*B + δ I_n) / δ^{n−k} = 0.

We have
A*A = Q diag(Y* Σ1ᵀ Σ1 Y, 0_{n−k}) Q*,
B*B = Q diag(Y* Σ2ᵀ Σ2 Y, 0_{n−k}) Q*.
By these properties we can show that the generalized singular values are exactly the pairs (α_i, β_i). We have

det(b² A*A − a² B*B + δ I_n) = det(Y* (b² Σ1ᵀ Σ1 − a² Σ2ᵀ Σ2) Y + δ I_k) · δ^{n−k}.

Therefore

lim_{δ→0} det(b² A*A − a² B*B + δ I_n) / δ^{n−k} = det(Y* (b² Σ1ᵀ Σ1 − a² Σ2ᵀ Σ2) Y) = |det(Y)|² ∏_{i=1}^{k} (b² α_i² − a² β_i²).

This expression is zero exactly when a = c α_i and b = c β_i for some i and some scalar c.

In the literature, the generalized singular values are sometimes claimed to be those which solve det(b² A*A − a² B*B) = 0. However, this claim only holds when k = n, since otherwise the determinant is zero for every pair (a, b); this can be seen by substituting δ = 0 above.
Generalized inverse
Define E⁺ = E⁻¹ for any invertible matrix E, 0⁺ = 0ᵀ for any zero matrix 0, and diag(E1, …, El)⁺ = diag(E1⁺, …, El⁺) for any block-diagonal matrix. Then define

A⁺ = Q [Y⁻¹ Σ1⁺; 0] U*, B⁺ = Q [Y⁻¹ Σ2⁺; 0] V*.

It can be shown that A⁺ as defined here is a generalized inverse of A; in particular a {1, 2}-inverse of A. Since it does not in general satisfy (A A⁺)* = A A⁺ and (A⁺ A)* = A⁺ A, this is not the Moore–Penrose inverse; otherwise we could derive (AB)⁺ = B⁺ A⁺ for any choice of matrices, which only holds for a certain class of matrices.

Suppose Q = [Q1, Q2], where Q1 ∈ F^{n×k} and Q2 ∈ F^{n×(n−k)}. This generalized inverse has the following properties:
Σ1⁺ = diag(I_A, S_A⁻¹, 0_Aᵀ),
Σ2⁺ = diag(0_Bᵀ, S_B⁻¹, I_B),
Σ1 Σ1⁺ = diag(I_{r+s}, 0),
Σ2 Σ2⁺ = diag(0, I_{k−r}),
A⁺ = Q1 Y⁻¹ Σ1⁺ U*,
B⁺ = Q1 Y⁻¹ Σ2⁺ V*,
A A⁺ = U Σ1 Σ1⁺ U*,
B B⁺ = V Σ2 Σ2⁺ V*.
Quotient SVD
A generalized singular ratio of A and B is σ_i = α_i β_i⁺. By the above properties, A B⁺ = U Σ1 Σ2⁺ V*. Note that Σ1 Σ2⁺ is diagonal, and that, ignoring the leading zeros, it contains the singular ratios in decreasing order. If B is invertible, then Σ1 Σ2⁺ has no leading zeros, and the generalized singular ratios are the singular values, and U and V are the matrices of singular vectors, of the matrix A B⁻¹. In fact, computing the SVD of A B⁻¹ is one of the motivations for the GSVD, as "forming A B⁻¹ and finding its SVD can lead to unnecessary and large numerical errors when B is ill-conditioned for solution of equations". Hence the sometimes used name "quotient SVD", although this is not the only reason for using GSVD. If B is not invertible, then U Σ1 Σ2⁺ V* is still the SVD of A B⁺ if we relax the requirement of having the singular values in decreasing order. Alternatively, a decreasing-order SVD can be found by moving the leading zeros to the back: A B⁺ = (U P1)(P1* Σ1 Σ2⁺ P2)(V P2)*, where P1 and P2 are appropriate permutation matrices. Since rank equals the number of non-zero singular values, rank(A B⁺) = s.
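For invertible B this quotient characterization is easy to check numerically. NumPy has no GSVD routine, so the sketch below only verifies the ratios in two independent ways; the matrix sizes and random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
B = rng.standard_normal((4, 4))     # square, almost surely invertible

# generalized singular ratios as the ordinary singular values of A B^{-1}
ratios = np.linalg.svd(A @ np.linalg.inv(B), compute_uv=False)

# the squared ratios also solve det(A^T A - sigma^2 B^T B) = 0, i.e. they
# are the eigenvalues of (B^T B)^{-1} A^T A
sigma2 = np.linalg.eigvals(np.linalg.inv(B.T @ B) @ (A.T @ A)).real
print(np.round(np.sort(ratios), 6))
print(np.round(np.sort(np.sqrt(sigma2)), 6))   # matches the line above
```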
Construction
Let
C = P diag(D, 0) Q* be the SVD of C = [A; B], where P ∈ F^{(m+p)×(m+p)} is unitary, and D and Q are as described,
P = [P1, P2], where P1 ∈ F^{(m+p)×k} and P2 ∈ F^{(m+p)×(m+p−k)},
P1 = [P11; P21], where P11 ∈ F^{m×k} and P21 ∈ F^{p×k},
P11 = U Σ1 W* by the SVD of P11, where U, Σ1 and W are as described,
P21 W = V Σ2 by a decomposition similar to a QR-decomposition, where V and Σ2 are as described.
Then
C = P1 [D, 0] Q*.
We also have
A = P11 [D, 0] Q* and B = P21 [D, 0] Q*.
Therefore
A = U Σ1 W* [D, 0] Q* = U Σ1 [W* D, 0] Q* and B = V Σ2 W* [D, 0] Q* = V Σ2 [W* D, 0] Q*.
Since P1 has orthonormal columns, P11* P11 + P21* P21 = I_k. Therefore
(P21 W)* (P21 W) = W* (I_k − P11* P11) W = I_k − Σ1ᵀ Σ1.
We also have for each i that the i-th column of P21 W has norm (1 − α_i²)^(1/2) = β_i. Therefore Σ2ᵀ Σ2 = I_k − Σ1ᵀ Σ1, and Σ1ᵀ Σ1 + Σ2ᵀ Σ2 = I_k.
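The construction above translates almost line for line into NumPy for the easy case m ≥ n, p ≥ n and rank(C) = n (so k = n and the zero blocks disappear); the general block structure of the Paige-Saunders form is glossed over in this sketch.

```python
import numpy as np

def gsvd_sketch(A, B):
    """GSVD of (A, B), assuming m >= n, p >= n and rank([A; B]) = n."""
    m, n = A.shape
    # SVD of the stacked matrix C = [A; B] = P diag(d) Q*
    P, d, Q1h = np.linalg.svd(np.vstack([A, B]), full_matrices=False)
    P11, P21 = P[:m], P[m:]                     # conformal top/bottom blocks
    # SVD of the top block: P11 = U diag(alpha) W*
    U, alpha, Wh = np.linalg.svd(P11, full_matrices=False)
    X = P21 @ Wh.T                              # P21 W: orthogonal columns
    beta = np.linalg.norm(X, axis=0)            # column norms are the beta_i
    V = X / beta                                # QR-like step: P21 W = V diag(beta)
    right = Wh @ np.diag(d) @ Q1h               # shared right factor W* D Q*
    return U, V, alpha, beta, right

rng = np.random.default_rng(1)
A, B = rng.standard_normal((5, 3)), rng.standard_normal((4, 3))
U, V, alpha, beta, right = gsvd_sketch(A, B)
print(np.allclose(U @ np.diag(alpha) @ right, A))   # True
print(np.allclose(V @ np.diag(beta) @ right, B))    # True
print(np.allclose(alpha**2 + beta**2, 1.0))         # S1'S1 + S2'S2 = I
```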
Applications
The GSVD, formulated as a comparative spectral decomposition, has been successfully applied to signal processing and data science, e.g., in genomic signal processing.
These applications inspired several additional comparative spectral decompositions, i.e., the higher-order GSVD (HO GSVD) and the tensor GSVD.
It has equally found applications to estimate the spectral decompositions of linear operators when the eigenfunctions are parameterized with a linear model, i.e. a reproducing kernel Hilbert space.
Second version: weighted single-matrix decomposition
The weighted version of the generalized singular value decomposition (GSVD) is a constrained matrix decomposition with constraints imposed on the left and right singular vectors of the singular value decomposition. This form of the GSVD is an extension of the SVD as such. Given the SVD of an m×n real or complex matrix M
M = U Σ V*,
where
U* W_u U = V* W_v V = I.
Here I is the identity matrix, and U and V are orthonormal given their constraints (U* W_u U = I and V* W_v V = I). Additionally, W_u and W_v are positive definite matrices (often diagonal matrices of weights). This form of the GSVD is the core of certain techniques, such as generalized principal component analysis and Correspondence analysis.
The weighted form of the GSVD is called as such because, with the correct selection of weights, it generalizes many techniques (such as multidimensional scaling and linear discriminant analysis).
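One standard way to compute this weighted decomposition is through a change of metric: take the ordinary SVD of W_u^(1/2) M W_v^(1/2) and absorb the square roots back into the singular vectors. The sketch below assumes this construction (a common derivation, not stated in the text) and uses illustrative diagonal weights.

```python
import numpy as np
from scipy.linalg import sqrtm

def weighted_svd(M, Wu, Wv):
    """SVD of M with U* Wu U = I and V* Wv V = I (Wu, Wv positive definite)."""
    Wu_h, Wv_h = sqrtm(Wu), sqrtm(Wv)            # symmetric square roots
    Ut, s, Vth = np.linalg.svd(Wu_h @ M @ Wv_h, full_matrices=False)
    U = np.linalg.solve(Wu_h, Ut)                # U = Wu^(-1/2) Ut
    V = np.linalg.solve(Wv_h, Vth.T)             # V = Wv^(-1/2) Vt
    return U, s, V

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 3))
Wu = np.diag([1.0, 2.0, 0.5, 1.5])               # illustrative row weights
Wv = np.diag([0.2, 0.3, 0.5])                    # illustrative column weights
U, s, V = weighted_svd(M, Wu, Wv)
print(np.allclose(U @ np.diag(s) @ V.T, M))      # M = U Sigma V*
print(np.allclose(U.T @ Wu @ U, np.eye(3)))      # U* Wu U = I
print(np.allclose(V.T @ Wv @ V, np.eye(3)))      # V* Wv V = I
```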
References
Further reading
LAPACK manual
Linear algebra
Singular value decomposition | Generalized singular value decomposition | [
"Mathematics"
] | 1,213 | [
"Linear algebra",
"Algebra"
] |
1,008,028 | https://en.wikipedia.org/wiki/Sudo | sudo is a program for Unix-like computer operating systems that enables users to run programs with the security privileges of another user, by default the superuser. It originally stood for "superuser do", as that was all it did, and this remains its most common usage; however, the official Sudo project page lists it as "su 'do'". The current Linux manual pages for su define it as "substitute user", making the correct meaning of sudo "substitute user, do", because sudo can run a command as other users as well.
Unlike the similar command su, users must, by default, supply their own password for authentication, rather than the password of the target user. After authentication, and if the configuration file (typically /etc/sudoers) permits the user access, the system invokes the requested command. The configuration file offers detailed access permissions, including enabling commands only from the invoking terminal; requiring a password per user or group; requiring re-entry of a password every time or never requiring a password at all for a particular command line. It can also be configured to permit passing arguments or multiple commands.
History
Robert Coggeshall and Cliff Spencer wrote the original subsystem around 1980 at the Department of Computer Science at SUNY/Buffalo. Robert Coggeshall brought sudo with him to the University of Colorado Boulder. Between 1986 and 1993, the code and features were substantially modified by the IT staff of the University of Colorado Boulder Computer Science Department and the College of Engineering and Applied Science, including Todd C. Miller. The current version has been publicly maintained by OpenBSD developer Todd C. Miller since 1994, and has been distributed under an ISC-style license since 1999.
In November 2009 Thomas Claburn, in response to concerns that Microsoft had patented sudo, characterized such suspicions as overblown. The claims were narrowly framed to a particular GUI, rather than to the sudo concept.
The logo is a reference to an xkcd strip, where an order for a sandwich is accepted when preceded with 'sudo'.
Design
Unlike the command su, users supply their personal password to sudo (if necessary) rather than that of the superuser or other account. This allows authorized users to exercise altered privileges without compromising the secrecy of the other account's password. Users must be in a certain group to use the sudo command, typically either the wheel group or the sudo group. After authentication, and if the configuration file permits the user access, the system invokes the requested command. sudo retains the user's invocation rights through a grace period (typically 5 minutes) per pseudo terminal, allowing the user to execute several successive commands as the requested user without having to provide a password again.
As a security and auditing feature, sudo may be configured to log each command run. When a user attempts to invoke sudo without being listed in the configuration file, an exception indication is presented to the user indicating that the attempt has been recorded. If configured, the root user will be alerted via mail. By default, an entry is recorded in the system log.
Configuration
The /etc/sudoers file contains a list of users or user groups with permission to execute a subset of commands while having the privileges of the root user or another specified user. The file is recommended to be edited by using the command sudo visudo. Sudo contains several configuration options such as allowing commands to be run as sudo without a password, changing which users can use sudo, and changing the message displayed upon entering an incorrect password. Sudo features an easter egg that can be enabled from the configuration file that will display an insult every time an incorrect password is entered.
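As an illustration, a minimal /etc/sudoers excerpt might look like the following. The user, group, host, and command names here are hypothetical, chosen only to show the syntax; a real policy should always be edited with visudo so that syntax errors are caught before the file is saved.

```
# Members of group "sudo" may run any command as any user on any host
%sudo   ALL=(ALL:ALL) ALL

# Hypothetical: let user "alice" restart one service on host "webhost" without a password
alice   webhost = (root) NOPASSWD: /usr/bin/systemctl restart nginx

# Enable the insults easter egg mentioned above
Defaults insults
```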
Impact
In some system distributions, sudo has largely supplanted the default use of a distinct superuser login for administrative tasks, most notably in some Linux distributions as well as Apple's macOS. This allows for more secure logging of admin commands and prevents some exploits.
RBAC
In association with SELinux, sudo can be used to transition between roles in role-based access control (RBAC).
Tools and similar programs
visudo is a command-line utility that allows editing the sudo configuration file in a fail-safe manner. It prevents multiple simultaneous edits with locks and performs sanity and syntax checks.
sudoedit is a symbolic link to the sudo binary. When sudo is run via its sudoedit alias, it behaves as if the -e flag has been passed and allows users to edit files that require additional privileges to write to.
Microsoft released its own version of sudo for Windows in February 2024. It functions similarly to its Unix counterpart, giving the ability to run elevated commands from an unelevated console session. The program runas provides comparable functionality in Windows, but it cannot pass current directories, environment variables or long command lines to the child. And while it supports running the child as another user, it does not support simple elevation. Hamilton C shell also includes true su and sudo for Windows that can pass all of that state information and start the child either elevated or as another user (or both).
Graphical user interfaces exist for sudo – notably gksudo – but are deprecated in Debian and no longer included in Ubuntu. Other user interfaces are not directly built on sudo, but provide similar temporary privilege elevation for administrative purposes, such as pkexec in Unix-like operating systems, User Account Control in Microsoft Windows and Mac OS X Authorization Services.
doas, available since OpenBSD 5.8 (October 2015), has been written in order to replace sudo in the OpenBSD base system, with the latter still being made available as a port.
gosu is a tool similar to sudo that is popular in containers where the terminal may not be fully functional or where there are undesirable effects from running sudo in a containerized environment.
See also
chroot
doas
runas
Comparison of privilege authorization features
References
External links
Computer security software
System administration
Unix user management and support-related utilities
Software using the ISC license | Sudo | [
"Technology",
"Engineering"
] | 1,275 | [
"Cybersecurity engineering",
"Information systems",
"Computer security software",
"System administration"
] |
3,043,886 | https://en.wikipedia.org/wiki/Enzyme%20kinetics | Enzyme kinetics is the study of the rates of enzyme-catalysed chemical reactions. In enzyme kinetics, the reaction rate is measured and the effects of varying the conditions of the reaction are investigated. Studying an enzyme's kinetics in this way can reveal the catalytic mechanism of this enzyme, its role in metabolism, how its activity is controlled, and how a drug or a modifier (inhibitor or activator) might affect the rate.
An enzyme (E) is a protein molecule that serves as a biological catalyst to facilitate and accelerate a chemical reaction in the body. It does this through binding of another molecule, its substrate (S), which the enzyme acts upon to form the desired product. The substrate binds to the active site of the enzyme to produce an enzyme-substrate complex ES, and is transformed into an enzyme-product complex EP and from there to product P, via a transition state ES*. The series of steps is known as the mechanism:
E + S ⇄ ES ⇄ ES* ⇄ EP ⇄ E + P
This example assumes the simplest case of a reaction with one substrate and one product. Such cases exist: for example, a mutase such as phosphoglucomutase catalyses the transfer of a phosphate group from one position to another, and isomerase is a more general term for an enzyme that catalyses any one-substrate one-product reaction, such as triosephosphate isomerase. However, such enzymes are not very common, and are heavily outnumbered by enzymes that catalyse two-substrate two-product reactions: these include, for example, the NAD-dependent dehydrogenases such as alcohol dehydrogenase, which catalyses the oxidation of ethanol by NAD+. Reactions with three or four substrates or products are less common, but they exist. There is no necessity for the number of products to be equal to the number of substrates; for example, glyceraldehyde 3-phosphate dehydrogenase has three substrates and two products.
When enzymes bind multiple substrates, such as dihydrofolate reductase (shown right), enzyme kinetics can also show the sequence in which these substrates bind and the sequence in which products are released. An example of enzymes that bind a single substrate and release multiple products are proteases, which cleave one protein substrate into two polypeptide products. Others join two substrates together, such as DNA polymerase linking a nucleotide to DNA. Although these mechanisms are often a complex series of steps, there is typically one rate-determining step that determines the overall kinetics. This rate-determining step may be a chemical reaction or a conformational change of the enzyme or substrates, such as those involved in the release of product(s) from the enzyme.
Knowledge of the enzyme's structure is helpful in interpreting kinetic data. For example, the structure can suggest how substrates and products bind during catalysis; what changes occur during the reaction; and even the role of particular amino acid residues in the mechanism. Some enzymes change shape significantly during the mechanism; in such cases, it is helpful to determine the enzyme structure with and without bound substrate analogues that do not undergo the enzymatic reaction.
Not all biological catalysts are protein enzymes: RNA-based catalysts such as ribozymes and ribosomes are essential to many cellular functions, such as RNA splicing and translation. The main difference between ribozymes and enzymes is that RNA catalysts are composed of nucleotides, whereas enzymes are composed of amino acids. Ribozymes also perform a more limited set of reactions, although their reaction mechanisms and kinetics can be analysed and classified by the same methods.
General principles
The reaction catalysed by an enzyme uses exactly the same reactants and produces exactly the same products as the uncatalysed reaction. Like other catalysts, enzymes do not alter the position of equilibrium between substrates and products. However, unlike uncatalysed chemical reactions, enzyme-catalysed reactions display saturation kinetics. For a given enzyme concentration and for relatively low substrate concentrations, the reaction rate increases linearly with substrate concentration; the enzyme molecules are largely free to catalyse the reaction, and increasing substrate concentration means an increasing rate at which the enzyme and substrate molecules encounter one another. However, at relatively high substrate concentrations, the reaction rate asymptotically approaches the theoretical maximum; the enzyme active sites are almost all occupied by substrates resulting in saturation, and the reaction rate is determined by the intrinsic turnover rate of the enzyme. The substrate concentration midway between these two limiting cases is denoted by KM. Thus, KM is the substrate concentration at which the reaction velocity is half of the maximum velocity.
The two important properties of enzyme kinetics are how easily the enzyme can be saturated with a substrate, and the maximum rate it can achieve. Knowing these properties suggests what an enzyme might do in the cell and can show how the enzyme will respond to changes in these conditions.
Enzyme assays
Enzyme assays are laboratory procedures that measure the rate of enzyme reactions. Since enzymes are not consumed by the reactions they catalyse, enzyme assays usually follow changes in the concentration of either substrates or products to measure the rate of reaction. There are many methods of measurement. Spectrophotometric assays observe the change in the absorbance of light between products and reactants; radiometric assays involve the incorporation or release of radioactivity to measure the amount of product made over time. Spectrophotometric assays are most convenient since they allow the rate of the reaction to be measured continuously. Although radiometric assays require the removal and counting of samples (i.e., they are discontinuous assays) they are usually extremely sensitive and can measure very low levels of enzyme activity. An analogous approach is to use mass spectrometry to monitor the incorporation or release of stable isotopes as the substrate is converted into product. Occasionally, an assay fails, and approaches to resurrect the failed assay are then essential.
The most sensitive enzyme assays use lasers focused through a microscope to observe changes in single enzyme molecules as they catalyse their reactions. These measurements either use changes in the fluorescence of cofactors during an enzyme's reaction mechanism, or of fluorescent dyes added onto specific sites of the protein to report movements that occur during catalysis. These studies provide a new view of the kinetics and dynamics of single enzymes, as opposed to traditional enzyme kinetics, which observes the average behaviour of populations of millions of enzyme molecules.
An example progress curve for an enzyme assay is shown above. The enzyme produces product at an initial rate that is approximately linear for a short period after the start of the reaction. As the reaction proceeds and substrate is consumed, the rate continuously slows (so long as the substrate is not still at saturating levels). To measure the initial (and maximal) rate, enzyme assays are typically carried out while the reaction has progressed only a few percent towards total completion. The length of the initial rate period depends on the assay conditions and can range from milliseconds to hours. However, equipment for rapidly mixing liquids allows fast kinetic measurements at initial rates of less than one second. These very rapid assays are essential for measuring pre-steady-state kinetics, which are discussed below.
Most enzyme kinetics studies concentrate on this initial, approximately linear part of enzyme reactions. However, it is also possible to measure the complete reaction curve and fit this data to a non-linear rate equation. This way of measuring enzyme reactions is called progress-curve analysis. This approach is useful as an alternative to rapid kinetics when the initial rate is too fast to measure accurately.
The Standards for Reporting Enzymology Data Guidelines provide minimum information required to comprehensively report kinetic and equilibrium data from investigations of enzyme activities including corresponding experimental conditions. The guidelines have been developed to report functional enzyme data with rigor and robustness.
Single-substrate reactions
Enzymes with single-substrate mechanisms include isomerases such as triosephosphate isomerase or bisphosphoglycerate mutase, intramolecular lyases such as adenylate cyclase and the hammerhead ribozyme, an RNA lyase. However, some enzymes that only have a single substrate do not fall into this category of mechanisms. Catalase is an example of this, as the enzyme reacts with a first molecule of hydrogen peroxide substrate, becomes oxidised and is then reduced by a second molecule of substrate. Although a single substrate is involved, the existence of a modified enzyme intermediate means that the mechanism of catalase is actually a ping–pong mechanism, a type of mechanism that is discussed in the Multi-substrate reactions section below.
Michaelis–Menten kinetics
As enzyme-catalysed reactions are saturable, their rate of catalysis does not show a linear response to increasing substrate. If the initial rate of the reaction is measured over a range of substrate concentrations (denoted as [S]), the initial reaction rate (v0) increases as [S] increases, as shown on the right. However, as [S] gets higher, the enzyme becomes saturated with substrate and the initial rate reaches Vmax, the enzyme's maximum rate.
The Michaelis–Menten kinetic model of a single-substrate reaction is shown on the right. There is an initial bimolecular reaction between the enzyme E and substrate S to form the enzyme–substrate complex ES. The rate of enzymatic reaction increases with the increase of the substrate concentration up to a certain level called Vmax; at Vmax, increase in substrate concentration does not cause any increase in reaction rate as there is no more enzyme (E) available for reacting with substrate (S). Here, the rate of reaction becomes dependent on the ES complex and the reaction becomes a unimolecular reaction with an order of zero. Though the enzymatic mechanism for the unimolecular reaction ES → E + P can be quite complex, there is typically one rate-determining enzymatic step that allows this reaction to be modelled as a single catalytic step with an apparent unimolecular rate constant kcat.
If the reaction path proceeds over one or several intermediates, kcat will be a function of several elementary rate constants, whereas in the simplest case of a single elementary reaction (e.g. no intermediates) it will be identical to the elementary unimolecular rate constant k2. The apparent unimolecular rate constant kcat is also called turnover number, and denotes the maximum number of enzymatic reactions catalysed per second.
The Michaelis–Menten equation describes how the (initial) reaction rate v0 depends on the position of the substrate-binding equilibrium and the rate constant k2:

$$v_0 = \frac{V_\max [\mathrm{S}]}{K_M + [\mathrm{S}]} \qquad \text{(Michaelis–Menten equation)}$$

with the constants

$$V_\max = k_{cat}\,[\mathrm{E}]_{\rm tot}, \qquad K_M = \frac{k_{-1} + k_2}{k_1}, \qquad k_{cat} = k_2 .$$
This Michaelis–Menten equation is the basis for most single-substrate enzyme kinetics. Two crucial assumptions underlie this equation (apart from the general assumption about the mechanism only involving no intermediate or product inhibition, and there is no allostericity or cooperativity). The first assumption is the so-called quasi-steady-state assumption (or pseudo-steady-state hypothesis), namely that the concentration of the substrate-bound enzyme (and hence also the unbound enzyme) changes much more slowly than those of the product and substrate, and thus the change of the complex over time can be set to zero: $\frac{d[\mathrm{ES}]}{dt} = 0$. The second assumption is that the total enzyme concentration does not change over time, thus $[\mathrm{E}]_{\rm tot} = [\mathrm{E}] + [\mathrm{ES}] = \text{const}$.
The Michaelis constant KM is experimentally defined as the concentration at which the rate of the enzyme reaction is half Vmax, which can be verified by substituting [S] = KM into the Michaelis–Menten equation and can also be seen graphically. If the rate-determining enzymatic step is slow compared to substrate dissociation ($k_2 \ll k_{-1}$), the Michaelis constant KM is roughly the dissociation constant KD of the ES complex.
If [S] is small compared to KM, then the term $K_M + [\mathrm{S}] \approx K_M$ and also very little ES complex is formed, thus $[\mathrm{E}]_{\rm tot} \approx [\mathrm{E}]$. Therefore, the rate of product formation is

$$v_0 \approx \frac{k_{cat}}{K_M}\,[\mathrm{E}]_{\rm tot}\,[\mathrm{S}] \qquad \text{for } [\mathrm{S}] \ll K_M .$$

Thus the product formation rate depends on the enzyme concentration as well as on the substrate concentration; the equation resembles a bimolecular reaction with a corresponding pseudo-second-order rate constant $k_{cat}/K_M$. This constant is a measure of catalytic efficiency. The most efficient enzymes reach a $k_{cat}/K_M$ in the range of $10^8$–$10^{10}\ \mathrm{M^{-1}\,s^{-1}}$. These enzymes are so efficient they effectively catalyse a reaction each time they encounter a substrate molecule and have thus reached an upper theoretical limit for efficiency (diffusion limit); such enzymes are sometimes referred to as kinetically perfect enzymes. But most enzymes are far from perfect: the average values of $k_{cat}/K_M$ and $k_{cat}$ are about $10^5\ \mathrm{M^{-1}\,s^{-1}}$ and $10\ \mathrm{s^{-1}}$, respectively.
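As a minimal numerical sketch of the rate law above (Python; the parameter values are illustrative and not tied to any particular enzyme), the half-saturation property of KM and the low-[S] pseudo-second-order regime can be checked directly:

```python
def michaelis_menten(s, vmax, km):
    """Initial rate v0 at substrate concentration s (same units as km)."""
    return vmax * s / (km + s)

vmax = 100.0  # maximum rate, e.g. uM/s (illustrative)
km = 5.0      # Michaelis constant, e.g. uM (illustrative)

# At [S] = KM the rate is exactly half of Vmax, as stated above.
print(michaelis_menten(km, vmax, km))    # 50.0
# At [S] << KM the rate is approximately (Vmax/KM)*[S].
print(michaelis_menten(0.01, vmax, km))  # ~0.1996, close to (100/5)*0.01 = 0.2
```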
Direct use of the Michaelis–Menten equation for time course kinetic analysis
The observed velocities predicted by the Michaelis–Menten equation can be used to directly model the time-course disappearance of substrate and the production of product through incorporation of the Michaelis–Menten equation into the equation for first-order chemical kinetics. This can only be achieved, however, if one recognises the problem associated with the use of Euler's number in the description of first-order chemical kinetics: e^(−k) is a split constant that introduces a systematic error into calculations and can be rewritten as a single constant which represents the remaining substrate after each time period.
In 1983 Stuart Beal (and also independently Santiago Schnell and Claudio Mendoza in 1997) derived a closed-form solution for the time-course kinetic analysis of the Michaelis–Menten mechanism. The solution, known as the Schnell–Mendoza equation, has the form:

$$\frac{[\mathrm{S}]}{K_M} = W\!\left[F(t)\right]$$

where W[ ] is the Lambert W function and where F(t) is

$$F(t) = \frac{[\mathrm{S}]_0}{K_M}\,\exp\!\left(\frac{[\mathrm{S}]_0}{K_M} - \frac{V_\max}{K_M}\,t\right)$$
This equation is encompassed by a more general equation obtained by Berberan-Santos, which is also valid when the initial substrate concentration is close to that of enzyme; there, W[ ] is again the Lambert W function.
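The closed-form solution can be evaluated numerically with SciPy's Lambert W implementation; the sketch below uses illustrative parameter values that are not taken from the text:

```python
import numpy as np
from scipy.special import lambertw

def substrate_time_course(t, s0, vmax, km):
    """[S](t) from the closed-form (Lambert W) solution, principal branch."""
    f = (s0 / km) * np.exp((s0 - vmax * t) / km)
    return km * np.real(lambertw(f))

t = np.linspace(0.0, 60.0, 7)  # seconds (illustrative)
print(substrate_time_course(t, s0=10.0, vmax=1.0, km=5.0))  # starts at 10, decays toward 0
```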
Linear plots of the Michaelis–Menten equation
The plot of v versus [S] above is not linear; although initially linear at low [S], it bends over to saturate at high [S]. Before the modern era of nonlinear curve-fitting on computers, this nonlinearity could make it difficult to estimate KM and Vmax accurately. Therefore, several researchers developed linearisations of the Michaelis–Menten equation, such as the Lineweaver–Burk plot, the Eadie–Hofstee diagram and the Hanes–Woolf plot. All of these linear representations can be useful for visualising data, but none should be used to determine kinetic parameters, as computer software is readily available that allows for more accurate determination by nonlinear regression methods.
The Lineweaver–Burk plot or double reciprocal plot is a common way of illustrating kinetic data. This is produced by taking the reciprocal of both sides of the Michaelis–Menten equation. As shown on the right, this is a linear form of the Michaelis–Menten equation and produces a straight line with the equation y = mx + c with a y-intercept equivalent to 1/Vmax and an x-intercept of the graph representing −1/KM.
Naturally, no experimental values can be taken at negative 1/[S]; the lower limiting value 1/[S] = 0 (the y-intercept) corresponds to an infinite substrate concentration, where 1/v=1/Vmax as shown at the right; thus, the x-intercept is an extrapolation of the experimental data taken at positive concentrations. More generally, the Lineweaver–Burk plot skews the importance of measurements taken at low substrate concentrations and, thus, can yield inaccurate estimates of Vmax and KM. A more accurate linear plotting method is the Eadie–Hofstee plot. In this case, v is plotted against v/[S]. In the third common linear representation, the Hanes–Woolf plot, [S]/v is plotted against [S].
In general, data normalisation can help diminish the amount of experimental work and can increase the reliability of the output, and is suitable for both graphical and numerical analysis.
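Following the advice above to prefer nonlinear regression over the linearised plots, a typical fitting workflow might look like this sketch (Python/SciPy; the data points are synthetic, invented purely for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def mm(s, vmax, km):
    return vmax * s / (km + s)

# Synthetic initial-rate data: substrate concentrations and noisy measured rates.
s = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
v = np.array([9.1, 16.4, 26.9, 41.8, 54.2, 62.7])

(vmax_fit, km_fit), cov = curve_fit(mm, s, v, p0=[60.0, 5.0])
print(vmax_fit, km_fit)       # estimated Vmax and KM
print(np.sqrt(np.diag(cov)))  # their standard errors from the covariance matrix
```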
Practical significance of kinetic constants
The study of enzyme kinetics is important for two basic reasons. Firstly, it helps explain how enzymes work, and secondly, it helps predict how enzymes behave in living organisms. The kinetic constants defined above, KM and Vmax, are critical to attempts to understand how enzymes work together to control metabolism.
Making these predictions is not trivial, even for simple systems. For example, oxaloacetate is formed by malate dehydrogenase within the mitochondrion. Oxaloacetate can then be consumed by citrate synthase, phosphoenolpyruvate carboxykinase or aspartate aminotransferase, feeding into the citric acid cycle, gluconeogenesis or aspartic acid biosynthesis, respectively. Being able to predict how much oxaloacetate goes into which pathway requires knowledge of the concentration of oxaloacetate as well as the concentration and kinetics of each of these enzymes. This aim of predicting the behaviour of metabolic pathways reaches its most complex expression in the synthesis of huge amounts of kinetic and gene expression data into mathematical models of entire organisms. Alternatively, one useful simplification of the metabolic modelling problem is to ignore the underlying enzyme kinetics and only rely on information about the reaction network's stoichiometry, a technique called flux balance analysis.
Michaelis–Menten kinetics with intermediate
One could also consider the less simple case

$$\mathrm{E} + \mathrm{S} \;\overset{k_1}{\underset{k_{-1}}{\rightleftharpoons}}\; \mathrm{ES} \;\xrightarrow{k_2}\; \mathrm{EI} \;\xrightarrow{k_3}\; \mathrm{E} + \mathrm{P}$$
where a complex with the enzyme and an intermediate exists and the intermediate is converted into product in a second step. In this case we have a very similar equation,

$$v_0 = \frac{k_{cat}\,[\mathrm{E}]_{\rm tot}\,[\mathrm{S}]}{K_M + [\mathrm{S}]},$$

but the constants are different:

$$k_{cat} = \frac{k_2 k_3}{k_2 + k_3}, \qquad K_M = \frac{k_3}{k_2 + k_3}\cdot\frac{k_{-1} + k_2}{k_1}.$$

We see that for the limiting case $k_3 \gg k_2$, thus when the last step EI → E + P is much faster than the previous step, we get again the original equation. Mathematically we have then $k_{cat} \approx k_2$ and $K_M \approx \frac{k_{-1} + k_2}{k_1}$.
Multi-substrate reactions
Multi-substrate reactions follow complex rate equations that describe how the substrates bind and in what sequence. The analysis of these reactions is much simpler if the concentration of substrate A is kept constant and substrate B varied. Under these conditions, the enzyme behaves just like a single-substrate enzyme and a plot of v by [S] gives apparent KM and Vmax constants for substrate B. If a set of these measurements is performed at different fixed concentrations of A, these data can be used to work out what the mechanism of the reaction is. For an enzyme that takes two substrates A and B and turns them into two products P and Q, there are two types of mechanism: ternary complex and substituted-enzyme mechanisms.
Ternary-complex mechanisms
In these enzymes, both substrates bind to the enzyme at the same time to produce an EAB ternary complex. The order of binding can either be random (in a random mechanism) or substrates have to bind in a particular sequence (in an ordered mechanism). When a set of v by [S] curves (fixed A, varying B) from an enzyme with a ternary-complex mechanism are plotted in a Lineweaver–Burk plot, the set of lines produced will intersect.
Enzymes with ternary-complex mechanisms include glutathione S-transferase, dihydrofolate reductase and DNA polymerase. The following links show short animations of the ternary-complex mechanisms of the enzymes dihydrofolate reductase and DNA polymerase.
Substituted-enzyme ("ping–pong") mechanisms
As shown on the right, enzymes with a substituted-enzyme mechanism can exist in two states, E and a chemically modified form of the enzyme E*; this modified enzyme is known as an intermediate. In such mechanisms, substrate A binds, changes the enzyme to E* by, for example, transferring a chemical group to the active site, and is then released. Only after the first substrate is released can substrate B bind and react with the modified enzyme, regenerating the unmodified E form. When a set of v by [S] curves (fixed A, varying B) from an enzyme with a substituted-enzyme mechanism are plotted in a Lineweaver–Burk plot, a set of parallel lines will be produced. This is called a secondary plot.
Enzymes with substituted-enzyme mechanisms include some oxidoreductases such as thioredoxin peroxidase, transferases such as acylneuraminate cytidylyltransferase and serine proteases such as trypsin and chymotrypsin. Serine proteases are a very common and diverse family of enzymes, including digestive enzymes (trypsin, chymotrypsin, and elastase), several enzymes of the blood clotting cascade and many others. In these serine proteases, the E* intermediate is an acyl-enzyme species formed by the attack of an active site serine residue on a peptide bond in a protein substrate. A short animation showing the mechanism of chymotrypsin is linked here.
Memory effects
Both of these two types of mechanism can display enzyme memory, with very different causes and consequences in the two cases. In ternary complex mechanisms these are possible if the mechanism includes slow processes and the binding steps are not at quasi-equilibrium, because the intermediates may be swept away very fast. This can generate cooperativity, even in monomeric enzymes. In a substituted-enzyme mechanism slow steps are not needed to generate memory effects. Instead, for an enzyme with several alternative substrates the kinetic properties of the second half reaction may vary with different substrates in the first half reaction, even though the same substituted enzyme seems to be transformed.
Reversible catalysis and the Haldane equation
External factors may limit the ability of an enzyme to catalyse a reaction in both directions (whereas the nature of a catalyst in itself means that it cannot catalyse just one direction, according to the principle of microscopic reversibility). We consider the case of an enzyme that catalyses the reaction in both directions:
$$\mathrm{E} + \mathrm{S} \;\overset{k_1}{\underset{k_{-1}}{\rightleftharpoons}}\; \mathrm{ES} \;\overset{k_2}{\underset{k_{-2}}{\rightleftharpoons}}\; \mathrm{E} + \mathrm{P}$$
The steady-state, initial rate of the reaction is

$$v = \frac{\left(k_1 k_2 [\mathrm{S}] - k_{-1} k_{-2} [\mathrm{P}]\right)[\mathrm{E}]_{\rm tot}}{k_{-1} + k_2 + k_1 [\mathrm{S}] + k_{-2} [\mathrm{P}]}$$

v is positive if the reaction proceeds in the forward direction ($[\mathrm{P}]/[\mathrm{S}] < K_{eq}$) and negative otherwise.

Equilibrium requires that $v = 0$, which occurs when $\frac{[\mathrm{P}]_{eq}}{[\mathrm{S}]_{eq}} = \frac{k_1 k_2}{k_{-1} k_{-2}} = K_{eq}$. This shows that thermodynamics forces a relation between the values of the 4 rate constants.

The values of the forward and backward maximal rates, obtained for $[\mathrm{S}] \to \infty$, $[\mathrm{P}] = 0$, and $[\mathrm{P}] \to \infty$, $[\mathrm{S}] = 0$, respectively, are $V_f = k_2 [\mathrm{E}]_{\rm tot}$ and $V_b = k_{-1} [\mathrm{E}]_{\rm tot}$, respectively. Their ratio is not equal to the equilibrium constant, which implies that thermodynamics does not constrain the ratio of the maximal rates. This explains why enzymes can be much "better catalysts" (in terms of maximal rates) in one particular direction of the reaction.

One can also derive the two Michaelis constants $K_M^S = \frac{k_{-1} + k_2}{k_1}$ and $K_M^P = \frac{k_{-1} + k_2}{k_{-2}}$. The Haldane equation is the relation

$$K_{eq} = \frac{V_f / K_M^S}{V_b / K_M^P}.$$

Therefore, thermodynamics constrains the ratio between the forward and backward $V_\max/K_M$ values, not the ratio of the $V_\max$ values.
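A quick numerical check of the Haldane relation, with four arbitrarily chosen rate constants (Python; values are illustrative only):

```python
k1, km1, k2, km2 = 2.0, 1.0, 5.0, 0.5  # k1, k-1, k2, k-2 (illustrative)

keq = (k1 * k2) / (km1 * km2)  # equilibrium constant fixed by the rate constants
vf, vb = k2, km1               # forward/backward maximal rates per unit [E]tot
kms = (km1 + k2) / k1          # Michaelis constant for S
kmp = (km1 + k2) / km2         # Michaelis constant for P

# Haldane equation: Keq equals the ratio of forward to backward Vmax/KM values.
print(keq, (vf / kms) / (vb / kmp))  # both evaluate to 20.0
```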
Non-Michaelis–Menten kinetics
Many different enzyme systems follow non-Michaelis–Menten behavior. A select few examples include kinetics of self-catalytic enzymes, cooperative and allosteric enzymes, interfacial and intracellular enzymes, and processive enzymes. Some enzymes produce a sigmoid v by [S] plot, which often indicates cooperative binding of substrate to the active site. This means that the binding of one substrate molecule affects the binding of subsequent substrate molecules. This behavior is most common in multimeric enzymes with several interacting active sites. Here, the mechanism of cooperation is similar to that of hemoglobin, with binding of substrate to one active site altering the affinity of the other active sites for substrate molecules. Positive cooperativity occurs when binding of the first substrate molecule increases the affinity of the other active sites for substrate. Negative cooperativity occurs when binding of the first substrate decreases the affinity of the enzyme for other substrate molecules.
Allosteric enzymes include mammalian tyrosyl tRNA-synthetase, which shows negative cooperativity, and bacterial aspartate transcarbamoylase and phosphofructokinase, which show positive cooperativity.
Cooperativity is surprisingly common and can help regulate the responses of enzymes to changes in the concentrations of their substrates. Positive cooperativity makes enzymes much more sensitive to [S] and their activities can show large changes over a narrow range of substrate concentration. Conversely, negative cooperativity makes enzymes insensitive to small changes in [S].
The Hill equation is often used to describe the degree of cooperativity quantitatively in non-Michaelis–Menten kinetics. The derived Hill coefficient n measures how much the binding of substrate to one active site affects the binding of substrate to the other active sites. A Hill coefficient of <1 indicates negative cooperativity and a coefficient of >1 indicates positive cooperativity.
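A small sketch of the Hill description (Python; Vmax and the half-saturation constant are set to 1 for illustration) shows how raising n sharpens the response around half-saturation:

```python
def hill(s, vmax, k_half, n):
    """Hill rate law; n > 1 models positive, n < 1 negative cooperativity."""
    return vmax * s**n / (k_half**n + s**n)

for n in (0.5, 1.0, 4.0):
    # Rates at half and at twice the half-saturation concentration.
    print(n, hill(0.5, 1.0, 1.0, n), hill(2.0, 1.0, 1.0, n))
```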
Pre-steady-state kinetics
In the first moment after an enzyme is mixed with substrate, no product has been formed and no intermediates exist. The study of the next few milliseconds of the reaction is called pre-steady-state kinetics. Pre-steady-state kinetics is therefore concerned with the formation and consumption of enzyme–substrate intermediates (such as ES or E*) until their steady-state concentrations are reached.
This approach was first applied to the hydrolysis reaction catalysed by chymotrypsin. Often, the detection of an intermediate is a vital piece of evidence in investigations of what mechanism an enzyme follows. For example, in the ping–pong mechanisms that are shown above, rapid kinetic measurements can follow the release of product P and measure the formation of the modified enzyme intermediate E*. In the case of chymotrypsin, this intermediate is formed by an attack on the substrate by the nucleophilic serine in the active site and the formation of the acyl-enzyme intermediate.
In the figure to the right, the enzyme produces E* rapidly in the first few seconds of the reaction. The rate then slows as steady state is reached. This rapid burst phase of the reaction measures a single turnover of the enzyme. Consequently, the amount of product released in this burst, shown as the intercept on the y-axis of the graph, also gives the amount of functional enzyme which is present in the assay.
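Such a burst trace is commonly modelled as a fast exponential (single-turnover) phase plus a linear steady-state phase; the sketch below is generic, with parameter values invented for illustration rather than taken from chymotrypsin data:

```python
import numpy as np

def burst(t, amplitude, k_burst, v_ss):
    """Product formed vs time: exponential burst plus linear steady-state turnover."""
    return amplitude * (1.0 - np.exp(-k_burst * t)) + v_ss * t

t = np.linspace(0.0, 10.0, 6)
# The burst amplitude extrapolates to the y-intercept, i.e. the functional enzyme present.
print(burst(t, amplitude=1.0, k_burst=5.0, v_ss=0.1))
```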
Chemical mechanism
An important goal of measuring enzyme kinetics is to determine the chemical mechanism of an enzyme reaction, i.e., the sequence of chemical steps that transform substrate into product. The kinetic approaches discussed above will show at what rates intermediates are formed and inter-converted, but they cannot identify exactly what these intermediates are.
Kinetic measurements taken under various solution conditions or on slightly modified enzymes or substrates often shed light on this chemical mechanism, as they reveal the rate-determining step or intermediates in the reaction. For example, the breaking of a covalent bond to a hydrogen atom is a common rate-determining step. Which of the possible hydrogen transfers is rate determining can be shown by measuring the kinetic effects of substituting each hydrogen by deuterium, its stable isotope. The rate will change when the critical hydrogen is replaced, due to a primary kinetic isotope effect, which occurs because bonds to deuterium are harder to break than bonds to hydrogen. It is also possible to measure similar effects with other isotope substitutions, such as 13C/12C and 18O/16O, but these effects are more subtle.
Isotopes can also be used to reveal the fate of various parts of the substrate molecules in the final products. For example, it is sometimes difficult to discern the origin of an oxygen atom in the final product; since it may have come from water or from part of the substrate. This may be determined by systematically substituting oxygen's stable isotope 18O into the various molecules that participate in the reaction and checking for the isotope in the product. The chemical mechanism can also be elucidated by examining the kinetics and isotope effects under different pH conditions, by altering the metal ions or other bound cofactors, by site-directed mutagenesis of conserved amino acid residues, or by studying the behaviour of the enzyme in the presence of analogues of the substrate(s).
Enzyme inhibition and activation
Enzyme inhibitors are molecules that reduce or abolish enzyme activity, while enzyme activators are molecules that increase the catalytic rate of enzymes. These interactions can be either reversible (i.e., removal of the inhibitor restores enzyme activity) or irreversible (i.e., the inhibitor permanently inactivates the enzyme).
Reversible inhibitors
Traditionally reversible enzyme inhibitors have been classified as competitive, uncompetitive, or non-competitive, according to their effects on KM and Vmax. These different effects result from the inhibitor binding to the enzyme E, to the enzyme–substrate complex ES, or to both, respectively. The division of these classes arises from a problem in their derivation and results in the need to use two different binding constants for one binding event. The binding of an inhibitor and its effect on the enzymatic activity are two distinctly different things, another problem the traditional equations fail to acknowledge. The traditional treatment of noncompetitive inhibition assumes that binding of the inhibitor results in 100% inhibition of the enzyme, and fails to consider the possibility of anything in between. In noncompetitive inhibition, the inhibitor will bind to an enzyme at its allosteric site; therefore, the binding affinity, or inverse of KM, of the substrate with the enzyme will remain the same. On the other hand, the Vmax will decrease relative to an uninhibited enzyme. On a Lineweaver–Burk plot, the presence of a noncompetitive inhibitor is illustrated by a change in the y-intercept, defined as 1/Vmax. The x-intercept, defined as −1/KM, will remain the same. In competitive inhibition, the inhibitor will bind to an enzyme at the active site, competing with the substrate. As a result, the KM will increase and the Vmax will remain the same. The common form of the inhibitory term also obscures the relationship between the inhibitor binding to the enzyme and its relationship to any other binding term, be it the Michaelis–Menten equation or a dose response curve associated with ligand receptor binding. To demonstrate the relationship the following rearrangement can be made:
$$V = \frac{V_\max}{1 + \frac{[\mathrm{I}]}{K_i}} = \frac{V_\max K_i}{K_i + [\mathrm{I}]}$$

Adding zero to the bottom ([I] − [I]):

$$V = \frac{V_\max K_i}{K_i + [\mathrm{I}] + [\mathrm{I}] - [\mathrm{I}]}$$

Dividing by [I] + Ki:

$$V = V_\max \left(1 - \frac{[\mathrm{I}]}{[\mathrm{I}] + K_i}\right)$$
This notation demonstrates that, similar to the Michaelis–Menten equation, where the rate of reaction depends on the percent of the enzyme population interacting with substrate, the effect of the inhibitor is a result of the percent of the enzyme population interacting with inhibitor. The only problem with this equation in its present form is that it assumes absolute inhibition of the enzyme with inhibitor binding, when in fact there can be a wide range of effects, anywhere from 100% inhibition of substrate turnover to just >0%. To account for this the equation can be easily modified to allow for different degrees of inhibition by including a delta Vmax term.
$$V = V_{\max 1} - \Delta V_\max \frac{[\mathrm{I}]}{[\mathrm{I}] + K_i}$$

or

$$V = V_{\max 1} + (V_{\max 2} - V_{\max 1})\frac{[\mathrm{I}]}{[\mathrm{I}] + K_i}$$
This term can then define the residual enzymatic activity present when the inhibitor is interacting with individual enzymes in the population. However the inclusion of this term has the added value of allowing for the possibility of activation if the secondary Vmax term turns out to be higher than the initial term. To account for the possibility of activation as well, the notation can then be rewritten replacing the inhibitor "I" with a modifier term denoted here as "X":

$$V = V_{\max 1} + (V_{\max 2} - V_{\max 1})\frac{[\mathrm{X}]}{[\mathrm{X}] + K_x}$$
While this terminology results in a simplified way of dealing with kinetic effects relating to the maximum velocity of the Michaelis–Menten equation, it highlights potential problems with the term used to describe effects relating to the KM. The KM relating to the affinity of the enzyme for the substrate should in most cases relate to potential changes in the binding site of the enzyme which would directly result from enzyme–inhibitor interactions. As such, a term similar to the one proposed above to modulate Vmax should be appropriate in most situations:

$$K_M^{app} = K_{M1} + (K_{M2} - K_{M1})\frac{[\mathrm{X}]}{[\mathrm{X}] + K_x}$$
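To make the classical contrast above concrete, here is a short sketch of the two textbook limiting cases (competitive: apparent KM increases; noncompetitive: apparent Vmax decreases), using the standard rate laws rather than the modified notation proposed above; all values are illustrative:

```python
def competitive(s, i, vmax, km, ki):
    return vmax * s / (km * (1.0 + i / ki) + s)    # KM scaled up, Vmax unchanged

def noncompetitive(s, i, vmax, km, ki):
    return (vmax / (1.0 + i / ki)) * s / (km + s)  # Vmax scaled down, KM unchanged

s, i = 5.0, 2.0  # substrate and inhibitor concentrations (illustrative)
print(competitive(s, i, vmax=100.0, km=5.0, ki=1.0))     # 25.0
print(noncompetitive(s, i, vmax=100.0, km=5.0, ki=1.0))  # ~16.7
```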
Irreversible inhibitors
Enzyme inhibitors can also irreversibly inactivate enzymes, usually by covalently modifying active site residues. These reactions, which may be called suicide substrates, follow exponential decay functions and are usually saturable. Below saturation, they follow first order kinetics with respect to inhibitor. Irreversible inhibition could be classified into two distinct types. Affinity labelling is a type of irreversible inhibition where a functional group that is highly reactive modifies a catalytically critical residue on the protein of interest to bring about inhibition. Mechanism-based inhibition, on the other hand, involves binding of the inhibitor followed by enzyme mediated alterations that transform the latter into a reactive group that irreversibly modifies the enzyme.
Philosophical discourse on reversibility and irreversibility of inhibition
Having discussed reversible and irreversible inhibition in the above two headings, it should be pointed out that the concept of reversibility (or irreversibility) is a purely practical construct dependent on the time-frame of the assay: a reversible interaction involving association and dissociation of the inhibitor molecule on the minute timescale would seem irreversible if an assay assessed the outcome in seconds, and vice versa. There is a continuum of inhibitor behaviors spanning reversibility and irreversibility for a given assay time frame. There are inhibitors that show slow-onset behavior, and most of these inhibitors also show tight binding to the protein target of interest.
Mechanisms of catalysis
The favoured model for the enzyme–substrate interaction is the induced fit model. This model proposes that the initial interaction between enzyme and substrate is relatively weak, but that these weak interactions rapidly induce conformational changes in the enzyme that strengthen binding. These conformational changes also bring catalytic residues in the active site close to the chemical bonds in the substrate that will be altered in the reaction. Conformational changes can be measured using circular dichroism or dual polarisation interferometry. After binding takes place, one or more mechanisms of catalysis lower the energy of the reaction's transition state by providing an alternative chemical pathway for the reaction. Mechanisms of catalysis include catalysis by bond strain; by proximity and orientation; by active-site proton donors or acceptors; covalent catalysis and quantum tunnelling.
Enzyme kinetics cannot prove which modes of catalysis are used by an enzyme. However, some kinetic data can suggest possibilities to be examined by other techniques. For example, a ping–pong mechanism with burst-phase pre-steady-state kinetics would suggest covalent catalysis might be important in this enzyme's mechanism. Alternatively, the observation of a strong pH effect on Vmax but not KM might indicate that a residue in the active site needs to be in a particular ionisation state for catalysis to occur.
History
In 1902 Victor Henri proposed a quantitative theory of enzyme kinetics, but at the time the experimental significance of the hydrogen ion concentration was not yet recognized. After Peter Lauritz Sørensen had defined the logarithmic pH-scale and introduced the concept of buffering in 1909 the German chemist Leonor Michaelis and Dr. Maud Leonora Menten (a postdoctoral researcher in Michaelis's lab at the time) repeated Henri's experiments and confirmed his equation, which is now generally referred to as Michaelis-Menten kinetics (sometimes also Henri-Michaelis-Menten kinetics). Their work was further developed by G. E. Briggs and J. B. S. Haldane, who derived kinetic equations that are still widely considered today a starting point in modeling enzymatic activity.
The major contribution of the Henri–Michaelis–Menten approach was to think of enzyme reactions in two stages. In the first, the substrate binds reversibly to the enzyme, forming the enzyme–substrate complex. This is sometimes called the Michaelis complex. The enzyme then catalyzes the chemical step in the reaction and releases the product. The kinetics of many enzymes is adequately described by the simple Michaelis–Menten model, but all enzymes have internal motions that are not accounted for in the model and can have significant contributions to the overall reaction kinetics. This can be modeled by introducing several Michaelis–Menten pathways that are connected with fluctuating rates, which is a mathematical extension of the basic Michaelis–Menten mechanism.
Software
ENZO (Enzyme Kinetics) is a graphical interface tool for building kinetic models of enzyme catalyzed reactions. ENZO automatically generates the corresponding differential equations from a stipulated enzyme reaction scheme. These differential equations are processed by a numerical solver and a regression algorithm which fits the coefficients of differential equations to experimentally observed time course curves. ENZO allows rapid evaluation of rival reaction schemes and can be used for routine tests in enzyme kinetics.
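The underlying idea — turning a reaction scheme into differential equations and integrating them numerically — can be sketched in a few lines (Python/SciPy; this is not ENZO itself, and the rate constants and initial concentrations are invented for illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k1, km1, k2):
    """Mass-action ODEs for E + S <=> ES -> E + P."""
    e, s, es, p = y
    v_bind = k1 * e * s - km1 * es  # net formation of ES
    v_cat = k2 * es                 # catalytic step
    return [-v_bind + v_cat, -v_bind, v_bind - v_cat, v_cat]

y0 = [1.0, 10.0, 0.0, 0.0]  # initial [E], [S], [ES], [P] (illustrative)
sol = solve_ivp(rhs, (0.0, 50.0), y0, args=(1.0, 0.5, 0.3))
print(sol.y[:, -1])          # concentrations at t = 50; [P] approaches 10
```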
See also
Protein dynamics
Diffusion limited enzyme
Langmuir adsorption model
Footnotes
α. Link: Interactive Michaelis–Menten kinetics tutorial (Java required)
β. Link: dihydrofolate reductase mechanism (Gif)
γ. Link: DNA polymerase mechanism (Gif)
δ. Link: Chymotrypsin mechanism (Flash required)
References
Further reading
Introductory
Advanced
External links
Animation of an enzyme assay — Shows effects of manipulating assay conditions
MACiE — A database of enzyme reaction mechanisms
ENZYME — Expasy enzyme nomenclature database
ENZO — Web application for easy construction and quick testing of kinetic models of enzyme catalyzed reactions.
ExCatDB — A database of enzyme catalytic mechanisms
BRENDA — Comprehensive enzyme database, giving substrates, inhibitors and reaction diagrams
SABIO-RK — A database of reaction kinetics
Joseph Kraut's Research Group, University of California San Diego — Animations of several enzyme reaction mechanisms
Symbolism and Terminology in Enzyme Kinetics — A comprehensive explanation of concepts and terminology in enzyme kinetics
An introduction to enzyme kinetics — An accessible set of on-line tutorials on enzyme kinetics
Enzyme kinetics animated tutorial — An animated tutorial with audio
Catalysis | Enzyme kinetics | [
"Chemistry"
] | 8,002 | [
"Catalysis",
"Chemical kinetics",
"Enzyme kinetics"
] |
3,044,148 | https://en.wikipedia.org/wiki/Thermal%20inertia | Thermal inertia is a term commonly used to describe the observed delays in a body's temperature response during heat transfers. The phenomenon exists because of a body's ability to both store and transport heat relative to its environment. Since the configuration of system components and mix of transport mechanisms (e.g. conduction, convection, radiation, phase change) vary substantially between instances, there is no generally applicable mathematical definition of closed form for thermal inertia.
Bodies with relatively large mass and heat capacity typically exhibit slower temperature responses. However heat capacity alone cannot accurately quantify thermal inertia. Measurements of it further depend on how heat flows are distributed inside and outside a body.
Whether thermal inertia is an intensive or extensive quantity depends upon context. Some authors have identified it as an intensive material property, for example in association with thermal effusivity. It has also been evaluated as an extensive quantity based upon the measured or simulated spatial-temporal behavior of a system during transient heat transfers. A time constant is then sometimes appropriately used as a simple parametrization for thermal inertia of a selected component or subsystem.
Description
A thermodynamic system containing one or more components with large heat capacity indicates that dynamic, or transient, effects must be considered when measuring or modelling system behavior. Steady-state calculations, many of which produce valid estimates of equilibrium heat flows and temperatures without an accounting for thermal inertia, nevertheless yield no information on the pace of changes between equilibrium states. Nowadays the spatial-temporal behavior of complex systems can be precisely evaluated with detailed numerical simulation. In some cases a lumped system analysis can estimate a thermal time constant.
A larger heat capacity C for a component generally means a longer time to reach equilibrium. The transition rate also depends on the component's internal and environmental heat transfer coefficients, as referenced over an interface area A. The time constant for an estimated exponential transition of the component's temperature will adjust as $\tau = C/(hA)$ under conditions which obey Newton's law of cooling with environmental heat transfer coefficient h, and when the component is characterized by a ratio of internal conductive to external convective resistance, the Biot number, much less than one.
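A lumped-capacitance estimate can be sketched as follows (Python; the property values are illustrative, roughly those of a small aluminium block in still air):

```python
rho, c, k = 2700.0, 900.0, 237.0  # density (kg/m3), specific heat (J/kg/K), conductivity (W/m/K)
L = 0.01                          # characteristic length = volume/area (m)
h = 10.0                          # convective coefficient in still air (W/m2/K), illustrative

biot = h * L / k        # Biot number; must be << 1 for the lumped analysis to hold
tau = rho * c * L / h   # time constant of the exponential temperature response (s)
print(biot, tau)        # ~4.2e-4 and 2430 s (about 40 minutes)
```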
Analogies of thermal inertia to the temporal behaviors observed in other disciplines of engineering and physics can sometimes be used with caution. In building performance simulation, thermal inertia is also known as the thermal flywheel effect, and the heat capacity of a structure's mass (sometimes called the thermal mass) can produce a delay between diurnal heat flow and temperature which is similar to the delay between current and voltage in an AC-driven RC circuit. Thermal inertia is less directly comparable to the mass-and-velocity term used in mechanics, where inertia restricts the acceleration of an object. In a similar way, thermal inertia can be a measure of heat capacity of a mass, and of the velocity of the thermal wave which controls the surface temperature of a body.
Thermal effusivity
For a semi-infinite rigid body where heat transfer is dominated by the diffusive process of conduction only, the thermal inertia response at a surface can be approximated from the material's thermal effusivity e, also called thermal responsivity. It is defined as the square root of the product of the material's bulk thermal conductivity and volumetric heat capacity, where the latter is the product of density and specific heat capacity:

$$e = \sqrt{k \rho c_p}$$

where
k is thermal conductivity, with unit W⋅m−1⋅K−1
ρ is density, with unit kg⋅m−3
c_p is specific heat capacity, with unit J⋅kg−1⋅K−1
Thermal effusivity has units of a heat transfer coefficient multiplied by square root of time:
SI units of W⋅m−2⋅K−1⋅s1/2 or J⋅m−2⋅K−1⋅s−1/2.
Non-SI units of kieffers: Cal⋅cm−2⋅K−1⋅s−1/2, are also used informally in older references.
When a constant flow of heat q is abruptly imposed upon a surface, e performs nearly the same role in limiting the surface's initial dynamic "thermal inertia" response,

$$T(t) - T_0 = \frac{q}{e}\sqrt{\frac{4t}{\pi}},$$

as the rigid body's usual heat transfer coefficient h plays in determining the surface's final static surface temperature,

$$T_\infty - T_0 = \frac{q}{h}.$$
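For example (Python; water-like property values, for illustration only), the effusivity and the early-time surface temperature rise under a constant imposed flux, using the relation above, can be computed as:

```python
import math

k, rho, cp = 0.6, 1000.0, 4186.0  # thermal conductivity, density, specific heat (water-like)
e = math.sqrt(k * rho * cp)        # thermal effusivity, ~1585 W*s^0.5/(m^2*K)

q = 1000.0  # constant imposed heat flux (W/m^2), illustrative
for t in (1.0, 10.0, 100.0):
    dT = (q / e) * math.sqrt(4.0 * t / math.pi)  # early-time surface temperature rise (K)
    print(t, dT)
```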
See also
List of thermodynamic properties
Thermal analysis
References
Thermodynamic properties
Physical quantities
Heat transfer | Thermal inertia | [
"Physics",
"Chemistry",
"Mathematics"
] | 884 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamic properties",
"Physical quantities",
"Quantity",
"Thermodynamics",
"Physical properties"
] |
3,046,323 | https://en.wikipedia.org/wiki/Gibbs%20algorithm | In statistical mechanics, the Gibbs algorithm, introduced by J. Willard Gibbs in 1902, is a criterion for choosing a probability distribution for the statistical ensemble of microstates of a thermodynamic system by minimizing the average log probability

$$\langle \ln p_i \rangle = \sum_i p_i \ln p_i$$

subject to the probability distribution pi satisfying a set of constraints (usually expectation values) corresponding to the known macroscopic quantities. In 1948, Claude Shannon interpreted the negative of this quantity, which he called information entropy, as a measure of the uncertainty in a probability distribution. In 1957, E. T. Jaynes realized that this quantity could be interpreted as missing information about anything, and generalized the Gibbs algorithm to non-equilibrium systems with the principle of maximum entropy and maximum entropy thermodynamics.
Physicists call the result of applying the Gibbs algorithm the Gibbs distribution for the given constraints, most notably Gibbs's grand canonical ensemble for open systems when the average energy and the average number of particles are given. (See also partition function).
This general result of the Gibbs algorithm is then a maximum entropy probability distribution. Statisticians identify such distributions as belonging to exponential families.
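A minimal numerical sketch of the algorithm (Python; the three-level energy spectrum and the target mean energy are invented for illustration): constraining the average energy and minimizing the average log probability yields the exponential-family (Boltzmann-like) form, whose multiplier can be found with a one-dimensional root solve.

```python
import numpy as np
from scipy.optimize import brentq

energies = np.array([0.0, 1.0, 2.0])  # microstate energies (illustrative)
e_target = 0.7                         # constrained average energy (illustrative)

def mean_energy(beta):
    w = np.exp(-beta * energies)
    p = w / w.sum()                    # Gibbs distribution for multiplier beta
    return p @ energies

beta = brentq(lambda b: mean_energy(b) - e_target, -50.0, 50.0)
p = np.exp(-beta * energies)
p /= p.sum()
print(beta, p)                         # the multiplier and the maximum-entropy distribution
```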
References
Statistical mechanics
Particle statistics
Entropy and information | Gibbs algorithm | [
"Physics",
"Mathematics"
] | 234 | [
"Statistical mechanics stubs",
"Physical quantities",
"Particle statistics",
"Entropy and information",
"Entropy",
"Statistical mechanics",
"Dynamical systems"
] |
3,046,584 | https://en.wikipedia.org/wiki/Kraton%20%28polymer%29 | Kraton is the trade name given to a number of high-performance elastomers manufactured by Kraton Polymers, and used as synthetic replacements for rubber. Kraton polymers offer many of the properties of natural rubber, such as flexibility, high traction, and sealing abilities, but with increased resistance to heat, weathering, and chemicals.
Company
The origin of Kraton polymers goes back to the synthetic rubber (GR-S) program funded by the U.S. government during World War II to develop and establish a domestic supply capability for synthetic styrene butadiene rubber (SBR) as an alternative to natural rubber.
Shell Oil Company purchased the Torrance, California facility from the U.S. government that was built to make synthetic styrene butadiene rubber. The company formed an Elastomers Division that eventually became Kraton Corporation. Shell Oil Company broadened its portfolio of elastomer products in the 1950s, under the technical leadership of Murray Luftglass and Norman R. Legge.
As part of the divestment program that was announced by Shell in December 1998, the Kraton elastomers business was sold to the private equity firm Ripplewood Holdings in 2000. Kraton completed its IPO on December 17, 2009, to become a separate publicly traded company. In 2021 Kraton employees won an ASC Innovation Award for "Next Generation of Biobased Tackifiers REvolution™".
Properties
Kraton polymers are styrenic block copolymers (SBCs) consisting of polystyrene blocks and rubber blocks. The rubber blocks consist of polybutadiene, polyisoprene, or their hydrogenated equivalents. The tri-block with polystyrene blocks at both extremities linked together by a rubber block is the most important polymer structure observed in SBCs. If the rubber block consists of polybutadiene, the corresponding triblock structure is poly(styrene-block-butadiene-block-styrene), usually abbreviated as SBS. Kraton D (SBS and SIS) and their selectively hydrogenated versions Kraton G (SEBS and SEPS) are the major Kraton polymer structures. The microstructure of SBS consists of domains of polystyrene arranged regularly in a matrix of polybutadiene, as shown in the TEM micrograph. The picture was obtained on a thin film of polymer cast onto mercury from solution, and then stained with osmium tetroxide.
The glass transition temperature (Tg) of the polybutadiene blocks is typically −90 °C and the Tg of the polystyrene blocks is +100 °C. So, at any temperature between about −90 °C and +100 °C, Kraton SBS will act as a physically crosslinked elastomer. If Kraton polymers are heated substantially above the Tg of the styrene-derived blocks, that is, above about 100 °C (processing temperatures of around 170 °C are common), the physical cross-links change from rigid glassy regions to flowable melt regions and the entire material flows and therefore can be cast, molded, or extruded into any desired form. On cooling, this new form resumes its elastomeric character. This is the reason such a material is called a thermoplastic elastomer (TPE). The polystyrene blocks form domains of nanometre size in the microstructure, and they stabilize the form of the molded material. Depending on the rubber-to-polystyrene ratio in the material, the polystyrene domains can be spherical or form cylinders or lamellae. The hydrogenated Kraton polymers named Kraton G exhibit improved resistance to temperature (processing at 200–230 °C is common), to oxidation, and to UV. SEBS and SEPS, due to their polyolefinic rubber nature, present excellent compatibility with polyolefins and paraffinic oils.
Applications
Kraton polymers are always used in blends with various other ingredients like paraffinic oils, polyolefins, polystyrene, bitumen, tackifying resins, and fillers to provide a very large range of end-use products ranging from hot melt adhesives to impact-modified transparent polypropylene bins, from medical TPE compounds to modified bitumen roofing felts or from oil gel toys (including sex toys) to elastic attachments in diapers.
It can make asphalt flexible, which is necessary if the asphalt is to be used to coat a surface that is below grade or for highly demanding paving applications like F1 racing tracks.
Kraton-based compounds are also used in non-slip knife handles.
The earliest commercial automotive components using Kraton G (thermoplastic rubber) appeared in the 1970s. The implementation of U.S. requirements for automobile bumpers to absorb impacts with no damage to the car's safety equipment led to the first successful commercial automotive application of specialized flexible polymers as fascia for the 1974 AMC Matador.
American Motors Corporation (AMC) also used this polymer plastic on the AMC Eagle for the color-matched flexible wheel arch flares that flowed into rocker panel extensions. This was needed because of the Eagle's 2-inch wider track compared to the AMC Concord platform on which the AWD cars were based. The Eagle's Kraton bodywork was lightweight, flexible, and did not crack in cold weather as is typical of fiberglass automobile body components.
Some grades of Kraton can also be dissolved into hydrocarbon oils to create "shear thinning" grease-type products that are used in the manufacture of telecommunications cables containing optical fibers.
References
Polymers
Copolymers
Brand name materials | Kraton (polymer) | [
"Chemistry",
"Materials_science"
] | 1,198 | [
"Polymers",
"Polymer chemistry"
] |
3,048,699 | https://en.wikipedia.org/wiki/Moist%20desquamation | Moist desquamation is a description of the clinical pattern seen as a consequence of radiation exposure where the skin thins and then begins to weep because of loss of integrity of the epithelial barrier and decreased oncotic pressure. Moist desquamation is a rare complication for most forms of radiology; however, it is far more common in fluoroscopy, where threshold doses lie between 10–15 Gy, and increasingly common above 15 Gy. It has been noted that fractionation of fluoroscopic procedures significantly reduces the likelihood of moist desquamation occurring. In animal studies done on pig skin, moist desquamation was found to occur 50% of the time after a single dose of 28 Gy, whereas a 2×18 Gy fractionation scheme (36 Gy total dose) was needed to produce the same 50% occurrence.
Moist desquamation is a common side effect of radiotherapy treatment, where approximately 36% of radiotherapy patients will present with symptoms of moist desquamation. While modern megavoltage external beam radiotherapy systems have peak radiation doses below the skin, older orthovoltage systems had peak radiation doses at the skin of a patient. As such, moist desquamation and other skin-related radiotherapy complications were significantly more commonplace before the introduction of higher-energy cobalt therapy and linear accelerator systems between the 1950s and 1970s.
Historically, this was a common phenomenon in Hiroshima and Nagasaki during World War II following the atomic bomb attacks by the United States. The phenomenon was described by John Hersey in his 1946 article, and later book, Hiroshima.
Clinical characteristics
Sloughing of the epidermis and exposure of the dermal layer clinically characterize moist desquamation. Moist desquamation presents as tender, red skin associated with serous exudate, hemorrhagic crusting, and has the potential for development of bullae.
Treatment
Due to the deterministic nature of moist desquamation, once symptoms occur the condition itself cannot be reversed and a patient must wait for the condition to subside. Management of these partial-thickness wounds has been influenced by the Winter principle of moist wound healing, which suggests that wounds heal more rapidly in a moist environment. Hydrocolloid dressings applied directly to these wounds prevent the evaporation of moisture from the exposed dermis and create a moist environment at the wound site that promotes cell migration. As additional radiation exposure may either exacerbate or cause the recurrence of moist desquamation, patients are advised to use sunscreen over the irradiated area after completion of treatment.
References
Radiation health effects | Moist desquamation | [
"Chemistry",
"Materials_science"
] | 533 | [
"Radiation effects",
"Radiation health effects",
"Radioactivity"
] |
3,049,420 | https://en.wikipedia.org/wiki/E1cB-elimination%20reaction | The E1cB elimination reaction is a type of elimination reaction which occurs under basic conditions, where the hydrogen to be removed is relatively acidic, while the leaving group (such as -OH or -OR) is a relatively poor one. Usually a moderate to strong base is present. E1cB is a two-step process, the first step of which may or may not be reversible. First, a base abstracts the relatively acidic proton to generate a stabilized anion. The lone pair of electrons on the anion then moves to the neighboring atom, thus expelling the leaving group and forming a double or triple bond. The name of the mechanism - E1cB - stands for Elimination Unimolecular conjugate Base. Elimination refers to the fact that the mechanism is an elimination reaction and will lose two substituents. Unimolecular refers to the fact that the rate-determining step of this reaction only involves one molecular entity. Finally, conjugate base refers to the formation of the carbanion intermediate, which is the conjugate base of the starting material.
E1cB should be thought of as being on one end of a continuous spectrum, which includes the E1 mechanism at the opposite end and the E2 mechanism in the middle. The E1 mechanism usually has the opposite characteristics: the leaving group is a good one (like -OTs or -Br), while the hydrogen is not particularly acidic and a strong base is absent. Thus, in the E1 mechanism, the leaving group leaves first to generate a carbocation. Due to the presence of an empty p orbital after departure of the leaving group, the hydrogen on the neighboring carbon becomes much more acidic, allowing it to then be removed by the weak base in the second step. In an E2 reaction, the presence of a strong base and a good leaving group allows proton abstraction by the base and the departure of the leaving group to occur simultaneously, leading to a concerted transition state in a one-step process.
Mechanism
There are two main requirements for a reaction to proceed down an E1cB mechanistic pathway. The compound must have an acidic hydrogen on its β-carbon and a relatively poor leaving group on the α-carbon.
The first step of an E1cB mechanism is the deprotonation of the β-carbon, resulting in the formation of an anionic intermediate, such as a carbanion. The greater the stability of this intermediate, the more the reaction will favor an E1cB mechanism. The intermediate can be stabilized through induction or through delocalization of the electron lone pair by resonance. In general, an electron-withdrawing group on the substrate, a strong base, a poor leaving group, and a polar solvent favor the E1cB mechanism. An example of an E1cB mechanism with a stable intermediate can be seen in the degradation of ethiofencarb, a carbamate insecticide that has a relatively short half-life in Earth's atmosphere. Upon deprotonation of the amine, the resulting amide is relatively stable because it is conjugated with the neighboring carbonyl.
In addition to containing an acidic hydrogen on the β-carbon, a relatively poor leaving group is also necessary. A bad leaving group is necessary because a good leaving group will leave before the ionization of the molecule. As a result, the compound will likely proceed through an E2 pathway. Some examples of compounds that contain poor leaving groups and can undergo the E1cB mechanism are alcohols and fluoroalkanes.
It has also been suggested that the E1cB mechanism is more common among alkenes eliminating to alkynes than among alkanes eliminating to alkenes. One possible explanation is that sp2 hybridization creates slightly more acidic protons. The mechanism is not limited to carbon-based eliminations, however; it has been observed with other heteroatoms, such as nitrogen in the elimination of a phenol derivative from ethiofencarb.
Distinguishing E1cB-elimination reactions from E1- and E2-elimination reactions
All elimination reactions involve the removal of two substituents from a pair of atoms in a compound, forming alkenes, alkynes, or similar heteroatom variations (such as carbonyl and cyano groups). The E1cB mechanism is just one of three types of elimination reaction; the other two are the E1 and E2 reactions. Although the mechanisms are similar, they vary in the timing of the deprotonation of the β-carbon and the loss of the leaving group. E1 stands for unimolecular elimination, and E2 stands for bimolecular elimination.
In an E1 mechanism, the molecule contains a good leaving group that departs before deprotonation of the β-carbon. This results in the formation of a carbocation intermediate, which is then deprotonated to form a new pi bond. The molecule involved must have a very good leaving group, such as bromide or chloride, and a relatively less acidic β-hydrogen.
In an E2-elimination reaction, both the deprotonation of the β-carbon and the loss of the leaving group occur simultaneously in one concerted step. Molecules that undergo E2 eliminations have more acidic β-hydrogens than those that undergo E1 mechanisms, but their β-hydrogens are not as acidic as those of molecules that undergo E1cB mechanisms. The key difference between the E2 and E1cB pathways is that the latter involves a distinct carbanion intermediate rather than one concerted step. Studies have shown that the pathways can be distinguished by using different halogen leaving groups. In one example, chlorine stabilizes the anion better than fluorine does, which makes fluorine the leaving group even though chlorine is intrinsically a much better leaving group. This provides evidence that the carbanion is formed, because the observed products could not arise from the more stable, concerted E2 mechanism.
The following table summarizes the key differences between the three elimination reactions; however, the best way to identify which mechanism is playing a key role in a particular reaction involves the application of chemical kinetics.
Chemical kinetics of E1cB-elimination mechanisms
When trying to determine whether or not a reaction follows the E1cB mechanism, chemical kinetics are essential. The best way to identify the E1cB mechanism involves the use of rate laws and the kinetic isotope effect. These techniques can also help further differentiate between E1cB, E1, and E2-elimination reactions.
Rate law
The rate law that governs E1cB mechanisms is relatively simple to determine. Consider a two-step scheme in which a base (B) reversibly removes the acidic proton from the substrate (RH), with rate constants k1 and k−1, to give a carbanion that then loses the leaving group with rate constant k2.
Assuming that there is a steady-state carbanion concentration in the mechanism, the rate law for an E1cB mechanism is

$$\text{rate} = \frac{k_1 k_2 [\mathrm{RH}][\mathrm{B}]}{k_{-1}[\mathrm{BH^+}] + k_2}$$
From this equation it is clear that second-order kinetics will be exhibited: first order in the substrate and first order in the base.
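A minimal sketch of the steady-state algebra behind that rate law, using the same rate constants (k1 and k−1 for the reversible deprotonation, k2 for loss of the leaving group):

```latex
\frac{d[\mathrm{R}^-]}{dt}
  = k_1[\mathrm{RH}][\mathrm{B}] - k_{-1}[\mathrm{R}^-][\mathrm{BH}^+] - k_2[\mathrm{R}^-]
  \approx 0
\quad\Longrightarrow\quad
[\mathrm{R}^-] = \frac{k_1[\mathrm{RH}][\mathrm{B}]}{k_{-1}[\mathrm{BH}^+] + k_2},
\qquad
\text{rate} = k_2[\mathrm{R}^-]
```

In the limit k2 ≫ k−1[BH+] (the E1cBirr case below), the expression reduces to rate = k1[RH][B], first order in each component and second order overall.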
E1cB mechanisms kinetics can vary slightly based on the rate of each step. As a result, the E1cB mechanism can be broken down into three categories:
E1cBanion is when the carbanion is stable and/or a strong base is used in excess of the substrate, making deprotonation irreversible, followed by rate-determining loss of the leaving group (k1[base] ≫ k2).
E1cBrev is when the first step is reversible but the formation of product is slower than reforming the starting material; this again results from a slow second step (k−1[conjugate acid] ≫ k2).
E1cBirr is when the first step is slow, but once the anion is formed the product quickly follows (k2 ≫ k−1[conjugate acid]). This leads to an irreversible first step but unlike E1cBanion, deprotonation is rate determining.
Kinetic isotope effect
Deuterium
Deuterium exchange and a deuterium kinetic isotope effect can help distinguish among E1cBrev, E1cBanion, and E1cBirr. If the solvent is protic and contains deuterium in place of hydrogen (e.g., CH3OD), then the exchange of protons into the starting material can be monitored. If the recovered starting material contains deuterium, then the reaction is most likely undergoing an E1cBrev type mechanism. Recall, in this mechanism protonation of the carbanion (either by the conjugate acid or by solvent) is faster than loss of the leaving group. This means after the carbanion is formed, it will quickly remove a proton from the solvent to form the starting material.
If the reactant contains deuterium at the β position, a primary kinetic isotope effect indicates that deprotonation is rate determining. Of the three E1cB mechanisms, this result is only consistent with the E1cBirr mechanism, since the isotope is already removed in E1cBanion and leaving group departure is rate determining in E1cBrev.
Fluorine-19 and carbon-11
Another way that the kinetic isotope effect can help distinguish E1cB mechanisms involves the use of 19F. Fluorine is a relatively poor leaving group, and it is often employed in E1cB mechanisms. Fluorine kinetic isotope effects are also applied in the labeling of radiopharmaceuticals and other compounds in medical research. This experiment is very useful in determining whether or not the loss of the leaving group is the rate-determining step in the mechanism and can help distinguish between E1cBirr and E2 mechanisms. 11C can also be used to probe the nature of the transition state structure: it allows the formation and lifetime of the carbanion to be studied, which can not only show that the reaction is a two-step E1cB mechanism (as opposed to the concerted E2 mechanism) but can also address the lifetime and stability of the intermediate, further distinguishing among the three types of E1cB mechanisms.
Aldol reactions
The best-known reaction that undergoes E1cB elimination is the aldol condensation reaction under basic conditions. It involves the deprotonation of a compound containing a carbonyl group, which results in the formation of an enolate. The enolate is the very stable conjugate base of the starting material and is one of the intermediates in the reaction. This enolate acts as a nucleophile and can attack an electrophilic aldehyde. The aldol product is then deprotonated, forming another enolate, followed by the elimination of water in an E1cB dehydration reaction. Aldol reactions are a key reaction in organic chemistry because they provide a means of forming carbon–carbon bonds, allowing for the synthesis of more complex molecules.
Photo-induced E1cB
A photochemical version of E1cB has been reported by Lukeman et al. In this report, a photochemically induced decarboxylation reaction generates a carbanion intermediate, which subsequently eliminates the leaving group. The reaction is unique from other forms of E1cB since it does not require a base to generate the carbanion. The carbanion formation step is irreversible, and should thus be classified as E1cBirr.
In biology
The E1cB-elimination reaction is an important reaction in biology. For example, the penultimate step of glycolysis involves an E1cB mechanism. This step involves the conversion of 2-phosphoglycerate to phosphoenolpyruvate, facilitated by the enzyme enolase.
See also
Elimination reaction
Reaction mechanism
Carbocation
Carbanion
References
Elimination reactions
Reaction mechanisms | E1cB-elimination reaction | [
"Chemistry"
] | 2,495 | [
"Reaction mechanisms",
"Chemical kinetics",
"Physical organic chemistry"
] |
3,050,160 | https://en.wikipedia.org/wiki/Static%20universe | In cosmology, a static universe (also referred to as stationary, infinite, static infinite or static eternal) is a cosmological model in which the universe is both spatially and temporally infinite, and space is neither expanding nor contracting. Such a universe does not have so-called spatial curvature; that is to say that it is 'flat' or Euclidean. A static infinite universe was first proposed by English astronomer Thomas Digges (1546–1595).
In contrast to this model, Albert Einstein proposed a temporally infinite but spatially finite model (a static eternal universe) as his preferred cosmology in 1917, in his paper Cosmological Considerations in the General Theory of Relativity.
After the discovery of the redshift-distance relationship (deduced by the inverse correlation of galactic brightness to redshift) by American astronomers Vesto Slipher and Edwin Hubble, the Belgian astrophysicist and priest Georges Lemaître interpreted the redshift as evidence of universal expansion and thus a Big Bang, whereas Swiss astronomer Fritz Zwicky proposed that the redshift was caused by the photons losing energy as they passed through the matter and/or forces in intergalactic space. Zwicky's proposal would come to be termed 'tired light'—a term invented by the major Big Bang proponent Richard Tolman.
The Einstein universe
During 1917, Albert Einstein added a positive cosmological constant to his equations of general relativity to counteract the attractive effects of gravity on ordinary matter, which would otherwise cause a static, spatially finite universe to either collapse or expand forever.
This model of the universe became known as the Einstein World or Einstein's static universe.
This motivation ended after the proposal by the astrophysicist and Roman Catholic priest Georges Lemaître that the universe is not static but expanding. Edwin Hubble had researched data from the observations made by astronomer Vesto Slipher to confirm a relationship between redshift and distance, which forms the basis for the modern expansion paradigm introduced by Lemaître. According to George Gamow, this caused Einstein to declare this cosmological model, and especially the introduction of the cosmological constant, his "biggest blunder".
Einstein's static universe is closed (i.e. has hyperspherical topology and positive spatial curvature), and contains uniform dust and a positive cosmological constant with value precisely $\Lambda_E = 4\pi G \rho / c^2$, where $G$ is the Newtonian gravitational constant, $\rho$ is the density of the matter in the universe and $c$ is the speed of light. The radius of curvature of space of the Einstein universe is equal to $R_E = 1/\sqrt{\Lambda_E} = c/\sqrt{4\pi G \rho}$.
The Einstein universe is one of Friedmann's solutions to Einstein's field equation for dust with density $\rho$, cosmological constant $\Lambda_E$, and radius of curvature $R_E$. It is the only non-trivial static solution to Friedmann's equations.
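A small numerical sketch of these formulas; the mean matter density used here (10⁻²⁶ kg/m³) is an illustrative assumption, not a measured value:

```python
import math

G = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8     # speed of light [m/s]
rho = 1e-26     # assumed mean matter density [kg/m^3] (illustrative)

# Einstein static universe: Lambda_E = 4*pi*G*rho / c^2
Lambda_E = 4 * math.pi * G * rho / c**2

# Radius of curvature: R_E = 1 / sqrt(Lambda_E)
R_E = 1 / math.sqrt(Lambda_E)

ly = 9.461e15   # metres per light-year
print(f"Lambda_E = {Lambda_E:.2e} m^-2")
print(f"R_E      = {R_E:.2e} m (~{R_E / ly / 1e9:.0f} billion light-years)")
```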
Because the Einstein universe was soon recognized to be inherently unstable, it was abandoned as a viable model for the universe. It is unstable in the sense that any slight change in either the value of the cosmological constant, the matter density, or the spatial curvature will result in a universe that either expands and accelerates forever or re-collapses to a singularity.
After Einstein renounced his cosmological constant and embraced the Friedmann–Lemaître model of an expanding universe, most physicists of the twentieth century assumed that the cosmological constant is zero. If so (absent some other form of dark energy), the expansion of the universe would be decelerating. However, after Saul Perlmutter, Brian P. Schmidt, and Adam G. Riess introduced the theory of an accelerating universe in 1998, a positive cosmological constant has been revived as a simple explanation for dark energy.
In 1976 Irving Segal revived the static universe in his chronometric cosmology. Similar to Zwicky, he ascribed the redshift of distant galaxies to curvature in the cosmos. Though he claimed vindication in astronomical data, others found the results to be inconclusive.
Requirements of a static infinite model
In order for a static infinite universe model to be viable, it must explain three things:
First, it must explain the intergalactic redshift. Second, it must explain the cosmic microwave background radiation. Third, it must have a mechanism to re-create matter (particularly hydrogen atoms) from radiation or other sources in order to avoid a gradual 'running down' of the universe due to the conversion of matter into energy in stellar processes. In the absence of such a mechanism, the universe would eventually consist of dead objects such as black holes and black dwarfs.
See also
Milne model
Steady State theory
Plasma cosmology
References
In George Gamow's autobiography, My World Line (1970), he says of Einstein: "Much later, when I was discussing cosmological problems with Einstein, he remarked that the introduction of the cosmological term was the biggest blunder of his life."
Physical cosmology
Exact solutions in general relativity
Universe
Obsolete theories in physics | Static universe | [
"Physics",
"Astronomy",
"Mathematics"
] | 1,029 | [
"Exact solutions in general relativity",
"Theoretical physics",
"Mathematical objects",
"Astrophysics",
"Equations",
"Physical cosmology",
"Astronomical sub-disciplines",
"Obsolete theories in physics"
] |
3,050,547 | https://en.wikipedia.org/wiki/Rahul%20Sarpeshkar | Rahul Sarpeshkar is the Thomas E. Kurtz Professor and a professor of engineering, professor of physics, professor of microbiology & immunology, and professor of molecular and systems biology at Dartmouth. Sarpeshkar, whose interdisciplinary work is in bioengineering, electrical engineering, quantum physics, and biophysics, is the inaugural chair of the William H. Neukom cluster of computational science, which focuses on analog, quantum, and biological computation. The clusters, designed by faculty from across the institution to address major global challenges, are part of President Philip Hanlon's vision for strengthening academic excellence at Dartmouth. Prior to Dartmouth, Sarpeshkar was a tenured professor at the Massachusetts Institute of Technology and led the Analog Circuits and Biological Systems Group. He is now also a visiting scientist at MIT's Research Laboratory of Electronics.
Research fields
His research has contributed to the fields of:
Analog circuits and analog computation
Molecular, systems, and synthetic biology
Ultra-low-power and ultra-energy-efficient systems
Energy-harvesting design
Glucose-powered medical implants
Bioelectronics
Bio-inspired and biomimetic systems
Cytomorphic (cell-inspired) systems
Analog supercomputing systems
Quantum and quantum-inspired analog computers
Medical devices
Cochlear implants
Brain-machine interfaces
Control theory
Research summary
Sarpeshkar's recent TEDx talk 'Analog Supercomputers: From Quantum Atom to Living Body' summarizes some of his unique and interdisciplinary research. His research leverages analog circuits and analog computation to architect innovations in bioengineering and synthetic biology, biological supercomputing, ultra-energy-efficient computing, and quantum computing. For example, by mapping log-domain analog electronic circuits to log-domain analog DNA-protein circuits in living cells, Professor Sarpeshkar's work in the May 2013 edition of Nature (doi: 10.1038/nature12148) pioneered the field of analog synthetic biology. Recently, three awarded patents and one pending patent of his have shown how to emulate quantum physics with classical analog circuits rigorously. He has used this approach to create novel quantum-inspired architectures that perform spectrum analysis like the biological inner ear or cochlea, i.e. a 'Quantum Cochlea'. Professor Sarpeshkar's book introduced a novel form of electronics termed cytomorphic electronics, i.e., electronics inspired by cell biology. It is based on the astounding similarity between the Boltzmann exponential equations of noisy molecular flux in chemical reactions and the Boltzmann exponential equations of noisy electron flow in transistors. Hence circuits in biology and chemistry can be mapped to circuits in electronics and vice versa. This 'cytomorphic mapping' enables one to map analog electronic motifs to analog molecular circuit motifs in living cells, as in the work in Nature, and also to simulate large-scale feedback networks in cells with analog electronic supercomputers. Thus, his work has led to a novel and fundamental analog-circuits approach to the fields of synthetic biology and systems biology, both of which are highly important in the future of biotechnology and medicine. For example, the synthesis of biofuels, chemicals, energy, and molecular and cellular sensors, as well as network drug design and treatments for cancer, diabetes, and auto-immune, infectious, and neural diseases, can all be impacted by his fundamental work on analog synthetic and systems biology.
Sarpeshkar's work on glucose-powered medical implants has been featured in the Economist, WIRED, and Science News, and was highlighted by Scientific American among 2012's top scientific breakthroughs. Professor Sarpeshkar's work on a hybrid analog-digital circuit that mimics feedback networks in the brain has appeared on the cover of the journal Nature and has received wide media attention. His work on an ultra-low-power analog cochlear-implant processor for the deaf has had wide impact and been featured in articles in the New York Times, Technology Review, and IEEE Spectrum, as has his work on ultra-low-power brain-machine interfaces for the blind and paralyzed and for cardiac and non-invasive monitoring. His group holds several first and best world records in the fields of medical devices, medical electronics, ultra-low-power, analog, and bio-inspired design.
He has authored more than 139 technical publications and is an inventor on more than forty-two awarded patents. He is the inventor of the RF Cochlea, a rapid radio-frequency spectrum analyzer inspired by the human ear. His book Ultra Low Power Bioelectronics: Fundamentals, Biomedical Applications, and Bio-inspired Systems is published by Cambridge University Press and provides a broad and deep treatment of the fields of analog, ultra-low-power, biomedical, biological, energy-harvesting, and bio-inspired design. It is based on a course that Sarpeshkar taught at MIT for many years, which emphasizes how the universal language of analog circuits provides a pictorial and intuitive method for analyzing differential equations in physics, chemistry, biology, engineering, and medicine. He has won the Junior Bose award and the Ruth and Joel Spira award for excellence in teaching at MIT.
Sarpeshkar has received several awards including the NSF Career Award, the ONR Young Investigator Award, the Packard Fellows Award, and the Indus Technovator Award. He is a Fellow of the IEEE and a Fellow of the National Academy of Inventors. He is an Associate Editor of the IEEE Transactions on Biomedical Circuits and Systems and serves on the program committees of several technical conferences. His invited Google Tech talk at the 2011 Frontiers of Engineering Conference, hosted by the National Academy of Engineering (NAE), summarizes his earlier work on an ultra-low-power programmable analog cochlear implant processor and other ultra-low-power implantable devices.
Education
Sarpeshkar received B.S. degrees in electrical engineering and physics from the Massachusetts Institute of Technology and the Ph.D. degree in computation and neural systems from the California Institute of Technology. His adviser at Caltech was Carver A. Mead. He was a member of technical staff at Bell Labs in its Department of Biological Computation within its Physics Division.
References
External links
Analog Supercomputers:From Quantum Atom to Living Body TEDx Video
Cell Power
Thomas E. Kurtz Chair
Analog Circuits and Biological Systems Group
Analog Synthetic Biology
Glucose Powered Medical Implants receives wide media attention
Professor Sarpeshkar's book
Transistors Mimick Cells MIT news article.
Cytomorphic Electronics MIT news article.
RF Cochlea MIT news release on the RF Cochlea that led to several other news articles.
American bioengineers
American electronics engineers
21st-century American physicists
Living people
Synthetic biologists
Year of birth missing (living people) | Rahul Sarpeshkar | [
"Biology"
] | 1,410 | [
"Synthetic biology",
"Synthetic biologists"
] |
10,927,720 | https://en.wikipedia.org/wiki/EPrivacy%20Directive | Privacy and Electronic Communications Directive 2002/58/EC on Privacy and Electronic Communications, otherwise known as ePrivacy Directive (ePD), is an EU directive on data protection and privacy in the digital age. It presents a continuation of earlier efforts, most directly the Data Protection Directive. It deals with the regulation of a number of important issues such as confidentiality of information, treatment of traffic data, spam and cookies. This Directive has been amended by Directive 2009/136, which introduces several changes, especially in what concerns cookies, that are now subject to prior consent.
There are some interplays between the ePrivacy Regulation (ePR) and the General Data Protection Regulation (GDPR). Some EU lawmakers had hoped the ePR could come into force at the same time as the GDPR in May 2018; in this way, it would repeal the ePrivacy Directive 2002/58/EC and accompany the GDPR in regulating the requirements for consent to the use of cookies and opt-out options.
Subject-matter and Scope
The Electronic Privacy Directive has been drafted specifically to address the requirements of new digital technologies and ease the advance of electronic communications services. The Directive complements the Data Protection Directive and applies to all matters which are not specifically covered by that Directive. In particular, the subject of the Directive is the "right to privacy in the electronic communication sector" and free movement of data, communication equipment and services.
The Directive does not apply to Titles V and VI (Second and Third Pillars constituting the European Union). Likewise, it does not apply to issues concerning public security and defence, state security and criminal law. The interception of data was however covered by the EU Data Retention Directive, prior to its annulment by the Court of Justice of the European Union.
Contrary to the Data Protection Directive, which specifically addresses only individuals, Article 1(2) makes it clear that ePrivacy Directive also applies to legal persons.
Main provisions
The first general obligation in the Directive is to provide security of services. The addressees are providers of electronic communications services. This obligation also includes the duty to inform the subscribers whenever there is a particular risk, such as a virus or other malware attack.
The second general obligation is for the confidentiality of information to be maintained. The addressees are Member States, who should prohibit listening, tapping, storage or other kinds of interception or surveillance of communication and "related traffic", unless the users have given their consent or conditions of Article 15(1) have been fulfilled.
Data retention and other issues
The directive obliges the providers of services to erase or anonymise the traffic data processed when no longer needed, unless the conditions from Article 15 have been fulfilled. Retention is allowed for billing purposes but only as long as the statute of limitations allows the payment to be lawfully pursued. Data may be retained upon a user's consent for marketing and value-added services. For both previous uses, the data subject must be informed why and for how long the data is being processed.
Subscribers have the right to non-itemised billing. Likewise, the users must be able to opt out of calling-line identification.
Where data relating to location of users or other traffic can be processed, Article 9 provides that this will only be permitted if such data is anonymised, where users have given consent, or for provision of value-added services. Like in the previous case, users must be informed beforehand of the character of information collected and have the option to opt out.
Unsolicited e-mail and other messages
Article 13 prohibits the use of email addresses for marketing purposes. The Directive establishes the opt-in regime, according to which unsolicited emails may be sent only with the prior agreement of the recipient. A natural or legal person who initially collects address data in the context of the sale of a product or service has the right to use it for commercial purposes, provided the customers have an opportunity to reject such communication both when the data is initially collected and in each subsequent message. Member States have the obligation to ensure that unsolicited communication will be prohibited, except in the circumstances given in Article 13.
Two categories of emails (or communication in general) will also be excluded from the scope of the prohibition. The first is the exception for existing customer relationships and the second for marketing of similar products and services.
The sending of unsolicited text messages, either in the form of SMS messages, push mail messages or any similar format designed for consumer portable devices (mobile phones, PDAs) also falls under the prohibition of Article 13.
Cookies
The Directive provision applicable to cookies is Article 5(3). Recital 25 of the Preamble recognises the importance and usefulness of cookies for the functioning of modern Internet and directly relates Article 5(3) to them but Recital 24 also warns of the danger that such instruments may present to privacy. The change in the law does not affect all types of cookies; those that are deemed to be "strictly necessary for the delivery of a service requested by the user", such as for example, cookies that track the contents of a user's shopping cart on an online shopping service, are exempted.
The article is technology neutral, not naming any specific technological means which may be used to store data, but applies to any information that a website causes to be stored in a user's browser. This reflects the EU legislator's desire to leave the regime of the directive open to future technological developments.
The addressees of the obligation are Member States, who must ensure that the use of electronic communications networks to store information in a visitor's browser is only allowed if the user is provided with "clear and comprehensive information", in accordance with the Data Protection Directive, about the purposes of the storage of, or access to, that information; and has given their consent.
The regime so set-up can be described as opt-in, effectively meaning that the consumer must give their consent before cookies or any other form of data is stored in their browser. The UK Regulations allow for consent to be signified by future browser settings, which have yet to be introduced but which must be capable of presenting enough information so that a user can give their informed consent and indicating to a target website that consent has been obtained. Initial consent can be carried over into repeated content requests to a website. The Directive does not give any guidelines as to what may constitute an opt-out, but requires that cookies, other than those "strictly necessary for the delivery of a service requested by the user" are not to be placed without user consent.
Literature
Full text of Directive
Guidance from the UK's ICO
Guidance from the French DPA CNIL (Translated into English)
Article 29 Data Protection Working Party Opinion 2/2010
Article 29 Data Protection Working Party Opinion 16/2011
History of the decision making
On spam: Asscher, L, Hoogcarspel, S.A, Regulating Spam: A European Perspective after the Adoption of the ePrivacy Directive (T.M.C. Asser Press 2006)
Edwards, L, "Articles 6 – 7, ECD; Privacy and Electronics Communications Directive 2002" in Edwards, L. (ed.) The New Legal Framework for E-Commerce in Europe (Hart 2005)
References
Information privacy
Privacy legislation
European Union data protection law
European Union directives
Spamming
Email
2002 in law
2002 in the European Union | EPrivacy Directive | [
"Engineering"
] | 1,516 | [
"Cybersecurity engineering",
"Information privacy"
] |
10,928,189 | https://en.wikipedia.org/wiki/TBARS | Thiobarbituric acid reactive substances (TBARS) are formed as a byproduct of lipid peroxidation (i.e. as degradation products of fats) which can be detected by the TBARS assay using thiobarbituric acid as a reagent. TBARS can be upregulated, for example, by heart attack or by certain kinds of stroke.
Because reactive oxygen species (ROS) have extremely short half-lives, they are difficult to measure directly. Instead, what can be measured are several products of the damage produced by oxidative stress, such as TBARS.
Assay of TBARS measures malondialdehyde (MDA) present in the sample, as well as malondialdehyde generated from lipid hydroperoxides by the hydrolytic conditions of the reaction. MDA is one of several low-molecular-weight end products formed via the decomposition of certain primary and secondary lipid peroxidation products. However, only certain lipid peroxidation products generate MDA, and MDA is neither the sole end product of fatty peroxide formation and decomposition, nor a substance generated exclusively through lipid peroxidation. These and other considerations from the extensive literature on MDA, TBA reactivity, and oxidative lipid degradation support the conclusion that MDA determination and the TBA test can offer, at best, a narrow and somewhat empirical window on the complex process of lipid peroxidation. Use of MDA analysis and/or the TBA test and interpretation of sample MDA content and TBA test response in studies of lipid peroxidation require caution, discretion, and (especially in biological systems) correlative data from other indices of fatty peroxide formation and decomposition.
Malondialdehyde reacts with both barbiturate and thiobarbiturate, and the end-product of the TBARS assay is almost identical to the end product of the pyridine-barbiturate cyanide assay. This suggests that some cyanide poisoning cases that relied on the pyridine-barbiturate diagnostic could be false positives with elevated blood malondialdehyde, and no cyanide present at all. The cases of Urooj Khan, lottery winner of Chicago, and Autumn Klein, doctor of Pittsburgh, both fit these characteristics, since neither patient exhibited cyanide poisoning symptoms, yet both appeared to have suffered heart attacks, with Urooj Khan's blocked arteries noted at autopsy and Autumn Klein's evidence for heart abnormalities noted at trial and as a central part of her husband's conviction appeal.
References
Toxins
Free radicals | TBARS | [
"Chemistry",
"Biology",
"Environmental_science"
] | 563 | [
"Toxicology",
"Free radicals",
"Senescence",
"Biomolecules",
"Toxins"
] |
10,930,478 | https://en.wikipedia.org/wiki/SUMO%20enzymes | SUMO enzymatic cascade catalyzes the dynamic posttranslational modification process of sumoylation (i.e. transfer of SUMO protein to other proteins). The Small Ubiquitin-related Modifier, SUMO-1, is a ubiquitin-like family member that is conjugated to its substrates through three discrete enzymatic steps: activation, involving the E1 enzyme (SAE1/SAE2); conjugation, involving the E2 enzyme (UBE2I); and substrate modification, through the cooperation of the E2 and E3 protein ligases.
SUMO pathway modifies hundreds of proteins that participate in diverse cellular processes. SUMO pathway is the most studied ubiquitin-like pathway that regulates a wide range of cellular events, evidenced by a large number of sumoylated proteins identified in more than ten large-scale studies.
See also
Metabolism
Metabolic network
Metabolic network modelling
References
Metabolism
Post-translational modification
Proteins | SUMO enzymes | [
"Chemistry",
"Biology"
] | 208 | [
"Biomolecules by chemical classification",
"Gene expression",
"Biochemical reactions",
"Post-translational modification",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Proteins",
"Metabolism"
] |
10,932,739 | https://en.wikipedia.org/wiki/Doppler%20spectroscopy | Doppler spectroscopy (also known as the radial-velocity method, or colloquially, the wobble method) is an indirect method for finding extrasolar planets and brown dwarfs from radial-velocity measurements via observation of Doppler shifts in the spectrum of the planet's parent star.
As of November 2022, about 19.5% of known extrasolar planets (1,018 of the total) have been discovered using Doppler spectroscopy.
History
Otto Struve proposed in 1952 the use of powerful spectrographs to detect distant planets. He described how a very large planet, as large as Jupiter, for example, would cause its parent star to wobble slightly as the two objects orbit around their center of mass. He predicted that the small Doppler shifts to the light emitted by the star, caused by its continuously varying radial velocity, would be detectable by the most sensitive spectrographs as tiny redshifts and blueshifts in the star's emission. However, the technology of the time produced radial-velocity measurements with errors of 1,000 m/s or more, making them useless for the detection of orbiting planets. The expected changes in radial velocity are very small – Jupiter causes the Sun to change velocity by about 12.4 m/s over a period of 12 years, and the Earth's effect is only 0.1 m/s over a period of 1 year – so long-term observations by instruments with a very high resolution are required.
Advances in spectrometer technology and observational techniques in the 1980s and 1990s produced instruments capable of detecting the first of many new extrasolar planets. The ELODIE spectrograph, installed at the Haute-Provence Observatory in Southern France in 1993, could measure radial-velocity shifts as low as 7 m/s, low enough for an extraterrestrial observer to detect Jupiter's influence on the Sun. Using this instrument, astronomers Michel Mayor and Didier Queloz identified 51 Pegasi b, a "Hot Jupiter" in the constellation Pegasus. Although planets had previously been detected orbiting pulsars, 51 Pegasi b was the first planet ever confirmed to be orbiting a main-sequence star, and the first detected using Doppler spectroscopy.
In November 1995, the scientists published their findings in the journal Nature; the paper has since been cited over 1,000 times. Since that date, over 1,000 exoplanet candidates have been identified, many of which have been detected by Doppler search programs based at the Keck, Lick, and Anglo-Australian Observatories (respectively, the California, Carnegie and Anglo-Australian planet searches), and teams based at the Geneva Extrasolar Planet Search.
Beginning in the early 2000s, a second generation of planet-hunting spectrographs permitted far more precise measurements. The HARPS spectrograph, installed at the La Silla Observatory in Chile in 2003, can identify radial-velocity shifts as small as 0.3 m/s, enough to locate many possibly rocky, Earth-like planets. A third generation of spectrographs is expected to come online in 2017. With measurement errors estimated below 0.1 m/s, these new instruments would allow an extraterrestrial observer to detect even Earth.
Procedure
A series of observations is made of the spectrum of light emitted by a star. Periodic variations in the star's spectrum may be detected, with the wavelength of characteristic spectral lines in the spectrum increasing and decreasing regularly over a period of time. Statistical filters are then applied to the data set to cancel out spectrum effects from other sources. Using mathematical best-fit techniques, astronomers can isolate the tell-tale periodic sine wave that indicates a planet in orbit.
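As a rough illustration of that best-fit step, the sketch below fits a sinusoid to synthetic radial-velocity measurements to recover the semi-amplitude K and period P; the data values, noise level, and initial guesses are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic radial-velocity data: K = 55 m/s, P = 4.23 days, plus noise.
t = np.sort(rng.uniform(0.0, 12.0, 50))             # observation times [days]
v_true = 55.0 * np.sin(2 * np.pi * t / 4.23 + 0.7)  # circular-orbit signal [m/s]
v_obs = v_true + rng.normal(0.0, 3.0, t.size)       # 3 m/s measurement error

def rv_model(t, K, P, phi):
    """Radial velocity of the star for a circular orbit."""
    return K * np.sin(2 * np.pi * t / P + phi)

# Initial guesses matter for periodic fits; in practice P is first estimated
# roughly (e.g. from a periodogram) before refining with least squares.
popt, _ = curve_fit(rv_model, t, v_obs, p0=[50.0, 4.2, 0.0])
K_fit, P_fit, _ = popt
print(f"fitted K = {K_fit:.1f} m/s, P = {P_fit:.3f} days")
```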
If an extrasolar planet is detected, a minimum mass for the planet can be determined from the changes in the star's radial velocity. To find a more precise measure of the mass requires knowledge of the inclination of the planet's orbit. A graph of measured radial velocity versus time will give a characteristic curve (sine curve in the case of a circular orbit), and the amplitude of the curve will allow the minimum mass of the planet to be calculated using the binary mass function.
The Bayesian Kepler periodogram is a mathematical algorithm, used to detect single or multiple extrasolar planets from successive radial-velocity measurements of the star they are orbiting. It involves a Bayesian statistical analysis of the radial-velocity data, using a prior probability distribution over the space determined by one or more sets of Keplerian orbital parameters. This analysis may be implemented using the Markov chain Monte Carlo (MCMC) method.
The method has been applied to the HD 208487 system, resulting in an apparent detection of a second planet with a period of approximately 1000 days. However, this may be an artifact of stellar activity. The method is also applied to the HD 11964 system, where it found an apparent planet with a period of approximately 1 year. However, this planet was not found in re-reduced data, suggesting that this detection was an artifact of the Earth's orbital motion around the Sun.
Although radial-velocity of the star only gives a planet's minimum mass, if the planet's spectral lines can be distinguished from the star's spectral lines then the radial-velocity of the planet itself can be found and this gives the inclination of the planet's orbit and therefore the planet's actual mass can be determined. The first non-transiting planet to have its mass found this way was Tau Boötis b in 2012 when carbon monoxide was detected in the infrared part of the spectrum.
Example
Consider the sine curve that results when Doppler spectroscopy is used to observe the radial velocity of an imaginary star which is being orbited by a planet in a circular orbit. Observations of a real star would produce a similar curve, although eccentricity in the orbit will distort the curve and complicate the calculations below.
This theoretical star's velocity shows a periodic variance of ±1 m/s, suggesting an orbiting mass that is creating a gravitational pull on this star. Using Kepler's third law of planetary motion, the observed period of the planet's orbit around the star (equal to the period of the observed variations in the star's spectrum) can be used to determine the planet's distance from the star ($r$) using the following equation:

$$r^3 = \frac{G M_{\mathrm{star}}}{4\pi^2} P_{\mathrm{star}}^2$$
where:
r is the distance of the planet from the star
G is the gravitational constant
Mstar is the mass of the star
Pstar is the observed period of the star
Having determined $r$, the velocity of the planet around the star can be calculated using Newton's law of gravitation and the orbit equation:

$$V_{\mathrm{PL}} = \sqrt{G M_{\mathrm{star}} / r}$$
where $V_{\mathrm{PL}}$ is the velocity of the planet.
The mass of the planet can then be found from the calculated velocity of the planet:

$$M_{\mathrm{PL}} = \frac{M_{\mathrm{star}} V_{\mathrm{star}}}{V_{\mathrm{PL}}}$$
where $V_{\mathrm{star}}$ is the velocity of the parent star. The observed Doppler velocity is $K = V_{\mathrm{star}} \sin(i)$, where $i$ is the inclination of the planet's orbit to the line perpendicular to the line-of-sight.
Thus, assuming a value for the inclination of the planet's orbit and for the mass of the star, the observed changes in the radial velocity of the star can be used to calculate the mass of the extrasolar planet.
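A sketch of the full calculation for a 51 Pegasi b-like signal; the inputs (a one-solar-mass star, K = 55 m/s, P = 4.23 days, and sin i = 1, so the result is a minimum mass) are assumptions chosen for the example:

```python
import math

G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30       # solar mass [kg]
M_jup = 1.898e27       # Jupiter mass [kg]

M_star = 1.0 * M_sun   # assumed stellar mass
K = 55.0               # observed Doppler semi-amplitude [m/s]
P = 4.23 * 86400.0     # orbital period [s]

# Kepler's third law: r^3 = G * M_star * P^2 / (4 * pi^2)
r = (G * M_star * P**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

# Circular-orbit speed of the planet.
v_planet = math.sqrt(G * M_star / r)

# Momentum balance about the centre of mass: M_star * V_star = M_PL * V_PL.
# With sin(i) = 1, the observed K equals the star's true velocity V_star.
m_planet = M_star * K / v_planet

print(f"orbital radius  = {r / 1.496e11:.3f} AU")          # ~0.05 AU
print(f"planet velocity = {v_planet / 1000:.0f} km/s")     # ~130 km/s
print(f"minimum mass    = {m_planet / M_jup:.2f} M_Jup")   # ~0.4-0.5 M_Jup
```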
Radial-velocity comparison tables
For MK-type stars with planets in the habitable zone
Limitations
The major limitation with Doppler spectroscopy is that it can only measure movement along the line-of-sight, and so depends on a measurement (or estimate) of the inclination of the planet's orbit to determine the planet's mass. If the orbital plane of the planet happens to line up with the line-of-sight of the observer, then the measured variation in the star's radial velocity is the true value. However, if the orbital plane is tilted away from the line-of-sight, then the true effect of the planet on the motion of the star will be greater than the measured variation in the star's radial velocity, which is only the component along the line-of-sight. As a result, the planet's true mass will be greater than measured.
To correct for this effect, and so determine the true mass of an extrasolar planet, radial-velocity measurements can be combined with astrometric observations, which track the movement of the star across the plane of the sky, perpendicular to the line-of-sight. Astrometric measurements allow researchers to check whether objects that appear to be high-mass planets are more likely to be brown dwarfs.
A further disadvantage is that the gas envelope around certain types of stars can expand and contract, and some stars are variable. This method is unsuitable for finding planets around these types of stars, as changes in the stellar emission spectrum caused by the intrinsic variability of the star can swamp the small effect caused by a planet.
The method is best at detecting very massive objects close to the parent star – so-called "hot Jupiters" – which have the greatest gravitational effect on the parent star, and so cause the largest changes in its radial velocity. Hot Jupiters have the greatest gravitational effect on their host stars because they have relatively small orbits and large masses. Observation of many separate spectral lines and many orbital periods allows the signal-to-noise ratio of observations to be increased, increasing the chance of observing smaller and more distant planets, but planets like the Earth remain undetectable with current instruments.
See also
Methods of detecting exoplanets
Systemic (amateur extrasolar planet search project)
References
External links
California and Carnegie Extrasolar Planet Search
The Radial Velocity Equation in the Search for Exoplanets ( The Doppler Spectroscopy or Wobble Method )
Astronomical spectroscopy | Doppler spectroscopy | [
"Physics",
"Chemistry"
] | 1,959 | [
"Astronomical spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)",
"Astrophysics"
] |
10,935,861 | https://en.wikipedia.org/wiki/Worldbeam | Also known as the Inside-Out Web, Worldbeam is the brainchild of David Gelernter and Ajay Royan, proposed in 2007; it envisions a single logical repository for all information on the internet, a concept not unlike what is now referred to as the "cloud".
See also
Information explosion
References
"The Next Fifty Years: Science in the First Half of the Twenty-first Century" Edited by John Brockman, Vintage (2002)
External links
The Inside-Out Web
Information Age | Worldbeam | [
"Technology"
] | 103 | [
"Computing stubs",
"Information Age",
"World Wide Web stubs",
"Computing and society"
] |
7,257,439 | https://en.wikipedia.org/wiki/Ekman%20current%20meter | The Ekman current meter is a mechanical flowmeter invented by Vagn Walfrid Ekman, a Swedish oceanographer, in 1903. It comprises a propeller with a mechanism to record the number of revolutions, a compass and a recorder with which to record the direction, and a vane that orients the instrument so the propeller faces the current. It is mounted on a free-swinging vertical axis suspended from a wire and has a weight attached below.
The balanced propeller, with four to eight blades, rotates inside a protective ring. The position of a lever controls the propeller. In the down position the propeller is stopped and the instrument is lowered; after reaching the desired depth, a weight called a messenger is dropped to move the lever into the middle position, which allows the propeller to turn freely. When the measurement has been taken, another weight is dropped to push the lever to its highest position, at which the propeller is again stopped.
The propeller revolutions are counted via a simple mechanism that gears down the revolutions and counts them on an indicator dial. The direction is indicated by a device connected to the directional vane that drops a small metal ball about every 100 revolutions. The ball falls into one of thirty-six compartments in the bottom of the compass box that indicate direction in increments of 10 degrees. If the direction changes while the measurement is being performed the balls will drop into separate compartments and a weighted mean is taken to determine the average current direction.
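A sketch of that weighted mean, treating each 10-degree compartment as a direction weighted by its ball count; a circular (vector) mean is used so that, for example, 350° and 10° average to 0° rather than 180°. The counts below are invented for illustration:

```python
import math

# Ball counts per compartment: direction [deg] -> number of balls dropped.
counts = {340: 1, 350: 3, 0: 4, 10: 2}

# Vector-average the compartment directions, weighted by ball count.
x = sum(n * math.cos(math.radians(d)) for d, n in counts.items())
y = sum(n * math.sin(math.radians(d)) for d, n in counts.items())

mean_dir = math.degrees(math.atan2(y, x)) % 360
print(f"average current direction: {mean_dir:.1f} deg")  # ~351 deg here
```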
This is a simple and reliable instrument whose main disadvantage is that it must be hauled up to be read and reset after each measurement. Ekman solved this problem by designing a repeating current meter which could take up to forty-seven measurements before needing to be hauled up and reset. This device used a more complicated system of dropping small numbered metal balls at regular intervals to record the separate measurements.
Bibliography
Harald U. Sverdrup, Martin W. Johnson, and Richard H. Fleming, The Oceans: Their Physics, Chemistry, and General Biology, Prentice-Hall, Inc., 1942
See also
Oceanic current
Ekman spiral
Ekman water bottle
Physical oceanography
"Physics"
] | 435 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
14,581,057 | https://en.wikipedia.org/wiki/Heat%20flux | In physics and engineering, heat flux or thermal flux, sometimes also referred to as heat flux density, heat-flow density or heat-flow rate intensity, is a flow of energy per unit area per unit time. Its SI units are watts per square metre (W/m2). It has both a direction and a magnitude, and so it is a vector quantity. To define the heat flux at a certain point in space, one takes the limiting case where the size of the surface becomes infinitesimally small.
Heat flux is often denoted $\vec{\phi}_\mathrm{q}$, the subscript $\mathrm{q}$ specifying heat flux, as opposed to mass or momentum flux. Fourier's law is an important application of these concepts.
Fourier's law
For most solids in usual conditions, heat is transported mainly by conduction and the heat flux is adequately described by Fourier's law.
Fourier's law in one dimension
$$\phi_\mathrm{q} = -k \frac{dT}{dx}$$

where $k$ is the thermal conductivity. The negative sign shows that heat flux moves from higher temperature regions to lower temperature regions.
Multi-dimensional extension
The multi-dimensional case is similar: the heat flux goes "down" the temperature gradient, hence the negative sign:

$$\vec{\phi}_\mathrm{q} = -k \nabla T$$

where $\nabla$ is the gradient operator.
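A one-dimensional numerical illustration of Fourier's law across a uniform slab; the material values are assumptions, roughly those of brick:

```python
k = 0.8        # thermal conductivity [W/(m*K)], assumed
L = 0.2        # wall thickness [m]
T_hot, T_cold = 293.0, 273.0   # surface temperatures [K]

# Fourier's law across a slab: q = -k * dT/dx, which for a linear
# temperature profile gives q = k * (T_hot - T_cold) / L.
q = k * (T_hot - T_cold) / L
print(f"heat flux through the wall: {q:.0f} W/m^2")  # 80 W/m^2
```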
Measurement
The measurement of heat flux can be performed in a few different manners.
With a given thermal conductivity
A commonly known, but often impractical, method is performed by measuring a temperature difference over a piece of material with a well-known thermal conductivity. This method is analogous to a standard way to measure an electric current, where one measures the voltage drop over a known resistor. Usually this method is difficult to perform since the thermal resistance of the material being tested is often not known. Accurate values for the material's thickness and thermal conductivity would be required in order to determine thermal resistance. Using the thermal resistance, along with temperature measurements on either side of the material, heat flux can then be indirectly calculated.
With unknown thermal conductivity
A second method of measuring heat flux is by using a heat flux sensor, or heat flux transducer, to directly measure the amount of heat being transferred to/from the surface that the heat flux sensor is mounted to. The most common type of heat flux sensor is a differential temperature thermopile which operates on essentially the same principle as the first measurement method that was mentioned, except it has the advantage that the thermal resistance/conductivity does not need to be a known parameter. These parameters do not have to be known since the heat flux sensor enables an in-situ measurement of the existing heat flux by using the Seebeck effect. However, differential thermopile heat flux sensors have to be calibrated in order to relate their output signals [μV] to heat flux values [W/m2]. Once the heat flux sensor is calibrated it can then be used to directly measure heat flux without requiring the rarely known value of thermal resistance or thermal conductivity.
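A sketch of that calibration step; the sensitivity value here is a hypothetical calibration constant of the kind supplied on a sensor's data sheet, not a real device specification:

```python
sensitivity = 60.0   # assumed calibration constant [uV per (W/m^2)]
signal_uV = 12000.0  # measured thermopile output [uV]

# Calibrated heat flux: divide the voltage signal by the sensitivity.
heat_flux = signal_uV / sensitivity
print(f"measured heat flux: {heat_flux:.0f} W/m^2")  # 200 W/m^2
```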
Science and engineering
One of the tools in a scientist's or engineer's toolbox is the energy balance. Such a balance can be set up for any physical system, from chemical reactors to living organisms, and generally takes the following form:

$$\frac{\partial E_{\mathrm{in}}}{\partial t} - \frac{\partial E_{\mathrm{out}}}{\partial t} - \frac{\partial E_{\mathrm{acc}}}{\partial t} = 0$$

where the three terms stand for the time rate of change of, respectively, the total amount of incoming energy, the total amount of outgoing energy and the total amount of accumulated energy.
Now, if the only way the system exchanges energy with its surroundings is through heat transfer, the heat rate can be used to calculate the energy balance, since

$$\frac{\partial E_{\mathrm{in}}}{\partial t} - \frac{\partial E_{\mathrm{out}}}{\partial t} = \oint_S \vec{\phi}_\mathrm{q} \cdot \hat{n}\, dA$$

where we have integrated the heat flux over the surface of the system.
In real-world applications one cannot know the exact heat flux at every point on the surface, but approximation schemes can be used to calculate the integral, for example Monte Carlo integration.
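A minimal Monte Carlo sketch of that surface integral, evaluating the net heat rate through a sphere around a point source whose known output lets the estimate be checked; the source strength and radius are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
Q0 = 100.0   # point-source heat rate at the origin [W]
R = 2.0      # radius of the enclosing sphere [m]

def flux(points):
    """Radial heat flux of a point source: q = Q0 / (4*pi*r^2) * r_hat."""
    r = np.linalg.norm(points, axis=1, keepdims=True)
    return Q0 / (4 * np.pi * r**2) * (points / r)

# Sample points uniformly on the sphere; the outward normal is n = p / R.
p = rng.normal(size=(100_000, 3))
p *= R / np.linalg.norm(p, axis=1, keepdims=True)
n = p / R

# Monte Carlo estimate: Q ~= (surface area / N) * sum of q . n samples
area = 4 * np.pi * R**2
Q_est = area * np.mean(np.sum(flux(p) * n, axis=1))
print(f"Monte Carlo estimate: {Q_est:.2f} W (exact: {Q0} W)")
```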
See also
Radiant flux
Latent heat flux
Rate of heat flow
Insolation
Heat flux sensor
Relativistic heat conduction
Notes
Thermodynamic properties
Customary units of measurement in the United States | Heat flux | [
"Physics",
"Chemistry",
"Mathematics"
] | 794 | [
"Thermodynamic properties",
"Quantity",
"Thermodynamics",
"Physical quantities"
] |
14,582,412 | https://en.wikipedia.org/wiki/Gladstone%E2%80%93Dale%20relation | The Gladstone–Dale relation is a mathematical relation used for optical analysis of liquids, the determination of composition from optical measurements. It can also be used to calculate the density of a liquid for use in fluid dynamics (e.g., flow visualization). The relation has also been used to calculate refractive index of glass and minerals in optical mineralogy.
Uses
In the Gladstone–Dale relation, $(n-1)/\rho = \sum_i k_i m_i$, the index of refraction ($n$) or the density ($\rho$ in g/cm3) of miscible liquids that are mixed in mass fraction ($m$) can be calculated from characteristic optical constants (the specific refractivity $k$ in cm3/g) of pure molecular end-members. For example, for any mass (m) of ethanol added to a mass of water, the alcohol content is determined by measuring density or index of refraction (Brix refractometer). Mass (m) per unit volume (V) is the density m/V. Mass is conserved on mixing, but the volume of 1 cm3 of ethanol mixed with 1 cm3 of water is reduced to less than 2 cm3 due to the formation of ethanol-water bonds. The plot of volume or density versus molecular fraction of ethanol in water is a quadratic curve. However, the plot of index of refraction versus molecular fraction of ethanol in water is linear, and the weight fraction equals the fractional density.
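A sketch of that mixing calculation for ethanol in water; the pure-component indices and densities are rounded handbook-style values, and the 50 wt% mixture density is an assumed measurement:

```python
# Pure-component data: (refractive index, density [g/cm^3])
water = (1.333, 0.998)
ethanol = (1.361, 0.789)

def refractivity(n, rho):
    """Gladstone-Dale specific refractivity k = (n - 1) / rho [cm^3/g]."""
    return (n - 1.0) / rho

m_ethanol = 0.5     # mass fraction of ethanol
rho_mix = 0.914     # assumed measured density of the mixture [g/cm^3]

# Mixture rule: (n - 1) / rho_mix = sum_i k_i * m_i
k_mix = m_ethanol * refractivity(*ethanol) + (1 - m_ethanol) * refractivity(*water)
n_mix = 1.0 + rho_mix * k_mix
print(f"predicted index of refraction: {n_mix:.4f}")  # ~1.36
```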
In the 1900s, the Gladstone–Dale relation was applied to glass, synthetic crystals and minerals. Average values for the refractivity of oxides such as MgO or SiO2 give good to excellent agreement between the calculated and measured average indices of refraction of minerals. However, specific values of refractivity are required to deal with different structure-types, and the relation required modification to deal with structural polymorphs and the birefringence of anisotropic crystal structures.
In recent optical crystallography, Gladstone–Dale constants for the refractivity of ions were related to the inter-ionic distances and angles of the crystal structure. The ionic refractivity depends on 1/d2, where d is the inter-ionic distance, indicating that a particle-like photon refracts locally due to the electrostatic Coulomb force between ions.
Expression
The Gladstone–Dale relation can be expressed as an equation of state by re-arranging the terms to

$$\frac{n-1}{d} = \text{constant}$$

where $n$ is the index of refraction, $d$ is the density, and the constant is the Gladstone–Dale constant.
The macroscopic values (n) and (V) determined on bulk material are now calculated as a sum of atomic or molecular properties. Each molecule has a characteristic mass (due to the atomic weights of the elements) and atomic or molecular volume that contributes to the bulk density, and a characteristic refractivity due to a characteristic electric structure that contributes to the net index of refraction.
The refractivity of a single molecule is the refractive volume k(MW)/NA in nm3, where MW is the molecular weight and NA is the Avogadro constant. To calculate the optical properties of materials using the polarizability or refractivity volumes in nm3, the Gladstone–Dale relation competes with the Kramers–Kronig relation and Lorentz–Lorenz relation but differs in optical theory.
The index of refraction ($n$) is calculated from the change of angle of a collimated monochromatic beam of light from vacuum into liquid using Snell's law for refraction. Using the theory of light as an electromagnetic wave, light takes a straight-line path through water at reduced speed ($v$) and wavelength ($\lambda$). The ratio $v/\lambda$ is a constant equal to the frequency ($\nu$) of the light, as is the quantized (photon) energy using the Planck constant and $E = h\nu$. Compared to the constant speed of light in vacuum ($c$), the index of refraction of water is $n = c/v = 1.33$.
The Gladstone–Dale term $(n - 1)$ is the non-linear optical path length or time delay. Using Isaac Newton's theory of light as a stream of particles refracted locally by (electric) forces acting between atoms, the optic path length is due to refraction at constant speed by displacement about each atom. For light passing through 1 m of water with $n = 1.33$, light traveled an extra 0.33 m compared to light that traveled 1 m in a straight line in vacuum. As the speed of light is a ratio (distance per unit time in m/s), light also took an extra 0.33 s to travel through water compared to light traveling 1 s in vacuum.
Compatibility index
Mandarino, in his review of the Gladstone–Dale relationship in minerals, proposed the concept of the compatibility index for comparing the physical and optical properties of minerals. This compatibility index is a required calculation for approval as a new mineral species (see IMA guidelines).
The compatibility index (CI) is defined as follows:

$CI = 1 - \frac{K_P}{K_C}$

where $K_P$ is the Gladstone–Dale constant derived from the measured physical properties (the mean index of refraction and the density) and $K_C$ is the Gladstone–Dale constant calculated from the chemical composition.
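A toy calculation of this check (a sketch, not from the source; the mineral values and the rating threshold quoted in the comment are illustrative):

```python
# Sketch of Mandarino's compatibility check. K_P comes from the measured
# mean index n_mean and density rho via the Gladstone-Dale relation;
# K_C would be summed from published constituent refractivities (the value
# used here is a placeholder).

def compatibility_index(n_mean, density, k_chemical):
    k_physical = (n_mean - 1.0) / density
    return 1.0 - k_physical / k_chemical

# Hypothetical mineral: n_mean = 1.600, rho = 3.00 g/cm^3, K_C = 0.205 cm^3/g
ci = compatibility_index(1.600, 3.00, 0.205)
print(f"CI = {ci:+.3f}")  # Mandarino rated |CI| < 0.02 as "superior"
```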
Requirements
The Gladstone–Dale relation requires a particle model of light because the continuous wave-front required by wave theory cannot be maintained if light encounters atoms or molecules that maintain a local electric structure with a characteristic refractivity. Similarly, the wave theory cannot explain the photoelectric effect or absorption by individual atoms and one requires a local particle of light (see Wave–particle duality).
A local model of light consistent with these electrostatic refraction calculations occurs if the electromagnetic energy is restricted to a finite region of space. An electric-charge monopole must occur perpendicular to dipole loops of magnetic flux, but if local mechanisms for propagation are required, a periodic oscillatory exchange of electromagnetic energy occurs with transient mass. In the same manner, a change of mass occurs as an electron binds to a proton. This local photon has zero rest mass and no net charge, but has wave properties with spin-1 symmetry on trace over time. In this modern version of Newton's corpuscular theory of light, the local photon acts as a probe of the molecular or crystal structure.
References
Fluid dynamics
Optics | Gladstone–Dale relation | [
"Physics",
"Chemistry",
"Engineering"
] | 1,245 | [
"Applied and interdisciplinary physics",
"Optics",
"Chemical engineering",
"Piping",
" molecular",
"Atomic",
"Fluid dynamics",
" and optical physics"
] |
14,583,981 | https://en.wikipedia.org/wiki/Dimethylamphetamine | Dimethylamphetamine (Metrotonin), also known as dimetamfetamine (INN), dimephenopan and N,N-dimethylamphetamine, is a stimulant drug of the phenethylamine and amphetamine chemical classes. Dimethylamphetamine has weaker stimulant effects than amphetamine or methamphetamine and is considerably less addictive and less neurotoxic compared to methamphetamine. However, it still retains some mild stimulant effects and abuse potential, and is a Schedule I controlled drug.
Dimethylamphetamine has occasionally been found in illicit methamphetamine laboratories, but is usually an impurity rather than the desired product. It may be produced by accident when methamphetamine is synthesised by methylation of amphetamine if the reaction temperature is too high or an excess of methylating agent is used.
It is said to be a prodrug of amphetamine/methamphetamine.
References
Dimethylamino compounds
Methamphetamines
Norepinephrine-dopamine releasing agents
Prodrugs
Substituted amphetamines | Dimethylamphetamine | [
"Chemistry"
] | 253 | [
"Chemicals in medicine",
"Prodrugs"
] |
14,586,564 | https://en.wikipedia.org/wiki/Process%20control%20monitoring | In the application of integrated circuits, process control monitoring (PCM) is the procedure followed to obtain detailed information about the process used.
PCM is associated with designing and fabricating special structures that can monitor technology-specific parameters, such as the threshold voltage (Vth) in CMOS and the base-emitter voltage (Vbe) in bipolar transistors. These structures are placed at specific locations across the wafer, alongside the product chips, so that a closer look into the process variation is possible.
References
Integrated circuits | Process control monitoring | [
"Technology",
"Engineering"
] | 91 | [
"Computer engineering",
"Integrated circuits"
] |
14,590,163 | https://en.wikipedia.org/wiki/Myosin%20light%20chain | A myosin light chain is a light chain (small polypeptide subunit) of myosin. Myosin light chains were discovered by Chinese biochemist Cao Tianqin (Tien-chin Tsao) when he was a graduate student at the University of Cambridge in England.
Structure and function
Myosin light chain classes
Structurally, myosin light chains belong to the EF-hand family, a large family of Ca2+-binding proteins. MLCs contain two Ca2+-binding EF-hand motifs. MLC isoforms modulate the Ca2+ sensitivity of force transduction and cross-bridge kinetics.
Myosin light chains (MLCs) can be broadly classified into two groups:
Essential or alkali MLC (MLC1 or ELC),
Regulatory MLC (MLC2 or RLC).
Essential and regulatory MLCs have molecular masses of 22 and 19 kDa, respectively. Structurally, MLC2 contains a serine residue that is lacking in MLC1. The presence of this amino acid allows the regulation of conformational changes (from a compact to an elongated form) by a Ca2+-mediated phosphorylation mechanism. MLC1, in contrast with MLC2, has an N-terminal sequence able to bind actin, contributing to force production.
MLCs are structurally and functionally distinct from myosin heavy chains (MHCs). Nevertheless, the association of MLCs with the neck region of MHCs is necessary for the assembly of the macromolecular complexes that result in the functional motor protein, myosin. The interaction of MLCs with the α-helical neck region of the MHC molecule stabilizes the complex.
Genes in mammals
To date, eight genes encoding MLCs have been described in mammals; several isoforms have also been characterized. Four of the eight genes are MLC1 genes, whilst the remainder are MLC2 genes.
MLC1 genes:
MYL1 (chromosome 2q24.11); expressed in striated muscle
MYL3 (chromosome 3p21.3); expressed in striated muscle
MYL4 (chromosome 17q21.32); expressed in striated muscle
MYL6 (chromosome 12q13.2); expressed in non-muscle and smooth muscle
MLC2 genes:
MYL2 (chromosome 12q24.11); found in the sarcomere
MYL5 (chromosome 4p16.3); found in the sarcomere
MYL7 (chromosome 12q13.2); found in the sarcomere
MYL9 (chromosome 20q11.23); expressed in smooth muscle
Other proteins and enzymes related to MLC function have been described, for example MYL6B, MYLIP, MYLK, and MYLK2.
Diseases associated with MLCs
Several diseases have been associated with mutations in the genes encoding for myosin light chain proteins. The majority of these diseases are cardiomyopathies, such as hypertrophic (HCM) or dilated (DCM) cardiomyopathy and sudden cardiac death. Mutations in MYL2 and MYL3 have been reported for these diseases.
One study, published in 2012, found that valvular myosin LC1 in the hearts of three patients with valvular heart diseases had structures similar to those of valvular myosin from people in the early stages of dilated and hypertrophic cardiomyopathy. The researchers hypothesized that the structural distortion of these myosins was due to adaptational changes by the body in an attempt to improve the functioning of the heart.
MLCKs as Biological Drugs
Myosin light chain kinase (MLCK) inhibitors are among the few peptides that can cross the plasma membrane relatively quickly. Under stressful conditions, MLCKs in the human body promote increased permeability of microvessels. It is thought that MLCK phosphorylates endothelial myosin, leading to cell contraction. This reaction prevents disengaged cells that are adjacent to one another from re-establishing connections, thus contributing to the maintenance of the gaps between cells. With their strong ability to cross the plasma membrane with little resistance from the cell, along with their specificity for a single target substrate, MLCK inhibitors could potentially develop into novel antiedemic drugs.
Interaction of MLCs with non-myosin proteins
MYL9, MYL12a, and MYL12b (MYL9/12) have been described as new functional interaction partners with CD69 in the pathogenesis of inflammation of the airways.
A novel mechanism of activated T cell recruitment into inflammatory tissues has been proposed, known as the "CD69/Myl9/12 system". The proposed mechanism states that "Myl9/12-containing net-like structures are created in inflammatory vessels, which play an important role as a platform for recruitment of CD69-expressing leukocytes into inflammatory tissues. T cells that are activated in the lymph nodes proliferate, down-regulate CD69 expression, and then leave the lymph nodes to migrate into inflammatory sites in an S1PR1-dependent manner."
The proposed mechanism of action of the CD69/Myl9/12 system relates to the regulation of airway inflammatory processes, and the system may thus prove to be a novel therapeutic target for chronic inflammatory diseases in general.
See also
Myosin light-chain kinase
Myosin-light-chain phosphatase
References
Motor proteins | Myosin light chain | [
"Chemistry"
] | 1,170 | [
"Molecular machines",
"Motor proteins"
] |
14,591,538 | https://en.wikipedia.org/wiki/Suction%20filtration | Vacuum filtration is a fast filtration technique used to separate solids from liquids.
Principle
Water flowing through the aspirator draws out the air contained in the vacuum flask and the Büchner flask. A pressure difference therefore develops between the exterior and the interior of the flasks: the contents of the Büchner funnel are sucked towards the vacuum flask. The filter, which is placed at the bottom of the Büchner funnel, separates the solids from the liquids.
The solid residue, which remains at the top of the Büchner funnel, is therefore recovered more efficiently: it is much drier than it would be with a simple filtration.
The rubber conical seal ensures the apparatus is hermetically closed, preventing the passage of air between the Büchner funnel and the vacuum flask. It maintains the vacuum in the apparatus and also avoids physical points of stress (glass against glass).
Diagram annotations
Filter
Büchner funnel
Conic seal
Büchner flask
Air tube
Vacuum flask
Water tap
Aspirator
Uses
Filtration is a unit operation that is commonly used both in laboratory and production conditions. This apparatus, adapted for laboratory work, is often used to isolate the product of synthesis of a reaction when the product is a solid in suspension. The product of synthesis is then recovered faster, and the solid is drier than in the case of a simple filtration. Other than isolating a solid, filtration is also a stage of purification: the soluble impurities in the solvent are eliminated in the filtrate (liquid).
This apparatus is often used to purify a liquid. When a synthesised product is filtered, the insolubles (catalysts, impurities, by-products of the reaction, salts, ...) remain in the filter. In this case, vacuum filtration is also more efficient than simple filtration: more liquid is recovered, and the yield is therefore better.
Practical aspects
It is often necessary to maintain the Büchner flask and, incidentally, the vacuum flask. The rigidity of the vacuum pipes and the difference in height between the different parts of the apparatus (as visible in the diagram) make such an apparatus relatively unstable.
Therefore, a three-pronged clamp should be used to hold the Büchner flask. This clamp should be placed so that two prongs surround the part of the flask connected to the vacuum tube, with the remaining prong resting on the other side.
If it is also necessary to support the vacuum flask, either a mandible clamp or a three-pronged clamp is used, depending on the apparatus and its stability. The choice of clamp is left to the judgement of the operator.
Before closing the tap, it is necessary to "break the vacuum" (letting air in through any opening in the apparatus, for example by removing the funnel); otherwise water rises from the aspirator into the apparatus. The vacuum flask prevents the water from going up into the Büchner flask.
Sources
Laboratory techniques
Analytical chemistry
Filters | Suction filtration | [
"Chemistry",
"Engineering"
] | 641 | [
"Chemical equipment",
"Filters",
"Filtration",
"nan"
] |
14,593,084 | https://en.wikipedia.org/wiki/Nested%20set%20collection | A nested set collection or nested set family is a collection of sets that consists of chains of subsets forming a hierarchical structure, like Russian dolls.
It is used as reference concept in scientific hierarchy definitions, and many technical approaches, like the tree in computational data structures or nested set model of relational databases.
Sometimes the concept is confused with a collection of sets with a hereditary property (like finiteness in a hereditarily finite set).
Formal definition
Some authors regard a nested set collection as a family of sets. Others prefer to classify the relation as an inclusion order.
Let B be a non-empty set and C a collection of subsets of B. Then C is a nested set collection if:

$B \in C$ (and, for some authors, $\emptyset \notin C$)

$\forall X, Y \in C: \; X \cap Y \neq \emptyset \implies X \subseteq Y \ \text{or} \ Y \subseteq X$
The first condition states that the whole set B, which contains all the elements of every subset, must belong to the nested set collection. Some authors do not assume that B is nonempty.
The second condition states that the intersection of every couple of sets in the nested set collection is not the empty set only if one set is a subset of the other.
In particular, when scanning all pairs of subsets at the second condition, any pair involving B satisfies it trivially, since every member of C is a subset of B.
Example
Using a set of atomic elements, as the set of the playing card suits:
B = {♠, ♥, ♦, ♣}; B1 = {♠, ♥}; B2 = {♦, ♣}; B3 = {♣}; C = {B, B1, B2, B3}.
The second condition of the formal definition can be checked by combining all pairs: the only overlapping pairs are those involving B, for which each $B_i \subseteq B$, and $B_2 \cap B_3 = B_3 \neq \emptyset$ with $B_3 \subset B_2$, while $B_1 \cap B_2 = B_1 \cap B_3 = \emptyset$.
There is a hierarchy that can be expressed by two branches and its nested order: $B \supset B_1$; $B \supset B_2 \supset B_3$.
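The definition translates directly into a pairwise check; the following sketch (not part of the source) encodes the playing-card example using frozensets:

```python
# Sketch: verify the nested-set property for a collection of frozensets,
# following the two conditions of the formal definition above.

from itertools import combinations

def is_nested_collection(B, C):
    """Return True if C is a nested set collection over B."""
    if frozenset(B) not in C:          # condition 1: B itself belongs to C
        return False
    for X, Y in combinations(C, 2):    # condition 2: overlapping sets nest
        if X & Y and not (X <= Y or Y <= X):
            return False
    return True

B = frozenset("SHDC")   # the four suits, written as letters
C = {B, frozenset("SH"), frozenset("DC"), frozenset("C")}
print(is_nested_collection(B, C))                       # True
print(is_nested_collection(B, C | {frozenset("HD")}))   # False: {H,D} straddles both branches
```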
Derived concepts
As sets, that are general abstraction and foundations for many concepts, the nested set is the foundation for "nested hierarchy", "containment hierarchy" and others.
Nested hierarchy
A nested hierarchy or inclusion hierarchy is a hierarchical ordering of nested sets. The concept of nesting is exemplified in Russian matryoshka dolls. Each doll is encompassed by another doll, all the way to the outer doll. The outer doll holds all of the inner dolls, the next outer doll holds all the remaining inner dolls, and so on. Matryoshkas represent a nested hierarchy where each level contains only one object, i.e., there is only one of each size of doll; a generalized nested hierarchy allows for multiple objects within levels but with each object having only one parent at each level. Illustrating the general concept:
A square can always also be referred to as a quadrilateral, polygon or shape. In this way, it is a hierarchy. However, consider the set of polygons using this classification. A square can only be a quadrilateral; it can never be a triangle, hexagon, etc.
Nested hierarchies are the organizational schemes behind taxonomies and systematic classifications. For example, using the original Linnaean taxonomy (the version he laid out in the 10th edition of Systema Naturae), a human can be formulated as:
Taxonomies may change frequently (as seen in biological taxonomy), but the underlying concept of nested hierarchies is always the same.
Containment hierarchy
A containment hierarchy is a direct extrapolation of the nested hierarchy concept. All of the ordered sets are still nested, but every set must be "strict": no two sets can be identical. The shapes example above can be modified to demonstrate this: $\text{square} \subsetneq \text{quadrilateral} \subsetneq \text{polygon} \subsetneq \text{shape}$.
The notation $x \subsetneq y$ means x is a subset of y but is not equal to y.
Containment hierarchy is used in class inheritance of object-oriented programming.
See also
Hereditarily countable set
Hereditary property
Hierarchy (mathematics)
Nested set model for storing hierarchical information in relational databases
References
Set theory | Nested set collection | [
"Mathematics"
] | 787 | [
"Mathematical logic",
"Set theory"
] |
14,593,201 | https://en.wikipedia.org/wiki/Principal%20root%20of%20unity | In mathematics, a principal n-th root of unity (where n is a positive integer) of a ring is an element $\alpha$ satisfying the equations

$\alpha^n = 1$

$\sum_{j=0}^{n-1} \alpha^{jk} = 0 \quad \text{for } 1 \le k < n$
In an integral domain, every primitive n-th root of unity is also a principal n-th root of unity. In any ring, if n is a power of 2, then any n/2-th root of −1 is a principal n-th root of unity.
A non-example is $3$ in the ring of integers modulo $26$: while $3^3 = 27 \equiv 1 \pmod{26}$ and thus $3$ is a cube root of unity, $1 + 3 + 3^2 = 13 \not\equiv 0 \pmod{26}$, meaning that it is not a principal cube root of unity.
The significance of a root of unity being principal is that it is a necessary condition for the theory of the discrete Fourier transform to work out correctly.
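A brute-force test of the definition (a sketch, not from the source; the second modulus is chosen only for illustration):

```python
# Sketch: test the principal n-th root conditions in Z/mZ, the property
# needed for an exact number-theoretic transform (NTT) of length n.

def is_principal_root(alpha, n, m):
    """Check alpha^n == 1 and sum_{j<n} alpha^(j*k) == 0 (mod m) for 0 < k < n."""
    if pow(alpha, n, m) != 1:
        return False
    return all(
        sum(pow(alpha, j * k, m) for j in range(n)) % m == 0
        for k in range(1, n)
    )

print(is_principal_root(3, 3, 26))  # False: 1 + 3 + 9 = 13 != 0 (mod 26)
print(is_principal_root(4, 3, 7))   # True: 4^3 = 64 = 1 and 1 + 4 + 2 = 0 (mod 7)
```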
References
Algebraic numbers
Cyclotomic fields
Polynomials
1 (number)
Complex numbers | Principal root of unity | [
"Mathematics"
] | 165 | [
"Polynomials",
"Mathematical objects",
"Algebraic numbers",
"Complex numbers",
"Numbers",
"Algebra"
] |
14,593,776 | https://en.wikipedia.org/wiki/De%20Longchamps%20point | In geometry, the de Longchamps point of a triangle is a triangle center named after French mathematician Gaston Albert Gohierre de Longchamps. It is the reflection of the orthocenter of the triangle about the circumcenter.
Definition
Let the given triangle have vertices $A$, $B$, and $C$, opposite the respective sides $a$, $b$, and $c$, as is the standard notation in triangle geometry. In the 1886 paper in which he introduced this point, de Longchamps initially defined it as the center of a circle orthogonal to the three circles $C_A$, $C_B$, and $C_C$, where $C_A$ is centered at $A$ with radius $a$ and the other two circles are defined symmetrically. De Longchamps then also showed that the same point, now known as the de Longchamps point, may be equivalently defined as the orthocenter of the anticomplementary triangle of $ABC$, and that it is the reflection of the orthocenter of $ABC$ around the circumcenter.
The Steiner circle of a triangle is concentric with the nine-point circle and has radius 3/2 the circumradius of the triangle; the de Longchamps point is the homothetic center of the Steiner circle and the circumcircle.
Additional properties
As the reflection of the orthocenter around the circumcenter, the de Longchamps point belongs to the line through both of these points, which is the Euler line of the given triangle. Thus, it is collinear with all the other triangle centers on the Euler line, which along with the orthocenter and circumcenter include the centroid and the center of the nine-point circle.
The de Longchamps point is also collinear, along a different line, with the incenter and the Gergonne point of its triangle. The three circles centered at $A$, $B$, and $C$, with radii $s - a$, $s - b$, and $s - c$ respectively (where $s$ is the semiperimeter), are mutually tangent, and there are two more circles tangent to all three of them, the inner and outer Soddy circles; the centers of these two circles also lie on the same line with the de Longchamps point and the incenter. The de Longchamps point is the point of concurrence of this line with the Euler line, and with three other lines defined in a similar way as the line through the incenter but using instead the three excenters of the triangle.
The Darboux cubic may be defined from the de Longchamps point, as the locus of points $P$ such that $P$, the isogonal conjugate of $P$, and the de Longchamps point are collinear. It is the only cubic curve invariant of a triangle that is both isogonally self-conjugate and centrally symmetric; its center of symmetry is the circumcenter of the triangle. The de Longchamps point itself lies on this curve, as does its reflection, the orthocenter.
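A coordinate sketch of the reflection definition (not from the source): with the circumcenter $O$ at the origin the orthocenter is $H = A + B + C$, so in general $H = A + B + C - 2O$, and the de Longchamps point is $L = 2O - H$. The sample triangle is arbitrary.

```python
# Sketch: de Longchamps point as the reflection of the orthocenter H
# about the circumcenter O, i.e. L = 2*O - H, from vertex coordinates.

def circumcenter(A, B, C):
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ox = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    oy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ox, oy

def de_longchamps(A, B, C):
    ox, oy = circumcenter(A, B, C)
    # With O as origin H = A + B + C, hence H = A + B + C - 2*O in general.
    hx = A[0] + B[0] + C[0] - 2 * ox
    hy = A[1] + B[1] + C[1] - 2 * oy
    return 2 * ox - hx, 2 * oy - hy   # reflect H about O

print(de_longchamps((0, 0), (4, 0), (1, 3)))  # (3.0, 1.0) for this triangle
```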
References
External links
Triangle centers | De Longchamps point | [
"Physics",
"Mathematics"
] | 604 | [
"Point (geometry)",
"Triangle centers",
"Points defined for a triangle",
"Geometric centers",
"Symmetry"
] |
2,206,783 | https://en.wikipedia.org/wiki/Direct%20fluorescent%20antibody | A direct fluorescent antibody (DFA or dFA), also known as "direct immunofluorescence", is an antibody that has been tagged in a direct fluorescent antibody test. Its name derives from the fact that it directly tests the presence of an antigen with the tagged antibody, unlike western blotting, which uses an indirect method of detection, where the primary antibody binds the target antigen, with a secondary antibody directed against the primary, and a tag attached to the secondary antibody.
Commercial DFA testing kits are available, which contain fluorescently labelled antibodies, designed to specifically target unique antigens present in the bacteria or virus, but not present in mammals (Eukaryotes). This technique can be used to quickly determine if a subject has a specific viral or bacterial infection.
In the case of respiratory viruses, many of which have similar broad symptoms, detection can be carried out using nasal wash samples from the subject with the suspected infection. Although shed cells from the respiratory tract can be obtained, they are often present in low numbers, so an alternative method can be adopted in which a compatible cell culture is exposed to the infected nasal wash sample; if the virus is present, it can be grown up to a larger quantity, which then gives a clearer positive or negative reading.
As with all types of fluorescence microscopy, the correct absorption wavelength needs to be determined in order to excite the fluorophore tag attached to the antibody, and detect the fluorescence given off, which indicates which cells are positive for the presence of the virus or bacteria being detected.
Direct immunofluorescence can be used to detect deposits of immunoglobulins and complement proteins in biopsies of skin, kidney and other organs. Their presence is indicative of an autoimmune disease. When skin not exposed to the sun is tested, a positive direct IF (the so-called Lupus band test) is evidence of systemic lupus erythematosus. Direct fluorescent antibody can also be used to detect parasitic infections, as was pioneered by Sadun, et al. (1960).
See also
Immunofluorescence
References
External links
Laboratory techniques
Clinical pathology
Immunologic tests
Reagents for biochemistry | Direct fluorescent antibody | [
"Chemistry",
"Biology"
] | 451 | [
"Biochemistry methods",
"Immunologic tests",
"nan",
"Biochemistry",
"Reagents for biochemistry"
] |
2,206,793 | https://en.wikipedia.org/wiki/Immunomagnetic%20separation | Immunomagnetic separation (IMS) is a laboratory tool that can efficiently isolate cells out of body fluid or cultured cells. It can also be used as a method of quantifying the pathogenicity of food, blood or feces. DNA analyses have supported the combined use of this technique and the polymerase chain reaction (PCR). Another laboratory separation tool is affinity magnetic separation (AMS), which is more suitable for the isolation of prokaryotic cells.
IMS involves the isolation of cells, proteins, and nucleic acids through the specific capture of biomolecules by small magnetized particles (beads) coated with antibodies or lectins. The coated beads bind to the targeted biomolecules, are gently separated, and go through multiple cycles of washing, leaving the targeted molecules bound to the superparamagnetic beads. The captured molecules are then eluted and the supernatant collected, so that the concentration of the specifically targeted biomolecules can be determined. IMS thus obtains the concentrations of specific molecules within targeted bacteria.
A mixed cell population is placed in a magnetic field, where the target cells attach to superparamagnetic beads (a specific example being 4.5-μm Dynabeads) and remain bound to the targeted antigen once the excess substrate is removed. Dynabeads consist of iron-containing cores covered by a thin polymer shell that allows the adsorption of biomolecules. The beads are coated with primary antibodies, species-specific antibodies, lectins, enzymes, or streptavidin; a cleavable DNA linker between the beads and their coating allows the cells to be released from the beads when subsequent culturing of the cells is desired.
Many of these beads operate on the same principles of separation; however, the presence and strength of the magnetic field calls for certain bead sizes, depending on the demands of separating the cell population. Larger beads (>2 μm) are the most commonly used range and were produced by Dynal (Dynal [UK] Ltd., Wirral, Merseyside, UK; Dynal, Inc., Lake Success, NY), whereas smaller beads (<100 nm) are mostly used for the MACS system produced by Miltenyi Biotech (Miltenyi Biotech Ltd., Bisley, Surrey, UK; Miltenyi Biotech Inc., Auburn, CA).
Immunomagnetic separation is used in a variety of scientific fields including molecular biology, microbiology, and immunology. This technique is not limited to the separation of cells from blood: it is also used in research on primary tumors and metastases, where a tumor is separated into its component parts to create a single-cell suspension, allowing a suitable antibody to label the cells. In metastasis research this separation technique may become necessary when, given a cell population, one wants to isolate tumor cells from tumors, peripheral blood, and bone marrow.
Technique
Antibodies coating paramagnetic beads bind to antigens present on the surface of cells, thus capturing the cells and facilitating the concentration of the bead-attached cells. The concentration step is performed with a magnet placed on the side of the test tube, drawing the beads to it.
MACS systems (Magnetic Cell Separation system):
The MACS system uses smaller superparamagnetic beads (<100 nm), which require a stronger magnetic field for cell separation. Cells are labeled with primary antibodies, and the MACS beads are coated with species-specific secondary antibodies. The labeled cell suspension is then placed in a separation column in a strong magnetic field: the labeled cells are retained (magnetized) while in the magnetic field, whereas the unlabeled cells pass through (un-magnetized) and are collected. Once the column is removed from the magnetic field, the positive cells are eluted. The MACS beads can remain on the cells because they do not interfere with cell attachment to the culture surface or with cell-cell interactions. A bead-removal reagent can then be applied to enzymatically release the MACS beads, allowing the cells to be relabeled with another marker and sorted again.
References
Laboratory techniques
Molecular biology | Immunomagnetic separation | [
"Chemistry",
"Biology"
] | 909 | [
"Biochemistry",
"nan",
"Molecular biology"
] |
2,207,789 | https://en.wikipedia.org/wiki/Surface%20power%20density | In physics and engineering, surface power density is power per unit area.
Applications
The intensity of electromagnetic radiation can be expressed in W/m2. An example of such a quantity is the solar constant.
Wind turbines are often compared using a specific power measuring watts per square meter of turbine disk area, which is $P/(\pi r^2)$, where P is the rated power and r is the length of a blade. This measure is also commonly used for solar panels, at least for typical applications.
Radiance is surface power density per unit of solid angle (steradians) in a specific direction. Spectral radiance is radiance per unit of frequency (Hertz) at a specific (or as a function of) frequency, or per unit of wavelength (e.g. nm) at a specific (or as a function of) wavelength.
Surface power densities of energy sources
Surface power density is an important factor in comparison of industrial energy sources. The concept was popularised by geographer Vaclav Smil. The term is usually shortened to "power density" in the relevant literature, which can lead to confusion with homonymous or related terms.
Measured in W/m2, it describes the amount of power obtained per unit of Earth surface area used by a specific energy system, including all supporting infrastructure, manufacturing, mining of fuel (if applicable) and decommissioning. Fossil fuels and nuclear power are characterized by high power density, which means large power can be drawn from power plants occupying a relatively small area. Renewable energy sources have power densities at least three orders of magnitude smaller, and for the same energy output they need to occupy an accordingly larger area; this has already been highlighted as a limiting factor of renewable energy in the German Energiewende.
The following table shows median surface power density of renewable and non-renewable energy sources.
Background
As an electromagnetic wave travels through space, energy is transferred from the source to other objects (receivers). The rate of this energy transfer depends on the strength of the EM field components. Simply put, the rate of energy transfer per unit area (power density) is the product of the electric field strength (E) times the magnetic field strength (H).
Pd (Watts/meter2) = E × H (Volts/meter × Amperes/meter), where
Pd = the power density,
E = the RMS electric field strength in volts per meter,
H = the RMS magnetic field strength in amperes per meter.
The above equation yields units of W/m2. In the USA the units of mW/cm2 are more often used when making surveys; one mW/cm2 is the same power density as 10 W/m2. The following equation can be used to obtain these units directly:
Pd = 0.1 × E × H mW/cm2
The simplified relationships stated above apply at distances of about two or more wavelengths from the radiating source, a region called the far field; at low frequencies this can be a considerable distance. Here the ratio between E and H becomes a fixed constant (377 ohms), called the characteristic impedance of free space. Under these conditions the power density can be determined by measuring only the E-field component (or the H-field component, if preferred) and calculating the power density from it.
This fixed relationship is useful for measuring radio frequency or microwave (electromagnetic) fields. Since power is the rate of energy transfer, and the squares of E and H are proportional to power density, E2 and H2 are convenient measures of the energy transfer rate and of the energy absorbed by a given material.
Far field
The region extending farther than about 2 wavelengths away from the source is called the far field. As the source emits electromagnetic radiation of a given wavelength, the far-field electric component of the wave E, the far-field magnetic component H, and power density are related by the equations: E = H × 377 and Pd = E × H.
Pd = H2 × 377 and Pd = E2 ÷ 377
where Pd is the power density in watts per square meter (one W/m2 is equal to 0.1 mW/cm2),
H2 = the square of the value of the magnetic field in amperes RMS squared per meter squared,
E2 = the square of the value of the electric field in volts RMS squared per meter squared.
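A minimal sketch of the far-field conversion above (the 61.4 V/m example is a commonly quoted exposure-limit level, used here only as an illustration):

```python
# Sketch: far-field power density from a measured RMS E-field, using the
# free-space impedance of 377 ohms (E = 377*H, so Pd = E^2/377).

Z0 = 377.0  # characteristic impedance of free space, ohms

def power_density_from_E(e_rms):
    """Return (Pd in W/m^2, Pd in mW/cm^2) for an RMS E-field in V/m."""
    pd_w_m2 = e_rms**2 / Z0
    return pd_w_m2, pd_w_m2 / 10.0    # 10 W/m^2 equals 1 mW/cm^2

w_m2, mw_cm2 = power_density_from_E(61.4)
print(f"{w_m2:.1f} W/m^2 = {mw_cm2:.2f} mW/cm^2")   # ~10.0 W/m^2 = 1.00 mW/cm^2
```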
References
Physical quantities
Area-specific quantities | Surface power density | [
"Physics",
"Mathematics"
] | 925 | [
"Physical phenomena",
"Physical quantities",
"Area-specific quantities",
"Quantity",
"Physical properties"
] |
2,207,861 | https://en.wikipedia.org/wiki/Power%20density | Power density, defined as the amount of power (the time rate of energy transfer) per unit volume, is a critical parameter used across a spectrum of scientific and engineering disciplines. This metric, typically denoted in watts per cubic meter (W/m3), serves as a fundamental measure for evaluating the efficacy and capability of various devices, systems, and materials based on their spatial energy distribution.
The concept of power density finds extensive application in physics, engineering, electronics, and energy technologies. It plays a pivotal role in assessing the efficiency and performance of components and systems, particularly in relation to the power they can handle or generate relative to their physical dimensions or volume.
In the domain of energy storage and conversion technologies, such as batteries, fuel cells, motors, and power supply units, power density is a crucial consideration. Here, power density often refers to the volume power density, quantifying how much power can be accommodated or delivered within a specific volume (W/m3).
For instance, when examining reciprocating internal combustion engines, power density assumes a distinct importance. In this context, power density is commonly defined as power per swept volume or brake horsepower per cubic centimeter. This measure is derived from the internal capacity of the engine, providing insight into its power output relative to its internal volume rather than its external size. Advances in materials science extend this further: new materials that can withstand higher power densities allow devices to be made smaller or lighter, or simply to perform better.
The significance of power density extends beyond these examples, impacting the design and optimization of a myriad of systems and devices. Notably, advancements in power density often drive innovations in areas ranging from renewable energy technologies to aerospace propulsion systems.
Understanding and enhancing power density can lead to substantial improvements in the performance and efficiency of various applications. Researchers and engineers continually explore ways to push the limits of power density, leveraging advancements in materials science, manufacturing techniques, and computational modeling.
A deeper understanding of power density and its implications benefits students and professionals across diverse industries. The pursuit of higher power densities continues to drive innovation and shape the future of energy systems and technological development.
Examples
See also
Surface power density, energy per unit of area
Energy density, energy per unit volume
Specific energy, energy per unit mass
Power-to-weight ratio/specific power, power per unit mass
Specific absorption rate (SAR)
References
Power (physics) | Power density | [
"Physics",
"Mathematics"
] | 495 | [
"Force",
"Physical quantities",
"Quantity",
"Power (physics)",
"Energy (physics)",
"Wikipedia categories named after physical quantities"
] |
2,207,911 | https://en.wikipedia.org/wiki/Kelvin%20probe%20force%20microscope | Kelvin probe force microscopy (KPFM), also known as surface potential microscopy, is a noncontact variant of atomic force microscopy (AFM). By raster scanning in the x,y plane the work function of the sample can be locally mapped for correlation with sample features. When there is little or no magnification, this approach can be described as using a scanning Kelvin probe (SKP). These techniques are predominantly used to measure corrosion and coatings.
With KPFM, the work function of surfaces can be observed at atomic or molecular scales. The work function relates to many surface phenomena, including catalytic activity, reconstruction of surfaces, doping and band-bending of semiconductors, charge trapping in dielectrics and corrosion. The map of the work function produced by KPFM gives information about the composition and electronic state of the local structures on the surface of a solid.
History
The SKP technique is based on parallel plate capacitor experiments performed by Lord Kelvin in 1898. In the 1930s William Zisman built upon Lord Kelvin's experiments to develop a technique to measure contact potential differences of dissimilar metals.
Working principle
In SKP the probe and sample are held parallel to each other and electrically connected to form a parallel plate capacitor. The probe is selected to be of a different material to the sample, so each component initially has a distinct Fermi level. When an electrical connection is made between the probe and the sample, electrons can flow between them, from the higher to the lower Fermi level. This electron flow causes the equilibration of the probe and sample Fermi levels. Furthermore, a surface charge develops on the probe and the sample, with a related potential difference known as the contact potential (Vc). In SKP the probe is vibrated along a perpendicular to the plane of the sample. This vibration causes a change in probe-to-sample distance, which in turn results in the flow of current, taking the form of an AC sine wave. The resulting AC sine wave is demodulated to a DC signal through the use of a lock-in amplifier. Typically the user must select the correct reference phase value used by the lock-in amplifier. Once the DC potential has been determined, an external potential, known as the backing potential (Vb), can be applied to null the charge between the probe and the sample. When the charge is nullified, the Fermi level of the sample returns to its original position. This means that Vb is equal to −Vc, the work function difference between the SKP probe and the sample.
The cantilever in the AFM is a reference electrode that forms a capacitor with the surface, over which it is scanned laterally at a constant separation. The cantilever is not piezoelectrically driven at its mechanical resonance frequency ω0 as in normal AFM although an alternating current (AC) voltage is applied at this frequency.
When there is a direct-current (DC) potential difference between the tip and the surface, the AC+DC voltage offset will cause the cantilever to vibrate. The origin of the force can be understood by considering that the energy of the capacitor formed by the cantilever and the surface is

$U = \frac{1}{2} C \left(V_{DC} + V_{AC}\sin\omega_0 t\right)^2 = \frac{1}{2} C \left(2\, V_{DC} V_{AC}\sin\omega_0 t - \frac{1}{2} V_{AC}^2 \cos 2\omega_0 t\right)$

plus terms at DC. Only the cross-term proportional to the $V_{DC} \cdot V_{AC}$ product is at the resonance frequency ω0. The resulting vibration of the cantilever is detected using usual scanned-probe microscopy methods (typically involving a diode laser and a four-quadrant detector). A null circuit is used to drive the DC potential of the tip to a value which minimizes the vibration. A map of this nulling DC potential versus the lateral position coordinate therefore produces an image of the work function of the surface.
A related technique, electrostatic force microscopy (EFM), directly measures the force produced on a charged tip by the electric field emanating from the surface. EFM operates much like magnetic force microscopy in that the frequency shift or amplitude change of the cantilever oscillation is used to detect the electric field. However, EFM is much more sensitive to topographic artifacts than KPFM. Both EFM and KPFM require the use of conductive cantilevers, typically metal-coated silicon or silicon nitride. Another AFM-based technique for the imaging of electrostatic surface potentials, scanning quantum dot microscopy, quantifies surface potentials based on their ability to gate a tip-attached quantum dot.
Factors affecting SKP measurements
The quality of an SKP measurement is affected by a number of factors, including the diameter of the SKP probe, the probe-to-sample distance, and the material of the SKP probe. The probe diameter is important because it affects the overall resolution of the measurement, with smaller probes leading to improved resolution. On the other hand, reducing the size of the probe causes an increase in fringing effects, which reduces the sensitivity of the measurement by increasing the contribution of stray capacitances. The material used in the construction of the SKP probe is important to the quality of the SKP measurement for a number of reasons: different materials have different work function values, which affect the contact potential measured; different materials have different sensitivities to humidity changes; and the material can also affect the resulting lateral resolution of the SKP measurement. In commercial probes tungsten is used, though probes of platinum, copper, gold, and NiCr have been used. The probe-to-sample distance affects the final SKP measurement, with smaller probe-to-sample distances improving the lateral resolution and the signal-to-noise ratio of the measurement. Furthermore, reducing the SKP probe-to-sample distance increases the intensity of the measurement, where the intensity of the measurement is proportional to 1/d2, with d the probe-to-sample distance. The effects of changing probe-to-sample distance on the measurement can be counteracted by using SKP in constant-distance mode.
Work function
The Kelvin probe force microscope or Kelvin force microscope (KFM) is based on an AFM set-up and the determination of the work function is based on the measurement of the electrostatic forces between the small AFM tip and the sample. The conducting tip and the sample are characterized by (in general) different work functions, which represent the difference between the Fermi level and the vacuum level for each material. If both elements were brought in contact, a net electric current would flow between them until the Fermi levels were aligned. The difference between the work functions is called the contact potential difference and is denoted generally with VCPD. An electrostatic force exists between tip and sample, because of the electric field between them. For the measurement a voltage is applied between tip and sample, consisting of a DC-bias VDC and an AC-voltage VAC sin(ωt) of frequency ω.
Tuning the AC frequency to the resonant frequency of the AFM cantilever results in an improved sensitivity. The electrostatic force in a capacitor may be found by differentiating the energy function with respect to the separation of the elements and can be written as

$F = -\frac{1}{2}\frac{dC}{dz} V^2$
where C is the capacitance, z is the separation, and V is the voltage, each between tip and surface. Substituting the previous formula for voltage (V) shows that the electrostatic force can be split up into three contributions, as the total electrostatic force F acting on the tip then has spectral components at the frequencies ω and 2ω.
The DC component, FDC, contributes to the topographical signal; the term Fω at the characteristic frequency ω is used to measure the contact potential, and the contribution F2ω can be used for capacitance microscopy.
Contact potential measurements
For contact potential measurements a lock-in amplifier is used to detect the cantilever oscillation at ω. During the scan VDC will be adjusted so that the electrostatic forces between the tip and the sample become zero and thus the response at the frequency ω becomes zero. Since the electrostatic force at ω depends on VDC − VCPD, the value of VDC that minimizes the ω-term corresponds to the contact potential. Absolute values of the sample work function can be obtained if the tip is first calibrated against a reference sample of known work function. Apart from this, one can use the normal topographic scan methods at the resonance frequency ω independently of the above. Thus, in one scan, the topography and the contact potential of the sample are determined simultaneously.
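As a rough illustration of this nulling step (a sketch, not from the source; all numbers are invented), the ω-component amplitude is proportional to (VDC − VCPD)·VAC, so sweeping VDC and locating the zero of the response recovers the contact potential difference:

```python
# Sketch: how the KPFM feedback finds V_CPD. From F = -1/2 dC/dz V^2,
# F_omega ~ -dC/dz * (V_DC - V_CPD) * V_AC, so the V_DC that nulls the
# omega response equals V_CPD. All values are illustrative.

import numpy as np

V_CPD = 0.45    # "unknown" contact potential difference, volts
V_AC = 0.5      # AC drive amplitude, volts
dCdz = -1e-9    # capacitance gradient, F/m (sign and scale illustrative)

def f_omega_amplitude(v_dc):
    return -dCdz * (v_dc - V_CPD) * V_AC   # omega spectral component

v_sweep = np.linspace(-1.0, 1.0, 2001)
response = f_omega_amplitude(v_sweep)
v_null = v_sweep[np.argmin(np.abs(response))]
print(f"nulling V_DC = {v_null:.3f} V (should equal V_CPD = {V_CPD} V)")
```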
This can be done in (at least) two different ways: 1) The topography is captured in AC mode which means that the cantilever is driven by a piezo at its resonant frequency. Simultaneously the AC voltage for the KPFM measurement is applied at a frequency slightly lower than the resonant frequency of the cantilever. In this measurement mode the topography and the contact potential difference are captured at the same time and this mode is often called single-pass. 2) One line of the topography is captured either in contact or AC mode and is stored internally. Then, this line is scanned again, while the cantilever remains on a defined distance to the sample without a mechanically driven oscillation but the AC voltage of the KPFM measurement is applied and the contact potential is captured as explained above. It is important to note that the cantilever tip must not be too close to the sample in order to allow good oscillation with applied AC voltage. Therefore, KPFM can be performed simultaneously during AC topography measurements but not during contact topography measurements.
Applications
The Volta potential measured by SKP is directly proportional to the corrosion potential of a material; as such, SKP has found widespread use in the study of corrosion and coatings. In the field of coatings, for example, a scratched region of a self-healing shape-memory polymer coating containing a heat-generating agent on aluminium alloys was measured by SKP. Immediately after the scratch was made, the Volta potential peak over the scratch was noticeably higher and broader than over the rest of the sample, implying that this region is more likely to corrode. The Volta potential decreased over subsequent measurements, and eventually the peak over the scratch disappeared completely, implying that the coating had healed. Because SKP can investigate coatings non-destructively, it has also been used to study coating failure. In a study of polyurethane coatings, the work function was seen to increase with increasing exposure to high temperature and humidity. This increase in work function is related to decomposition of the coating, likely from hydrolysis of bonds within the coating.
Using SKP, the corrosion of industrially important alloys has been measured. In particular, SKP makes it possible to investigate the effects of environmental stimuli on corrosion. For example, the microbially induced corrosion of stainless steel and titanium has been examined. SKP is useful for studying this sort of corrosion because it usually occurs locally, so globally averaging techniques are poorly suited. Surface potential changes related to increased localized corrosion were shown by SKP measurements, and it was possible to compare the corrosion resulting from different microbial species. In another example, SKP was used to investigate biomedical alloy materials, which can corrode within the human body. In studies on Ti-15Mo under inflammatory conditions, SKP measurements showed a lower corrosion resistance at the bottom of a corrosion pit than at the oxide-protected surface of the alloy. SKP has also been used to investigate the effects of atmospheric corrosion, for example on copper alloys in a marine environment. In this study, Kelvin potentials became more positive, indicating a more positive corrosion potential, with increased exposure time, due to an increase in the thickness of corrosion products. As a final example, SKP was used to investigate stainless steel under simulated gas-pipeline conditions. These measurements showed an increasing difference in the corrosion potential of cathodic and anodic regions with increased corrosion time, indicating a higher likelihood of corrosion. Furthermore, these SKP measurements provided information about local corrosion not accessible with other techniques.
SKP has been used to investigate the surface potential of materials used in solar cells, with the advantage that it is a non-contact, and therefore non-destructive, technique. It can be used to determine the electron affinity of different materials, which in turn allows the energy-level overlap of the conduction bands of differing materials to be determined. The energy-level overlap of these bands is related to the surface photovoltage response of a system.
As a non-contact, non-destructive technique SKP has been used to investigate latent fingerprints on materials of interest for forensic studies. When fingerprints are left on a metallic surface they leave behind salts which can cause the localized corrosion of the material of interest. This leads to a change in Volta potential of the sample, which is detectable by SKP. SKP is particularly useful for these analyses because it can detect this change in Volta potential even after heating, or coating by, for example, oils.
SKP has been used to analyze the corrosion mechanisms of schreibersite-containing meteorites. The aim of these studies has been to investigate the role of such meteorites in releasing species utilized in prebiotic chemistry.
In the field of biology SKP has been used to investigate the electric fields associated with wounding, and acupuncture points.
In the field of electronics, KPFM is used to investigate the charge trapping in High-k gate oxides/interfaces of electronic devices.
See also
Scanning probe microscopy
Surface photovoltage
References
External links
– Full description of the principles with good illustrations to aid comprehension
Transport measurements by Scanning Probe Microscopy
Introduction to Kelvin Probe Force Microscopy (KPFM)
Dynamic Kelvin Probe Force Microscopy
Kelvin Probe Force Microscopy of Lateral Devices
Kelvin Probe Force Microscopy in Liquids
Current-voltage Measurements in Scanning Probe Microscopy
Dynamic IV measurements in SPM
Scanning probe microscopy
Condensed matter physics
Surface science
Electric and magnetic fields in matter
Probe force microscope | Kelvin probe force microscope | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,845 | [
"Phases of matter",
"Electric and magnetic fields in matter",
"Surface science",
"Materials science",
"Condensed matter physics",
"Scanning probe microscopy",
"Microscopy",
"Nanotechnology",
"Matter"
] |
2,208,744 | https://en.wikipedia.org/wiki/Macro-engineering | In engineering, macro-engineering (alternatively known as mega engineering) is the implementation of large-scale design projects. It can be seen as a branch of civil engineering or structural engineering applied on a large landmass. In particular, macro-engineering is the process of marshaling and managing resources and technology on a large scale to carry out complex tasks that last over a long period. In contrast to conventional engineering projects, macro-engineering projects (called macro-projects or mega-projects) are multidisciplinary, involving collaboration from all fields of study. Because of their size, macro-projects are usually international.
Macro-engineering is an evolving field that has only recently started to receive attention. Because we routinely deal with challenges that are multinational in scope, such as global warming and pollution, macro-engineering is emerging as a transcendent solution to worldwide problems.
Macro-engineering is distinct from Megascale engineering due to the scales where they are applied. Where macro-engineering is currently practical, mega-scale engineering is still within the domain of speculative fiction because it deals with projects on a planetary or stellar scale.
Projects
Macro engineering examples include the construction of the Panama Canal and the Suez Canal.
Planned projects
Examples of projects include the Channel Tunnel and the planned Gibraltar Tunnel.
Two intellectual centers focused on macro-engineering theory and practice are the Candida Oancea Institute in Bucharest, and The Center for Macro Projects and Diplomacy at Roger Williams University in Bristol, Rhode Island.
See also
Afforestation
Agroforestry
Atlantropa (Gibraltar Dam)
Analog forestry
Bering Strait bridge
Buffer strip
Biomass
Biomass (ecology)
Climate engineering (Geoengineering)
Collaborative innovation network
Deforestation
Deforestation during the Roman period
Ecological engineering
Ecological engineering methods
Ecotechnology
Energy-efficient landscaping
Forest gardening
Forest farming
Great Plains Shelterbelt
Green Wall of China
IBTS Greenhouse
Home gardens
Human ecology
Megascale engineering
Permaculture
Permaforestry
Sahara Forest Project
Qattara Depression Project
Red Sea dam
Sand fence
Seawater Greenhouse
Sustainable agriculture
Terraforming
Windbreak
Wildcrafting
References
Frank P. Davidson and Kathleen Lusk Brooke, Building the World: An Encyclopedia of the Great Engineering Projects in History, two volumes (Greenwood Publishing Group, Oxford, UK, 2006)
V. Badescu, R.B. Cathcart and R.D. Schuiling, Macro-Engineering: A Challenge for the Future (Springer, The Netherlands, 2006)
R.B. Cathcart, V. Badescu with Ramesh Radhakrishnan (2006): Macro-Engineers' Dreams PDF, 175 pp. Accessed 24 May 2013
Alexander Bolonkin and Richard B. Cathcart, Macro-Projects (NOVA Publishing, 2009)
Viorel Badescu and R.B. Cathcart, Macro-engineering Seawater (Springer, 2010), 880 pages
R.B. Cathcart, Macro-Imagineering Our Dosmozoicum (Lambert Academic Publishing, 2018), 154 pages
External links
Engineering and the Future of Technology
Megaengineering at Popular Mechanics | Macro-engineering | [
"Engineering"
] | 626 | [
"Macro-engineering"
] |
2,208,748 | https://en.wikipedia.org/wiki/Super%20black | Super black is a surface treatment developed at the National Physical Laboratory (NPL) in the United Kingdom. It absorbs approximately 99.6% of visible light at normal incidence, while conventional black paint absorbs about 97.5%. At other angles of incidence, super black is even more effective: at an angle of 45°, it absorbs 99.9% of light.
Technology
The technology to create super black involves chemically etching a nickel-phosphorus alloy.
Super black is applied in specialist optical instruments to reduce unwanted reflections. The disadvantage of this material is its low optical thickness, as it is a surface treatment. As a result, infrared light with a wavelength longer than a few micrometers penetrates the dark layer and is reflected much more strongly: the reported reflectance rises from about 1% at 3 μm to 50% at 20 μm.
In 2009, a competitor to the super black material, Vantablack, was developed based on carbon nanotubes. It has a relatively flat reflectance in a wide spectral range.
In 2011, NASA and the US Army began funding research in the use of nanotube-based super black coatings in sensitive optics.
Nanotube-based superblack arrays and coatings have recently become commercially available.
See also
Vantablack
Emissivity
Black hole
Black body
References
External links
Materials science
Optical materials
Shades of black | Super black | [
"Physics",
"Materials_science",
"Engineering"
] | 287 | [
"Applied and interdisciplinary physics",
"Materials science",
"Materials",
"Optical materials",
"nan",
"Matter"
] |
2,209,432 | https://en.wikipedia.org/wiki/Inelastic%20mean%20free%20path | The inelastic mean free path (IMFP) is an index of how far an electron on average travels through a solid before losing energy.
If a monochromatic, primary beam of electrons is incident on a solid surface, the majority of incident electrons lose their energy because they interact strongly with matter, leading to plasmon excitation, electron-hole pair formation, and vibrational excitation. The intensity of the primary electrons, $I_0$, is damped as a function of the distance, $d$, into the solid. The intensity decay can be expressed as follows:

$I(d) = I_0 \, e^{-d/\lambda(E)}$

where $I(d)$ is the intensity after the primary electron beam has traveled through the solid to a distance $d$. The parameter $\lambda(E)$, termed the inelastic mean free path (IMFP), is defined as the distance an electron beam can travel before its intensity decays to $1/e$ of its initial value. (Note that this equation is closely related to the Beer–Lambert law.)
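A minimal numerical sketch of this definition (the 2 nm IMFP is an assumed value, used only for illustration):

```python
# Sketch: fraction of an electron beam surviving inelastic scattering
# after a path length d, for a given IMFP (same units as d).

import math

def surviving_fraction(d_nm, imfp_nm):
    return math.exp(-d_nm / imfp_nm)

# For an assumed IMFP of 2 nm, after 2 nm the intensity has fallen to 1/e,
# the defining property of the IMFP.
print(f"{surviving_fraction(2.0, 2.0):.3f}")   # 0.368
print(f"{surviving_fraction(6.0, 2.0):.3f}")   # 0.050 after three IMFPs
```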
The inelastic mean free path of electrons can roughly be described by a universal curve that is the same for all materials.
The knowledge of the IMFP is indispensable for several electron spectroscopy and microscopy measurements.
Applications of the IMFP in XPS
For example, the IMFP is employed to calculate the effective attenuation length (EAL), the mean escape depth (MED) and the information depth (ID). The IMFP can also be used to make matrix corrections for the relative sensitivity factor in quantitative surface analysis. Moreover, the IMFP is an important parameter in Monte Carlo simulations of photoelectron transport in matter.
Calculations of the IMFP
Calculations of the IMFP are mostly based on the algorithm (full Penn algorithm, FPA) developed by Penn, using experimental optical constants or calculated optical data (for compounds). The FPA considers an inelastic scattering event and the dependence of the energy-loss function (ELF) on momentum transfer, which describes the probability for inelastic scattering as a function of momentum transfer.
Experimental measurements of the IMFP
To measure the IMFP, one well known method is elastic-peak electron spectroscopy (EPES). This method measures the intensity of elastically backscattered electrons with a certain energy from a sample material in a certain direction. Applying the same technique to materials whose IMFP is known, the measurements are compared with the results from Monte Carlo simulations under the same conditions; thus, one obtains the IMFP of a certain material in a certain energy range. EPES measurements show a root-mean-square (RMS) difference of between 12% and 17% from the theoretically expected values. Calculated and experimental results agree better at higher energies.
For electron energies in the range 30 keV – 1 MeV, IMFP can be directly measured by electron energy loss spectroscopy inside a transmission electron microscope, provided the sample thickness is known. Such measurements reveal that IMFP in elemental solids is not a smooth, but an oscillatory function of the atomic number.
For energies below 100 eV, the IMFP can be evaluated in high-energy secondary electron yield (SEY) experiments. To this end, the SEY for incident energies between 0.1 keV and 10 keV is analyzed. According to these experiments, a Monte Carlo model can be used to simulate the SEYs and determine the IMFP below 100 eV.
Predictive formulas
Using the dielectric formalism, the IMFP can be calculated by solving the following integral:
with the minimum and maximum energy loss, the dielectric function, the energy loss function (ELF), and the smallest and largest momentum transfer. In general, solving this integral is quite challenging and the approach only applies for energies above 100 eV. Thus, (semi)empirical formulas were introduced to determine the IMFP.
A first approach is to calculate the IMFP by an approximate form of the relativistic Bethe equation for inelastic scattering of electrons in matter. The resulting relation holds for energies between 50 eV and 200 keV:
with
and
and E the electron energy in eV above the Fermi level (conductors) or above the bottom of the conduction band (non-conductors). m is the electron mass, c the vacuum velocity of light, N_v is the number of valence electrons per atom or molecule, ρ describes the density (in g/cm3), M is the atomic or molecular weight, and β, γ, C and D are parameters determined in the following. This relation calculates the IMFP and its dependence on the electron energy in condensed matter.
This relation was further developed to find expressions for the parameters β, γ, C and D for energies between 50 eV and 2 keV:
Here, the bandgap energy E_g is given in eV. These relations are also known as the TPP-2M equations and are in general applicable for energies between 50 eV and 200 keV. Neglecting a few materials (diamond, graphite, Cs, cubic-BN and hexagonal BN) that do not follow these equations (due to deviations in the computed parameters), the TPP-2M equations show precise agreement with the measurements.
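A minimal Python sketch of the TPP-2M relations as they are commonly quoted in the surface-analysis literature follows; the numerical coefficients and the copper-like example parameters are assumptions that should be verified against the original Tanuma–Powell–Penn papers:

```python
import math

def imfp_tpp2m(E, Nv, rho, M, Eg=0.0):
    """IMFP (in angstroms) from the TPP-2M relations as commonly quoted.

    E   : electron energy in eV (above the Fermi level)
    Nv  : number of valence electrons per atom or molecule
    rho : density in g/cm^3
    M   : atomic or molecular weight
    Eg  : band-gap energy in eV (0 for conductors)
    All numerical coefficients below are assumptions from the literature.
    """
    Ep = 28.8 * math.sqrt(Nv * rho / M)   # free-electron plasmon energy, eV
    U = Nv * rho / M                      # equivalently Ep**2 / 829.4
    beta = -0.10 + 0.944 / math.sqrt(Ep**2 + Eg**2) + 0.069 * rho**0.1
    gamma = 0.191 * rho**-0.50
    C = 1.97 - 0.91 * U
    D = 53.4 - 20.8 * U
    return E / (Ep**2 * (beta * math.log(gamma * E) - C / E + D / E**2))

# Illustrative call with copper-like parameters (assumed values);
# yields roughly 15-16 angstroms at 1 keV.
print(imfp_tpp2m(E=1000.0, Nv=11, rho=8.96, M=63.55))
```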
Another approach based on Equation to determine the IMFP is the S1 formula. This formula can be applied for energies between 100 eV and 10 keV:
with the atomic number (average atomic number for a compound), or ( is the heat of formation of a compound in eV per atom) and the average atomic spacing :
with the Avogadro constant and the stoichiometric coefficients and describing binary compounds . In this case, the atomic number becomes
with the atomic numbers of the two constituents. This S1 formula shows higher agreement with measurements than the Bethe-based relation above.
Calculating the IMFP with either the TPP-2M formula or the S1 formula requires different knowledge of some parameters. Applying the TPP-2M formula, one needs to know N_v, ρ and M for conducting materials (and also E_g for non-conductors). Employing the S1 formula, knowledge of the atomic number (average atomic number for compounds), ρ and M is required for conductors. If non-conducting materials are considered, one also needs to know either E_g or the heat of formation.
An analytical formula for calculating the IMFP down to 50 eV was proposed in 2021. To this end, an exponential term was added to an analytical formula, already derived from the integral above, that was applicable for energies down to 500 eV:
For relativistic electrons it holds:
with the electron velocity v, where c denotes the velocity of light. The IMFP values are given in nanometers. The constants are defined as follows:
IMFP data
IMFP data can be collected from the National Institute of Standards and Technology (NIST) Electron Inelastic-Mean-Free-Path Database or the NIST Database for the Simulation of Electron Spectra for Surface Analysis (SESSA). The data contains IMFPs determined by EPES for energies below 2 keV. Otherwise, IMFPs can be determined from the TPP-2M or the S1 formula.
See also
Beer–Lambert law
Scattering theory
References
Atomic, molecular, and optical physics | Inelastic mean free path | [
"Physics",
"Chemistry"
] | 1,411 | [
"Atomic",
" molecular",
" and optical physics"
] |
2,209,688 | https://en.wikipedia.org/wiki/Einstein%20coefficients | In atomic, molecular, and optical physics, the Einstein coefficients are quantities describing the probability of absorption or emission of a photon by an atom or molecule. The Einstein A coefficients are related to the rate of spontaneous emission of light, and the Einstein B coefficients are related to the absorption and stimulated emission of light. Throughout this article, "light" refers to any electromagnetic radiation, not necessarily in the visible spectrum.
These coefficients are named after Albert Einstein, who proposed them in 1916.
Spectral lines
In physics, one thinks of a spectral line from two viewpoints.
An emission line is formed when an atom or molecule makes a transition from a particular discrete energy level to a lower energy level, emitting a photon of a particular energy and wavelength. A spectrum of many such photons will show an emission spike at the wavelength associated with these photons.
An absorption line is formed when an atom or molecule makes a transition from a lower to a higher discrete energy state, with a photon being absorbed in the process. These absorbed photons generally come from background continuum radiation (the full spectrum of electromagnetic radiation) and a spectrum will show a drop in the continuum radiation at the wavelength associated with the absorbed photons.
The two states must be bound states in which the electron is bound to the atom or molecule, so the transition is sometimes referred to as a "bound–bound" transition, as opposed to a transition in which the electron is ejected out of the atom completely ("bound–free" transition) into a continuum state, leaving an ionized atom, and generating continuum radiation.
A photon with an energy equal to the difference between the energy levels is released or absorbed in the process. The frequency ν at which the spectral line occurs is related to the photon energy by Bohr's frequency condition E2 − E1 = hν, where h denotes the Planck constant.
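As a worked numerical example of the frequency condition, assuming the hydrogen Lyman-alpha energy gap of about 10.2 eV as an illustrative input:

```python
# Worked example of the Bohr frequency condition E2 - E1 = h*nu.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

delta_E = 10.2 * eV             # assumed energy gap (hydrogen 2->1)
nu = delta_E / h                # photon frequency from E2 - E1 = h*nu
print(f"frequency  = {nu:.3e} Hz")            # ~2.47e15 Hz
print(f"wavelength = {c / nu * 1e9:.1f} nm")  # ~122 nm, in the ultraviolet
```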
Emission and absorption coefficients
An atomic spectral line refers to emission and absorption events in a gas in which is the density of atoms in the upper-energy state for the line, and is the density of atoms in the lower-energy state for the line.
The emission of atomic line radiation at frequency ν may be described by an emission coefficient ε with units of energy/(time × volume × solid angle). ε dt dV dΩ is then the energy emitted by a volume element dV in time dt into solid angle dΩ. For atomic line radiation,

ε = (hν/4π) n2 A21

where A21 is the Einstein coefficient for spontaneous emission, which is fixed by the intrinsic properties of the relevant atom for the two relevant energy levels.
The absorption of atomic line radiation may be described by an absorption coefficient κ with units of 1/length. The expression κ' dx gives the fraction of intensity absorbed for a light beam at frequency ν while traveling distance dx. The absorption coefficient is given by

κ' = (hν/4π) (n1 B12 − n2 B21)

where B12 and B21 are the Einstein coefficients for photon absorption and induced emission respectively. Like the coefficient A21, these are also fixed by the intrinsic properties of the relevant atom for the two relevant energy levels. For thermodynamics and for the application of Kirchhoff's law, it is necessary that the total absorption be expressed as the algebraic sum of two components, described respectively by B12 and B21, which may be regarded as positive and negative absorption, which are, respectively, the direct photon absorption, and what is commonly called stimulated or induced emission.
The above equations have ignored the influence of the spectroscopic line shape. To be accurate, the above equations need to be multiplied by the (normalized) spectral line shape, in which case the units will change to include a 1/Hz term.
Under conditions of thermodynamic equilibrium, the number densities and , the Einstein coefficients, and the spectral energy density provide sufficient information to determine the absorption and emission rates.
Equilibrium conditions
The number densities and are set by the physical state of the gas in which the spectral line occurs, including the local spectral radiance (or, in some presentations, the local spectral radiant energy density). When that state is either one of strict thermodynamic equilibrium, or one of so-called "local thermodynamic equilibrium", then the distribution of atomic states of excitation (which includes and ) determines the rates of atomic emissions and absorptions to be such that Kirchhoff's law of equality of radiative absorptivity and emissivity holds. In strict thermodynamic equilibrium, the radiation field is said to be black-body radiation and is described by Planck's law. For local thermodynamic equilibrium, the radiation field does not have to be a black-body field, but the rate of interatomic collisions must vastly exceed the rates of absorption and emission of quanta of light, so that the interatomic collisions entirely dominate the distribution of states of atomic excitation. Circumstances occur in which local thermodynamic equilibrium does not prevail, because the strong radiative effects overwhelm the tendency to the Maxwell–Boltzmann distribution of molecular velocities. For example, in the atmosphere of the Sun, the great strength of the radiation dominates. In the upper atmosphere of the Earth, at altitudes over 100 km, the rarity of intermolecular collisions is decisive.
In the cases of thermodynamic equilibrium and of local thermodynamic equilibrium, the number densities of the atoms, both excited and unexcited, may be calculated from the Maxwell–Boltzmann distribution, but for other cases, (e.g. lasers) the calculation is more complicated.
Einstein coefficients
In 1916, Albert Einstein proposed that there are three processes occurring in the formation of an atomic spectral line. The three processes are referred to as spontaneous emission, stimulated emission, and absorption. With each is associated an Einstein coefficient, which is a measure of the probability of that particular process occurring. Einstein considered the case of isotropic radiation of frequency and spectral energy density . Paul Dirac derived the coefficients in a 1927 paper titled "The Quantum Theory of the Emission and Absorption of Radiation".
Various formulations
Hilborn has compared various formulations for derivations for the Einstein coefficients, by various authors. For example, Herzberg works with irradiance and wavenumber; Yariv works with energy per unit volume per unit frequency interval, as is the case in the more recent (2008) formulation. Mihalas & Weibel-Mihalas work with radiance and frequency, as does Chandrasekhar, and Goody & Yung; Loudon uses angular frequency and radiance.
Spontaneous emission
Spontaneous emission is the process by which an electron "spontaneously" (i.e. without any outside influence) decays from a higher energy level to a lower one. The process is described by the Einstein coefficient A21 (s−1), which gives the probability per unit time that an electron in state 2 with energy E2 will decay spontaneously to state 1 with energy E1, emitting a photon with an energy E2 − E1 = hν. Due to the energy-time uncertainty principle, the transition actually produces photons within a narrow range of frequencies called the spectral linewidth. If n_i is the number density of atoms in state i, then the change in the number density of atoms in state 2 per unit time due to spontaneous emission will be

dn2/dt = −A21 n2

The same process results in an increase in the population of state 1:

dn1/dt = A21 n2
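A short numerical sketch of the resulting exponential decay follows; the A21 value, roughly that of hydrogen Lyman-alpha, is an illustrative assumption:

```python
import numpy as np

# Spontaneous emission alone gives dn2/dt = -A21*n2, so an excited
# population decays exponentially with lifetime tau = 1/A21.
A21 = 6.3e8                       # Einstein A coefficient, 1/s (assumed)
t = np.linspace(0.0, 5 / A21, 6)  # a few lifetimes
n2 = 1.0 * np.exp(-A21 * t)       # solution of dn2/dt = -A21*n2, n2(0) = 1

for ti, ni in zip(t, n2):
    print(f"t = {ti:.2e} s   n2/n2(0) = {ni:.3f}")
print(f"lifetime tau = 1/A21 = {1/A21:.2e} s")
```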
Stimulated emission
Stimulated emission (also known as induced emission) is the process by which an electron is induced to jump from a higher energy level to a lower one by the presence of electromagnetic radiation at (or near) the frequency of the transition. From the thermodynamic viewpoint, this process must be regarded as negative absorption. The process is described by the Einstein coefficient B21 (m3 J−1 s−2), which gives the probability per unit time per unit energy density of the radiation field per unit frequency that an electron in state 2 with energy E2 will decay to state 1 with energy E1, emitting a photon with an energy E2 − E1 = hν. The change in the number density of atoms in state 1 per unit time due to induced emission will be

dn1/dt = B21 n2 ρ(ν)

where ρ(ν) denotes the spectral energy density of the isotropic radiation field at the frequency of the transition (see Planck's law).
Stimulated emission is one of the fundamental processes that led to the development of the laser. Laser radiation is, however, very far from the present case of isotropic radiation.
Photon absorption
Absorption is the process by which a photon is absorbed by the atom, causing an electron to jump from a lower energy level to a higher one. The process is described by the Einstein coefficient B12 (m3 J−1 s−2), which gives the probability per unit time per unit energy density of the radiation field per unit frequency that an electron in state 1 with energy E1 will absorb a photon with an energy E2 − E1 = hν and jump to state 2 with energy E2. The change in the number density of atoms in state 1 per unit time due to absorption will be

dn1/dt = −B12 n1 ρ(ν)
Detailed balancing
The Einstein coefficients are fixed probabilities per time associated with each atom, and do not depend on the state of the gas of which the atoms are a part. Therefore, any relationship that we can derive between the coefficients at, say, thermodynamic equilibrium will be valid universally.
At thermodynamic equilibrium, we will have a simple balancing, in which the net change in the number of any excited atoms is zero, being balanced by loss and gain due to all processes. With respect to bound-bound transitions, we will have detailed balancing as well, which states that the net exchange between any two levels will be balanced. This is because the probabilities of transition cannot be affected by the presence or absence of other excited atoms. Detailed balance (valid only at equilibrium) requires that the change in time of the number of atoms in level 1 due to the above three processes be zero:

0 = A21 n2 + B21 n2 ρ(ν) − B12 n1 ρ(ν)
Along with detailed balancing, at temperature we may use our knowledge of the equilibrium energy distribution of the atoms, as stated in the Maxwell–Boltzmann distribution, and the equilibrium distribution of the photons, as stated in Planck's law of black body radiation to derive universal relationships between the Einstein coefficients.
From the Boltzmann distribution we have for the number of excited atomic species i:

n_i/n = (g_i / Z) exp(−E_i/(kT))

where n is the total number density of the atomic species, excited and unexcited, k is the Boltzmann constant, T is the temperature, g_i is the degeneracy (also called the multiplicity) of state i, and Z is the partition function. From Planck's law of black-body radiation at temperature T we have for the spectral radiance (radiance is energy per unit time per unit solid angle per unit projected area, when integrated over an appropriate spectral interval) at frequency ν
where
where c is the speed of light and h is the Planck constant.
Substituting these expressions into the equation of detailed balancing and remembering that E2 − E1 = hν yields
or
The above equation must hold at any temperature, so from one gets
and from
Therefore, the three Einstein coefficients are interrelated by
and
When this relation is inserted into the original equation, one can also find a relation between and , involving Planck's law.
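The balance can also be checked numerically. The sketch below assumes the isotropic energy-density convention for the B coefficients, under which A21 = (8πhν³/c³)B21 and g1 B12 = g2 B21 (other formulations, as noted above, rescale the coefficients); all input values are illustrative:

```python
import math

# Numerical check of detailed balance at equilibrium (energy-density
# convention assumed; nu, T, g1, g2 and B21 are arbitrary test values).
h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23

nu, T = 5.0e14, 6000.0   # transition frequency (Hz), temperature (K)
g1, g2 = 1.0, 3.0        # level degeneracies (assumed)
B21 = 1.0e20             # arbitrary units; only ratios matter

A21 = (8 * math.pi * h * nu**3 / c**3) * B21  # assumed relation A21/B21
B12 = (g2 / g1) * B21                         # assumed relation g1*B12 = g2*B21

rho = (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k * T))  # Planck
n1 = 1.0
n2 = n1 * (g2 / g1) * math.exp(-h * nu / (k * T))  # Boltzmann populations

net = A21 * n2 + B21 * n2 * rho - B12 * n1 * rho   # should vanish
print(f"net rate / absorption rate = {net / (B12 * n1 * rho):.2e}")  # ~0
```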
Oscillator strengths
The oscillator strength is defined by the following relation to the cross section for absorption:
where is the electron charge, is the electron mass, and and are normalized distribution functions in frequency and angular frequency respectively.
This allows all three Einstein coefficients to be expressed in terms of the single oscillator strength associated with the particular atomic spectral line:
Dipole approximation
The values of the A and B coefficients can be calculated using quantum mechanics, where the dipole approximation in time-dependent perturbation theory is used. While the calculation of the B coefficient can be done easily, that of the A coefficient requires using results of second quantization. This is because the theory developed by the dipole approximation and time-dependent perturbation theory gives a semiclassical description of electronic transitions, which goes to zero as the perturbing fields go to zero. The A coefficient, which governs spontaneous emission, should not go to zero as the perturbing fields go to zero. The result for the transition rates between different electronic levels as a result of spontaneous emission is given as (in SI units):
For B coefficient, straightforward application of dipole approximation in time dependent perturbation theory yields (in SI units):
Note that the transition-rate formula depends on the dipole moment operator. For higher-order approximations, it involves the quadrupole moment and other similar terms.
Here, the B coefficients are chosen to correspond to the angular frequency energy distribution function. These different definitions of the B coefficients are often distinguished by a superscript, with one term corresponding to the frequency distribution and the other to the angular frequency distribution. The formulas for the B coefficients vary inversely with the energy distribution chosen, so that the transition rate is the same regardless of convention.
Hence, the A and B coefficients are calculated using the dipole approximation as:

where the B coefficients correspond to the angular frequency energy distribution function.
Hence the following ratios are also derived:
and
Derivation of Planck's law
It follows from theory that:
where n1 and n2 are the numbers of atoms in the energy levels E1 and E2, respectively, with E2 > E1. Note that, from the application of time-dependent perturbation theory, only radiation whose frequency is close to the transition frequency (E2 − E1)/h can produce stimulated emission or absorption.
Here the Maxwell–Boltzmann distribution involving E1 and E2 ensures

n1/n2 = exp((E2 − E1)/(kT))
Solving for the spectral energy density at the equilibrium condition, using the above equations and ratios, and generalizing to all angular frequencies ω, we get:
which is the angular frequency energy distribution from Planck's law.
See also
Transition dipole moment
Oscillator strength
Breit–Wigner distribution
Electronic configuration
Fano resonance
Siegbahn notation
Atomic spectroscopy
Molecular radiation, continuous spectra emitted by molecules
References
Cited bibliography
Chandrasekhar, S. (1950). Radiative Transfer, Oxford University Press, Oxford.
Garrison, J. C., Chiao, R. Y. (2008). Quantum Optics, Oxford University Press, Oxford UK, .
Goody, R. M., Yung, Y. L. (1989). Atmospheric Radiation: Theoretical Basis, 2nd edition, Oxford University Press, Oxford, New York, 1989, .
Translated as "Quantum-theoretical Re-interpretation of kinematic and mechanical relations" in
Herzberg, G. (1950). Molecular Spectroscopy and Molecular Structure, vol. 1, Diatomic Molecules, second edition, Van Nostrand, New York.
Loudon, R. (1973/2000). The Quantum Theory of Light, (first edition 1973), third edition 2000, Oxford University Press, Oxford UK, .
Mihalas, D., Weibel-Mihalas, B. (1984). Foundations of Radiation Hydrodynamics, Oxford University Press, New York .
Yariv, A. (1967/1989). Quantum Electronics, third edition, John Wiley & sons, New York, .
Other reading
External links
Emission Spectra from various light sources
Emission spectroscopy | Einstein coefficients | [
"Physics",
"Chemistry"
] | 3,060 | [
"Emission spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
2,210,759 | https://en.wikipedia.org/wiki/Finite%20strain%20theory | In continuum mechanics, the finite strain theory—also called large strain theory, or large deformation theory—deals with deformations in which strains and/or rotations are large enough to invalidate assumptions inherent in infinitesimal strain theory. In this case, the undeformed and deformed configurations of the continuum are significantly different, requiring a clear distinction between them. This is commonly the case with elastomers, plastically deforming materials and other fluids and biological soft tissue.
Displacement field
Deformation gradient tensor
The deformation gradient tensor is related to both the reference and current configuration, as seen by the unit vectors and , therefore it is a two-point tensor.
Two types of deformation gradient tensor may be defined.
Due to the assumption of continuity of F(X, t), it has the inverse H = F^(−1), where H is the spatial deformation gradient tensor. Then, by the implicit function theorem, the Jacobian determinant must be nonzero, i.e. J(X, t) = det F(X, t) ≠ 0.
The material deformation gradient tensor is a second-order tensor that represents the gradient of the mapping function or functional relation , which describes the motion of a continuum. The material deformation gradient tensor characterizes the local deformation at a material point with position vector , i.e., deformation at neighbouring points, by transforming (linear transformation) a material line element emanating from that point from the reference configuration to the current or deformed configuration, assuming continuity in the mapping function , i.e. differentiable function of and time , which implies that cracks and voids do not open or close during the deformation. Thus we have,
Relative displacement vector
Consider a particle or material point with position vector in the undeformed configuration (Figure 2). After a displacement of the body, the new position of the particle indicated by in the new configuration is given by the vector position . The coordinate systems for the undeformed and deformed configuration can be superimposed for convenience.
Consider now a material point neighboring , with position vector . In the deformed configuration this particle has a new position given by the position vector . Assuming that the line segments and joining the particles and in both the undeformed and deformed configuration, respectively, to be very small, then we can express them as and . Thus from Figure 2 we have
where is the relative displacement vector, which represents the relative displacement of with respect to in the deformed configuration.
Taylor approximation
For an infinitesimal element , and assuming continuity on the displacement field, it is possible to use a Taylor series expansion around point , neglecting higher-order terms, to approximate the components of the relative displacement vector for the neighboring particle as
Thus, the previous equation can be written as
Time-derivative of the deformation gradient
Calculations that involve the time-dependent deformation of a body often require a time derivative of the deformation gradient to be calculated. A geometrically consistent definition of such a derivative requires an excursion into differential geometry but we avoid those issues in this article.
The time derivative of is
where is the (material) velocity. The derivative on the right hand side represents a material velocity gradient. It is common to convert that into a spatial gradient by applying the chain rule for derivatives, i.e.,
where l is the spatial velocity gradient and v(x, t) is the spatial (Eulerian) velocity at x. If the spatial velocity gradient is constant in time, the above equation can be solved exactly to give

F = exp(l t) F0

assuming F = F0 at t = 0. There are several methods of computing the exponential above.
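A sketch of this exponential solution, using SciPy's matrix exponential (assumed available) and an assumed steady simple-shear velocity gradient:

```python
import numpy as np
from scipy.linalg import expm

# For a spatial velocity gradient l constant in time, the deformation
# gradient evolves as F(t) = expm(l*t) @ F(0).  The simple-shear l
# below is an illustrative assumption.
l = np.array([[0.0, 0.5, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])   # steady simple shear, rate 0.5 /s
F0 = np.eye(3)                    # F = identity at t = 0

for t in (0.0, 1.0, 2.0):
    F = expm(l * t) @ F0
    print(f"t = {t}: F[0,1] = {F[0,1]:.2f}, det F = {np.linalg.det(F):.2f}")
# For this nilpotent l, expm(l*t) = I + l*t exactly, and det F stays 1
# (isochoric shear).
```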
Related quantities often used in continuum mechanics are the rate of deformation tensor and the spin tensor defined, respectively, as:
The rate of deformation tensor gives the rate of stretching of line elements while the spin tensor indicates the rate of rotation or vorticity of the motion.
The material time derivative of the inverse of the deformation gradient (keeping the reference configuration fixed) is often required in analyses that involve finite strains. This derivative is
The above relation can be verified by taking the material time derivative of F^(−1) F = 1 and noting that dF/dt = l F.
Polar decomposition of the deformation gradient tensor
The deformation gradient F, like any invertible second-order tensor, can be decomposed, using the polar decomposition theorem, into a product of two second-order tensors (Truesdell and Noll, 1965): an orthogonal tensor and a positive definite symmetric tensor, i.e., F = R U = V R, where the tensor R is a proper orthogonal tensor, i.e., R^(−1) = R^T and det R = +1, representing a rotation; the tensor U is the right stretch tensor; and V the left stretch tensor. The terms right and left mean that they are to the right and left of the rotation tensor R, respectively. U and V are both positive definite, i.e. x · U x > 0 and x · V x > 0 for all non-zero x, and symmetric tensors, i.e. U = U^T and V = V^T, of second order.
This decomposition implies that the deformation of a line element in the undeformed configuration onto in the deformed configuration, i.e., , may be obtained either by first stretching the element by , i.e. , followed by a rotation , i.e., ; or equivalently, by applying a rigid rotation first, i.e., , followed later by a stretching , i.e., (See Figure 3).
Due to the orthogonality of R,

V = R U R^T

so that U and V have the same eigenvalues or principal stretches, but different eigenvectors or principal directions N_i and n_i, respectively. The principal directions are related by

n_i = R N_i
This polar decomposition, which is unique as is invertible with a positive determinant, is a corollary of the singular-value decomposition.
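A numerical sketch of the polar decomposition for an assumed simple-shear deformation gradient, using SciPy (assumed available):

```python
import numpy as np
from scipy.linalg import polar

# Polar decomposition F = R U (right) and F = V R (left) for an
# illustrative simple-shear deformation gradient.
F = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

R, U = polar(F, side='right')   # F = R @ U, U symmetric positive definite
_, V = polar(F, side='left')    # F = V @ R with the same rotation R

print(np.allclose(F, R @ U), np.allclose(F, V @ R))   # True True
print(np.allclose(R @ R.T, np.eye(3)))                # R is orthogonal
print(np.allclose(np.linalg.eigvalsh(U),
                  np.linalg.eigvalsh(V)))             # same principal stretches
```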
Transformation of a surface and volume element
To transform quantities that are defined with respect to areas in a deformed configuration to those relative to areas in a reference configuration, and vice versa, we use Nanson's relation, expressed as

n da = J F^(−T) N dA

where da is an area of a region in the deformed configuration, dA is the same area in the reference configuration, and n is the outward normal to the area element in the current configuration while N is the outward normal in the reference configuration, F is the deformation gradient, and J = det F.
The corresponding formula for the transformation of the volume element is

dv = J dV
Fundamental strain tensors
A strain tensor is defined by the IUPAC as:
"A symmetric tensor that results when a deformation gradient tensor is factorized into a rotation tensor followed or preceded by a symmetric tensor".
Since a pure rotation should not induce any strains in a deformable body, it is often convenient to use rotation-independent measures of deformation in continuum mechanics. As a rotation followed by its inverse rotation leads to no change (R^T R = R R^T = 1) we can exclude the rotation by multiplying the deformation gradient tensor by its transpose.
Several rotation-independent deformation gradient tensors (or "deformation tensors", for short) are used in mechanics. In solid mechanics, the most popular of these are the right and left Cauchy–Green deformation tensors.
Cauchy strain tensor (right Cauchy–Green deformation tensor)
In 1839, George Green introduced a deformation tensor known as the right Cauchy–Green deformation tensor or Green's deformation tensor (the IUPAC recommends that this tensor be called the Cauchy strain tensor), defined as:

C = F^T F

Physically, the Cauchy–Green tensor gives us the square of local change in distances due to deformation, i.e. dx · dx = dX · C · dX
Invariants of C are often used in the expressions for strain energy density functions. The most commonly used invariants are

I1 = tr(C) = λ1^2 + λ2^2 + λ3^2
I2 = (1/2)[(tr C)^2 − tr(C·C)] = λ1^2 λ2^2 + λ2^2 λ3^2 + λ3^2 λ1^2
I3 = det(C) = J^2 = λ1^2 λ2^2 λ3^2

where J = det F is the determinant of the deformation gradient and the λ_i are stretch ratios for the unit fibers that are initially oriented along the eigenvector directions of the right (reference) stretch tensor (these are not generally aligned with the three axes of the coordinate systems).
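These invariants can be checked numerically against the squared principal stretches; the simple-shear F below is again an illustrative assumption:

```python
import numpy as np

# Principal invariants of the right Cauchy-Green tensor C = F^T F,
# computed for an assumed simple-shear deformation gradient.
F = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
C = F.T @ F

I1 = np.trace(C)
I2 = 0.5 * (np.trace(C)**2 - np.trace(C @ C))
I3 = np.linalg.det(C)            # equals (det F)**2

lam2 = np.linalg.eigvalsh(C)     # squared principal stretches
print(I1, np.sum(lam2))          # I1 = sum of squared stretches
print(I3, np.prod(lam2))         # I3 = product of squared stretches
```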
Finger strain tensor
The IUPAC recommends that the inverse of the right Cauchy–Green deformation tensor (called the Cauchy strain tensor in that document), i.e., C^(−1), be called the Finger strain tensor. However, that nomenclature is not universally accepted in applied mechanics.
Green strain tensor (left Cauchy–Green deformation tensor)
Reversing the order of multiplication in the formula for the right Cauchy-Green deformation tensor leads to the left Cauchy–Green deformation tensor which is defined as:

B = F F^T
The left Cauchy–Green deformation tensor is often called the Finger deformation tensor, named after Josef Finger (1894).
The IUPAC recommends that this tensor be called the Green strain tensor.
Invariants of B are also used in the expressions for strain energy density functions. The conventional invariants are defined as

I1 = tr(B)
I2 = (1/2)[(tr B)^2 − tr(B·B)]
I3 = det(B) = J^2

where J = det F is the determinant of the deformation gradient.

For compressible materials, a slightly different set of invariants is used:

Ī1 = J^(−2/3) I1 ; Ī2 = J^(−4/3) I2 ; J
Piola strain tensor (Cauchy deformation tensor)
Earlier in 1828, Augustin-Louis Cauchy introduced a deformation tensor defined as the inverse of the left Cauchy–Green deformation tensor, c = B^(−1). This tensor has also been called the Piola strain tensor by the IUPAC and the Finger tensor in the rheology and fluid dynamics literature.
Spectral representation
If there are three distinct principal stretches λ_i, the spectral decompositions of U and V are given by

U = Σ_i λ_i N_i ⊗ N_i    and    V = Σ_i λ_i n_i ⊗ n_i
Furthermore,
Observe that
Therefore, the uniqueness of the spectral decomposition also implies that . The left stretch () is also called the spatial stretch tensor while the right stretch () is called the material stretch tensor.
The effect of F acting on N_i is to stretch the vector by λ_i and to rotate it to the new orientation n_i, i.e.,

F N_i = λ_i (R N_i) = λ_i n_i
In a similar vein,
Examples
Uniaxial extension of an incompressible material
This is the case where a specimen is stretched in 1-direction with a stretch ratio of λ1 = λ. If the volume remains constant, the contraction in the other two directions is such that λ1 λ2 λ3 = 1, i.e. λ2 = λ3 = λ^(−1/2). Then:

F = diag(λ, λ^(−1/2), λ^(−1/2))    B = C = diag(λ^2, λ^(−1), λ^(−1))
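A small numerical check of this incompressible uniaxial case (the stretch ratio of 2 is an arbitrary illustrative choice):

```python
import numpy as np

# Uniaxial extension of an incompressible material: stretch lam along
# direction 1 with lateral stretches lam**-0.5, so det F = 1.
lam = 2.0
F = np.diag([lam, lam**-0.5, lam**-0.5])

print(np.linalg.det(F))                  # 1.0 -> volume preserved
C = F.T @ F
E = 0.5 * (C - np.eye(3))                # Green-Lagrangian strain
print(np.round(E, 3))                    # diag(1.5, -0.25, -0.25)
```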
Simple shear
Rigid body rotation
Derivatives of stretch
Derivatives of the stretch with respect to the right Cauchy–Green deformation tensor are used to derive the stress-strain relations of many solids, particularly hyperelastic materials. These derivatives are

∂λ_i/∂C = (1/(2 λ_i)) N_i ⊗ N_i ,  i = 1, 2, 3,

and follow from the observation that

N_i · C · N_i = λ_i^2
Physical interpretation of deformation tensors
Let X = (X1, X2, X3) be a Cartesian coordinate system defined on the undeformed body and let x = (x1, x2, x3) be another system defined on the deformed body. Let a curve X(s) in the undeformed body be parametrized using s ∈ [0, 1]. Its image in the deformed body is x(X(s)).
The undeformed length of the curve is given by
After deformation, the length becomes
Note that the right Cauchy–Green deformation tensor is defined as
Hence,
which indicates that changes in length are characterized by .
Finite strain tensors
The concept of strain is used to evaluate how much a given displacement differs locally from a rigid body displacement. One of such strains for large deformations is the Lagrangian finite strain tensor, also called the Green-Lagrangian strain tensor or Green–St-Venant strain tensor, defined as

E = (1/2)(C − I) = (1/2)(F^T F − I)
or as a function of the displacement gradient tensor

E = (1/2)[∇u + (∇u)^T + (∇u)^T (∇u)]

where ∇u is the material displacement gradient tensor.
The Green-Lagrangian strain tensor is a measure of how much C differs from I.
The Eulerian finite strain tensor, or Eulerian-Almansi finite strain tensor, referenced to the deformed configuration (i.e. Eulerian description) is defined as

e = (1/2)(I − B^(−1)) = (1/2)(I − (F F^T)^(−1))
or as a function of the spatial displacement gradient we have

e = (1/2)[∇u + (∇u)^T − (∇u)^T (∇u)]

where the gradient ∇u is here taken with respect to the coordinates of the deformed configuration.
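A sketch comparing the two measures for an assumed simple-shear deformation:

```python
import numpy as np

# Green-Lagrangian strain E = (F^T F - I)/2 versus Eulerian-Almansi
# strain e = (I - (F F^T)^-1)/2 for the same simple-shear deformation.
F = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
I = np.eye(3)

E = 0.5 * (F.T @ F - I)                 # referential (Lagrangian) measure
e = 0.5 * (I - np.linalg.inv(F @ F.T))  # spatial (Eulerian) measure

print(np.round(E, 3))
print(np.round(e, 3))
# Both vanish for a rigid rotation F = R, since R^T R = R R^T = I.
```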
Seth–Hill family of generalized strain tensors
B. R. Seth from the Indian Institute of Technology Kharagpur was the first to show that the Green and Almansi strain tensors are special cases of a more general strain measure. The idea was further expanded upon by Rodney Hill in 1968. The Seth–Hill family of strain measures (also called Doyle-Ericksen tensors) can be expressed as

E^(m) = (1/(2m)) (U^(2m) − I)

For different values of m we have:
the Green-Lagrangian strain tensor (m = 1)
the Biot strain tensor (m = 1/2)
the logarithmic strain, natural strain, true strain, or Hencky strain (m → 0)
the Almansi strain (m = −1)
The second-order approximation of these tensors is
where is the infinitesimal strain tensor.
Many other different definitions of strain tensors are admissible, provided that they all satisfy the conditions that:
the strain measure vanishes for all rigid-body motions
the dependence of the strain measure on the displacement gradient tensor is continuous, continuously differentiable and monotonic
it is also desired that the strain measure reduces to the infinitesimal strain tensor as the norm of the displacement gradient tensor tends to zero
An example is the set of tensors
which do not belong to the Seth–Hill class, but have the same 2nd-order approximation as the Seth–Hill measures at for any value of .
Physical interpretation of the finite strain tensor
The diagonal components of the Lagrangian finite strain tensor are related to the normal strain, e.g.
where is the normal strain or engineering strain in the direction .
The off-diagonal components of the Lagrangian finite strain tensor are related to shear strain, e.g.
where is the change in the angle between two line elements that were originally perpendicular with directions and , respectively.
Under certain circumstances, i.e. small displacements and small displacement rates, the components of the Lagrangian finite strain tensor may be approximated by the components of the infinitesimal strain tensor
Compatibility conditions
The problem of compatibility in continuum mechanics involves the determination of allowable single-valued continuous fields on bodies. These allowable conditions leave the body without unphysical gaps or overlaps after a deformation. Most such conditions apply to simply-connected bodies. Additional conditions are required for the internal boundaries of multiply connected bodies.
Compatibility of the deformation gradient
The necessary and sufficient conditions for the existence of a compatible field over a simply connected body are
Compatibility of the right Cauchy–Green deformation tensor
The necessary and sufficient conditions for the existence of a compatible field over a simply connected body are
We can show these are the mixed components of the Riemann–Christoffel curvature tensor. Therefore, the necessary conditions for -compatibility are that the Riemann–Christoffel curvature of the deformation is zero.
Compatibility of the left Cauchy–Green deformation tensor
General sufficiency conditions for the left Cauchy–Green deformation tensor in three-dimensions were derived by Amit Acharya. Compatibility conditions for two-dimensional fields were found by Janet Blume.
See also
Infinitesimal strain
Compatibility (mechanics)
Curvilinear coordinates
Piola–Kirchhoff stress tensor, the stress tensor for finite deformations.
Stress measures
Strain partitioning
References
Further reading
External links
Prof. Amit Acharya's notes on compatibility on iMechanica
Tensors
Continuum mechanics
Elasticity (physics)
Non-Newtonian fluids
Solid mechanics | Finite strain theory | [
"Physics",
"Materials_science",
"Engineering"
] | 2,919 | [
"Solid mechanics",
"Physical phenomena",
"Tensors",
"Elasticity (physics)",
"Continuum mechanics",
"Deformation (mechanics)",
"Classical mechanics",
"Mechanics",
"Physical properties"
] |
2,211,120 | https://en.wikipedia.org/wiki/Beryllium%20oxide | Beryllium oxide (BeO), also known as beryllia, is an inorganic compound with the formula BeO. This colourless solid is an electrical insulator with a higher thermal conductivity than any other non-metal except diamond, and exceeds that of most metals. As an amorphous solid, beryllium oxide is white. Its high melting point leads to its use as a refractory material. It occurs in nature as the mineral bromellite. Historically and in materials science, beryllium oxide was called glucina or glucinium oxide, owing to its sweet taste.
Preparation and chemical properties
Beryllium oxide can be prepared by calcining (roasting) beryllium carbonate, dehydrating beryllium hydroxide, or igniting metallic beryllium:
BeCO3 → BeO + CO2
Be(OH)2 → BeO + H2O
2 Be + O2 → 2 BeO
Igniting beryllium in air gives a mixture of BeO and the nitride Be3N2. Unlike the oxides formed by the other Group 2 elements (alkaline earth metals), beryllium oxide is amphoteric rather than basic.
Beryllium oxide formed at high temperatures (>800 °C) is inert, but dissolves easily in hot aqueous ammonium bifluoride (NH4HF2) or a solution of hot concentrated sulfuric acid (H2SO4) and ammonium sulfate ((NH4)2SO4).
Structure
BeO crystallizes in the hexagonal wurtzite structure, featuring tetrahedral Be2+ and O2− centres, like lonsdaleite and w-BN (with both of which it is isoelectronic). In contrast, the oxides of the larger group-2 metals, i.e., MgO, CaO, SrO, BaO, crystallize in the cubic rock salt motif with octahedral geometry about the dications and dianions. At high temperature the structure transforms to a tetragonal form.
In the vapour phase, beryllium oxide is present as discrete diatomic molecules. In the language of valence bond theory, these molecules can be described as adopting sp orbital hybridisation on both atoms, featuring one σ bond (between one sp orbital on each atom) and one π bond (between aligned p orbitals on each atom oriented perpendicular to the molecular axis). Molecular orbital theory provides a slightly different picture with no net σ bonding (because the 2s orbitals of the two atoms combine to form a filled sigma bonding orbital and a filled sigma* anti-bonding orbital) and two π bonds formed between both pairs of p orbitals oriented perpendicular to the molecular axis. The sigma orbital formed by the p orbitals aligned along the molecular axis is unfilled. The corresponding ground state is ...(2sσ)2(2sσ*)2(2pπ)4 (as in the isoelectronic C2 molecule), where both bonds can be considered as dative bonds from oxygen towards beryllium.
Applications
High-quality crystals may be grown hydrothermally, or otherwise by the Verneuil method. For the most part, beryllium oxide is produced as a white amorphous powder, sintered into larger shapes. Impurities, like carbon, can give rise to a variety of colours to the otherwise colourless host crystals.
Sintered beryllium oxide is a very stable ceramic. Beryllium oxide is used in rocket engines and as a transparent protective over-coating on aluminised telescope mirrors. Metal-coated beryllium oxide (BeO) plates are used in the control systems of aircraft drive devices.
Beryllium oxide is used in many high-performance semiconductor parts for applications such as radio equipment because it has good thermal conductivity while also being a good electrical insulator. It is used as a filler in some thermal interface materials such as thermal grease. It is also employed in heat sinks and spreaders that cool electronic devices, such as CPUs, lasers, and power amplifiers. Some power semiconductor devices have used beryllium oxide ceramic between the silicon chip and the metal mounting base of the package to achieve a lower value of thermal resistance than a similar construction of aluminium oxide. It is also used as a structural ceramic for high-performance microwave devices, vacuum tubes, cavity magnetrons, and gas lasers. BeO has been proposed as a neutron moderator for naval marine high-temperature gas-cooled reactors (MGCR), as well as NASA's Kilopower nuclear reactor for space applications.
Safety
BeO is carcinogenic in powdered form and may cause a chronic allergic-type lung disease berylliosis. Once fired into solid form, it is safe to handle if not subjected to machining that generates dust. Clean breakage releases little dust, but crushing or grinding actions can pose a risk.
References
Cited sources
External links
Beryllium Oxide MSDS from American Beryllia
IARC Monograph "Beryllium and Beryllium Compounds"
International Chemical Safety Card 1325
National Pollutant Inventory – Beryllium and compounds
NIOSH Pocket guide to Chemical Hazards
Beryllium compounds
Oxides
IARC Group 1 carcinogens
Ceramic materials
Nuclear technology
II-VI semiconductors
Wurtzite structure type | Beryllium oxide | [
"Physics",
"Chemistry",
"Engineering"
] | 1,110 | [
"Inorganic compounds",
"Semiconductor materials",
"Oxides",
"Salts",
"Nuclear technology",
"II-VI semiconductors",
"Ceramic materials",
"Nuclear physics",
"Ceramic engineering"
] |
9,427,669 | https://en.wikipedia.org/wiki/Dilution%20assay | The term dilution assay is generally used to designate a special type of bioassay in which one or more preparations (e.g. a drug) are administered to experimental units at different dose levels inducing a measurable biological response. The dose levels are prepared by dilution in a diluent that is inert in respect of the response. The experimental units can for example be cell-cultures, tissues, organs or living animals. The biological response may be quantal (e.g. positive/negative) or quantitative (e.g. growth). The goal is to relate the response to the dose, usually by interpolation techniques, and in many cases to express the potency/activity of the test preparation(s) relative to a standard of known potency/activity.
Dilution assays can be direct or indirect. In a direct dilution assay the amount of dose needed to produce a specific (fixed) response is measured, so that the dose is a stochastic variable defining the tolerance distribution. Conversely, in an indirect dilution assay the dose levels are administered at fixed dose levels, so that the response is a stochastic variable.
In some assays, there may be strong reasons for believing that all the constituents of the test preparation except one, are without any effect on the studied response of the subjects. An assay of the preparation against a standard preparation of the effective constituent, is then equivalent to an analysis for determining the content of the constituent. This may be described as analytical dilution assay.
Statistical models
For a mathematical definition of a dilution assay an observation space is defined together with a function mapping the responses to the set of real numbers. It is now assumed that a function F exists which relates the dose z to the response u,

u = F(z) + e,

in which e is an error term with expectation 0. F is usually assumed to be continuous and monotone. In situations where a standard preparation is included it is furthermore assumed that the test preparation behaves like a dilution (or concentration) of the standard
F_T(z) = F_S(ρ z), for all z
where ρ is the relative potency of the test preparation. This is the fundamental assumption of similarity of dose-response curves which is necessary for a meaningful and unambiguous definition of the relative potency. In many cases it is convenient to apply a power transformation of the dose, z ↦ z^k with k > 0, or a logarithmic transformation, z ↦ log z. The latter can be shown to be a limit case of the former, so on the log-dose scale the above equation can be redefined as
F_T(x) = F_S(x + log ρ), for all log doses x.
Estimates of F are usually restricted to be members of a well-defined parametric family of functions, for example the family of linear functions characterized by an intercept and a slope. Statistical techniques such as optimization by Maximum Likelihood can be used to calculate estimates of the parameters. Of notable importance in this respect is the theory of Generalized Linear Models with which a wide range of dilution assays can be modelled. Estimates of F may describe the data satisfactorily over the range of doses tested, but they do not necessarily have to describe it beyond that range. However, this does not mean that dissimilar curves can be restricted to an interval where they happen to be similar.
In practice, F itself is rarely of interest. More of interest is an estimate of the relative potency ρ, or an estimate of the dose that induces a specific response. These estimates involve taking ratios of statistically dependent parameter estimates. Fieller's theorem can be used to compute confidence intervals of these ratios.
Some special cases deserve particular mention because of their widespread use: if F is linear in the (power-transformed) dose, this is known as a slope-ratio model; if F is linear in the log dose, this is known as a parallel line model. Another commonly applied model is the probit model, where F is the cumulative normal distribution function and the response follows a binomial distribution.
Example: Microbiological assay of antibiotics
An antibiotic standard (shown in red) and test preparation (shown in blue) are applied at three dose levels to sensitive microorganisms on a layer of agar in petri dishes. The stronger the dose the larger the zone of inhibition of growth of the microorganisms. The biological response is in this case the zone of inhibition and the diameter of this zone can be used as the measurable response. The doses are transformed to logarithms and the method of least squares is used to fit two parallel lines to the data. The horizontal distance between the two lines (shown in green) serves as an estimate of the potency of the test preparation relative to the standard.
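A minimal sketch of such a parallel-line fit follows; the dose-response data are invented for illustration, and the model form (separate intercepts, common slope) matches the description above:

```python
import numpy as np

# Parallel-line model: y = a_S + b*x for the standard and
# y = a_S + dT + b*x for the test preparation, with common slope b.
# The log relative potency is the horizontal distance dT / b.
x = np.array([0.0, 1.0, 2.0, 0.0, 1.0, 2.0])        # log dose
is_test = np.array([0, 0, 0, 1, 1, 1])              # 0 = standard, 1 = test
y = np.array([10.1, 13.9, 18.2, 11.8, 16.1, 19.9])  # zone diameter, mm (invented)

# Design matrix: intercept, intercept shift for the test, slope.
X = np.column_stack([np.ones_like(x), is_test, x])
(a_S, dT, b), *_ = np.linalg.lstsq(X, y, rcond=None)

M = dT / b   # estimated log relative potency (horizontal shift)
print(f"common slope b = {b:.2f}, log relative potency M = {M:.2f}")
# The potency ratio is base**M for whichever log base was used for the doses.
```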
Software
The major statistical software packages do not cover dilution assays, although a statistician should have no difficulty writing suitable scripts or macros to that end. Several special purpose software packages for dilution assays exist.
References
Finney, D.J. (1971). Probit Analysis, 3rd Ed. Cambridge University Press, Cambridge.
Finney, D.J. (1978). Statistical Method in Biological Assay, 3rd Ed. Griffin, London.
Govindarajulu, Z. (2001). Statistical Techniques in Bioassay, 2nd revised and enlarged edition, Karger, New York.
External links
Software for dilution assays:
PLA
CombiStats
Unistat
BioAssay
Drug manufacturing
Drug discovery
Biostatistics | Dilution assay | [
"Chemistry",
"Biology"
] | 1,061 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
9,427,744 | https://en.wikipedia.org/wiki/Isogonal | Isogonal, a mathematical term meaning "having similar angles", may refer to:
Isogonal figure or polygon, polyhedron, polytope or tiling
Isogonal trajectory, in curve theory
Isogonal conjugate, in triangle geometry
See also
Isogonic line, in the study of Earth's magnetic field, a line of constant magnetic declination
Geometry | Isogonal | [
"Mathematics"
] | 77 | [
"Geometry"
] |
9,428,917 | https://en.wikipedia.org/wiki/Snub%20%28geometry%29 | In geometry, a snub is an operation applied to a polyhedron. The term originates from Kepler's names of two Archimedean solids, for the snub cube () and snub dodecahedron ().
In general, snubs have chiral symmetry with two forms: with clockwise or counterclockwise orientation. By Kepler's names, a snub can be seen as an expansion of a regular polyhedron: moving the faces apart, twisting them about their centers, adding new polygons centered on the original vertices, and adding pairs of triangles fitting between the original edges.
The terminology was generalized by Coxeter, with a slightly different definition, for a wider set of uniform polytopes.
Conway snubs
John Conway explored generalized polyhedron operators, defining what is now called Conway polyhedron notation, which can be applied to polyhedra and tilings. Conway calls Coxeter's operation a semi-snub.
In this notation, snub is defined by the dual and gyro operators, as s = dg, and it is equivalent to an alternation of a truncation of an ambo operator. Conway's notation itself avoids Coxeter's alternation (half) operation since it only applies for polyhedra with only even-sided faces.
In 4 dimensions, Conway suggests the snub 24-cell should be called a semi-snub 24-cell because, unlike 3-dimensional snub polyhedra, which are alternated omnitruncated forms, it is not an alternated omnitruncated 24-cell. It is instead actually an alternated truncated 24-cell.
Coxeter's snubs, regular and quasiregular
Coxeter's snub terminology is slightly different, meaning an alternated truncation, deriving the snub cube as a snub cuboctahedron, and the snub dodecahedron as a snub icosidodecahedron. This definition is used in the naming of two Johnson solids: the snub disphenoid and the snub square antiprism, and of higher dimensional polytopes, such as the 4-dimensional snub 24-cell, with extended Schläfli symbol s{3,4,3}, and Coxeter diagram .
A regular polyhedron (or tiling), with Schläfli symbol , and Coxeter diagram , has truncation defined as , and , and has snub defined as an alternated truncation , and . This alternated construction requires q to be even.
A quasiregular polyhedron, with Schläfli symbol or r{p,q}, and Coxeter diagram or , has quasiregular truncation defined as or tr{p,q}, and or , and has quasiregular snub defined as an alternated truncated rectification or htr{p,q} = sr{p,q}, and or .
For example, Kepler's snub cube is derived from the quasiregular cuboctahedron, with a vertical Schläfli symbol , and Coxeter diagram , and so is more explicitly called a snub cuboctahedron, expressed by a vertical Schläfli symbol , and Coxeter diagram . The snub cuboctahedron is the alternation of the truncated cuboctahedron, , and .
Regular polyhedra with even-order vertices can also be snubbed as alternated truncations, like the snub octahedron, as , , is the alternation of the truncated octahedron, , and . The snub octahedron represents the pseudoicosahedron, a regular icosahedron with pyritohedral symmetry.
The snub tetratetrahedron, as , and , is the alternation of the truncated tetrahedral symmetry form, , and .
Coxeter's snub operation also allows n-antiprisms to be defined as or , based on n-prisms or , while is a regular n-hosohedron, a degenerate polyhedron, but a valid tiling on the sphere with digon or lune-shaped faces.
The same process applies for snub tilings:
Examples
Nonuniform snub polyhedra
Nonuniform polyhedra with all even-valence vertices can be snubbed, including some infinite sets; for example:
Coxeter's uniform snub star-polyhedra
Snub star-polyhedra are constructed by their Schwarz triangle (p q r), with rational ordered mirror-angles, and all mirrors active and alternated.
Coxeter's higher-dimensional snubbed polytopes and honeycombs
In general, a regular polychoron with Schläfli symbol , and Coxeter diagram , has a snub with extended Schläfli symbol , and .
A rectified polychoron = r{p,q,r}, and has snub symbol = sr{p,q,r}, and .
Examples
There is only one uniform convex snub in 4-dimensions, the snub 24-cell. The regular 24-cell has Schläfli symbol, , and Coxeter diagram , and the snub 24-cell is represented by , Coxeter diagram . It also has an index 6 lower symmetry constructions as or s{31,1,1} and , and an index 3 subsymmetry as or sr{3,3,4}, and or .
The related snub 24-cell honeycomb can be seen as a or s{3,4,3,3}, and , and lower symmetry or sr{3,3,4,3} and or , and lowest symmetry form as or s{31,1,1,1} and .
A Euclidean honeycomb is an alternated hexagonal slab honeycomb, s{2,6,3}, and or sr{2,3,6}, and or sr{2,3[3]}, and .
Another Euclidean (scaliform) honeycomb is an alternated square slab honeycomb, s{2,4,4}, and or sr{2,41,1} and :
The only uniform snub hyperbolic uniform honeycomb is the snub hexagonal tiling honeycomb, as s{3,6,3} and , which can also be constructed as an alternated hexagonal tiling honeycomb, h{6,3,3}, . It is also constructed as s{3[3,3]} and .
Another hyperbolic (scaliform) honeycomb is a snub order-4 octahedral honeycomb, s{3,4,4}, and .
See also
Snub polyhedron
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, (pp. 154–156 8.6 Partial truncation, or alternation)
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, , Googlebooks
(Paper 17) Coxeter, The Evolution of Coxeter–Dynkin diagrams, [Nieuw Archief voor Wiskunde 9 (1991) 233–248]
(Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10]
(Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559–591]
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3–45]
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999, (Chapter 3: Wythoff's Construction for Uniform Polytopes)
Norman Johnson Uniform Polytopes, Manuscript (1991)
N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008,
Richard Klitzing, Snubs, alternated facetings, and Stott–Coxeter–Dynkin diagrams, Symmetry: Culture and Science, Vol. 21, No.4, 329–344, (2010)
Geometry
Snub tilings | Snub (geometry) | [
"Physics",
"Mathematics"
] | 1,857 | [
"Tessellation",
"Snub tilings",
"Geometry",
"Symmetry"
] |
9,430,536 | https://en.wikipedia.org/wiki/LIESST | In chemistry and physics, LIESST (Light-Induced Excited Spin-State Trapping) is a method of changing the electronic spin state of a compound by means of irradiation with light.
Many transition metal complexes with electronic configuration d4-d7 are capable of spin crossover (and d8 when molecular symmetry is lower than Oh). Spin crossover refers to where a transition from the high spin (HS) state to the low spin (LS) state or vice versa occurs. Alternatives to LIESST include using thermal changes and pressure to induce spin crossover. The metal most commonly exhibiting spin crossover is iron, with the first known example, an iron(III) tris(dithiocarbamato) complex, reported by Cambi et al. in 1931.
For iron complexes, LIESST involves excitation of the low spin complex with green light to a triplet state. Two successive steps of intersystem crossing result in the high spin complex. Movement from the high spin complex to the low spin complex requires excitation with red light.
References
Laboratory techniques
Coordination chemistry | LIESST | [
"Chemistry"
] | 221 | [
"Coordination chemistry",
"nan"
] |
9,430,598 | https://en.wikipedia.org/wiki/Glycoazodyes | Glycoazodyes (or GADs) are a family of "naturalised" synthetic dyes, so called because they are the conjugation of common commercial azo dyes with sugar through a "linker". This principle is summarised in the scheme below.
Generations, Structure, and Synthesis
First-generation
The first-generation of Glycoazodyes was first reported in 2007. These Glycoazodyes use a diester linker, specifically a succinyl bridge. An ester group bonds the sugar to an n-alkane spacer, and the spacer bonds to the dye through another ester group.
Synthesis
First-generation Glycoazodyes are synthesized using glucose, galactose or lactose as the sugar group. The point of esterification is controlled by selectively protecting alcohol groups on the sugar, or by choosing an azo dye with a different alcohol group position. The dye or the sugar group can be succinylated by reacting a free alcohol group with succinic anhydride. The resulting hemisuccinate then reacts with a free alcohol group on the dye or the sugar. The condensation product is then deprotected.
Second-generation
The second-generation of Glycoazodyes was first reported in 2008. These Glycoazodyes use a diether linker. An ether group bonds the sugar to an n-alkane spacer, and the spacer bonds to the dye through another ether group. Like first-generation Glycoazodyes, second-generation Glycoazodyes use glucose, galactose or lactose as the sugar group.
Synthesis
Like first-generation Glycoazodyes, second-generation Glycoazodyes are synthesized using a glucose, galactose, or lactose sugar group. The point of the ether bond is controlled by selectively protecting alcohol groups on the sugar, or by choosing an azo dye with a different alcohol group position. An unprotected alcohol group of either the sugar or the dye is reacted with an n-carbon, terminal dibromoalkane in a solution of potassium hydroxide and 18-crown-6 ether, using non-anhydrous tetrahydrofuran as the solvent. The potassium hydroxide produces an alkoxide ion from the alcohol while the 18-crown-6 ether acts as a phase-transfer agent. The reaction proceeds through a classic SN2 nucleophilic substitution: a terminal bromo group is displaced, and a bond is formed between the oxygen of the alcohol and the carbon of the alkane. An ether is produced between the n-carbon linker and the sugar or the dye. At this stage, the remaining terminal bromo group may react under the same conditions with the free alcohol of the corresponding sugar or dye. The condensation product is then deprotected.
Third-generation
The third-generation of Glycoazodyes was first reported in 2015. These Glycoazodyes use an amido-ester linker. An amide group bonds the sugar to an n-alkane spacer, and the spacer is bonded to the dye through an ester group.
Synthesis
Third-generation Glycoazodyes are synthesized using amino sugars such as 6-amino-6-deoxy-D-galactose or 6'-amino-6'-deoxylactose. The point of the amide bond is controlled by protecting the alcohol groups on the sugar and allowing the free amine to react. The point of the ester group is controlled by choosing an azo dye with a different alcohol group position. Either the dye or the sugar is reacted with succinic anhydride, forming an amide group with the sugar or an ester group with the dye. The free carboxylic acid may then react with the alcohol group or amine group on the corresponding dye or sugar. The condensation product is then deprotected.
Properties
A variety of fabrics such as wool, silk, nylon, polyester, polyacrylic, polyacetate, and polyurethane may be dyed with Glycoazodyes under moderate temperatures and pressures in aqueous solutions. First-generation Glycoazodyes dye cotton poorly. However, second-generation Glycoazodyes dye cotton effectively. Wool dyed with Glycoazodyes shows good fastness when exposed to the ISO 105-C06 washing and ISO 105 X12 rubbing tests.
Glycoazodyes vary in their water solubility: some dissolve in cold to warm water immediately upon addition, while others require stirring.
Minor variations in the absorption spectra occur when Glycoazodye solutions are prepared with water, acetone, or methanol as the solvent. Converting a parent azo dye to a Glycoazodye may produce a small hypsochromic shift in the absorption spectrum.
Environmental impact
Several properties may make Glycoazodyes an environmentally friendly alternative to traditional synthetic dyes. The increased hydrophilicity of Glycoazodyes allows surfactants, mordants, and salts to be eliminated from the dyeing process and permits the aqueous dyeing of a variety of textiles at moderate temperatures and pressures. The unique structure may also allow textile effluent to be treated by biological means. Fusarium oxysporum efficiently decolourises the first-generation Glycoazodye 4-{N,N-Bis[2-(D-galactopyranos-6-yloxy)ethyl]-amino}azobenzene, and various other Ascomycota fungi show a similar, though lesser, ability to decolourise Glycoazodyes. Detoxification, measured with the Daphnia magna acute toxicity test, reached 92% after 6 days. This detoxification method produces low concentrations of nitrobenzene, aniline, and nitrosobenzene.
External links
http://onlinelibrary.wiley.com/doi/10.1002/ejoc.200600686/abstract
References
Dyes
Azo compounds
Organic pigments
Carbohydrate chemistry
Carbohydrates | Glycoazodyes | [
"Chemistry"
] | 1,347 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Organic compounds",
"Carbohydrate chemistry",
"nan",
"Chemical synthesis",
"Glycobiology"
] |
9,431,554 | https://en.wikipedia.org/wiki/Preprophase%20band | The preprophase band is a microtubule array found in plant cells that are about to undergo cell division and enter the preprophase stage of the plant cell cycle. Besides the phragmosome, it is the first microscopically visible sign that a plant cell is about to enter mitosis. The preprophase band was first observed and described by Jeremy Pickett-Heaps and Donald Northcote at Cambridge University in 1966.
Just before mitosis starts, the preprophase band forms as a dense band of microtubules around the phragmosome and the future division plane just below the plasma membrane. It encircles the nucleus at the equatorial plane of the future mitotic spindle when dividing cells enter the G2 phase of the cell cycle after DNA replication is complete. The preprophase band consists mainly of microtubules and microfilaments (actin) and is generally 2–3 μm wide. When stained with fluorescent markers, it can be seen as two bright spots close to the cell wall on either side of the nucleus.
Plant cells lack centrosomes as microtubule organizing centers; instead, the microtubules of the mitotic spindle aggregate on the nuclear surface and are reorganized into the spindle at the end of prophase. The preprophase band also functions in properly orienting the mitotic spindle and contributes to efficient spindle formation during prometaphase.
The preprophase band disappears as soon as the nuclear envelope breaks down and the mitotic spindle forms, leaving behind an actin-depleted zone. However, its position marks the future fusion sites for the new cell plate with the existing cell wall during telophase. When mitosis is completed, the cell plate and new cell wall form starting from the center along the plane occupied by the phragmosome. The cell plate grows outwards until it fuses with the cell wall of the dividing cell at exactly the spots predicted by the position of the preprophase band.
Bibliography
P.H. Raven, R.F. Evert, S.E. Eichhorn (2005): Biology of Plants, 7th Edition, W.H. Freeman and Company Publishers, New York, NY.
L. Taiz, E. Zeiger (2006): Plant Physiology, 4th Edition, Sinauer Associates, Inc., Publishers, Sunderland, MA.
Notes and references
Cell cycle
Mitosis
Plant cells | Preprophase band | [
"Biology"
] | 509 | [
"Cell cycle",
"Cellular processes",
"Mitosis"
] |
9,431,918 | https://en.wikipedia.org/wiki/Phosphatidylethanolamine | Phosphatidylethanolamine (PE) is a class of phospholipids found in biological membranes. They are synthesized by the addition of cytidine diphosphate-ethanolamine to diglycerides, releasing cytidine monophosphate. S-Adenosyl methionine can subsequently methylate the amine of phosphatidylethanolamines to yield phosphatidylcholines.
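The two reactions described above can be sketched in shorthand (DAG = diacylglycerol, SAM = S-adenosyl methionine, SAH = S-adenosyl homocysteine; the three successive methylations are shown as a single net step):

CDP-ethanolamine + DAG → phosphatidylethanolamine + CMP
phosphatidylethanolamine + 3 SAM → phosphatidylcholine + 3 SAH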
Function
In cells
Phosphatidylethanolamines are found in all living cells, making up 25% of all phospholipids. In human physiology, they are found particularly in nervous tissue such as the white matter of the brain, nerves, neural tissue, and the spinal cord, where they make up 45% of all phospholipids.
Phosphatidylethanolamines play a role in membrane fusion and in disassembly of the contractile ring during cytokinesis in cell division. Additionally, it is thought that phosphatidylethanolamine regulates membrane curvature. Phosphatidylethanolamine is an important precursor, substrate, or donor in several biological pathways.
As a polar head group, phosphatidylethanolamine creates a more viscous lipid membrane than phosphatidylcholine. For example, the melting temperature of di-oleoyl-phosphatidylethanolamine is −16 °C, while that of di-oleoyl-phosphatidylcholine is −20 °C. With two palmitoyl chains, phosphatidylethanolamine melts at 63 °C, whereas phosphatidylcholine melts at only 41 °C. Lower melting temperatures correspond, in a simplistic view, to more fluid membranes.
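Isolating the head-group effect from these figures: −16 °C − (−20 °C) = 4 °C for the di-oleoyl pair and 63 °C − 41 °C = 22 °C for the di-palmitoyl pair, so replacing choline with ethanolamine raises the melting temperature in both cases, and markedly so for the saturated chains.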
In humans
In humans, metabolism of phosphatidylethanolamine is thought to be important in the heart. When blood flow to the heart is restricted, the asymmetrical distribution of phosphatidylethanolamine between membrane leaflets is disrupted, and as a result the membrane is disrupted. Additionally, phosphatidylethanolamine plays a role in the secretion of lipoproteins in the liver: vesicles for secretion of very low-density lipoproteins coming off the Golgi apparatus have a significantly higher phosphatidylethanolamine concentration than other vesicles containing very low-density lipoproteins. Phosphatidylethanolamine has also been shown to propagate infectious prions without the assistance of any proteins or nucleic acids, a unique characteristic. Phosphatidylethanolamine is also thought to play a role in blood clotting, as it works with phosphatidylserine to increase the rate of thrombin formation by promoting binding to factor V and factor X, two proteins which catalyze the formation of thrombin from prothrombin. The synthesis of the endocannabinoid anandamide proceeds from phosphatidylethanolamine through the successive action of two enzymes, N-acyltransferase and phospholipase D.
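That two-step anandamide route can be sketched as follows, assuming the conventional intermediates (NAPE = N-acyl-phosphatidylethanolamine, PA = phosphatidic acid), which the source does not name explicitly:

phosphatidylethanolamine + acyl donor → NAPE (N-acyltransferase)
NAPE → anandamide + PA (phospholipase D)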
In bacteria
Where phosphatidylcholine is the principal phospholipid in animals, phosphatidylethanolamine is the principal one in bacteria. One of the primary roles of phosphatidylethanolamine in bacterial membranes is to spread out the negative charge caused by anionic membrane phospholipids. In the bacterium E. coli, phosphatidylethanolamine plays a role in supporting the active transport of lactose into the cell by lactose permease, and may play a role in other transport systems as well. Phosphatidylethanolamine also plays a role in the assembly of lactose permease and other membrane proteins: it acts as a 'chaperone' to help the membrane proteins correctly fold their tertiary structures so that they can function properly. When phosphatidylethanolamine is not present, the transport proteins fold into incorrect tertiary structures and do not function correctly.
Phosphatidylethanolamine also enables bacterial multidrug transporters to function properly and allows the formation of intermediates that are needed for the transporters to properly open and close.
Structure
Like lecithin, phosphatidylethanolamine consists of glycerol esterified with two fatty acids and phosphoric acid. Whereas the phosphate group is combined with choline in phosphatidylcholine, it is combined with ethanolamine in phosphatidylethanolamine. The two fatty acids may be identical or different, and are usually found at positions 1 and 2 (less commonly at positions 1 and 3).
Synthesis
Phosphatidylethanolamine is synthesized by two routes: the phosphatidylserine decarboxylation pathway and the cytidine diphosphate-ethanolamine pathway. In the first, phosphatidylserine decarboxylase decarboxylates phosphatidylserine; this pathway is the main source of phosphatidylethanolamine in the membranes of the mitochondria, and phosphatidylethanolamine produced there is also transported throughout the cell to other membranes. In a process that mirrors phosphatidylcholine synthesis, phosphatidylethanolamine is also made via the cytidine diphosphate-ethanolamine pathway, using ethanolamine as the substrate; through several steps taking place in both the cytosol and the endoplasmic reticulum, this pathway yields phosphatidylethanolamine as its end product. Phosphatidylethanolamine is also found abundantly in soy or egg lecithin and is produced commercially using chromatographic separation.
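A sketch of the cytidine diphosphate-ethanolamine (Kennedy) pathway steps, assuming the textbook enzyme sequence, which the source does not spell out (DAG = diacylglycerol):

ethanolamine + ATP → phosphoethanolamine + ADP (ethanolamine kinase, cytosol)
phosphoethanolamine + CTP → CDP-ethanolamine + PPi (CTP:phosphoethanolamine cytidylyltransferase)
CDP-ethanolamine + DAG → phosphatidylethanolamine + CMP (ethanolaminephosphotransferase, endoplasmic reticulum)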
Regulation
Synthesis of phosphatidylethanolamine through the phosphatidylserine decarboxylation pathway occurs rapidly in the inner mitochondrial membrane. However, phosphatidylserine is made in the endoplasmic reticulum. Because of this, the transport of phosphatidylserine from the endoplasmic reticulum to the mitochondrial membrane and then to the inner mitochondrial membrane limits the rate of synthesis via this pathway. The mechanism of this transport is currently unknown, but it may play a role in regulating the rate of synthesis in this pathway.
Presence in food, health issues
Phosphatidylethanolamines in food break down to form phosphatidylethanolamine-linked Amadori products as part of the Maillard reaction. These products accelerate membrane lipid peroxidation, causing oxidative stress to cells that come in contact with them; oxidative stress is known to cause food deterioration and several diseases. Significant levels of Amadori-phosphatidylethanolamine products have been found in a wide variety of foods such as chocolate, soybean milk, infant formula, and other processed foods. The levels are highest in foods with high lipid and sugar concentrations that are processed at high temperatures. Additional studies have found that Amadori-phosphatidylethanolamine may play a role in vascular disease, may act as the mechanism by which diabetes increases the incidence of cancer, and may potentially play a role in other diseases as well. Amadori-phosphatidylethanolamine has a higher plasma concentration in diabetes patients than in healthy people, indicating that it may play a role in the development of the disease or be a product of the disease.
See also
N-Acylphosphatidylethanolamine
Phosphatidyl ethanolamine methyltransferase
References
External links
Phosphatidylethanolamine at the AOCS Lipid Library.
Cholinergics
Phospholipids
Membrane biology
Phosphatidylethanolamines | Phosphatidylethanolamine | [
"Chemistry"
] | 1,770 | [
"Phospholipids",
"Molecular biology",
"Membrane biology",
"Signal transduction"
] |
9,431,966 | https://en.wikipedia.org/wiki/AFm%20phases | An AFm phase is an "alumina, ferric oxide, monosubstituted" phase, or aluminate ferrite monosubstituted, or , mono, in cement chemist notation (CCN). AFm phases are important hydration products in the hydration of Portland cements and hydraulic cements.
They are crystalline hydrates with the generic, simplified formula 3CaO·(Al,Fe)₂O₃·CaX_y·nH₂O, where:
CaO, Al₂O₃, and Fe₂O₃ represent calcium oxide, aluminium oxide, and ferric oxide, respectively;
CaX represents a calcium salt, where X replaces an oxide ion;
X is the substituted anion in CaX_y: divalent (SO₄²⁻, CO₃²⁻, …) with y = 1, or monovalent (OH⁻, Cl⁻, …) with y = 2;
n represents the number of water molecules in the hydrate and typically ranges between 13 and 19.
AFm phases form, inter alia, when tricalcium aluminate (C₃A in CCN) reacts with dissolved calcium sulfate (CaSO₄) or calcium carbonate (CaCO₃). As the sulfate form is the dominant one among the AFm phases in the hardened cement paste (HCP) in concrete, AFm is often simply referred to as aluminate ferrite monosulfate or calcium aluminate monosulfate. However, carbonate-AFm phases also exist (monocarbonate and hemicarbonate) and are thermodynamically more stable than the sulfate-AFm phase. During concrete carbonation by atmospheric CO₂, the sulfate-AFm phase is also slowly transformed into carbonate-AFm phases.
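A sketch of the sulfate-AFm formation in cement chemist notation (C = CaO, A = Al₂O₃, S̄ = SO₃, H = H₂O), using the standard monosulfate stoichiometry, which the source does not give explicitly:

C₃A + CS̄H₂ + 10 H → C₄AS̄H₁₂

i.e. tricalcium aluminate plus gypsum plus water yields calcium monosulfoaluminate hydrate.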
Different AFm phases
AFm phases belong to the class of layered double hydroxides (LDH). LDHs are hydroxides with a double-layer structure. The main cation is divalent (M²⁺) and its electrical charge is compensated by two hydroxide anions (OH⁻). Some M²⁺ cations are replaced by a trivalent one (M³⁺). This creates an excess of positive electrical charge which needs to be compensated by the same number of negative electrical charges borne by anions. These anions are located in the space between adjacent hydroxide layers. The interlayers in LDHs are also occupied by water molecules accompanying the anions that counterbalance the excess of positive charges created by the cation isomorphic substitution in the hydroxide sheets.
In the most studied class of LDHs, the positive layer, consisting of divalent and trivalent cations, can be represented by the generic formula:

[M²⁺₁₋ₓ M³⁺ₓ (OH)₂]ˣ⁺ [(Xⁿ⁻)ₓ/ₙ · y H₂O]ˣ⁻

where Xⁿ⁻ is the intercalating anion.
In AFm, the divalent cation is a calcium ion (Ca²⁺), while the substituting trivalent cation is an aluminium ion (Al³⁺). The nature of the counterbalancing anion can be very diverse: OH⁻, SO₄²⁻, CO₃²⁻, Cl⁻, NO₃⁻, NO₂⁻. The thickness of the interlayer is sufficient to host a variety of relatively large anions often present as impurities. Like other LDHs, AFm phases can incorporate into their structure toxic elements such as boron and selenium. Some AFm phases are presented in the table below as a function of the nature of the anion counterbalancing the excess of positive charges in the hydroxide sheets. As in portlandite (Ca(OH)₂), the hydroxide sheets of AFm are made of hexa-coordinated octahedral cations located in a same plane, but due to the excess of positive electrical charges, the hydroxide sheets are distorted.
To convert the oxide notation into the LDH formula, the mass balance in the system has to respect the principle of the conservation of matter. Oxide ions (O²⁻) and water are transformed into two hydroxide anions (OH⁻) according to the acid–base reaction between H₂O and O²⁻ (a strong base), as typically exemplified by the quicklime (CaO) slaking process:

CaO + H₂O → Ca(OH)₂,

or simply,

O²⁻ + H₂O → 2 OH⁻
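As a worked example of this conversion (a sketch based on the generic oxide formula above, with a monovalent anion X): the six oxide ions of 3CaO·Al₂O₃ consume six water molecules to form the twelve hydroxyls of the sheets, so that

3CaO·Al₂O₃·CaX₂·nH₂O = 2 [Ca₂Al(OH)₆]X·((n − 6)/2) H₂O,

leaving n − 6 water molecules distributed over the interlayers.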
AFm structure
AFm phases encompass a class of calcium aluminate hydrates (C-A-H) whose structure derives from that of hydrocalumite, [Ca₂Al(OH)₆]⁺(Cl,OH)⁻·3H₂O, in which the interlayer anions are partly replaced by other anions such as SO₄²⁻ or CO₃²⁻. The different mineral phases resulting from these anionic substitutions do not easily form solid solutions but behave as independent phases. The replacement of hydroxide ions by sulfate ions is limited. So, AFm does not refer to a single pure mineralogical phase but rather to a mix of several AFm phases co-existing in hydrated cement paste (HCP).
Considering a monovalent anion X, the chemical formula can be rearranged and expressed as [Ca₂Al(OH)₆]X·nH₂O (or, in oxide notation, as presented in the table in the former section). The octahedral cations are located in a plane, as for the calcium or magnesium hydroxides in the hexagonal sheets of portlandite or brucite respectively. The replacement of one divalent Ca²⁺ cation by a trivalent Al³⁺ cation, or to a lesser extent by an Fe³⁺ cation, with a Ca:Al ratio of 2:1 (one Al substituted for every three cations), causes an excess of positive charge in the sheet, which has to be compensated by the negative charges of the X⁻ anions. The X⁻ anions counterbalancing the positive charge imbalance borne by the sheet are located in the interlayer, whose spacing is much larger than in the layered structure of brucite or portlandite. This allows the AFm structure to accommodate larger anionic species along with water molecules.
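The charge balance can be checked directly (a sketch): with Ca:Al = 2:1 the substitution fraction in the generic LDH formula is x = 1/3, so each M(OH)₂ unit carries an average charge of +1/3, and three units together give [Ca₂Al(OH)₆]⁺ with charge +1. Each such unit therefore requires one monovalent X⁻ (hence y = 2 in the oxide formula, which contains two such units), while one divalent X²⁻ balances two units at once (hence y = 1), consistent with the definitions given earlier.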
The crystal structure of AFm phases is that of layered double hydroxides (LDH), and AFm phases also exhibit the same anion-exchange properties. The carbonate anion (CO₃²⁻) occupies the interlayer space in a privileged way, with the highest selectivity coefficient, and is more strongly retained in the interlayer than other divalent or monovalent anions such as SO₄²⁻ or Cl⁻.
According to Miyata (1983), the equilibrium constant (selectivity coefficient) for anion exchange varies in the order CO₃²⁻ > SO₄²⁻ for divalent anions, and OH⁻ > F⁻ > Cl⁻ > Br⁻ > NO₃⁻ > I⁻ for monovalent anions, but this order is not universal and varies with the nature of the LDH.
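For illustration, such a selectivity coefficient can be defined as follows (notation assumed here, not from the source), for the exchange of an interlayer anion A⁻ against a solution anion B⁻:

LDH–A + B⁻ ⇌ LDH–B + A⁻, with K = (x_B [A⁻]) / (x_A [B⁻]),

where x_A and x_B are the equivalent fractions of the two anions in the interlayer and the bracketed terms are solution concentrations; K > 1 indicates that B⁻ is preferentially taken up.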
Thermodynamic stability
The thermodynamic stability of AFm phases, studied at 25 °C, depends on the nature of the anion present in the interlayer: the carbonate anion stabilises AFm and displaces the other anions at the concentrations typically found in hardened cement paste (HCP). Different sources of carbonate can contribute to the carbonation of AFm phases: addition of finely ground limestone filler, atmospheric CO₂, carbonate present as an impurity in the gypsum interground with the clinker to avoid cement flash setting, and "alkali sulfates" condensed onto the clinker during its cooling, or from added clinker kiln dust. Carbonation can rapidly occur within the fresh concrete during its setting and hardening (internal carbonate sources), or slowly continue in the long term in the hardened cement paste of concrete exposed to external sources of carbonate: CO₂ from the air, or the bicarbonate anion (HCO₃⁻) present in groundwater (immersed structures) or clay porewater (foundations and underground structures).
When the carbonate concentration increases in the hardened cement paste (HCP), hydroxy-AFm phases are progressively replaced, first by hemicarboaluminate and then by monocarboaluminate. The stability of AFm phases increases with their carbonate content, as shown by Damidot and Glasser (1995) by means of their thermodynamic calculations of the CaO–Al₂O₃–CaSO₄–CaCO₃–H₂O system at 25 °C.
When carbonate displaces sulfate from AFm, the sulfate released into the concrete pore water may react with portlandite (Ca(OH)₂) to form ettringite (3CaO·Al₂O₃·3CaSO₄·32H₂O), the main AFt phase present in the hydrated cement system.
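One commonly written route for this sulfate re-precipitation, in cement chemist notation (same abbreviations as above; the stoichiometry is the standard monosulfate-to-ettringite conversion and is not given in the source):

C₄AS̄H₁₂ + 2 CS̄H₂ + 16 H → C₆AS̄₃H₃₂

i.e. monosulfate plus additional gypsum plus water yields ettringite.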
As stressed by Matschei et al. (2007), the impact of small amounts of carbonate on the nature and stability of the AFm phases is noteworthy. Divet (2000) also notes that micromolar amounts of carbonate can inhibit the formation of sulfate-AFm, thus favoring the crystallisation of ettringite (sulfate-AFt).
See also
AFt phases
Concrete degradation#Chloride attack
Layered double hydroxides (LDH)
Friedel's salt
Ettringite (AFt)
Pitting corrosion of rebar induced by chloride attack
References
Further reading
Aluminium compounds
Cement
Concrete
Hydrates
Iron compounds
Iron(III) compounds
Silicates
Sulfate minerals
Sulfates
Carbonate minerals | AFm phases | [
"Chemistry",
"Engineering"
] | 1,721 | [
"Structural engineering",
"Sulfates",
"Hydrates",
"Salts",
"Concrete"
] |