id int64 (39–79M) | url string (lengths 32–168) | text string (lengths 7–145k) | source string (lengths 2–105) | categories list (lengths 1–6) | token_count int64 (3–32.2k) | subcategories list (lengths 0–27) |
|---|---|---|---|---|---|---|
5,380,260 | https://en.wikipedia.org/wiki/P700 | P700, or photosystem I primary donor, is a molecular dimer of chlorophyll a associated with the reaction-center of photosystem I in plants, algae, and cyanobacteria.
Etymology
Its name is derived from the word “pigment” (P) and the presence of a major bleaching band centered around 695–700 nm in the flash-induced absorbance difference spectra of P700/P700+•.
Components
The structure of P700 consists of a heterodimer of two distinct chlorophyll molecules, chlorophyll a and chlorophyll a′, giving it the additional name of “special pair”. Functionally, however, the special pair of P700 behaves as if it were a single unit. This species is vital due to its ability to absorb light energy at wavelengths between approximately 430 nm and 700 nm, and to transfer high-energy electrons to a series of nearby acceptors, such as the Fe-S clusters and ferredoxin (Fd), which have a higher redox potential, i.e., a greater affinity for electrons.
Action and functions
Photosystem I serves to produce NADPH, the reduced form of NADP+ (2 Fd(red) + NADP+ + H+ → 2 Fd(ox) + NADPH), at the end of the photosynthetic electron-transfer chain, and to provide energy to a proton pump and eventually ATP synthesis, for instance in cyclic electron transport.
Excitation
When photosystem I absorbs light, an electron is excited to a higher energy level in the P700 chlorophyll. The resulting P700 with an excited electron is designated P700*, a strong reducing agent with a very negative redox potential of −1.2 V.
Electron transport chain
Following the excitation of P700, one of its electrons is passed on to the primary electron acceptor A0, triggering charge separation and producing an anionic A0− and a cationic P700+. Subsequently, electron transfer continues from A0 to a phylloquinone molecule known as A1, and then to three iron-sulfur clusters.
Type I photosystems use iron-sulfur cluster proteins as terminal electron acceptors. Thus, the electron is transferred from the cluster FX to another iron-sulfur cluster, FA, and then passed on to the last iron-sulfur cluster serving as an electron acceptor, FB. Eventually, the electron is transferred to the protein ferredoxin, transforming it into its reduced form, which subsequently finalizes the process by reducing NADP+ to NADPH.
Linear electron transport
The rate of electrons being passed from P700* to the subsequent electron acceptors is high, preventing the electron from being transferred back to P700. Consequently, in most cases, the electrons transferring within photosystem I follow a linear pathway, from the excitation of the P700 special pair to the production of NADPH.
Cyclic electron transport
In certain situations, it is vital for the photosynthetic organism to recycle the electrons being transferred, in which case the electron from the terminal iron-sulfur cluster FB is transferred back to the cytochrome b6f complex (the adaptor between photosystems II and I). Utilizing the energy of P700, this cyclic pathway creates a proton gradient useful for the production of ATP, while no NADPH is produced, since no electrons are ultimately delivered to NADP+.
Recovery of P700
P700+ recovers the lost electron by oxidizing plastocyanin, regenerating P700.
See also
P680
Photosystem I
Photosystem II
References
Photosynthesis
Light reactions | P700 | [
"Chemistry",
"Biology"
] | 792 | [
"Biochemistry",
"Light reactions",
"Photosynthesis",
"Biochemical reactions"
] |
5,381,096 | https://en.wikipedia.org/wiki/Complex%20reflection%20group | In mathematics, a complex reflection group is a finite group acting on a finite-dimensional complex vector space that is generated by complex reflections: non-trivial elements that fix a complex hyperplane pointwise.
Complex reflection groups arise in the study of the invariant theory of polynomial rings. In the mid-20th century, they were completely classified in work of Shephard and Todd. Special cases include the symmetric group of permutations, the dihedral groups, and more generally all finite real reflection groups (the Coxeter groups or Weyl groups, including the symmetry groups of regular polyhedra).
Definition
A (complex) reflection r (sometimes also called pseudo reflection or unitary reflection) of a finite-dimensional complex vector space V is an element of finite order that fixes a complex hyperplane pointwise, that is, the fixed-space has codimension 1.
A (finite) complex reflection group is a finite subgroup of the general linear group GL(V) that is generated by reflections.
Properties
Any real reflection group becomes a complex reflection group if we extend the scalars from R to C. In particular, all finite Coxeter groups or Weyl groups give examples of complex reflection groups.
A complex reflection group W is irreducible if the only W-invariant proper subspace of the corresponding vector space is the origin. In this case, the dimension of the vector space is called the rank of W.
The Coxeter number of an irreducible complex reflection group W of rank n is defined as h = (|R| + |A|)/n, where R denotes the set of reflections and A denotes the set of reflecting hyperplanes.
In the case of real reflection groups, this definition reduces to the usual definition of the Coxeter number for finite Coxeter systems.
Classification
Any complex reflection group is a product of irreducible complex reflection groups, acting on the sum of the corresponding vector spaces. So it is sufficient to classify the irreducible complex reflection groups.
The irreducible complex reflection groups were classified by Shephard and Todd (1954). They proved that every irreducible complex reflection group belonged to an infinite family G(m, p, n) depending on 3 positive integer parameters (with p dividing m) or was one of 34 exceptional cases, which they numbered from 4 to 37. The group G(m, 1, n) is the generalized symmetric group; equivalently, it is the wreath product of the symmetric group Sym(n) by a cyclic group of order m. As a matrix group, its elements may be realized as monomial matrices whose nonzero elements are mth roots of unity.
The group G(m, p, n) is an index-p subgroup of G(m, 1, n), of order m^n·n!/p. As matrices, it may be realized as the subset in which the product of the nonzero entries is an (m/p)th root of unity (rather than just an mth root). Algebraically, G(m, p, n) is a semidirect product of an abelian group of order m^n/p by the symmetric group Sym(n); the elements of the abelian group are of the form (θ^{a_1}, θ^{a_2}, ..., θ^{a_n}), where θ is a primitive mth root of unity and Σa_i ≡ 0 mod p, and Sym(n) acts by permutations of the coordinates.
The group G(m, p, n) acts irreducibly on C^n except in the cases m = 1, n > 1 (the symmetric group) and G(2, 2, 2) (the Klein four-group). In these cases, C^n splits as a sum of irreducible representations of dimensions 1 and n − 1.
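For concreteness, the following is a minimal sketch (an illustration, not part of the classification literature; all function names are ad hoc) that enumerates G(m, p, n) as explicit monomial matrices for small parameters and checks the order formula m^n·n!/p.

```python
import math
import cmath
from itertools import permutations, product

def G(m, p, n):
    """Enumerate G(m, p, n) as n-by-n monomial matrices whose nonzero entries
    are m-th roots of unity and whose entry product is an (m/p)-th root of
    unity (equivalently, the exponent sum is divisible by p)."""
    omega = cmath.exp(2j * cmath.pi / m)
    elements = []
    for perm in permutations(range(n)):
        for exps in product(range(m), repeat=n):
            if sum(exps) % p != 0:
                continue  # keep only the index-p subgroup of G(m, 1, n)
            # monomial matrix with entry omega**exps[i] in row perm[i], column i
            mat = tuple(
                tuple(omega ** exps[i] if r == perm[i] else 0.0 for i in range(n))
                for r in range(n)
            )
            elements.append(mat)
    return elements

for (m, p, n) in [(1, 1, 3), (2, 1, 2), (4, 2, 2), (3, 3, 3)]:
    expected = m ** n * math.factorial(n) // p
    print((m, p, n), len(G(m, p, n)), expected)  # the two counts should agree
```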
Special cases of G(m, p, n)
Coxeter groups
When m = 2, the representation described in the previous section consists of matrices with real entries, and hence in these cases G(m,p,n) is a finite Coxeter group. In particular:
G(1, 1, n) has type An−1 = [3,3,...,3,3]; it is the symmetric group of order n!
G(2, 1, n) has type Bn = [3,3,...,3,4]; it is the hyperoctahedral group of order 2^n·n!
G(2, 2, n) has type Dn = [3,3,...,3^{1,1}], of order 2^n·n!/2.
In addition, when m = p and n = 2, the group G(p, p, 2) is the dihedral group of order 2p; as a Coxeter group, it has type I2(p) = [p] (and it is the Weyl group G2 when p = 6).
Other special cases and coincidences
The only cases when two groups G(m, p, n) are isomorphic as complex reflection groups are that G(ma, pa, 1) is isomorphic to G(mb, pb, 1) for any positive integers a, b (and both are isomorphic to the cyclic group of order m/p). However, there are other cases when two such groups are isomorphic as abstract groups.
The groups G(3, 3, 2) and G(1, 1, 3) are isomorphic to the symmetric group Sym(3). The groups G(2, 2, 3) and G(1, 1, 4) are isomorphic to the symmetric group Sym(4). Both G(2, 1, 2) and G(4, 4, 2) are isomorphic to the dihedral group of order 8. And the groups G(2p, p, 1) are cyclic of order 2, as is G(1, 1, 2).
List of irreducible complex reflection groups
There are a few duplicates in the first 3 lines of this list; see the previous section for details.
ST is the Shephard–Todd number of the reflection group.
Rank is the dimension of the complex vector space the group acts on.
Structure describes the structure of the group. The symbol * stands for a central product of two groups. For rank 2, the quotient by the (cyclic) center is the group of rotations of a tetrahedron, octahedron, or icosahedron (T = Alt(4), O = Sym(4), I = Alt(5), of orders 12, 24, 60), as stated in the table. For the notation 2^{1+4}, see extra special group.
Order is the number of elements of the group.
Reflections describes the number of reflections: 26412 means that there are 6 reflections of order 2 and 12 of order 4.
Degrees gives the degrees of the fundamental invariants of the ring of polynomial invariants. For example, the invariants of group number 4 form a polynomial ring with 2 generators of degrees 4 and 6.
For more information, including diagrams, presentations, and codegrees of complex reflection groups, see the tables in the references below.
Degrees
Shephard and Todd proved that a finite group acting on a complex vector space is a complex reflection group if and only if its ring of invariants is a polynomial ring (Chevalley–Shephard–Todd theorem). For n the rank of the reflection group, the degrees d_1 ≤ d_2 ≤ … ≤ d_n of the generators of the ring of invariants are called the degrees of W and are listed in the column above headed "degrees". They also showed that many other invariants of the group are determined by the degrees as follows:
The center of an irreducible reflection group is cyclic of order equal to the greatest common divisor of the degrees.
The order of a complex reflection group is the product of its degrees.
The number of reflections is the sum of the degrees minus the rank.
An irreducible complex reflection group comes from a real reflection group if and only if it has an invariant of degree 2.
The degrees d_i satisfy the formula ∏_{i=1}^{n} (q + d_i − 1) = Σ_{w∈W} q^{dim(V^w)}.
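As a sanity check of two of these identities, here is a short numerical illustration (a sketch relying on the standard fact that the degrees of W = G(m, 1, n) are m, 2m, …, nm): it brute-forces the group as monomial matrices, counts reflections as elements whose fixed space has codimension 1, and compares against the degree formulas.

```python
import math
import numpy as np
from itertools import permutations, product

def elements(m, n):
    """All elements of G(m, 1, n) as n-by-n monomial matrices."""
    omega = np.exp(2j * np.pi / m)
    for perm in permutations(range(n)):
        for exps in product(range(m), repeat=n):
            mat = np.zeros((n, n), dtype=complex)
            for i in range(n):
                mat[perm[i], i] = omega ** exps[i]
            yield mat

m, n = 3, 3
degrees = [m * i for i in range(1, n + 1)]         # degrees of G(m, 1, n): m, 2m, ..., nm
group = list(elements(m, n))
# a reflection fixes a hyperplane pointwise, i.e. rank(M - I) == 1
reflections = sum(1 for M in group
                  if np.linalg.matrix_rank(M - np.eye(n)) == 1)
assert len(group) == math.prod(degrees)            # order = product of the degrees
assert reflections == sum(d - 1 for d in degrees)  # reflections = sum of (d_i - 1)
print(len(group), reflections)                     # -> 162 15
```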
Codegrees
For n being the rank of the reflection group, the codegrees d_1* ≥ d_2* ≥ … ≥ d_n* of W can be defined by ∏_{i=1}^{n} (q − d_i* − 1) = Σ_{w∈W} det(w) q^{dim(V^w)}.
For a real reflection group, the codegrees are the degrees minus 2.
The number of reflection hyperplanes is the sum of the codegrees plus the rank.
Well-generated complex reflection groups
By definition, every complex reflection group is generated by its reflections. The set of reflections is not a minimal generating set, however, and every irreducible complex reflection group of rank n has a minimal generating set consisting of either n or n + 1 reflections. In the former case, the group is said to be well-generated.
The property of being well-generated is equivalent to the condition d_i + d_i* = d_n for all 1 ≤ i ≤ n. Thus, for example, one can read off from the classification that the group G(m, p, n) is well-generated if and only if p = 1 or p = m.
For irreducible well-generated complex reflection groups, the Coxeter number defined above equals the largest degree, d_n. A reducible complex reflection group is said to be well-generated if it is a product of irreducible well-generated complex reflection groups. Every finite real reflection group is well-generated.
Shephard groups
The well-generated complex reflection groups include a subset called the Shephard groups. These groups are the symmetry groups of regular complex polytopes. In particular, they include the symmetry groups of regular real polyhedra. The Shephard groups may be characterized as the complex reflection groups that admit a "Coxeter-like" presentation with a linear diagram. That is, a Shephard group has associated positive integers p_1, …, p_n and q_1, …, q_{n−1} such that there is a generating set s_1, …, s_n satisfying the relations
s_i^{p_i} = 1 for i = 1, …, n,
s_i s_j = s_j s_i if |i − j| > 1,
and
s_i s_{i+1} s_i s_{i+1} ⋯ = s_{i+1} s_i s_{i+1} s_i ⋯, where the products on both sides have q_i terms, for i = 1, …, n − 1.
This information is sometimes collected in the Coxeter-type symbol p_1[q_1]p_2[q_2]⋯[q_{n−1}]p_n, as seen in the table above.
Among groups in the infinite family G(m, p, n), the Shephard groups are those in which p = 1. There are also 18 exceptional Shephard groups, of which three are real.
Cartan matrices
An extended Cartan matrix defines the unitary group; Shephard groups of rank n have n generators.
Ordinary Cartan matrices have diagonal elements 2, while unitary reflections do not have this restriction.
For example, the rank 1 group of order p (with symbol p[]) is defined by the 1 × 1 matrix [1 − e^{2πi/p}].
See also
Parabolic subgroup of a reflection group
References
Hiller, Howard. Geometry of Coxeter groups. Research Notes in Mathematics, 54. Pitman (Advanced Publishing Program), Boston, Mass.–London, 1982. iv+213 pp.
Coxeter, H. S. M., Finite Groups Generated by Unitary Reflections, 1966, §4: The Graphical Notation, Table of n-dimensional groups generated by n unitary reflections, pp. 422–423.
External links
MAGMA Computational Algebra System page
Reflection groups
Geometry
Group theory | Complex reflection group | [
"Physics",
"Mathematics"
] | 2,225 | [
"Euclidean symmetries",
"Reflection groups",
"Group theory",
"Fields of abstract algebra",
"Geometry",
"Symmetry"
] |
5,381,408 | https://en.wikipedia.org/wiki/Engineering%20mathematics | Mathematical engineering (or engineering mathematics) is a branch of applied mathematics, concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary education typically consists of mathematical methods and models courses.
See also
Industrial mathematics
Control theory, a mathematical discipline concerned with engineering
Further mathematics and additional mathematics, A-level mathematics courses with similar content
Mathematical methods in electronics, signal processing and radio engineering
References
Applied mathematics | Engineering mathematics | [
"Mathematics"
] | 378 | [
"Applied mathematics"
] |
5,381,677 | https://en.wikipedia.org/wiki/Bromodeoxyuridine | Bromodeoxyuridine (5-bromo-2'-deoxyuridine, BrdU, BUdR, BrdUrd, broxuridine) is a synthetic nucleoside analogue with a chemical structure similar to thymidine. BrdU is commonly used to study cell proliferation in living tissues and has been studied as a radiosensitizer and diagnostic tool in people with cancer.
During the S phase of the cell cycle (when DNA replication occurs), BrdU can be incorporated in place of thymidine in newly synthesized DNA molecules of dividing cells. Cells that have recently performed DNA replication or DNA repair can be detected with antibodies specific for BrdU using techniques such as immunohistochemistry or immunofluorescence. BrdU-labelled cells in humans can be detected up to two years after BrdU infusion.
Because BrdU can replace thymidine during DNA replication, it can cause mutations, and its use is therefore potentially a health hazard. However, because it is neither radioactive nor myelotoxic at labeling concentrations, it is widely preferred for in vivo studies of cancer cell proliferation. At radiosensitizing concentrations, though, BrdU becomes myelosuppressive, limiting its use as a radiosensitizer.
BrdU differs from thymidine in that BrdU substitutes a bromine atom for thymidine's CH3 group. The Br substitution can be used in X-ray diffraction experiments in crystals containing either DNA or RNA. The Br atom acts as an anomalous scatterer and its larger size will affect the crystal's X-ray diffraction enough to detect isomorphous differences as well.
Bromodeoxyuridine releases gene silencing caused by DNA methylation.
BrdU can also be used to identify microorganisms that respond to specific carbon substrates in aquatic and soil environments. A carbon substrate added to the incubations of environmental samples will cause the growth of microorganisms that can utilize that substrate. These microorganisms will then incorporate BrdU into their DNA as they grow. Community DNA can then be isolated and BrdU-labeled DNA purified using an immunocapture technique. Subsequent sequencing of the labeled DNA can then be used to identify the microbial taxa that participated in the degradation of the added carbon source.
However, it is not certain whether all microbes present in an environmental sample can incorporate BrdU into their biomass during de novo DNA synthesis. Therefore, a group of microorganisms may respond to a carbon source but go undetected using this technique. Additionally, this technique is biased towards identifying microorganisms with A- and T-rich genomes.
DNA with BrdU transcribes as usual DNA, with guanine included in RNA as a complement to BrdU.
See also
5-Bromouracil
5-Bromouridine
5-Ethynyl-2'-deoxyuridine
Trypan blue
References
External links
BrdU at OpenWetWare
BrdU Modifications at IDT DNA
Genetics techniques
Nucleosides
Staining dyes
Organobromides
Pyrimidinediones
Hydroxymethyl compounds | Bromodeoxyuridine | [
"Engineering",
"Biology"
] | 662 | [
"Genetics techniques",
"Genetic engineering"
] |
5,381,714 | https://en.wikipedia.org/wiki/Frangibility | A material is said to be frangible if through deformation it tends to break up into fragments, rather than deforming elastically and retaining its cohesion as a single object. Common crackers are examples of frangible materials, while fresh bread, which deforms plastically, is not frangible.
A structure is frangible if it breaks, distorts, or yields on impact so as to present a minimum hazard. A frangible structure is usually designed to be of minimum mass.
Light poles
A frangible light pole base is designed to break away when a vehicle strikes it. This lessens the risk of injury to occupants of the vehicle.
Airport structures
Following a serious incident in which an aircraft hit a donut lighting structure at San Francisco International Airport, the Federal Aviation Administration (FAA) introduced frangible design rules for such structures. A frangible object was defined as "an object of low mass, designed to break, distort or yield on impact, so as to present the minimum hazard to aircraft". This characteristic seemingly contradicts the operational requirements for stiffness and rigidity imposed on this type of equipment.
In order to develop international regulation for the frangibility of equipment or installations at airports, required for air navigation purposes (e.g., approach lighting towers, meteorological equipment, radio navigational aids) and their support structures, the International Civil Aviation Organization (ICAO) initiated the "Frangible Aids Study Group" in 1981, with the task to define design requirements, design guidelines and test procedures. This work has resulted in part 6 of the Aerodrome Design Manual, dedicated to frangibility.
An overview of the activities carried out to achieve these results is given in "Frangibility of Approach Lighting Structures at Airports". The missing reference (17) in that article is "Impact simulation of a frangible approach light structure by an aircraft wing section". With the evolution of numerical methods suitable for impact analysis, Chapter 6 was added to the Aerodrome Design Manual part 6, dedicated to "numerical simulation methods for evaluating frangibility". It states that numerical methods can be used to evaluate the frangibility of structures, but that the analytical models should still be verified through a series of representative field tests.
Of all equipment or installations at airports required for air navigation purposes, ICAO has not yet formulated frangibility criteria for the tower structure supporting the ILS glide path antenna, "considering its unique nature", basically: its size. A first publication on this subject is given in "Frangible design of instrument landing system/glide slope towers".
Bullets
A frangible bullet is one that is designed to disintegrate into tiny particles upon impact to minimize their penetration for reasons of range safety, to limit environmental impact, or to limit the danger behind the intended target. Examples are the Glaser Safety Slug and the breaching round.
Frangible bullets will disintegrate upon contact with a surface harder than the bullet itself. Frangible bullets are often used by shooters engaging in close quarter combat training to avoid ricochets; targets are placed on steel backing plates that serve to completely fragment the bullet. Frangible bullets are typically made of non-toxic metals, and are frequently used on "green" ranges and outdoor ranges where lead abatement is a concern.
Glass
Tempered glass is said to be frangible when it fractures and breaks into many small pieces.
Other
Some security tapes and labels are intentionally weak or have brittle components. The intent is to deter tampering by making it almost impossible to remove intact.
In 2024, Aspen Cooling Osberton International Horse Trials cross-country course started using frangible trakehner fences to allow the wooden fence to give way in order to prevent a horse from falling.
See also
Friability
Sacrificial part
Spall
References
Ammunition
Fracture mechanics | Frangibility | [
"Materials_science",
"Engineering"
] | 778 | [
"Structural engineering",
"Materials degradation",
"Materials science",
"Fracture mechanics"
] |
5,382,184 | https://en.wikipedia.org/wiki/Plasmadynamics%20and%20Electric%20Propulsion%20Laboratory | Plasmadynamics and Electric Propulsion Laboratory (PEPL) is a University of Michigan laboratory facility for electric propulsion and plasma application research. The primary goals of PEPL are to increase efficiency of electric propulsion systems, understand integration issues of plasma thrusters with spacecraft, and to identify non-propulsion applications of electric propulsion technology. It was founded by Professor Alec D. Gallimore and is currently directed by Professor Benjamin A. Jorns.
PEPL currently houses the Large Vacuum Test Facility (LVTF). The chamber was constructed in the 1960s by Bendix Corporation for testing of the Apollo Lunar Roving Vehicle and was donated to the University of Michigan in 1982. The cylindrical stainless-steel-clad tank, 9 m long and 6 m in diameter, is used for Hall effect thruster, electrostatic ion thruster, magnetoplasmadynamic thruster, and arcjet testing, as well as for space tether and plasma diagnostics research.
See also
Nonequilibrium Gas and Plasma Dynamics Group
References
External links
Plasmadynamics and Electric Propulsion Laboratory Official Website
University of Michigan
Plasma physics facilities | Plasmadynamics and Electric Propulsion Laboratory | [
"Physics"
] | 221 | [
"Plasma physics stubs",
"Plasma physics facilities",
"Plasma physics"
] |
44,557,148 | https://en.wikipedia.org/wiki/Laves%20graph | In geometry and crystallography, the Laves graph is an infinite and highly symmetric system of points and line segments in three-dimensional Euclidean space, forming a periodic graph. Three equal-length segments meet at 120° angles at each point, and all cycles use ten or more segments. It is the shortest possible triply periodic graph, relative to the volume of its fundamental domain. One arrangement of the Laves graph uses one out of every eight of the points in the integer lattice as its points, and connects all pairs of these points that are nearest neighbors, at distance . It can also be defined, divorced from its geometry, as an abstract undirected graph, a covering graph of the complete graph on four vertices.
Coxeter (1955) named this graph after Fritz Laves, who first wrote about it as a crystal structure in 1932. It has also been called the K4 crystal, (10,3)-a network, diamond twin, triamond, and the srs net. The regions of space nearest each vertex of the graph are congruent 17-sided polyhedra that tile space. Its edges lie on diagonals of a regular skew polyhedron, a surface with six squares meeting at each integer point of space.
Several crystalline chemicals have known or predicted structures in the form of the Laves graph. Thickening the edges of the Laves graph to cylinders produces a related minimal surface, the gyroid, which appears physically in certain soap film structures and in the wings of butterflies.
Constructions
From the integer grid
As Coxeter (1955) describes, the vertices of the Laves graph can be defined by selecting one out of every eight points in the three-dimensional integer lattice, and forming their nearest neighbor graph. Specifically, one chooses the points
(0, 0, 0), (1, 2, 3), (2, 3, 1), (3, 1, 2),
the points obtained by adding (2, 2, 2) to each of these, and all the other points formed by adding multiples of four to these coordinates. The edges of the Laves graph connect pairs of points whose Euclidean distance from each other is the square root of two, √2, as the points of each pair differ by one unit in two coordinates, and are the same in the third coordinate. The edges meet at 120° angles at each vertex, in a flat plane. All pairs of vertices that are non-adjacent are farther apart, at a distance of at least √6 from each other. The edges of the resulting geometric graph are diagonals of a subset of the faces of the regular skew polyhedron with six square faces per vertex, so the Laves graph is embedded in this skew polyhedron.
It is possible to choose a larger set of one out of every four points of the integer lattice, so that the graph of distance-√2 pairs of this larger set forms two mirror-image copies of the Laves graph, disconnected from each other, with all other pairs of points farther than √2 apart.
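The following short sketch (an illustration; the residue classes used are the reconstruction given above, and the window size is arbitrary) builds a finite patch of this construction and checks that interior vertices have exactly three neighbors at distance √2.

```python
from itertools import product

# the four base points and their translates by (2, 2, 2), taken modulo 4
BASE = [(0, 0, 0), (1, 2, 3), (2, 3, 1), (3, 1, 2)]
CLASSES = set(BASE) | {tuple((c + 2) % 4 for c in p) for p in BASE}

def is_vertex(p):
    return tuple(c % 4 for c in p) in CLASSES

def sqdist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

# vertices in a finite window; edges join points at squared distance 2
window = [p for p in product(range(8), repeat=3) if is_vertex(p)]
degree = {p: sum(1 for q in window if sqdist(p, q) == 2) for p in window}

# vertices away from the window boundary should all have degree 3
interior = [p for p in window if all(1 < c < 6 for c in p)]
print(all(degree[p] == 3 for p in interior))  # expected: True
```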
As a covering graph
As an abstract graph, the Laves graph can be constructed as the maximal abelian covering graph of the complete graph K4. Being an abelian covering graph of K4 means that the vertices of the Laves graph can be four-colored such that each vertex has neighbors of the other three colors and so that there are color-preserving symmetries taking any vertex to any other vertex with the same color. For the Laves graph in its geometric form with integer coordinates, these symmetries are translations that add even numbers to each coordinate (additionally, the offsets of all three coordinates must be congruent modulo four). When applying two such translations in succession, the net translation is irrespective of their order: they commute with each other, forming an abelian group. The translation vectors of this group form a three-dimensional lattice. Finally, being a maximal abelian covering graph means that there is no other covering graph of K4 involving a higher-dimensional lattice. This construction justifies an alternative name of the Laves graph, the K4 crystal.
A maximal abelian covering graph can be constructed from any finite graph G; applied to K4, the construction produces the (abstract) Laves graph, but does not give it the same geometric layout. Choose a spanning tree of G, let k be the number of edges that are not in the spanning tree (in this case, three non-tree edges), and choose a distinct unit vector in Z^k for each of these non-tree edges. Then, fix the set of vertices of the covering graph to be the ordered pairs (v, w) where v is a vertex of G and w is a vector in Z^k. For each such pair, and each edge uv adjacent to v in G, make an edge from (v, w) to (u, w ± ε), where ε is the zero vector if uv belongs to the spanning tree, and is otherwise the basis vector associated with uv, and where the plus or minus sign is chosen according to the direction the edge is traversed. The resulting graph is independent of the chosen spanning tree, and the same construction can also be interpreted more abstractly using homology.
Using the same construction, the hexagonal tiling of the plane is the maximal abelian covering graph of the three-edge dipole graph, and the diamond cubic is the maximal abelian covering graph of the four-edge dipole. The d-dimensional integer lattice (as a graph with unit-length edges) is the maximal abelian covering graph of a graph with one vertex and d self-loops.
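A compact sketch of this covering construction applied to K4 (illustrative: the choice of spanning tree, the orientation convention, and the assignment of basis vectors to non-tree edges are all arbitrary choices permitted by the construction):

```python
# vertices of K4 are 0..3; spanning tree = the star at vertex 0;
# each of the three non-tree edges gets one basis vector of Z^3
VERTICES = range(4)
TREE = {frozenset({0, 1}), frozenset({0, 2}), frozenset({0, 3})}
NON_TREE = {frozenset({1, 2}): (1, 0, 0),
            frozenset({2, 3}): (0, 1, 0),
            frozenset({1, 3}): (0, 0, 1)}

def neighbors(v, w):
    """Neighbors of the covering-graph vertex (v, w), with w a vector in Z^3."""
    result = []
    for u in VERTICES:
        if u == v:
            continue
        e = frozenset({u, v})
        if e in TREE:
            result.append((u, w))          # tree edges lift with offset zero
        else:
            sign = 1 if v < u else -1      # orientation convention: low -> high
            eps = NON_TREE[e]
            result.append((u, tuple(wi + sign * ei
                                    for wi, ei in zip(w, eps))))
    return result

# every lifted vertex has degree 3, matching K4 itself
print(neighbors(1, (0, 0, 0)))
# -> [(0, (0, 0, 0)), (2, (1, 0, 0)), (3, (0, 0, 1))]
```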
As a unit distance graph
The unit distance graph on the three-dimensional integer lattice has a vertex for each lattice point; each vertex has exactly six neighbors. It is possible to remove some of the points from the lattice, so that each remaining point has exactly three remaining neighbors, and so that the induced subgraph of these points has no cycles shorter than ten edges. There are four ways to do this, one of which is isomorphic as an abstract graph to the Laves graph. However, its vertices are in different positions than the more-symmetric, conventional geometric construction.
Another subgraph of the simple cubic net isomorphic to the Laves graph is obtained by removing half of the edges in a certain way. The resulting structure, called semi-simple cubic lattice, also has lower symmetry than the Laves graph itself.
Properties
The Laves graph is a cubic graph, meaning that there are exactly three edges at each vertex. Every pair of a vertex and adjacent edge can be transformed into every other such pair by a symmetry of the graph, so it is a symmetric graph. More strongly, for every two vertices u and v, every one-to-one correspondence between the three edges incident to u and the three edges incident to v can be realized by a symmetry. However, the overall structure is chiral: no sequence of translations and rotations can make it coincide with its mirror image. The symmetry group of the Laves graph is the space group I4132.
The girth of this structure is 10—the shortest cycles in the graph have 10 vertices—and 15 of these cycles pass through each vertex. The numbers of vertices at distance 0, 1, 2, ... from any vertex form the coordination sequence of the Laves graph.
If the surrounding space is partitioned into the regions nearest each vertex—the cells of the Voronoi diagram of this structure—these form heptadecahedra with 17 faces each. They are plesiohedra, polyhedra that tile space isohedrally. Experimenting with the structures formed by these polyhedra led physicist Alan Schoen to discover the gyroid minimal surface, which is topologically equivalent to the surface obtained by thickening the edges of the Laves graph to cylinders and taking the boundary of their union.
The Laves graph is the unique shortest triply-periodic network, in the following sense. Triply-periodic means repeating infinitely in all three dimensions of space, so a triply-periodic network is a connected geometric graph with a three-dimensional lattice of translational symmetries. A fundamental domain is any shape that can tile space with its translated copies under these symmetries. Any lattice has infinitely many choices of fundamental domain, of varying shapes, but they all have the same volume V. One can also measure the length of the edges of the network within a single copy of the fundamental domain; call this number L. Similarly to V, L does not depend on the choice of fundamental domain, as long as the domain boundary only crosses the edges, rather than containing parts of their length. The Laves graph has four symmetry classes of vertices (orbits), because the symmetries considered here are only translations, not the rotations needed to map these four classes into each other. Each symmetry class has one vertex in any fundamental domain, so the fundamental domain contains twelve half-edges, with total length L = 6√2. The volume of its fundamental domain is V = 32. From these two numbers, the ratio L³/V (a dimensionless quantity) is therefore 27√2/2 ≈ 19.09. This is in fact the minimum possible value: all triply-periodic networks have L³/V ≥ 27√2/2, with equality only in the case of the Laves graph.
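As an arithmetic check of these values (using the integer-coordinate embedding above, in which each of the six full edges per fundamental domain has length √2):

```latex
L = 12 \cdot \tfrac{\sqrt{2}}{2} = 6\sqrt{2}, \qquad V = 32, \qquad
\frac{L^{3}}{V} = \frac{(6\sqrt{2})^{3}}{32} = \frac{432\sqrt{2}}{32} = \frac{27\sqrt{2}}{2} \approx 19.09.
```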
Physical examples
Art
A sculpture titled Bamboozle, by Jacobus Verhoeff and his son Tom Verhoeff, is in the form of a fragment of the Laves graph, with its vertices represented by multicolored interlocking acrylic triangles. It was installed in 2013 at the Eindhoven University of Technology.
Molecular crystals
The Laves graph has been suggested as an allotrope of carbon, analogous to the more common graphene and graphite carbon structure which also have three bonds per atom at 120° angles. In graphene, adjacent atoms have the same bonding planes as each other, whereas in the Laves graph structure the bonding planes of adjacent atoms are twisted by an angle of approximately 70.5° around the line of the bond. However, this hypothetical carbon allotrope turns out to be unstable.
The Laves graph may also give a crystal structure for boron, one which computations predict should be stable. Other chemicals that may form this structure include SrSi2 (from which the "srs net" name derives) and elemental nitrogen, as well as certain metal–organic frameworks and cyclic hydrocarbons.
The electronic band structure for the tight-binding model of the Laves graph has been studied, showing the existence of Dirac and Weyl points in this structure.
Other
The structure of the Laves graph, and of gyroid surfaces derived from it, has also been observed experimentally in soap-water systems, and in the chitin networks of butterfly wing scales.
References
External links
.
Crystallography
Infinite graphs
Regular graphs | Laves graph | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 2,106 | [
"Mathematical objects",
"Infinite graphs",
"Infinity",
"Materials science",
"Crystallography",
"Condensed matter physics"
] |
44,558,259 | https://en.wikipedia.org/wiki/Psathyrella%20piluliformis | Psathyrella piluliformis is a species of agaric fungus in the family Psathyrellaceae. It produces fruit bodies (mushrooms) with broadly convex caps measuring in diameter. The caps are chestnut to reddish brown, the color fading with age and with dry weather. The closely spaced gills have an adnate attachment to the stipe. They are initially tan until the spores mature, when the gills turn dark brown. Fragments of the partial veil may remain on the cap margin, and as a wispy band of hairs on the stipe. The stipe is 2–7 cm tall and 3–7 mm wide, white, smooth, hollow, and bulging at the base. Fruiting occurs in clusters at the base of hardwood stumps.
It is considered edible but of low quality, with fragile flesh and being difficult to identify. Similar species include Psathyrella carbonicola, P. longipes, P. longistriata, P. multipedata, P. spadicea, and Parasola conopilus.
See also
List of Psathyrella species
References
External links
Fungi described in 1783
Fungi of Europe
Fungi of North America
Psathyrellaceae
Edible fungi
Fungus species | Psathyrella piluliformis | [
"Biology"
] | 247 | [
"Fungi",
"Fungus species"
] |
44,559,112 | https://en.wikipedia.org/wiki/Stokes%20approximation%20and%20artificial%20time | This article provides an error analysis of time discretization applied to spatially discrete approximation of the stationary and nonstationary Navier-Stokes equations. The nonlinearity of the convection term is the main problem in solving a stationary or nonstationary Navier-Stokes equation or Euler equation problems. Stoke incorporated ‘the method of artificial compressibility’ to solve these problems.
Navier-Stokes equation
Stokes approximation
The Stokes approximation is developed from the Navier–Stokes equations by omission of the convective term. For small Reynolds numbers in incompressible flow, this approximation is most useful. The incompressible Navier–Stokes equations can then be written as

ρ ∂u/∂t = −∇p + μ∇²u + f, together with the incompressibility condition ∇·u = 0.

Here the linear diffusion term μ∇²u dominates the (omitted) convection term (u·∇)u.
In the stationary problem, neglecting the convection term, we get

−μ∇²u + ∇p = f, ∇·u = 0.

Many theorems can be proved for this linearized problem.
The main problem with the solution of the incompressible flow equations is the decoupling of the continuity and momentum equations, due to the absence of a pressure (or density) time-derivative term in the continuity equation. Chorin proposed a solution to this pressure-decoupling problem; the approach is called artificial compressibility.
One assumes that, as t → ∞, the solution of the non-stationary Navier–Stokes problem converges towards the solution of the corresponding stationary problem, and that this limit does not depend on the initial function u₀.
If the continuity equation is augmented with an artificial time derivative of the pressure,

δ ∂p/∂t + ∇·u = 0,

and the resulting system of Navier–Stokes and modified continuity equations is marched to a steady state, then the solution will be the same as the stationary solution of the original Navier–Stokes problem, since the artificial term vanishes at steady state.
This process also introduces a new notion of artificial time, the pseudo-time over which the system is marched as t → ∞. The artificial compressibility method is combined with a dual time stepping procedure, which involves iteration in pseudo-time within each physical time step. This guarantees convergence towards the solution of the incompressible flow problem.
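The following is a minimal sketch of the idea (an illustration only: a lid-driven unit square, unit density, Stokes momentum equations with the convection term omitted as above, explicit first-order pseudo-time stepping, and arbitrary parameter choices δ, ν, Δt):

```python
import numpy as np

N, dx = 33, 1.0 / 32
nu, delta, dt = 0.1, 1.0, 1e-4     # viscosity, artificial compressibility, pseudo-time step
u = np.zeros((N, N))               # x-velocity
v = np.zeros((N, N))               # y-velocity
p = np.zeros((N, N))               # pressure

def lap(f):                        # 5-point Laplacian on interior points
    return (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:] + f[1:-1, :-2]
            - 4.0 * f[1:-1, 1:-1]) / dx**2

def ddx(f):                        # centered x-derivative on interior points
    return (f[2:, 1:-1] - f[:-2, 1:-1]) / (2 * dx)

def ddy(f):                        # centered y-derivative on interior points
    return (f[1:-1, 2:] - f[1:-1, :-2]) / (2 * dx)

for _ in range(20000):             # march in artificial time toward a steady state
    u[-1, :] = 1.0                 # moving lid; the other walls stay at rest
    # Stokes momentum equations (convection omitted, as in the approximation)
    u[1:-1, 1:-1] += dt * (nu * lap(u) - ddx(p))
    v[1:-1, 1:-1] += dt * (nu * lap(v) - ddy(p))
    # artificial-compressibility continuity: delta * dp/dt + div(u) = 0
    p[1:-1, 1:-1] -= (dt / delta) * (ddx(u) + ddy(v))

# the divergence of u shrinks as the pseudo-time march approaches steady state
print(float(np.abs(ddx(u) + ddy(v)).max()))
```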
References
External links
https://books.google.com/books?isbn=3527627979
Fluid dynamics | Stokes approximation and artificial time | [
"Chemistry",
"Engineering"
] | 391 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
34,913,689 | https://en.wikipedia.org/wiki/LIONsolver | LIONsolver is an integrated software for data mining, business intelligence, analytics, and modeling and reactive business intelligence approach. A non-profit version is also available as LIONoso.
LIONsolver is used to build models, visualize them, and improve business and engineering processes.
It is a tool for decision making based on data and quantitative model and it can be connected to most databases and external programs.
The software is fully integrated with the Grapheur business intelligence tool and is intended for more advanced users.
Overview
LIONsolver originates from research principles in Reactive Search Optimization, which advocates the use of self-tuning schemes acting while a software system is running. Learning and Intelligent OptimizatioN refers to the integration of online machine learning schemes into the optimization software, so that it becomes capable of learning from its previous runs and from human feedback.
Related approaches are those of Programming by Optimization, which provides a direct way of defining design spaces involving Reactive Search Optimization, and of Autonomous Search, which advocates adapting problem-solving algorithms.
Version 2.0 of the software was released on Oct 1, 2011, covering the Unix and Mac OS X operating systems in addition to Windows.
The modeling components include neural networks, polynomials, locally weighted Bayesian regression, k-means clustering, and self-organizing maps. A free academic license for non-commercial use and class use is available.
The software architecture of LIONsolver permits interactive multi-objective optimization, with a user interface for visualizing the results and facilitating the solution analysis and decision-making process. The architecture allows for problem-specific extensions, and it is applicable as a post-processing tool for all optimization schemes that produce a number of different potential solutions. When the architecture is tightly coupled to a specific problem-solving or optimization method, effective interactive schemes in which the final decision maker is in the loop can be developed.
On Apr 24, 2013, LIONsolver received the first prize in the Michael J. Fox Foundation – Kaggle Parkinson's Data Challenge, a contest leveraging "the wisdom of the crowd" to benefit people with Parkinson's disease.
See also
Multi-objective optimization
References
External links
LIONsolver official non-profit site
Time series software
Data analysis software
Data and information visualization software
Mathematical optimization software
Numerical software | LIONsolver | [
"Mathematics"
] | 449 | [
"Numerical software",
"Mathematical software"
] |
34,917,449 | https://en.wikipedia.org/wiki/PRESENT | PRESENT is a lightweight block cipher, developed by the Orange Labs (France), Ruhr University Bochum (Germany) and the Technical University of Denmark in 2007. PRESENT was designed by Andrey Bogdanov, Lars R. Knudsen, Gregor Leander, Christof Paar, Axel Poschmann, Matthew J. B. Robshaw, Yannick Seurin, and C. Vikkelsoe. The algorithm is notable for its compact size (about 2.5 times smaller than AES).
Overview
The block size is 64 bits and the key size can be 80 bits or 128 bits. The non-linear layer is based on a single 4-bit S-box which was designed with hardware optimizations in mind. PRESENT is intended to be used in situations where low power consumption and high chip efficiency are desired. The International Organization for Standardization and the International Electrotechnical Commission included PRESENT in the new international standard for lightweight cryptographic methods.
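To make the structure concrete, here is a short illustrative sketch (not a vetted reference implementation) of the cipher's two round-function layers on a 64-bit state held as a Python integer; the S-box table and the bit permutation P(i) = 16i mod 63 (with bit 63 fixed) follow the published specification, while the little-endian bit-ordering convention used here is an assumption of this sketch.

```python
# PRESENT's 4-bit S-box
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def s_layer(state):
    """Apply the S-box to each of the 16 nibbles of the 64-bit state."""
    out = 0
    for i in range(16):
        out |= SBOX[(state >> (4 * i)) & 0xF] << (4 * i)
    return out

def p_layer(state):
    """Bit i moves to position 16*i mod 63; bit 63 stays in place."""
    out = 0
    for i in range(64):
        j = 63 if i == 63 else (16 * i) % 63
        out |= ((state >> i) & 1) << j
    return out

def round_function(state, round_key):
    """One of the 31 rounds: key addition, substitution, permutation."""
    return p_layer(s_layer(state ^ round_key))

# one round on an arbitrary state/key pair, purely for illustration
print(hex(round_function(0x0123456789ABCDEF, 0xFFFFFFFFFFFFFFFF)))
```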
Cryptanalysis
A truncated differential attack on 26 out of 31 rounds of PRESENT was suggested in 2014.
Several full-round attacks using biclique cryptanalysis have been introduced on PRESENT.
By design, all block ciphers with a block size of 64 bits can have problems with block collisions if they are used with large amounts of data. Therefore, implementations need to make sure that the amount of data encrypted with the same key is limited and that rekeying is properly implemented.
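A back-of-the-envelope illustration of this 64-bit birthday bound (the numbers are generic to any 64-bit block cipher, not particular to PRESENT):

```python
block_bits = 64
birthday_blocks = 2 ** (block_bits // 2)   # collisions become likely near 2^32 blocks
bytes_per_block = block_bits // 8
limit_gib = birthday_blocks * bytes_per_block / 2**30
print(limit_gib, "GiB")                    # -> 32.0 GiB of data under one key
```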
Performance
PRESENT uses bit-oriented permutations and is not software-friendly. It is clearly targeted at hardware, where bit-permutations are possible with simple wiring. The performance of PRESENT in a microcontroller software environment has been evaluated using FELICS (Fair Evaluation of Lightweight Cryptographic Systems), a benchmarking framework for the evaluation of software implementations of lightweight cryptographic primitives.
Standardization
PRESENT is included in the following standards.
ISO/IEC 29167-11:2014, Information technology - Automatic identification and data capture techniques - Part 11: Crypto suite PRESENT-80 security services for air interface communications
ISO/IEC 29192-2:2019, Information security - Lightweight cryptography - Part 2: Block ciphers
References
External links
PRESENT: An Ultra-Lightweight Block Cipher
http://www.lightweightcrypto.org/implementations.php Software Implementations in C and Python
https://web.archive.org/web/20160809024354/http://cis.sjtu.edu.cn/index.php/Software_Implementation_of_Block_Cipher_PRESENT_for_8-Bit_Platforms C implementation
http://www.emsec.rub.de/media/crypto/veroeffentlichungen/2011/01/29/present_ches2007_slides.pdf Talk slides from Cryptographic Hardware and Embedded Systems
Block ciphers
Cryptography | PRESENT | [
"Mathematics",
"Engineering"
] | 589 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
34,918,164 | https://en.wikipedia.org/wiki/Generalized%20Clifford%20algebra | In mathematics, a generalized Clifford algebra (GCA) is a unital associative algebra that generalizes the Clifford algebra, and goes back to the work of Hermann Weyl, who utilized and formalized these clock-and-shift operators introduced by J. J. Sylvester (1882), and organized by Cartan (1898) and Schwinger.
Clock and shift matrices find routine applications in numerous areas of mathematical physics, providing the cornerstone of quantum mechanical dynamics in finite-dimensional vector spaces. The concept of a spinor can further be linked to these algebras.
The term generalized Clifford algebra can also refer to associative algebras that are constructed using forms of higher degree instead of quadratic forms.
Definition and properties
Abstract definition
The n-dimensional generalized Clifford algebra is defined as an associative algebra over a field F, generated by elements e_1, ..., e_n and central elements ω_{jk} subject to the relations

e_j e_k = ω_{jk} e_k e_j for j ≠ k.

Moreover, in any irreducible matrix representation, relevant for physical applications, it is required that ω_{jk} = ω^{ν_{jk}} for a primitive Nth root of unity ω, with e_j^N = 1 and gcd(ν_{jk}, N) = 1. The field F is usually taken to be the complex numbers C.
More specific definition
In the more common cases of GCA, the n-dimensional generalized Clifford algebra of order p has the property ω_{kj} = ω for all j, k, and e_j^p = 1. It follows that

e_j e_k = ω e_k e_j for j < k,

and

ω e_ℓ = e_ℓ ω

for all j, k, ℓ = 1, . . . , n, where ω = e^{2πi/p} is a primitive pth root of 1.
There exist several definitions of a Generalized Clifford Algebra in the literature.
Clifford algebra
In the (orthogonal) Clifford algebra, the elements follow an anticommutation rule, e_j e_k = −e_k e_j for j ≠ k, corresponding to ω = −1 (p = 2).
Matrix representation
The clock and shift matrices can be represented by n×n matrices in Schwinger's canonical notation as the shift matrix V, with V_{jk} = 1 if k = j + 1 (mod n) and 0 otherwise; the clock matrix U = diag(1, ω, ω², ..., ω^{n−1}); and the discrete Fourier transform matrix W, with W_{jk} = ω^{jk}/√n.
Notably, VU = ωUV (the Weyl braiding relations), U^n = V^n = I, and WUW⁻¹ = V (the discrete Fourier transform).
With e₁ = V, e₂ = VU, and e₃ = U, one has three basis elements which, together with ω, fulfil the above conditions of the Generalized Clifford Algebra (GCA).
These matrices, V and U, normally referred to as "shift and clock matrices", were introduced by J. J. Sylvester in the 1880s. (Note that the matrices V are cyclic permutation matrices that perform a circular shift; they are not to be confused with upper and lower shift matrices which have ones only either above or below the diagonal, respectively).
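A quick numerical check of these relations (an illustration; the dimension n = 4 and the matrix conventions are arbitrary choices consistent with the definitions above):

```python
import numpy as np

n = 4
omega = np.exp(2j * np.pi / n)
U = np.diag(omega ** np.arange(n))          # clock matrix
V = np.roll(np.eye(n), -1, axis=0)          # shift (cyclic permutation) matrix
W = omega ** np.outer(np.arange(n), np.arange(n)) / np.sqrt(n)  # DFT matrix

assert np.allclose(V @ U, omega * U @ V)    # Weyl braiding relation VU = wUV
assert np.allclose(np.linalg.matrix_power(U, n), np.eye(n))  # U^n = I
assert np.allclose(np.linalg.matrix_power(V, n), np.eye(n))  # V^n = I
assert np.allclose(W @ U @ W.conj().T, V)   # the DFT conjugates clock into shift
print("clock/shift relations verified")
```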
Specific examples
Case n = 2
In this case, we have ω = −1, and

V = [[0, 1], [1, 0]], U = [[1, 0], [0, −1]], VU = [[0, −1], [1, 0]],

thus V = σ₁, U = σ₃, and VU = −iσ₂, which (up to the phase factor i) constitute the Pauli matrices.
Case n = 3
In this case we have ω = e^{2πi/3}, and

U = diag(1, ω, ω²), V = [[0, 1, 0], [0, 0, 1], [1, 0, 0]],

and e₁ = V, e₂ = VU, e₃ = U may be determined accordingly.
See also
Clifford algebra
Generalizations of Pauli matrices
DFT matrix
Circulant matrix
References
Further reading
(In The legacy of Alladi Ramakrishnan in the mathematical sciences (pp. 465–489). Springer, New York, NY.)
Algebras
Clifford algebras
Ring theory
Quadratic forms
Mathematical physics | Generalized Clifford algebra | [
"Physics",
"Mathematics"
] | 552 | [
"Mathematical structures",
"Applied mathematics",
"Algebras",
"Theoretical physics",
"Ring theory",
"Fields of abstract algebra",
"Algebraic structures",
"Quadratic forms",
"Mathematical physics",
"Number theory"
] |
34,920,670 | https://en.wikipedia.org/wiki/Hologenome%20theory%20of%20evolution | The hologenome theory of evolution recasts the individual animal or plant (and other multicellular organisms) as a community or a "holobiont" – the host plus all of its symbiotic microbes. Consequently, the collective genomes of the holobiont form a "hologenome". Holobionts and hologenomes are structural entities that replace misnomers in the context of host-microbiota symbioses such as superorganism (i.e., an integrated social unit composed of conspecifics), organ, and metagenome. Variation in the hologenome may encode phenotypic plasticity of the holobiont and can be subject to evolutionary changes caused by selection and drift, if portions of the hologenome are transmitted between generations with reasonable fidelity. One of the important outcomes of recasting the individual as a holobiont subject to evolutionary forces is that genetic variation in the hologenome can be brought about by changes in the host genome and also by changes in the microbiome, including new acquisitions of microbes, horizontal gene transfers, and changes in microbial abundance within hosts. Although there is a rich literature on binary host–microbe symbioses, the hologenome concept distinguishes itself by including the vast symbiotic complexity inherent in many multicellular hosts.
Origin
Lynn Margulis coined the term holobiont in her 1991 book Symbiosis as a Source of Evolutionary Innovation: Speciation and Morphogenesis (MIT Press), though this was not in the context of diverse populations of microbes. The term holobiont is derived from the Ancient Greek ὅλος (hólos, "whole"), and the word biont for a unit of life.
In September 1994, Richard Jefferson coined the term hologenome when he introduced the hologenome theory of evolution at a presentation at Cold Spring Harbor Laboratory. At the CSH Symposium and earlier, the unsettling number and diversity of microbes that were being discovered through the powerful tool of PCR-amplification of 16S ribosomal RNA genes was exciting, but confounded interpretations in diverse studies. A number of speakers referred to microbial contributions to mammalian or plant DNA samples as 'contamination'. In his lecture, Jefferson argued that these were likely not contamination, but rather essential components of the samples that reflected the actual genetic composition of the organism being studied, integral to the complex system in which it lives. This implied that the logic of the organism's performance and capabilities would be embedded only in the hologenome. Observations on the ubiquity of microbes in plant and soil samples, as well as laboratory work on the molecular genetics of vertebrate-associated microbial enzymes impacting hormone action, informed this hypothesis. Reference was made to work indicating that mating pheromones were only released after skin microbiota activated the precursors.
At the 14th South African Congress of Biochemistry and Molecular Biology in 1997, Jefferson described how the modulation of steroid and other hormone levels by microbial glucuronidases and arylsulfatase profoundly impacted the performance of the composite entity. Following on work done isolating numerous and diverse glucuronidases from microbial samples of African animal feces, and their differential cleavage of hormones, he hypothesized that this phenomenon, microbially-mediated hormone modulation, could underlie evolution of disease and social behavior as well as the holobiont fitness and system resilience. In his lectures, Jefferson coined and defined the term 'Ecotherapeutics', referring to adjustment of the population structure of the microbial composition in plants and animals - the microbiome - and their support ecosystem to improve performance. In 2007, Jefferson followed with a series of posts on the logic of hologenome theory on Cambia's Science as Social Enterprise page.
In 2008, Eugene Rosenberg and Ilana Zilber-Rosenberg apparently independently used the term hologenome and developed the hologenome theory of evolution. This theory was originally based on their observations of Vibrio shiloi-mediated bleaching of the coral Oculina patagonica. Since its first introduction, the theory has been promoted as a fusion of Lamarckism and Darwinism and expanded to all of evolution, not just that of corals. The history of the development of the hologenome theory and the logic undergirding its development was the focus of a cover article by Carrie Arnold in New Scientist in January, 2013. A comprehensive treatment of the theory, including updates by the Rosenbergs on neutrality, pathogenesis and multi-level selection, can be found in their 2013 book.
In 2013, Robert Brucker and Seth Bordenstein re-invigorated the hologenome concept by showing that the gut microbiomes of closely related Nasonia wasp species are distinguishable, and contribute to hybrid death. This set interactions between hosts and microbes in a conceptual continuum with interactions between genes in the same genome. In 2015, Bordenstein and Kevin R. Theis outlined a conceptual framework that aligns with pre-existing theories in biology.
Support from vertebrate biology
Multicellular life is made possible by the coordination of physically and temporally distinct processes, most prominently through hormones. Hormones mediate critical activities in vertebrates, including ontogeny, somatic and reproductive physiology, sexual development, performance and behaviour.
Many of these hormones – including most steroids and thyroxines – are secreted in inactive form through the endocrine and apocrine systems into epithelial corridors in which microbiota are widespread and diverse, including gut, urinary tract, lung and skin. There, the inactive hormones can be re-activated by cleavage of the glucuronide or sulfate residue, allowing them to be reabsorbed. Thus the concentration and bioavailability of many of the hormones is impacted by microbial cleavage of conjugated intermediaries, itself determined by a diverse population with redundant enzymatic capabilities. Aspects of enterohepatic circulation have been known for decades, but had been viewed as an ancillary effect of detoxification and excretion of metabolites and xenobiotics, including effects on lifetimes of pharmaceuticals, including birth control formulations.
The basic premise of Jefferson's first exposition of the hologenome theory is that a spectrum of hormones can be re-activated and resorbed from epithelia, potentially modulating effective time and dose relationships of many vertebrate hormones. The ability to alter and modulate, amplify and suppress, disseminate and recruit new capabilities as microbially-encoded 'traits' means that sampling, sensing and responding to the environment become intrinsic features and emergent capabilities of the holobiont, with mechanisms that can provide rapid, sensitive, nuanced and persistent performance changes.
Studies by Froebe et al. in 1990 indicated that essential mating pheromones, including androstenols, require activation by skin-associated microbial glucuronidases and sulfatases. In the absence of microbial populations in the skin, no detectable aromatic pheromone was released, as the pro-pheromone remained water-soluble and non-volatile. This effectively meant that the microbes in the skin were essential to produce a mating signal.
Support from coral biology
Subsequent re-articulation describing the hologenome theory by Rosenberg and Zilber-Rosenberg, published 13 years after Jefferson's definition of the theory, was based on their observations of corals, and the coral probiotic hypothesis.
Coral reefs are the largest structures created by living organisms, and contain abundant and highly complex microbial communities. A coral "head" is a colony of genetically identical polyps, which secrete an exoskeleton near the base. Depending on the species, the exoskeleton may be hard, based on calcium carbonate, or soft and proteinaceous. Over many generations, the colony creates a large skeleton that is characteristic of the species. Diverse forms of life take up residence in a coral colony, including photosynthetic algae such as Symbiodinium, as well as a wide range of bacteria including nitrogen fixers, and chitin decomposers, all of which form an important part of coral nutrition. The association between coral and its microbiota is species dependent, and different bacterial populations are found in mucus, skeleton and tissue from the same coral fragment.
Over the past several decades, major declines in coral populations have occurred. Climate change, water pollution and overfishing are three stress factors that have been described as leading to disease susceptibility. Over twenty different coral diseases have been described, but of these, only a handful have had their causative agents isolated and characterized.
Coral bleaching is the most serious of these diseases. In the Mediterranean Sea, the bleaching of Oculina patagonica was first described in 1994 and, through a rigorous application of Koch's Postulates, determined to be due to infection by Vibrio shiloi. From 1994 to 2002, bacterial bleaching of O. patagonica occurred every summer in the eastern Mediterranean. Surprisingly, however, after 2003, O. patagonica in the eastern Mediterranean has been resistant to V. shiloi infection, although other diseases still cause bleaching.
The surprise stems from the knowledge that corals are long lived, with lifespans on the order of decades, and do not have adaptive immune systems. Their innate immune systems do not produce antibodies, and they should seemingly not be able to respond to new challenges except over evolutionary time scales. Yet multiple researchers have documented variations in bleaching susceptibility that may be termed 'experience-mediated tolerance'. The puzzle of how corals managed to acquire resistance to a specific pathogen led Eugene Rosenberg and Ilana Zilber-Rosenberg to propose the Coral Probiotic Hypothesis. This hypothesis proposes that a dynamic relationship exists between corals and their symbiotic microbial communities. Beneficial mutations can arise and spread among the symbiotic microbes much faster than in the host corals. By altering its microbial composition, the "holobiont" can adapt to changing environmental conditions far more rapidly than by genetic mutation and selection in the host species alone.
Extrapolating the coral probiotic hypothesis to other organisms, including higher plants and animals, led to the Rosenbergs' support for, and publications around, the hologenome theory of evolution.
Theory
Definition
The framework of the hologenome theory of evolution is as follows (condensed from Rosenberg et al., 2007):
"All animals and plants establish symbiotic relationships with microorganisms."
"Different host species contain different symbiont populations and individuals of the same species can also contain different symbiont populations."
"The association between a host organism and its microbial community affect both the host and its microbiota."
"The genetic information encoded by microorganisms can change under environmental demands more rapidly, and by more processes, than the genetic information encoded by the host organism."
"... the genome of the host can act in consortium with the genomes of the associated symbiotic microorganisms to create a hologenome. This hologenome...can change more rapidly than the host genome alone, thereby conferring greater adaptive potential to the combined holobiont evolution."
"Each of these points taken together [led Rosenberg et al. to propose that] the holobiont with its hologenome should be considered as the unit of natural selection in evolution."
Some authors supplement the above principles with an additional one. If a given holobiont is to be considered a unit of natural selection:
The hologenome must be heritable from generation to generation.
Ten principles of holobionts and hologenomes were presented in PLOS Biology:
I. Holobionts and hologenomes are units of biological organization
II. Holobionts and hologenomes are not organ systems, superorganisms, or metagenomes
III. The hologenome is a comprehensive gene system
IV. The hologenome concept reboots elements of Lamarckian evolution
V. Hologenomic variation integrates all mechanisms of mutation
VI. Hologenomic evolution is most easily understood by equating a gene in the nuclear genome to a microbe in the microbiome
VII. The hologenome concept fits squarely into genetics and accommodates multilevel selection theory
VIII. The hologenome is shaped by selection and neutrality
IX. Hologenomic speciation blends genetics and symbiosis
X. Holobionts and their hologenomes do not change the rules of evolutionary biology
Horizontally versus vertically transmitted symbionts
Many case studies clearly demonstrate the importance of an organism's associated microbiota to its existence. (For example, see the numerous case studies in the Microbiome article.) However, horizontal versus vertical transmission of endosymbionts must be distinguished. Endosymbionts whose transmission is predominantly vertical may be considered as contributing to the heritable genetic variation present in a host species.
In the case of colonial organisms such as corals, the microbial associations of the colony persist even though individual members of the colony, reproducing asexually, live and die. Corals also have a sexual mode of reproduction, resulting in planktonic larvae; it is less clear whether microbial associations persist through this stage of growth. Also, the bacterial community of a colony may change with the seasons.
Many insects maintain heritable obligate symbiosis relationships with bacterial partners. For example, normal development of female wasps of the species Asobara tabida is dependent on Wolbachia infection. If "cured" of the infection, their ovaries degenerate. Transmission of the infection is vertical through the egg cytoplasm.
In contrast, many obligate symbiosis relationships have been described in the literature where transmission of the symbionts is via horizontal transfer. A well-studied example is the nocturnally feeding squid Euprymna scolopes, which camouflages its outline against the moonlit ocean surface by emitting light from its underside with the aid of the symbiotic bacterium Vibrio fischeri. The Rosenbergs cite this example within the context of the hologenome theory of evolution. Squid and bacterium maintain a highly co-evolved relationship. The newly hatched squid collects its bacteria from the sea water, and lateral transfer of symbionts between hosts permits faster transfer of beneficial mutations within a host species than is possible with mutations within the host genome.
Primary versus secondary symbionts
Another traditional distinction between endosymbionts has been between primary and secondary symbionts. Primary endosymbionts reside in specialized host cells that may be organized into larger, organ-like structures (in insects, the bacteriome). Associations between hosts and primary endosymbionts are usually ancient, with an estimated age of tens to hundreds of millions of years. According to endosymbiotic theory, extreme cases of primary endosymbionts include mitochondria, plastids (including chloroplasts), and possibly other organelles of eukaryotic cells. Primary endosymbionts are usually transmitted exclusively vertically, and the relationship is always mutualistic and generally obligate for both partners. Primary endosymbiosis is surprisingly common. An estimated 15% of insect species, for example, harbor this type of endosymbiont. In contrast, secondary endosymbiosis is often facultative, at least from the host point of view, and the associations are less ancient. Secondary endosymbionts do not reside in specialized host tissues, but may dwell in the body cavity dispersed in fat, muscle, or nervous tissue, or may grow within the gut. Transmission may be via vertical, horizontal, or both vertical and horizontal transfer. The relationship between host and secondary endosymbiont is not necessarily beneficial to the host; indeed, the relationship may be parasitic.
The distinction between vertical and horizontal transfer, and between primary and secondary endosymbiosis is not absolute, but follows a continuum, and may be subject to environmental influences. For example, in the stink bug Nezara viridula, the vertical transmission rate of symbionts, which females provide to offspring by smearing the eggs with gastric caeca, was 100% at 20 °C, but decreased to 8% at 30 °C. Likewise, in aphids, the vertical transmission of bacteriocytes containing the primary endosymbiont Buchnera is drastically reduced at high temperature. In like manner, the distinction between commensal, mutualistic, and parasitic relationships is also not absolute. An example is the relationship between legumes and rhizobial species: N2 uptake is energetically more costly than the uptake of fixed nitrogen from the soil, so soil N is preferred if not limiting. During the early stages of nodule formation, the plant-rhizobial relationship actually resembles a pathogenesis more than it does a mutualistic association.
Neo-Lamarckism within a Darwinian context
Lamarckism, the concept that an organism can pass on characteristics that it acquired during its lifetime to its offspring (also known as inheritance of acquired characteristics or soft inheritance) incorporated two common ideas of its time:
Use and disuse – individuals lose characteristics they do not require (or use) and develop characteristics that are useful.
Inheritance of acquired traits – individuals inherit the traits of their ancestors.
Although Lamarckian theory was rejected by the neo-Darwinism of the modern evolutionary synthesis in which evolution occurs through random variations being subject to natural selection, the hologenome theory has aspects that harken back to Lamarckian concepts. In addition to the traditionally recognized modes of variation (i.e. sexual recombination, chromosomal rearrangement, mutation), the holobiont allows for two additional mechanisms of variation that are specific to the hologenome theory: (1) changes in the relative population of existing microorganisms (i.e. amplification and reduction) and (2) acquisition of novel strains from the environment, which may be passed on to offspring.
Changes in the relative population of existing microorganisms corresponds to Lamarckian "use and disuse", while the ability to acquire novel strains from the environment, which may be passed on to offspring, corresponds to Lamarckian "inheritance of acquired traits". The hologenome theory, therefore, is said by its proponents to incorporate Lamarckian aspects within a Darwinian framework.
Additional case studies
The pea aphid Acyrthosiphon pisum maintains an obligate symbiotic relationship with the bacterium Buchnera aphidicola, which is transmitted maternally to the embryos that develop within the mother's ovarioles. Pea aphids live on sap, which is rich in sugars but deficient in amino acids. They rely on their Buchnera endosymbiotic population for essential amino acids, supplying in exchange nutrients as well as a protected intracellular environment that allows Buchnera to grow and reproduce. The relationship is actually more complicated than mutual nutrition; some strains of Buchnera increase host thermotolerance, while other strains do not. Both strains are present in field populations, suggesting that under some conditions, increased heat tolerance is advantageous to the host, while under other conditions, decreased heat tolerance but increased cold tolerance may be advantageous. One can consider the variant Buchnera genomes as alleles of the larger hologenome. The association between Buchnera and aphids began about 200 million years ago, with host and symbiont co-evolving since that time; in particular, it has been discovered that genome size in various Buchnera species has become extremely reduced, in some cases down to 450 kb, which is far smaller even than the 580 kb genome of Mycoplasma genitalium.
Development of mating preferences, i.e. sexual selection, is considered to be an early event in speciation. In 1989, Dodd reported mating preferences in Drosophila that were induced by diet. It has recently been demonstrated that when otherwise identical populations of Drosophila were switched in diet between molasses medium and starch medium, the "molasses flies" preferred to mate with other molasses flies, while the "starch flies" preferred to mate with other starch flies. This mating preference appeared after only one generation and was maintained for at least 37 generations. The origin of these differences was a change in the flies' populations of a particular bacterial symbiont, Lactobacillus plantarum. Antibiotic treatment abolished the induced mating preferences. It has been suggested that the symbiotic bacteria changed the levels of cuticular hydrocarbon sex pheromones; however, several other research papers have been unable to replicate this effect.
Zilber-Rosenberg and Rosenberg (2008) have tabulated many of the ways in which symbionts are transmitted and their contributions to the fitness of the holobiont, beginning with mitochondria found in all eukaryotes, chloroplast in plants, and then various associations described in specific systems. The microbial contributions to host fitness included provision of specific amino acids, growth at high temperatures, provision of nutritional needs from cellulose, nitrogen metabolism, recognition signals, more efficient food utilization, protection of eggs and embryos against metabolism, camouflage against predators, photosynthesis, breakdown of complex polymers, stimulation of the immune system, angiogenesis, vitamin synthesis, fiber breakdown, fat storage, supply of minerals from the soil, supply of organics, acceleration of mineralization, carbon cycling, and salt tolerance.
Criticism
The hologenome theory is debated. A major criticism, raised by Ainsworth et al., is the claim that V. shiloi was misidentified as the causative agent of coral bleaching, and that its presence in bleached O. patagonica was simply that of opportunistic colonization.
If this is true, the original observation that led to Rosenberg's later articulation of the theory would be invalid. On the other hand, Ainsworth et al. performed their samplings in 2005, two years after the Rosenberg group discovered O. patagonica no longer to be susceptible to V. shiloi infection; therefore their finding that bacteria are not the primary cause of present-day bleaching in Mediterranean coral O. patagonica should not be considered surprising. The rigorous satisfaction of Koch's postulates, as employed in Kushmaro et al. (1997), is generally accepted as providing a definitive identification of infectious disease agents.
Baird et al. (2009) have questioned basic assumptions made by Reshef et al. (2006) in presuming that (1) coral generation times are too slow to adjust to novel stresses over the observed time scales, and that (2) the scale of dispersal of coral larvae is too large to allow for adaptation to local environments. They may simply have underestimated the potential rapidity of conventional means of natural selection. In cases of severe stress, multiple cases have been documented of ecologically significant evolutionary change occurring over a handful of generations. Novel adaptive mechanisms such as switching symbionts might not be necessary for corals to adjust to rapid climate change or novel stressors.
Organisms in symbiotic relationships evolve to accommodate each other, and the symbiotic relationship increases the overall fitness of the participant species. Although the hologenome theory is still being debated, it has gained a significant degree of popularity within the scientific community as a way of explaining rapid adaptive changes that are difficult to accommodate within a traditional Darwinian framework.
Definitions and uses of the words holobiont and hologenome also differ between proponents and skeptics, and the misuse of the terms has led to confusions over what comprises evidence related to the hologenome. Ongoing discourse is attempting to clear this confusion. Theis et al. clarify that "critiquing the hologenome concept is not synonymous with critiquing coevolution, and arguing that an entity is not a primary unit of selection dismisses the fact that the hologenome concept has always embraced multilevel selection."
For instance, Chandler and Turelli (2014) criticize the conclusions of Brucker and Bordenstein (2013), noting that their observations are also consistent with an alternative explanation. Brucker and Bordenstein (2014) responded to these criticisms, claiming they were unfounded because of factual inaccuracies and altered arguments and definitions that were not advanced by Brucker and Bordenstein (2013).
Recently, Forest L. Rohwer and colleagues developed a novel statistical test to examine the potential for the hologenome theory of evolution in coral species. They found that coral species do not inherit microbial communities, and are instead colonized by a core group of microbes that associate with a diversity of species. The authors conclude: "Identification of these two symbiont communities supports the holobiont model and calls into question the hologenome theory of evolution." However, other studies in coral adhere to the original and pluralistic definitions of holobionts and hologenomes. David Bourne, Kathleen Morrow and Nicole Webster clarify that "The combined genomes of this coral holobiont form a coral hologenome, and genomic interactions within the hologenome ultimately define the coral phenotype."
References
Further reading
For recent literature on holobionts and hologenomes published in an open access platform, see the following reference:
Biological evolution
Extended evolutionary synthesis
Lamarckism
Microbiology
1991 neologisms | Hologenome theory of evolution | [
"Chemistry",
"Biology"
] | 5,432 | [
"Microbiology",
"Obsolete biology theories",
"Lamarckism",
"Microscopy",
"Non-Darwinian evolution",
"Biology theories"
] |
23,242,297 | https://en.wikipedia.org/wiki/Equivalent%20oxide%20thickness | An equivalent oxide thickness, usually given in nanometers (nm), is the thickness of silicon oxide film that would provide the same electrical performance as the high-κ material being used.
The term is often used when describing field effect transistors, which rely on an electrically insulating pad of material between a gate and a doped semiconducting region. Device performance has typically been improved by reducing the thickness of the silicon oxide insulating pad. As the thickness of the insulating pad approached 5–10 nm, leakage current became a problem and alternative materials were necessary. These new materials achieve a lower equivalent oxide thickness at a greater physical thickness, so the gate dielectric can remain thick enough to limit leakage current while the capacitance, and hence the switching speed, is maintained. For example, a high-κ material with a dielectric constant of 39 (compared to 3.9 for silicon oxide) can be made ten times thicker than a silicon oxide film, helping to reduce the leakage of electrons across the dielectric pad, while achieving the same capacitance and high performance. In other words, a silicon oxide film of one-tenth the thickness of the high-κ film would be required to achieve similar performance, ignoring leakage current.
Commonly used high-κ gate dielectrics include hafnium oxide and more recently aluminum oxide for gate-all-around devices.
The EOT definition is useful for quickly comparing different dielectric materials to the industry-standard silicon oxide dielectric, as: EOT = t_high-κ × (3.9 / κ_high-κ), where t_high-κ is the physical thickness of the high-κ film and κ_high-κ is its dielectric constant.
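A minimal sketch of this comparison, assuming the scaling definition given above; the hafnium oxide dielectric constant used below (about 25) is a typical literature value assumed for illustration, not a figure from this article.

```python
K_SIO2 = 3.9  # dielectric constant of silicon oxide

def eot_nm(physical_thickness_nm: float, kappa: float) -> float:
    """Equivalent oxide thickness: physical thickness scaled by 3.9/kappa."""
    return physical_thickness_nm * K_SIO2 / kappa

# The factor-of-ten example from the text: kappa = 39 gives EOT = thickness / 10
print(eot_nm(30.0, 39.0))  # 3.0 (nm)

# A 5 nm hafnium oxide film (kappa ~ 25, an assumed typical value)
print(eot_nm(5.0, 25.0))   # 0.78 (nm)
```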
Semiconductors | Equivalent oxide thickness | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 299 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
23,242,689 | https://en.wikipedia.org/wiki/AC-to-AC%20converter | A solid-state AC-to-AC converter converts an AC waveform to another AC waveform, where the output voltage and frequency can be set arbitrarily.
Categories
Referring to Fig. 1, AC-AC converters can be categorized as follows:
Indirect AC-AC (or AC/DC-AC) converters (i.e., with rectifier, DC link and inverter), such as those used in variable frequency drives
Cycloconverters
Hybrid matrix converters
Matrix converters (MC)
AC voltage controllers
DC link converters
There are two types of converters with DC link:
Voltage-source inverter (VSI) converters (Fig. 2): In VSI converters, the rectifier consists of a diode-bridge and the DC link consists of a shunt capacitor.
Current-source inverter (CSI) converters (Fig. 3): In CSI converters, the rectifier consists of a phase-controlled switching device bridge and the DC link consists of 1 or 2 series inductors between one or both legs of the connection between rectifier and inverter.
Any dynamic braking operation required for the motor can be realized by means of braking DC chopper and resistor shunt connected across the rectifier. Alternatively, an anti-parallel thyristor bridge must be provided in the rectifier section to feed energy back into the AC line. Such phase-controlled thyristor-based rectifiers however have higher AC line distortion and lower power factor at low load than diode-based rectifiers.
An AC-AC converter with approximately sinusoidal input currents and bidirectional power flow can be realized by coupling a pulse-width modulation (PWM) rectifier and a PWM inverter to the DC-link. The DC-link quantity is then impressed by an energy storage element that is common to both stages, which is a capacitor C for the voltage DC-link or an inductor L for the current DC-link. The PWM rectifier is controlled in a way that a sinusoidal AC line current is drawn, which is in phase or anti-phase (for energy feedback) with the corresponding AC line phase voltage.
Due to the DC-link storage element, there is the advantage that both converter stages are to a large extent decoupled for control purposes. Furthermore, a constant, AC line independent input quantity exists for the PWM inverter stage, which results in high utilization of the converter’s power capability. On the other hand, the DC-link energy storage element has a relatively large physical volume, and when electrolytic capacitors are used, in the case of a voltage DC-link, there is potentially a reduced system lifetime.
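The in-phase control of the PWM rectifier described above can be illustrated with carrier-based sinusoidal PWM, in which a sinusoidal reference held in phase with the AC line voltage is compared against a triangular carrier to generate the switching signal for one bridge leg. The sketch below is a simplified illustration; the line frequency, carrier frequency, and modulation index are assumptions for the example, not parameters of any specific converter.

```python
import numpy as np

f_line = 50.0     # assumed line frequency, Hz
f_carrier = 5e3   # assumed triangular carrier frequency, Hz
m = 0.8           # modulation index (reference amplitude / carrier amplitude)

t = np.linspace(0.0, 1.0 / f_line, 2000)        # one line period
reference = m * np.sin(2 * np.pi * f_line * t)  # in phase with the line voltage

# Triangular carrier in [-1, 1] with period 1/f_carrier
x = t * f_carrier
carrier = 2.0 * np.abs(2.0 * (x - np.floor(x + 0.5))) - 1.0

gate = reference > carrier  # switching signal for one bridge leg
print(f"average duty over one line period: {gate.mean():.2f}")  # ~0.50
```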
Cycloconverters
A cycloconverter constructs an output, variable-frequency, approximately sinusoid waveform by switching segments of the input waveform to the output; there is no intermediate DC link. With switching elements such as SCRs, the output frequency must be lower than the input. Very large cycloconverters (on the order of 10 MW) are manufactured for compressor and wind-tunnel drives, or for variable-speed applications such as cement kilns.
Matrix converters
In order to achieve higher power density and reliability, it makes sense to consider Matrix Converters that achieve three-phase AC-AC conversion without any intermediate energy storage element. Conventional Direct Matrix Converters (Fig. 4) perform voltage and current conversion in one single stage.
There is the alternative option of indirect energy conversion by employing the Indirect Matrix Converter (Fig. 5) or the Sparse matrix converter which was invented by Prof. Johann W. Kolar from the ETH Zurich. As with the DC-link based VSI and CSI controllers (Fig. 2 and Fig. 3), separate stages are provided for voltage and current conversion, but the DC-link has no intermediate storage element. Generally, by employing matrix converters, the storage element in the DC-link is eliminated at the cost of a larger number of semiconductors. Matrix converters are often seen as a future concept for variable speed drives technology, but despite intensive research over the decades they have until now only achieved low industrial penetration. However, citing the recent availability of low-cost, high-performance semiconductors, one large drive manufacturer has over the past few years been actively promoting matrix converters.
See also
Variable-frequency drive
Frequency changer
Sparse matrix converter
References
Electronic circuits
Electric power conversion | AC-to-AC converter | [
"Engineering"
] | 959 | [
"Electronic engineering",
"Electronic circuits"
] |
23,243,595 | https://en.wikipedia.org/wiki/Medication%20therapy%20management | Medication therapy management, generally called medicine use review in the United Kingdom, is a service provided typically by pharmacists, medical affairs, and RWE scientists that aims to improve outcomes by helping people to better understand their health conditions and the medications used to manage them. This includes providing education on the disease state and medications used to treat the disease state, ensuring that medicines are taken correctly, reducing waste due to unused medicines, looking for any side effects, and providing education on how to manage any side effects. The process can be broken down into five steps: medication therapy review, personal medication record, medication-related action plan, intervention and/or referral, and documentation and follow-up.
The medication therapy review has the pharmacist review all of the prescribed medications, any over the counter medications, and all dietary supplements an individual is taking. This allows the pharmacist to look for any duplications or dangerous drug interactions. This service can be especially valuable for people who are older, have several chronic conditions, take multiple medications, or are seen by multiple doctors.
Effectiveness
The goal of medication review is to improve health and reduce morbidity and mortality in patients by optimizing the use of their current medications. Different hospital institutions and countries have different policies or approaches to medication review for their inpatients. The effectiveness of a formal medication review program in people who are hospitalized has not been well studied. There is some evidence that a medication review program reduces the number of people re-admitted to hospital and also may decrease the number of times they return to the emergency department. The effects on morbidity and any improvements on the quality of a person's life are not clear.
United States
In 2014, the US Centers for Medicare and Medicaid Services required Part D plans to include an MTM program, which led to an expansion of services offered. MTM services are provided free to eligible patients enrolled in a plan. As of 2019, to be eligible a patient must have at least two (or three, for some plans) chronic conditions, take multiple drugs covered by Part D, and be predicted to exceed a preset amount in annual out-of-pocket costs for their covered Part D drugs (set at $3,967 in 2018 and $4,044 in 2019). Plans are permitted to expand MTM eligibility to patients not meeting the minimum required criteria if they so choose.
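The eligibility criteria reduce to a simple conjunctive test. The sketch below is purely illustrative: the function name, the drug-count minimum, and the hard-coded 2019 threshold are assumptions for the example, not a CMS specification.

```python
def mtm_eligible(chronic_conditions: int, part_d_drugs: int,
                 predicted_annual_cost: float,
                 min_conditions: int = 2,         # some plans require 3
                 min_drugs: int = 2,              # "multiple" drugs; exact minimum varies by plan
                 cost_threshold: float = 4044.0,  # 2019 threshold from the text
                 ) -> bool:
    """Return True if all minimum Part D MTM criteria are met."""
    return (chronic_conditions >= min_conditions
            and part_d_drugs >= min_drugs
            and predicted_annual_cost >= cost_threshold)

print(mtm_eligible(3, 8, 5200.0))  # True
print(mtm_eligible(1, 8, 5200.0))  # False: too few chronic conditions
```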
Comprehensive medication review
As part of the minimum required services, plans must provide for a comprehensive medication review (CMR) once per year, usually conducted by a pharmacist. Per CMS guidance, the goal of the CMR is to "improve patients’ knowledge of their prescriptions, over-the-counter (OTC) medications, herbal therapies and dietary supplements, identify and address problems or concerns that patients may have, and empower patients to self manage their medications and their health conditions." The CMR is conducted in an interactive manner either in person or through telehealth. A pharmacist or other provider conducting a CMR will use information from various sources, such as the pharmacy fill records, the patient's pill bottles, a patient interview, and/or discussion with caregivers to identify potential improvements that can be made in the patient's therapy. The pharmacist will then make any appropriate recommendations to the patient's doctor, as well as document their findings in a format similar to a SOAP note. The patient must be provided a medication action plan with a list of their medications, directions, and any steps they need to take to improve their therapy (such as using reminders, organizing, or stopping old medications). Most comprehensive medication reviews result in pharmacist intervention to recommend changes to therapy to a doctor, and/or recommendations to the patient to improve adherence/efficacy of their medications.
Targeted medication review
A targeted medication review (TMR, also called a targeted intervention program or TIP) is a required service for eligible patients that focuses on a specific medication or disease state and is conducted once every three months. The goal of a TMR program is to improve adherence to medication and to identify and fix drug therapy problems common in chronic diseases, such as nonadherence, duplicate therapy, or sub-optimal therapy. The pharmacist or provider will contact the patient to ensure adherence, identify potential problems with the therapy, and make any appropriate recommendations to the prescriber. The provision of TMR services to patients with chronic diseases has been shown to decrease inpatient admissions by about 50 per 1000 patients.
United Kingdom
A medicine use review (MUR) is an advanced service offered by pharmacies in the United Kingdom. It is part of the current contract pharmacies hold with the National Health Service (NHS). An MUR is an opportunity for patients to discuss their medicines with a qualified pharmacist. An MUR is a free NHS service that is held in a private consultation room at a local pharmacy. It is not meant to replace the role of the general practitioner but rather to provide:
A review of all medicines to see if there is any overlapping or interactions
Give extra information on what medicines are for
Discuss side effects of medicines
Identify problems associated with medicines
Pharmacies in the United Kingdom are paid £28 for each Medicines Use Review undertaken, up to a maximum of 400 per pharmacy, per year. At least 70% of patients must be in one of the four target groups:
taking certain high risk medicines on the national list
recently discharged from hospital with changes to their prescribed medicine
with a respiratory condition such as asthma or chronic obstructive pulmonary disease
with cardiovascular disease or risk factors, who are prescribed four or more regular medicines.
The introduction of pharmacists into GP surgeries means that the practice pharmacists can do more to ensure that reviews are carried out where necessary.
Abuse of system
There have been concerns over abuse of the system, whereby multiple pharmacies are using the system to charge the £28 fee for each 10- to 15-minute MUR, and pressuring pharmacists to meet targets for the number carried out, with the review more of a tick-box exercise than a benefit for the patient. There have also been cases of falsification of figures.
Research
The effectiveness of a medication review program for elderly people who require multiple medications (polypharmacy) is not clear and more research is needed to understand how to optimize medications in elderly inpatients.
See also
Adherence (medicine)
References
Pharmacy in the United States
Pharmacy in the United Kingdom | Medication therapy management | [
"Chemistry"
] | 1,343 | [
"Drug safety"
] |
23,245,423 | https://en.wikipedia.org/wiki/Fahrenheit%20hydrometer | The Fahrenheit hydrometer is a device used to measure the density of a liquid. It was invented by Daniel Gabriel Fahrenheit (1686–1736), better known for his work in thermometry. The Nicholson hydrometer, after William Nicholson (1753-1815), is similar in design, but instead of a weighted bulb at the bottom there is a small container ("basket") into which a sample can be placed.
Operation
The Fahrenheit hydrometer is a constant-volume device that will float in water. In the figure shown here, the hydrometer is floating vertically in a cylinder containing a liquid. At the bottom of the hydrometer is a weighted bulb and at the top is a pan for small weights. To use the hydrometer, one first accurately determines its weight (W) while it is dry. Next, the device is placed in water, and a weight (w) sufficient to sink a marked point on the rod to the water-line is placed on the pan. At that point, the weight of water displaced by the instrument equals W + w. The hydrometer is then removed, wiped dry, and placed in the liquid whose density is to be determined. A weight (x) sufficient to sink the hydrometer to the same marked point is placed in the pan. The density (D) of the second liquid is then given by D = (W + x) / (W + w).
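A worked example of the formula, with illustrative (not measured) values:

```python
# Constant-volume hydrometer: D = (W + x) / (W + w)
W = 40.0  # dry weight of the hydrometer, g
w = 10.0  # pan weight that sinks it to the mark in water, g
x = 23.0  # pan weight that sinks it to the same mark in the test liquid, g

D = (W + x) / (W + w)  # density of the liquid relative to water
print(f"relative density: {D:.2f}")  # 1.26, i.e. denser than water
```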
The Fahrenheit hydrometer can be made of either glass or metal.
References
See also
Hydrometer
Measuring instruments | Fahrenheit hydrometer | [
"Technology",
"Engineering"
] | 322 | [
"Measuring instruments"
] |
23,251,097 | https://en.wikipedia.org/wiki/Sand%20rammer | A sand rammer is a piece of equipment used in foundry sand testing to make test specimens of molding sand by compacting bulk material with three free drops of a fixed weight from a fixed height. It is also used to determine the compactability of sands by using special specimen tubes and a linear scale.
Mechanism
A sand rammer consists of a calibrated sliding weight actuated by a cam, a shallow cup below the ram head to accommodate the specimen tube, a specimen stripper to strip the compacted specimen out of the specimen tube, and a specimen tube to prepare the standard specimen of 50 mm diameter by 50 mm height, or 2 inch diameter by 2 inch height for an AFS standard specimen.
Specimen preparation
The user rotates a handle, causing the cam to lift the weight and let it fall freely onto the frame attached to the ram head. This produces a standard compacting action on a pre-measured amount of sand.
A variety of standard specimens for green sand and silicate-based (CO2) sand are prepared using a sand rammer along with accessories.
The object of producing the standard cylindrical specimen is to have the specimen become 2 inches high (plus or minus 1/32 inch) with three rams of the machine. After the specimen has been prepared inside the specimen tube, the specimen can be used for various standard sand tests such as the permeability test, the green sand compression test, the shear test, or other standard foundry tests.
The sand rammer machine can be used to measure the compactability of prepared sand by filling the specimen tube with prepared sand so that it is level with the top of the tube. The tube is then placed in the shallow cup under the ram head and rammed three times. Compactability, as a percentage, is then calculated from the resultant height of the sand inside the specimen tube.
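A minimal sketch of that calculation, assuming the commonly used definition of compactability as the height reduction divided by the original fill height; the sample values are illustrative.

```python
h0 = 100.0  # height of loose sand level with the top of the specimen tube, mm
h = 58.0    # height of the sand column after three rams, mm

compactability_pct = (h0 - h) / h0 * 100.0
print(f"compactability: {compactability_pct:.0f}%")  # 42%
```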
A rammer is mounted on a base block on a solid foundation, which provides vibration damping to ensure consistent ramming.
Used for sand types
Green sand
Oil sand
CO2 sand
Raw sand i.e. base sand i.e. un-bonded sand.
Prerequisites
The prerequisite equipment for a sand rammer varies from case to case, depending on the testing scenario:
Case 1: If the prepared sand is ready
A tube filler accessory to fill the sample tube with sand. Its advantage is that it lets the sand fall in from a fixed distance and riddles it before filling.
Case 2: Experiment by preparing new sand sample
If sand needs to be prepared before making a specimen, the following equipment may be needed:
Laboratory sand muller or laboratory sand mixer (for core sands)
Case 3: For low compressive strength sands and mixtures:
Split specimen tube
References
Casting (manufacturing)
Metallurgical processes
Metalworking tools | Sand rammer | [
"Chemistry",
"Materials_science"
] | 551 | [
"Metallurgical processes",
"Metallurgy"
] |
23,251,268 | https://en.wikipedia.org/wiki/Cross-laminates | Cross-laminates are products that feature layers of material that are laid down at right angles to each other, in order to provide greater strength across a uniform surface. Cross-laminated timber (similar to plywood) is one example. There are also synthetic, flexible films with enhanced properties via mechanical manipulation of the film.
References
Materials | Cross-laminates | [
"Physics"
] | 69 | [
"Materials stubs",
"Materials",
"Matter"
] |
24,731,079 | https://en.wikipedia.org/wiki/Observer%20%28quantum%20physics%29 | Some interpretations of quantum mechanics posit a central role for an observer of a quantum phenomenon. The quantum mechanical observer is tied to the issue of observer effect, where a measurement necessarily requires interacting with the physical object being measured, affecting its properties through the interaction. The term "observable" has gained a technical meaning, denoting a Hermitian operator that represents a measurement.
Foundation
The theoretical foundation of the concept of measurement in quantum mechanics is a contentious issue deeply connected to the many interpretations of quantum mechanics. A key focus point is that of wave function collapse, for which several popular interpretations assert that measurement causes a discontinuous change into an eigenstate of the operator associated with the quantity that was measured, a change which is not time-reversible.
More explicitly, the superposition principle (ψ = Σₙ aₙψₙ) of quantum physics dictates that for a wave function ψ, a measurement will result in a state of the quantum system of one of the m possible eigenvalues fₙ, n = 1, 2, ..., m, of the operator F̂, which acts in the space of the eigenfunctions ψₙ.
Once one has measured the system, one knows its current state; and this prevents it from being in one of its other states — it has apparently decohered from them without prospects of future strong quantum interference. This means that the type of measurement one performs on the system affects the end-state of the system.
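The measurement statistics implied by this picture can be sketched numerically: writing the state in the eigenbasis of the measured operator, each eigenvalue fₙ occurs with probability |aₙ|², and the post-measurement state is the corresponding eigenfunction. The two-level amplitudes below are arbitrary illustrative choices.

```python
import numpy as np

amplitudes = np.array([0.6, 0.8j])    # a_n for a normalized two-level state
eigenvalues = np.array([+1.0, -1.0])  # f_n of the measured operator

probs = np.abs(amplitudes) ** 2       # Born rule: |a_n|^2 (sums to 1 here)
rng = np.random.default_rng(0)
outcomes = rng.choice(eigenvalues, size=10_000, p=probs)

# Repeated measurements on fresh copies of the state reproduce |a_n|^2;
# after each measurement the system is left in the observed eigenstate.
print(probs)                      # [0.36 0.64]
print((outcomes == +1.0).mean())  # ~0.36
```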
An experimentally studied situation related to this is the quantum Zeno effect, in which a quantum state would decay if left alone, but does not decay because of its continuous observation. The dynamics of a quantum system under continuous observation are described by a quantum stochastic master equation known as the Belavkin equation. Further studies have shown that even observing the results after the photon is produced leads to collapsing the wave function and loading a back-history as shown by delayed choice quantum eraser.
When discussing the wave function ψ which describes the state of a system in quantum mechanics, one should be cautious of a common misconception that assumes that the wave function ψ amounts to the same thing as the physical object it describes. This flawed concept must then require existence of an external mechanism, such as a measuring instrument, that lies outside the principles governing the time evolution of the wave function ψ, in order to account for the so-called "collapse of the wave function" after a measurement has been performed. But the wave function ψ is not a physical object like, for example, an atom, which has an observable mass, charge and spin, as well as internal degrees of freedom. Instead, ψ is an abstract mathematical function that contains all the statistical information that an observer can obtain from measurements of a given system. In this case, there is no real mystery in that this mathematical form of the wave function ψ must change abruptly after a measurement has been performed.
A consequence of Bell's theorem is that measurement on one of two entangled particles can appear to have a nonlocal effect on the other particle. Additional problems related to decoherence arise when the observer is modeled as a quantum system.
Description
The Copenhagen interpretation, which is the most widely accepted interpretation of quantum mechanics among physicists, posits that an "observer" or a "measurement" is merely a physical process. One of the founders of the Copenhagen interpretation, Werner Heisenberg, wrote:
Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature. The observer has, rather, only the function of registering decisions, i.e., processes in space and time, and it does not matter whether the observer is an apparatus or a human being; but the registration, i.e., the transition from the "possible" to the "actual," is absolutely necessary here and cannot be omitted from the interpretation of quantum theory.
Niels Bohr, also a founder of the Copenhagen interpretation, wrote:
all unambiguous information concerning atomic objects is derived from the permanent marks such as a spot on a photographic plate, caused by the impact of an electron left on the bodies which define the experimental conditions. Far from involving any special intricacy, the irreversible amplification effects on which the recording of the presence of atomic objects rests rather remind us of the essential irreversibility inherent in the very concept of observation. The description of atomic phenomena has in these respects a perfectly objective character, in the sense that no explicit reference is made to any individual observer and that therefore, with proper regard to relativistic exigencies, no ambiguity is involved in the communication of information.
Likewise, Asher Peres stated that "observers" in quantum physics are
similar to the ubiquitous "observers" who send and receive light signals in special relativity. Obviously, this terminology does not imply the actual presence of human beings. These fictitious physicists may as well be inanimate automata that can perform all the required tasks, if suitably programmed.
Critics of the special role of the observer also point out that observers can themselves be observed, leading to paradoxes such as that of Wigner's friend; and that it is not clear how much consciousness is required. As John Bell inquired, "Was the wave function waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer for some highly qualified measurer—with a PhD?"
Anthropocentric interpretation
The prominence of seemingly subjective or anthropocentric ideas like "observer" in the early development of the theory has been a continuing source of disquiet and philosophical dispute. A number of new-age religious or philosophical views give the observer a more special role, or place constraints on who or what can be an observer. There is no credible peer-reviewed research that backs such claims. As an example of such claims, Fritjof Capra declared, "The crucial feature of atomic physics is that the human observer is not only necessary to observe the properties of an object, but is necessary even to define these properties."
Confusion with uncertainty principle
The uncertainty principle has been frequently confused with the observer effect, evidently even by its originator, Werner Heisenberg. The uncertainty principle in its standard form describes how precisely it is possible to measure the position and momentum of a particle at the same time. If the precision in measuring one quantity is increased, the precision in measuring the other decreases.
An alternative version of the uncertainty principle, more in the spirit of an observer effect, fully accounts for the disturbance the observer has on a system and the error incurred, although this is not how the term "uncertainty principle" is most commonly used in practice.
See also
Observer effect (physics)
Quantum foundations
References
Concepts in physics
Quantum mechanics
Interpretations of quantum mechanics | Observer (quantum physics) | [
"Physics"
] | 1,373 | [
"Theoretical physics",
"Quantum mechanics",
"Quantum measurement",
"nan",
"Interpretations of quantum mechanics"
] |
24,733,255 | https://en.wikipedia.org/wiki/C56H44O13 | The molecular formula C56H44O13 (molar mass: 924.94 g/mol, exact mass: 924.278191 u) may refer to:
Carasinol B, a stilbenoid
Kobophenol A, a stilbenoid
Molecular formulas | C56H44O13 | [
"Physics",
"Chemistry"
] | 76 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,734,729 | https://en.wikipedia.org/wiki/Boilery | A boilery or boiling house is a place of boiling, much as a bakery is a place of baking. Boilery can also mean the process and equipment for boiling. Although they are now generally confined to factories, and usually boil industrial products rather than food, historically they were more common in daily life. Boileries are typically for boiling large quantities of fluid.
In the 17th to 19th centuries, boileries were used to convert sugarcane juice into raw sugar. These boileries were usually sturdy places, built from stone, and contained several copper kettles, each with a furnace beneath it. Sugarcane juice was treated with lime in large clarifying vats, before it was heated in copper kettles over individual furnaces. Due to their importance, many Western sugar plantations had their own boileries on site.
Soap would also be made in a boiling house.
Another use for a boilery is to make salt through the evaporation of brine water.
References
Secondary sector of the economy
Food industry
Salts
Sugar production
Food technology | Boilery | [
"Chemistry"
] | 206 | [
"Salts"
] |
24,737,709 | https://en.wikipedia.org/wiki/Coca-Cola%20Freestyle | Coca-Cola Freestyle is a touch screen soda fountain introduced by The Coca-Cola Company in 2009. The machine features 165 different Coca-Cola drink products, as well as custom flavors. The machine allows users to select from mixtures of flavors of Coca-Cola branded products which are then individually dispensed. The machines are currently located in major Coca-Cola partners and retail locations as a part of a gradual and ongoing deployment.
In 2014, Pepsi launched a competing, similar machine, the Pepsi Spire.
Design
The cabinetry was designed by the Italian automotive design firm Pininfarina, via their Pininfarina Extra industrial and product design subsidiary. The Freestyle's beverage dispensing technology was designed by Dean Kamen, the inventor of the Segway, in return for Coca-Cola distributing his Slingshot water purification system.
The technologies involved include microdispensing technology and proprietary PurePour technology. Both technologies were originally developed to deliver precise doses of drugs. One Freestyle unit with a similar footprint to a current vending machine can dispense 126 kinds of carbonated and non-carbonated beverages. Microdosing blends one or more concentrated ingredients stored in packets with water and sweetener at the point where the beverage is dispensed, thus avoiding the use of traditional boxes of syrup (also known as a bag-in-a-box). Cartridges store concentrated ingredients in the dispenser cabinet and are RFID enabled. The machine uses RFID chips to detect its supplies and to radio resupplying needs to other units.
History
Testing began in Utah, Southern California, and Georgia in July 2009 with 60+ locations around America planned by the end of that summer. Test locations around Coca-Cola's home city of Atlanta included the World of Coca-Cola, AMC Theatres Southlake Pavilion 24 and Parkway Point 15, and area food chains, including Willy's Mexicana Grill. Three machines are available in the Universal Studios Florida and Universal Islands of Adventure theme parks as well as the AMC movie theater at Disney Springs shopping complex in Lake Buena Vista, Florida and the World Waterpark in West Edmonton Mall in Edmonton, Alberta and Wild Adventures in Valdosta, Georgia.
Coca-Cola deployed the machines to 500 more locations in the United States in June 2010, followed by deployment to some universities in the United States. Deployment has continued in select locations of restaurant chains such as Wingstop, Zaxby's, Wawa, Taco Time Northwest, Togo's, Roy Rogers, Davanni's, PDQ, Dairy Queen, Fuddruckers, Five Guys, Kelly's Roast Beef, Firehouse Subs, Wendy's, Jack in the Box, Carl's Jr./Hardee's, Beef O'Brady's, Miami Grill, Hess Express, Subway, White Castle, Moe's Southwest Grill, and BurgerFi. Burger King announced in December 2011 that it planned to implement the Freestyle system in its 850 company-owned restaurants in the U.S. by early-to-mid 2012, and was encouraging its franchisees to follow suit.
Coca-Cola has installed Freestyle machines in Toronto in select Wendy's, Burger King, McDonald's, Hero Certified Burgers, and Nando's restaurants, as well as entertainment venues, such as Cineplex Entertainment cinemas and AMC Theatres.
In late June 2012, Coca-Cola started a limited trial in the UK (in association with Burger King UK), with the machine initially deployed in 16 locations around Greater London. They are also now in Five Guys UK branches and various other locations such as cinemas, including the AMC-owned Odeon. The selection of brands available from a UK Coke Freestyle machine is different from the USA's. Schweppes Lemonade and still versions of Fanta are brands that are available. Coca-Cola's retail drinks in the UK would usually include real sugar, unlike their US versions which use high-fructose corn syrup. However, since the Freestyle machine was designed to use syrup-based sweetener, the Freestyle version also uses fructose-glucose syrup (HFCS) in place of sugar in UK machines.
In late February 2015, the company updated their system to divide drinks into four different categories, including a full selection, fruit-flavored mixes, caffeine-free drinks and those with low- or zero-calorie formulations.
In March 2015, Freestyle machines were installed at Thorpe Park, Surrey, the first theme park in Europe to use Freestyle machines. This was later rolled out to further UK Merlin Entertainments theme parks in March 2020, including Legoland parks.
In 2018, Coca-Cola unveiled a new iteration of Freestyle (9100) that would begin deployment in 2019, with a 24-inch touchscreen, Bluetooth support to connect with a new, accompanying mobile app for consumers, and new hardware features intended for future capabilities and beverage options.
Products
Customers choose a base product, which they can supplement with additional flavoring. Diet and Zero products remain low or no calories even with flavorings added. The machines include flavors not previously available to the markets served by the machines, including Orange Coke, which was previously sold only in Russia and the Baltics (and briefly in the United Kingdom and Gibraltar). Customers may also download an app and create their own custom mixes with up to three different base products and three different flavor shots, which the fountain pours by scanning a QR code.
Flavors
Freestyle fountains located in Firehouse Subs locations offer the chain's signature Cherry Lime-Aid. Fountains at Sea World Orlando offer an exclusive vanilla-flavored freestyle flavor called "South Pole Chill", and those located in Moe's Southwest Grill locations offer an exclusive vanilla/peach-flavored freestyle flavor called "Vanilla at Peachtree". Moe's introduced a new flavor called Moe-Rita in May 2017 which "combines limeade, lemonade, orange and original flavors for a refreshing margarita-inspired sip". Machines in Wendy's restaurants have featured unique beverages such as a flavored cream soda named after founder Dave Thomas. In addition, the University of South Florida added a mix called “Rocky’s Refresher” to its on-campus freestyle locations in 2023. Fountains at Universal Parks & Resorts offer several Secret Menu flavors. On Royal Caribbean cruise ships, there is an exclusive Sprite flavor called "Royal Berry Blast", which was added in 2019 in honor of the cruise line's 50th anniversary.
After Vault was discontinued in 2011, it was replaced with Mello Yello.
Dr Pepper and Diet Dr Pepper will be served instead of Pibb Xtra and Pibb Zero in areas where Dr Pepper is distributed by Coca-Cola bottlers, or at locations with separate fountain contracts to serve Dr Pepper.
Orange Vanilla Coke is a newer flavor that has been seen in new Freestyle machines. This flavor also exists in Coke Zero Sugar form.
Surge is available in Cherry, Vanilla, Grape, and Zero Sugar varieties in Freestyle machines at select Burger King restaurants.
Locations
Jack in the Box
Wendy's
Firehouse Subs
Five Guys
White Castle
Noodles & Company
Burger King
Popeyes
Dairy Queen
AMC Theatres
Pizza Hut
See also
Pepsi Spire
References
External links
Coca-Cola Freestyle Models
Coca-Cola
2009 introductions
Vending machines
Commercial machines
Soft drinks | Coca-Cola Freestyle | [
"Physics",
"Technology",
"Engineering"
] | 1,487 | [
"Machines",
"Commercial machines",
"Vending machines",
"Automation",
"Physical systems"
] |
24,739,572 | https://en.wikipedia.org/wiki/Graduate%20Institute%20of%20Ferrous%20Technology | The Graduate Institute of Ferrous Technology (GIFT POSTECH) is an institute for graduate-level education and research in the field of iron and steel technology at Pohang University of Science and Technology, South Korea. It has nine specialized laboratories covering all aspects of metallurgy. However, the Institute now has a reduced focus on steels, having introduced laboratories on battery electronics.
History
In 1986, POSCO, one of the world's biggest steel production companies, initiated the founding of a science and technology university in the city of Pohang, about 200 miles southeast of Seoul, the capital city of Korea. Pohang University of Science and Technology (POSTECH) has since become one of the top research universities in Asia. GIFT was founded to provide an academic environment for education and research on ferrous materials.
Structure
The Graduate Institute of Ferrous Technology has nine laboratories with key areas of expertise:
Alternative Technology Lab:
Continuous casting-related innovation
Texture control
Alternative alloying and processing
Control and Automation Lab:
Computer control system
Process automation
Control theory & Applications
Measurement
Clean Steel Lab:
Thermochemistry
Physico-chemical properties
Fluid dynamics
Solidification and casting
Environmental Metallurgy Lab:
Reduction of CO2 emission
Improvement of energy efficiency
Gas alloying technology
Computational Metallurgy Lab:
Classical modeling and experiments
Phase field modeling and experiments
First principle calculation, quantum mechanical modeling
Microstructure Control Lab:
Phase transformation / electron microscopy
Microscopic deformation behavior
Toughness enhancement via microstructure control
Innovative processing (e.g., twin-roll casting)
Materials Design Lab:
Automotive Steels, Galvanized/Galvannealed Products
Electrical Steels
Stainless steels
Steel grades related to power generation
Materials Mechanics Lab:
Net Shape Forming (sheet forming, other forming)
Performance in service (fracture, crashworthiness, fatigue)
Surface Engineering Lab:
Composite coatings
Corrosion mechanism & lifetime prediction
Corrosion resistant alloy design
Metallic coatings
People
Among the faculty members who have worked at the Graduate Institute of Ferrous Technology, several professors are internationally distinguished:
Professor Sir Harshad Bhadeshia
Professor Frédéric Barlat
Professor Nack Joon KIM
Professor Bruno De Cooman
Prof. Yasushi Sasaki
Prof. Hae-Geon Lee
Prof. Chong Soo Lee
References
Educational institutions established in 1993
Research institutes in South Korea
Pohang University of Science and Technology
Metallurgical organizations
1993 establishments in South Korea | Graduate Institute of Ferrous Technology | [
"Chemistry",
"Materials_science",
"Engineering"
] | 486 | [
"Metallurgy",
"Metallurgical organizations"
] |
24,740,764 | https://en.wikipedia.org/wiki/Center-surround%20antagonism | Center-surround antagonism refers to antagonistic interactions between the center and surround regions of a neuron's receptive field, as found in retinal ganglion cells. Center-surround antagonism enables edge detection and contrast enhancement in the visual system.
References
Signal transduction | Center-surround antagonism | [
"Chemistry",
"Biology"
] | 55 | [
"Biochemistry",
"Neurochemistry",
"Signal transduction"
] |
28,005,582 | https://en.wikipedia.org/wiki/Bi-scalar%20tensor%20vector%20gravity | Bi-scalar tensor vector gravity theory (BSTV) is an extension of the tensor–vector–scalar gravity theory (TeVeS). TeVeS is a relativistic generalization of Mordehai Milgrom's Modified Newtonian Dynamics (MOND) paradigm, proposed by Jacob Bekenstein. BSTV was proposed by R. H. Sanders. BSTV makes TeVeS more flexible by turning a non-dynamical scalar field in TeVeS into a dynamical one.
References
Theories of gravity
Astrophysics | Bi-scalar tensor vector gravity | [
"Physics",
"Astronomy"
] | 108 | [
"Theoretical physics",
"Astronomy stubs",
"Astrophysics",
"Astrophysics stubs",
"Theories of gravity",
"Astronomical sub-disciplines"
] |
28,014,495 | https://en.wikipedia.org/wiki/Piezooptic%20effect | The piezooptic effect is manifest as a change in refractive index, n, of a material caused by a change in pressure on that material. Early demonstrations of the piezooptic effect were done on liquids. The effect has since been demonstrated in solid, crystalline materials.
References
Optics | Piezooptic effect | [
"Physics",
"Chemistry"
] | 59 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
" and optical physics"
] |
43,111,669 | https://en.wikipedia.org/wiki/Free-orbit%20experiment%20with%20laser%20interferometry%20X-rays | The Free-orbit Experiment with Laser Interferometry X-Rays (FELIX) belongs to a category of experiments exploring whether macroscopic systems can be in superposition states. It was originally proposed by the physicist Roger Penrose in his 2004 book, "The Road to Reality" specifically to prove whether unconventional decoherence processes such as gravitationally induced decoherence or spontaneous wave-function collapse of a quantum system occur.
The experiment was later revised to take place as a tabletop experiment; if successful, it is estimated that a mass of roughly 10¹⁴ atoms would be superposed, approximately nine orders of magnitude more massive than any superposition observed to that date (2003).
Configuration
The proposed experimental setup is basically a variation of the Michelson interferometer but for a single photon. Additionally, one of the mirrors has to be very tiny and fixed on an isolated micromechanical-oscillator. This allows it to move when the photon is reflected on it, so that it may become superposed with the photon. The purpose is to vary the size of the mirror to investigate the effect of the mass on the time it takes for the quantum system to collapse.
Originally the arms of the interferometer had to stretch into the hundreds of thousands of kilometers to achieve a photon roundtrip-time comparable to the oscillator's period, but that meant that the experiment had to take place in-orbit, reducing its viability. The revised proposal requires that the mirrors be placed into high-finesse optical cavities that will trap the photons long enough to achieve the desired delay.
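The scale of the original proposal follows from simple arithmetic: for the photon roundtrip time 2L/c to be comparable to the oscillator's period, the arm length L must be enormous. A back-of-the-envelope check, with the arm length an assumed illustrative value:

```python
c = 299_792_458.0  # speed of light, m/s
L = 3.0e8          # assumed arm length of 300,000 km ("hundreds of thousands of kilometers")

roundtrip = 2.0 * L / c
print(f"photon roundtrip time: {roundtrip:.1f} s")  # ~2.0 s, the implied oscillator period
```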
There are various technological challenges, but all are within high-end laboratory capabilities. The primary requirement is that the mass of the cavity remains as small as possible. To avoid noise on the interferometer and have a low probability of emitting more than one photon each time, a very low absolute temperature for the experiment is needed, on the order of 60 μK. For similar reasons, and to avoid decoherence, the experimental device has to be in ultra-high vacuum conditions. The wavelength of the photons was calculated to be roughly 630 nm so that the reflecting surfaces can be as small as possible while still avoiding diffraction and reflectivity issues. The micromechanical oscillator can be similar to the cantilevers used in atomic force microscopy, and the reflective surfaces typically used in similarly demanding experiments pose no real challenge. Various elaborate electromagnetic mechanisms have been proposed to "reset" the cavities to a stable state before each repetition of the experiment.
See also
Penrose interpretation
Objective collapse theory
References
Quantum mechanics | Free-orbit experiment with laser interferometry X-rays | [
"Physics"
] | 533 | [
"Theoretical physics",
"Quantum mechanics"
] |
43,114,042 | https://en.wikipedia.org/wiki/Media%20Technology%20and%20Society | Media Technology and Society: A History from the Telegraph to the Internet is a 1998 book by Brian Winston. The book's central thesis is that technology, rather than developing in relatively discontinuous revolutions, evolves as part of a larger evolutionary pattern. It was named 'Best Book of 1998' by the American Association for History and Computing.
Content
The book contains examples of ways in which technology, human behaviour and society are interconnected. Through historical accounts, Winston demonstrates how technology reinforces social trends, and how social conditions lead to specific inventions. It was written for the general public.
References
External links
WorldCat report
1998 non-fiction books
History books about technology
Technology books
Routledge books
Science and technology studies works | Media Technology and Society | [
"Technology"
] | 145 | [
"Science and technology studies works",
"Science and technology studies"
] |
21,754,732 | https://en.wikipedia.org/wiki/Tracy%E2%80%93Widom%20distribution | The Tracy–Widom distribution is a probability distribution from random matrix theory introduced by . It is the distribution of the normalized largest eigenvalue of a random Hermitian matrix. The distribution is defined as a Fredholm determinant.
In practical terms, Tracy–Widom is the crossover function between the two phases of weakly versus strongly coupled components in a system.
It also appears in the distribution of the length of the longest increasing subsequence of random permutations, as large-scale statistics in the Kardar-Parisi-Zhang equation, in current fluctuations of the asymmetric simple exclusion process (ASEP) with step initial condition, and in simplified mathematical models of the behavior of the longest common subsequence problem on random inputs. See and for experimental testing (and verifying) that the interface fluctuations of a growing droplet (or substrate) are described by the TW distribution (or ) as predicted by .
The distribution is of particular interest in multivariate statistics. For a discussion of the universality of , , see . For an application of to inferring population structure from genetic data see .
In 2017 it was proved that the distribution F is not infinitely divisible.
Definition as a law of large numbers
Let $F_\beta$ denote the cumulative distribution function of the Tracy–Widom distribution with given $\beta$. It can be defined as a law of large numbers, similar to the central limit theorem.
There are typically three Tracy–Widom distributions, $F_\beta$, with $\beta \in \{1, 2, 4\}$. They correspond to the three Gaussian ensembles: orthogonal ($\beta = 1$), unitary ($\beta = 2$), and symplectic ($\beta = 4$).
In general, consider a Gaussian ensemble with beta value $\beta$, with its diagonal entries having variance 1 and off-diagonal entries having variance $1/2$, and let $P_{n,\beta}(\lambda_{\max} \le t)$ be the probability that an $n \times n$ matrix sampled from the ensemble has maximal eigenvalue at most $t$; then define
$$F_\beta(s) = \lim_{n \to \infty} P_{n,\beta}\!\left( \lambda_{\max} \le \sqrt{2n} + \frac{s}{\sqrt{2}\, n^{1/6}} \right),$$
where $\lambda_{\max}$ denotes the largest eigenvalue of the random matrix. The shift by $\sqrt{2n}$ centers the distribution, since at the limit the eigenvalue distribution converges to the semicircular distribution with radius $\sqrt{2n}$. The multiplication by $\sqrt{2}\, n^{1/6}$ is used because the standard deviation of the largest eigenvalue scales as $n^{-1/6}$ (first derived in ).
For example:
$$F_2(s) = \lim_{n \to \infty} P\!\left( \sqrt{2}\, n^{1/6} \left( \lambda_{\max} - \sqrt{2n} \right) \le s \right),$$
where the matrix is sampled from the Gaussian unitary ensemble with off-diagonal variance $1/2$.
The definition of the Tracy–Widom distributions $F_\beta$ may be extended to all $\beta > 0$ (Slide 56 in , ).
One may naturally ask for the limit distribution of second-largest eigenvalues, third-largest eigenvalues, etc. They are known.
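This law of large numbers invites a direct numerical check: sample GUE matrices and histogram the rescaled largest eigenvalue. The sketch below is a rough illustration only; the matrix size and sample count are arbitrary, and it uses the alternative common normalization in which off-diagonal entries have unit variance, so the centering is $2\sqrt{n}$ and the scale is $n^{-1/6}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def gue_largest_eigenvalue(n, rng):
    """Largest eigenvalue of an n x n GUE matrix, normalized so that
    off-diagonal entries satisfy E|h_ij|^2 = 1 (semicircle radius 2*sqrt(n))."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    h = (a + a.conj().T) / 2  # Hermitian by construction
    return np.linalg.eigvalsh(h)[-1]  # eigvalsh returns eigenvalues in ascending order

n, samples = 200, 500
lam = np.array([gue_largest_eigenvalue(n, rng) for _ in range(samples)])
s = (lam - 2 * np.sqrt(n)) * n ** (1 / 6)  # rescaled values should approach F_2
# Tracy-Widom F_2 has mean about -1.77 and standard deviation about 0.90.
print(s.mean(), s.std())
```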
Functional forms
Fredholm determinant
$F_2(s)$ can be given as the Fredholm determinant
$$F_2(s) = \det\left( I - A_s \right)$$
of the integral operator $A_s$ with kernel $A$ (the "Airy kernel") acting on square integrable functions on the half line $(s, \infty)$, given in terms of Airy functions $\operatorname{Ai}$ by
$$A(x, y) = \frac{\operatorname{Ai}(x)\, \operatorname{Ai}'(y) - \operatorname{Ai}'(x)\, \operatorname{Ai}(y)}{x - y}.$$
Painlevé transcendents
$F_2(s)$ can also be given as an integral
$$F_2(s) = \exp\left( -\int_s^\infty (x - s)\, q(x)^2 \, dx \right)$$
in terms of a solution $q$ of a Painlevé equation of type II,
$$q'' = s q + 2 q^3,$$
with boundary condition $q(s) \sim \operatorname{Ai}(s)$ as $s \to \infty$. This function $q$ is a Painlevé transcendent.
Other distributions are also expressible in terms of the same $q$:
$$F_1(s) = F_2(s)^{1/2} \exp\left( -\tfrac{1}{2} \int_s^\infty q(x)\, dx \right), \qquad F_4\!\left( \tfrac{s}{\sqrt{2}} \right) = F_2(s)^{1/2} \cosh\left( \tfrac{1}{2} \int_s^\infty q(x)\, dx \right).$$
Functional equations
Define then
Occurrences
Other than in random matrix theory, the Tracy–Widom distributions occur in many other probability problems.
Let $\ell_n$ be the length of the longest increasing subsequence in a random permutation sampled uniformly from $S_n$, the permutation group on $n$ elements. Then the cumulative distribution function of $\frac{\ell_n - 2\sqrt{n}}{n^{1/6}}$ converges to $F_2$.
Asymptotics
Probability density function
Let $f_\beta$ be the probability density function for the $F_\beta$ distribution. The distribution is severely skewed to the right: it is much more likely for the normalized largest eigenvalue to be much larger than its typical value than to be much smaller. This could be intuited by seeing that the limit distribution is the semicircle law, so there is "repulsion" from the bulk of the distribution, forcing the largest eigenvalue to be not much smaller than the semicircle edge.
At the limit, a more precise expression is available (equation 49 of ), involving a positive number that depends on $\beta$.
Cumulative distribution function
At the limit, and at the limit, where is the Riemann zeta function, and .
This allows derivation of behavior of . For example,
Painlevé transcendent
The Painlevé transcendent has an asymptotic expansion at (equation 4.1 of ). This is necessary for numerical computations, as the solution is unstable: any deviation from it tends to drop it to the branch instead.
Numerics
Numerical techniques for obtaining numerical solutions to the Painlevé equations of the types II and V, and numerically evaluating eigenvalue distributions of random matrices in the beta-ensembles were first presented by using MATLAB. These approximation techniques were further analytically justified in and used to provide numerical evaluation of Painlevé II and Tracy–Widom distributions (for ) in S-PLUS. These distributions have been tabulated in to four significant digits for values of the argument in increments of 0.01; a statistical table for p-values was also given in this work. gave accurate and fast algorithms for the numerical evaluation of and the density functions
for . These algorithms can be used to compute numerically the mean, variance, skewness and excess kurtosis of the distributions .
Functions for working with the Tracy–Widom laws are also presented in the R package 'RMTstat' by and MATLAB package 'RMLab' by .
For a simple approximation based on a shifted gamma distribution see .
developed a spectral algorithm for the eigendecomposition of the integral operator , which can be used to rapidly evaluate Tracy–Widom distributions, or, more generally, the distributions of the th largest level at the soft edge scaling limit of Gaussian ensembles, to machine accuracy.
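The Fredholm-determinant representation given earlier also lends itself to a compact numerical sketch in the spirit of these quadrature-based methods: discretize the Airy kernel on $(s, \infty)$ with Gauss–Legendre nodes and evaluate a finite determinant. The change of variables, node count, and quoted check value below are illustrative choices, not taken from the cited works:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import airy

def tracy_widom_f2(s, m=40):
    """Approximate F_2(s) as a Fredholm determinant of the Airy kernel
    on (s, infinity), using an m-point Gauss-Legendre Nystrom scheme."""
    t, w = leggauss(m)                      # nodes and weights on (-1, 1)
    x = s + 10.0 * (1 + t) / (1 - t)        # rational map from (-1, 1) to (s, inf)
    dx = 20.0 / (1 - t) ** 2                # Jacobian of the map
    ai, aip, _, _ = airy(x)                 # Ai(x) and Ai'(x)
    num = np.outer(ai, aip) - np.outer(aip, ai)
    with np.errstate(divide="ignore", invalid="ignore"):
        k = num / (x[:, None] - x[None, :])
    np.fill_diagonal(k, aip**2 - x * ai**2)  # diagonal limit of the kernel
    sw = np.sqrt(w * dx)
    return np.linalg.det(np.eye(m) - sw[:, None] * k * sw[None, :])

# F_2 is a CDF: it should rise monotonically from 0 toward 1.
print(tracy_widom_f2(-3.0), tracy_widom_f2(0.0))  # F_2(0) is roughly 0.97
```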
Tracy–Widom and KPZ universality
The Tracy–Widom distribution appears as a limit distribution in the universality class of the KPZ equation. For example, it appears under appropriate scaling of the one-dimensional KPZ equation at fixed time.
See also
Wigner semicircle distribution
Marchenko–Pastur distribution
Footnotes
References
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
Further reading
.
.
.
.
External links
.
.
.
At the Far Ends of a New Universal Law, Quanta Magazine
Continuous distributions
Random matrices
Special functions | Tracy–Widom distribution | [
"Physics",
"Mathematics"
] | 1,263 | [
"Random matrices",
"Special functions",
"Mathematical objects",
"Combinatorics",
"Matrices (mathematics)",
"Statistical mechanics"
] |
21,756,816 | https://en.wikipedia.org/wiki/Earthlight%20%28astronomy%29 | Earthlight is the diffuse reflection of sunlight from Earth's surface and clouds. Earthshine (an example of planetshine), also known as the Moon's ashen glow, is the dim illumination of the otherwise unilluminated portion of the Moon by this indirect sunlight. Earthlight on the Moon during the waxing crescent is called "the old Moon in the new Moon's arms", while that during the waning crescent is called "the new Moon in the old Moon's arms".
Visibility
Earthlight has a calculated maximum apparent magnitude of −17.7 as viewed from the Moon. When the Earth is at maximum phase, the total radiance at the lunar surface from Earthlight is approximately . This is only 0.01% of the radiance from direct sunlight. Earthshine has a calculated maximum apparent magnitude of −3.69 as viewed from Earth.
This phenomenon is most visible from Earth at night (or astronomical twilight) a few days before or after the day of new moon, when the lunar phase is a thin crescent. On these nights, the entire lunar disk is both directly and indirectly sunlit, and is thus unevenly bright enough to see. Earthshine is most clearly seen after dusk during the waxing crescent (in the western sky) and before dawn during the waning crescent (in the eastern sky).
The term earthlight would also be suitable for an observer on the Moon seeing Earth during the lunar night, or for an astronaut inside a spacecraft looking out the window. Arthur C. Clarke uses it in this sense in his 1955 novel Earthlight.
High-contrast photography is also able to reveal the night side of the Moon illuminated by Earthlight during a solar eclipse.
Radio frequency transmissions are also reflected by the Moon; for example, see Earth–Moon–Earth communication.
History
The phenomenon was sketched and remarked upon in the 16th century by Leonardo da Vinci, who thought that the illumination came from reflections from the Earth's oceans (we now know that clouds account for much more reflected intensity than the oceans).
It is referenced in "The Ballad of Sir Patrick Spens" (Child Ballad No. 58), in the phrase "‘A saw the new muin late yestreen/ Wi the auld muin in her airm."
Astronaut Dr. Sian Proctor was moved by seeing and experiencing earthlight from orbit as mission pilot of the Inspiration4 space mission and wrote the poem "Earthlight". In 2024, Proctor authored EarthLight: The Power of EarthLight and the Human Perspective on the concept and nature of earthlight.
See also
List of light sources
Starlight
Moonlight
Sunlight
Ashen light
References
External links
Lunar observation
Earth phenomena
Light sources | Earthlight (astronomy) | [
"Physics"
] | 555 | [
"Physical phenomena",
"Earth phenomena"
] |
21,757,046 | https://en.wikipedia.org/wiki/Hydrodynamic%20stability | In fluid dynamics, hydrodynamic stability is the field which analyses the stability and the onset of instability of fluid flows. The study of hydrodynamic stability aims to find out whether a given flow is stable or unstable, and if unstable, how these instabilities will cause the development of turbulence. The foundations of hydrodynamic stability, both theoretical and experimental, were laid most notably by Helmholtz, Kelvin, Rayleigh and Reynolds during the nineteenth century. These foundations have given many useful tools to study hydrodynamic stability, including the Reynolds number, the Euler equations, and the Navier–Stokes equations. When studying flow stability it is useful to understand simpler systems first, e.g. incompressible and inviscid fluids, which can then serve as stepping stones to more complex flows. Since the 1980s, computational methods have increasingly been used to model and analyse the more complex flows.
Stable and unstable flows
To distinguish between the different states of fluid flow one must consider how the fluid reacts to a disturbance in the initial state. These disturbances will relate to the initial properties of the system, such as velocity, pressure, and density. James Clerk Maxwell expressed the qualitative concept of stable and unstable flow nicely when he said: "when an infinitely small variation of the present state will alter only by an infinitely small quantity the state at some future time, the condition of the system, whether at rest or in motion, is said to be stable but when an infinitely small variation in the present state may bring about a finite difference in the state of the system in a finite time, the system is said to be unstable."
That means that for a stable flow, any infinitely small variation, which is considered a disturbance, will not have any noticeable effect on the initial state of the system and will eventually die down in time. For a fluid flow to be considered stable it must be stable with respect to every possible disturbance. This implies that there exists no mode of disturbance for which it is unstable.
On the other hand, for an unstable flow, any variations will have some noticeable effect on the state of the system which would then cause the disturbance to grow in amplitude in such a way that the system progressively departs from the initial state and never returns to it. This means that there is at least one mode of disturbance with respect to which the flow is unstable, and the disturbance will therefore distort the existing force equilibrium.
Determining flow stability
Reynolds number
A key tool used to determine the stability of a flow is the Reynolds number (Re), first put forward by George Gabriel Stokes at the start of the 1850s. Associated with Osborne Reynolds, who further developed the idea in the early 1880s, this dimensionless number gives the ratio of inertial terms to viscous terms. In a physical sense, this number is a ratio of the forces which are due to the momentum of the fluid (inertial terms) and the forces which arise from the relative motion of the different layers of a flowing fluid (viscous terms). The equation for this is
$$\mathrm{Re} = \frac{\rho U L}{\mu} = \frac{U L}{\nu},$$
where $\rho$ is the density of the fluid, $U$ is a characteristic flow velocity, $L$ is a characteristic length scale, $\mu$ is the dynamic viscosity, and $\nu = \mu / \rho$ is the kinematic viscosity.
The Reynolds number is useful because it can provide cut-off points for when flow is stable or unstable, namely the critical Reynolds number $\mathrm{Re}_c$. As the Reynolds number increases, the amplitude of a disturbance which could then lead to instability gets smaller. At high Reynolds numbers it is agreed that fluid flows will be unstable. A high Reynolds number can be achieved in several ways, e.g. if $\nu$ is small or if $U$ and $L$ are large. This means that instabilities will arise almost immediately and the flow will become unstable or turbulent.
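As a quick worked example of the formula (the fluid properties and the frequently quoted critical value for pipe flow below are illustrative numbers, not taken from this article):

```python
def reynolds_number(velocity, length, kinematic_viscosity):
    """Re = U * L / nu -- ratio of inertial to viscous effects (SI units)."""
    return velocity * length / kinematic_viscosity

# Water (nu ~ 1e-6 m^2/s) flowing at 1 m/s through a 0.05 m diameter pipe:
re = reynolds_number(1.0, 0.05, 1e-6)
print(re)  # 50000 -- far above the ~2300 often quoted as critical for pipe flow
```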
Navier–Stokes equation and the continuity equation
In order to analytically find the stability of fluid flows, it is useful to note that hydrodynamic stability has a lot in common with stability in other fields, such as magnetohydrodynamics, plasma physics and elasticity; although the physics is different in each case, the mathematics and the techniques used are similar. The essential problem is modeled by nonlinear partial differential equations and the stability of known steady and unsteady solutions is examined. The governing equations for almost all hydrodynamic stability problems are the Navier–Stokes equation and the continuity equation. The Navier–Stokes equation is given by:
$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\frac{\nabla p}{\rho} + \mathbf{f} + \nu \nabla^2 \mathbf{u},$$
where
$\mathbf{u}$ is the velocity field of the fluid,
$p$ is the pressure of the fluid,
$\rho$ is the density of the fluid,
$\mathbf{f}$ is the body force (per unit mass) acting on the fluid, e.g. gravity,
$\nu$ is the kinematic viscosity,
$\partial \mathbf{u} / \partial t$ is the partial derivative of the velocity field with respect to time,
$\nabla$ is the gradient operator.
Here $\nabla$ is being used as an operator acting on the velocity field on the left-hand side of the equation, and then acting on the pressure on the right-hand side.
and the continuity equation is given by:
$$\frac{D\rho}{Dt} + \rho\, \nabla \cdot \mathbf{u} = 0,$$
where $D\rho/Dt$ is the material derivative of the density.
Once again $\nabla$ is being used as an operator, here acting on $\mathbf{u}$ to calculate the divergence of the velocity.
But if the fluid being considered is incompressible, which means the density is constant, then $D\rho/Dt = 0$ and hence:
$$\nabla \cdot \mathbf{u} = 0.$$
The assumption that a flow is incompressible is a good one and applies to most fluids travelling at most speeds. It is assumptions of this form that will help to simplify the Navier–Stokes equation into differential equations, like Euler's equation, which are easier to work with.
Euler's equation
If one considers a flow which is inviscid, i.e. where the viscous forces are small and can therefore be neglected in the calculations, then one arrives at Euler's equations:
$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\frac{\nabla p}{\rho} + \mathbf{f}.$$
Although in this case we have assumed an inviscid fluid this assumption does not hold for flows where there is a boundary. The presence of a boundary causes some viscosity at the boundary layer which cannot be neglected and one arrives back at the Navier–Stokes equation. Finding the solutions to these governing equations under different circumstances and determining their stability is the fundamental principle in determining the stability of the fluid flow itself.
Linear stability analysis
To determine whether the flow is stable or unstable, one often employs the method of linear stability analysis. In this type of analysis, the governing equations and boundary conditions are linearized. This is based on the fact that the concept of 'stable' or 'unstable' is based on an infinitely small disturbance. For such disturbances, it is reasonable to assume that disturbances of different wavelengths evolve independently. (A nonlinear governing equation will allow disturbances of different wavelengths to interact with each other.)
Analysing flow stability
Bifurcation theory
Bifurcation theory is a useful way to study the stability of a given flow through the changes that occur in the structure of a given system. Hydrodynamic stability problems are described by differential equations and their solutions. A bifurcation occurs when a small change in the parameters of the system causes a qualitative change in its behavior. The parameter being changed in the case of hydrodynamic stability is the Reynolds number. It can be shown that the occurrence of bifurcations falls in line with the occurrence of instabilities.
Laboratory and computational experiments
Laboratory experiments are a very useful way of gaining information about a given flow without having to use more complex mathematical techniques. Sometimes physically seeing the change in the flow over time is just as useful as a numerical approach and any findings from these experiments can be related back to the underlying theory. Experimental analysis is also useful because it allows one to vary the governing parameters very easily and their effects will be visible.
When dealing with more complicated mathematical theories such as Bifurcation theory and Weakly nonlinear theory, numerically solving such problems becomes very difficult and time-consuming but with the help of computers this process becomes much easier and quicker. Since the 1980s computational analysis has become more and more useful, the improvement of algorithms which can solve the governing equations, such as the Navier–Stokes equation, means that they can be integrated more accurately for various types of flow.
Applications
Kelvin–Helmholtz instability
The Kelvin–Helmholtz instability (KHI) is an application of hydrodynamic stability that can be seen in nature. It occurs when there are two fluids flowing at different velocities. The difference in velocity of the fluids causes a shear velocity at the interface of the two layers. The shear velocity of one fluid moving induces a shear stress on the other which, if greater than the restraining surface tension, then results in an instability along the interface between them. This motion causes the appearance of a series of overturning ocean waves, a characteristic of the Kelvin–Helmholtz instability. Indeed, the apparent ocean wave-like nature is an example of vortex formation, which are formed when a fluid is rotating about some axis, and is often associated with this phenomenon.
The Kelvin–Helmholtz instability can be seen in the bands of planetary atmospheres such as those of Saturn and Jupiter, for example in the Great Red Spot vortex. The atmosphere surrounding the Great Red Spot contains the largest known example of KHI, caused by the shear force at the interface between the different layers of Jupiter's atmosphere. Many images have been captured in which the ocean-wave-like characteristics discussed earlier are clearly visible, with as many as four shear layers discernible.
Weather satellites take advantage of this instability to measure wind speeds over large bodies of water. Waves are generated by the wind, which shears the water at the interface between it and the surrounding air. The computers on board the satellites determine the roughness of the ocean by measuring the wave height. This is done by using radar, where a radio signal is transmitted to the surface and the delay from the reflected signal is recorded, known as the "time of flight". From this meteorologists are able to understand the movement of clouds and the expected air turbulence near them.
Rayleigh–Taylor instability
The Rayleigh–Taylor instability is another application of hydrodynamic stability and also occurs between two fluids but this time the densities of the fluids are different. Due to the difference in densities, the two fluids will try to reduce their combined potential energy. The less dense fluid will do this by trying to force its way upwards, and the more dense fluid will try to force its way downwards. Therefore, there are two possibilities: if the lighter fluid is on top the interface is said to be stable, but if the heavier fluid is on top, then the equilibrium of the system is unstable to any disturbances of the interface. If this is the case then both fluids will begin to mix. Once a small amount of heavier fluid is displaced downwards with an equal volume of lighter fluid upwards, the potential energy is now lower than the initial state, therefore the disturbance will grow and lead to the turbulent flow associated with Rayleigh–Taylor instabilities.
This phenomenon can be seen in interstellar gas, such as the Crab Nebula. It is pushed out of the Galactic plane by magnetic fields and cosmic rays and then becomes Rayleigh–Taylor unstable if it is pushed past its normal scale height. This instability also explains the mushroom cloud which forms in processes such as volcanic eruptions and atomic bombs.
Rayleigh–Taylor instability has a big effect on the Earth's climate. Winds that come from the coast of Greenland and Iceland cause evaporation of the ocean surface over which they pass, increasing the salinity of the ocean water near the surface, and making the water near the surface denser. This then generates plumes which drive the ocean currents. This process acts as a heat pump, transporting warm equatorial water North. Without the ocean overturning, Northern Europe would likely face drastic drops in temperature.
Diffusiophoretic convective instability
The presence of colloid particles (typically with size in the range between 1 nanometer and 1 micron), uniformly dispersed in a binary liquid mixture, is able to drive a convective hydrodynamic instability even though the system is initially in a condition of stable gravitational equilibrium (hence opposite to the Rayleigh–Taylor instability discussed above).
If a liquid contains a heavier molecular solute the concentration of which diminishes with the height, the system is gravitationally stable. Indeed, if a portion of fluid moves upwards due to a spontaneous fluctuation, it will end up being surrounded by less dense fluid and hence will be pushed back downwards. This mechanism thus inhibits convective motions. It has been shown, however, that this mechanism breaks down if the binary mixture contains uniformly dispersed colloidal particles. In that case, convective motions arise even if the system is gravitationally stable.
The key phenomenon for understanding this instability is diffusiophoresis: in order to minimize the interfacial energy between the colloidal particles and the liquid solution, the gradient of the molecular solute drives an internal migration of colloids which brings them upwards, thus depleting them at the bottom. In other words, since the colloids are slightly denser than the liquid mixture, this leads to a local increase of density with height. This instability, even in the absence of a thermal gradient, causes convective motions similar to those observed when a liquid is heated from the bottom (known as Rayleigh–Bénard convection), where the upward migration is due to thermal dilation, and leads to pattern formation.
This instability explains how animals get their intricate and distinctive patterns such as colorful stripes of tropical fish.
See also
List of hydrodynamic instabilities
Laminar–turbulent transition
Plasma stability
Squire's theorem
Taylor–Couette flow
Notes
References
External links
Fluid dynamics | Hydrodynamic stability | [
"Chemistry",
"Engineering"
] | 2,732 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
21,761,927 | https://en.wikipedia.org/wiki/Carter%20constant | The Carter constant is a conserved quantity for motion around black holes in the general relativistic formulation of gravity. Its SI base units are kg²⋅m⁴⋅s⁻². Carter's constant was derived for a spinning, charged black hole by Australian theoretical physicist Brandon Carter in 1968. Carter's constant along with the energy $E$, axial angular momentum $L_z$, and particle rest mass $m$ provide the four conserved quantities necessary to uniquely determine all orbits in the Kerr–Newman spacetime (even those of charged particles).
Formulation
Carter noticed that the Hamiltonian for motion in Kerr spacetime was separable in Boyer–Lindquist coordinates, allowing the constants of such motion to be easily identified using Hamilton–Jacobi theory. The Carter constant can be written as follows:
$$C = p_\theta^2 + \cos^2\theta \left( a^2 \left( m^2 - E^2 \right) + \frac{L_z^2}{\sin^2\theta} \right),$$
where $p_\theta$ is the latitudinal component of the particle's angular momentum, $E$ is the conserved energy of the particle, $L_z$ is the particle's conserved axial angular momentum, $m$ is the rest mass of the particle, and $a$ is the spin parameter of the black hole. Note that here $p_\theta$ denotes a covariant component of the four-momentum in Boyer–Lindquist coordinates, which may be calculated from the particle's position parameterized by the particle's proper time $\tau$ using its four-velocity $u^\mu = dx^\mu / d\tau$ as $p_\mu = g_{\mu\nu} u^\nu$, where $p_\mu$ is the four-momentum and $g_{\mu\nu}$ is the Kerr metric. Thus, the conserved energy constant $E = -p_t$ and angular momentum constant $L_z = p_\phi$ are not to be confused with the energy measured by an observer and the angular momentum $L$. The angular momentum component along the black hole's spin axis is $L_z$, which coincides with $p_\phi$.
Because functions of conserved quantities are also conserved, any function of $C$ and the three other constants of the motion can be used as a fourth constant in place of $C$. This results in some confusion as to the form of Carter's constant. For example, it is sometimes more convenient to use
$$K = C + (L_z - a E)^2$$
in place of $C$. The quantity $K$ is useful because it is always non-negative. In general any fourth conserved quantity for motion in the Kerr family of spacetimes may be referred to as "Carter's constant". In the $a \to 0$ limit, $C \to L^2 - L_z^2$ and $K \to L^2$, where $L$ is the norm of the angular momentum vector, see Schwarzschild limit below.
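A direct transcription of these formulas into code may make the structure concrete; this is a sketch in geometrized units ($G = c = 1$), the function names are invented for illustration, and it is only as reliable as the reconstructed formulas it implements:

```python
import math

def carter_constant(p_theta, energy, l_z, mass, spin, theta):
    """C = p_theta^2 + cos^2(theta) * (a^2 (m^2 - E^2) + L_z^2 / sin^2(theta)),
    in geometrized units (G = c = 1)."""
    cos2 = math.cos(theta) ** 2
    sin2 = math.sin(theta) ** 2
    return p_theta**2 + cos2 * (spin**2 * (mass**2 - energy**2) + l_z**2 / sin2)

def carter_k(p_theta, energy, l_z, mass, spin, theta):
    """The always non-negative variant K = C + (L_z - a*E)^2."""
    c = carter_constant(p_theta, energy, l_z, mass, spin, theta)
    return c + (l_z - spin * energy) ** 2

# In the a -> 0 (Schwarzschild) limit, C reduces to L^2 - L_z^2 and K to L^2.
print(carter_constant(0.5, 0.95, 0.2, 1.0, 0.0, math.pi / 3))
```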
As generated by a Killing tensor
Noether's theorem states that each conserved quantity of a system generates a continuous symmetry of that system. Carter's constant is related to a higher-order symmetry of the Kerr metric generated by a second-order Killing tensor field $K_{\mu\nu}$ (different from the quantity $K$ used above). In component form:
$$C = K_{\mu\nu}\, u^\mu u^\nu,$$
where is the four-velocity of the particle in motion. The components of the Killing tensor in Boyer–Lindquist coordinates are:
,
where are the components of the metric tensor and and are the components of the principal null vectors:
with
.
The parentheses in $l_{(\mu} n_{\nu)}$ are notation for symmetrization:
$$l_{(\mu} n_{\nu)} = \tfrac{1}{2} \left( l_\mu n_\nu + l_\nu n_\mu \right).$$
Schwarzschild limit
The spherical symmetry of the Schwarzschild metric for non-spinning black holes allows one to reduce the problem of finding the trajectories of particles to three dimensions. In this case one only needs $E$, $L_z$, and $m$ to determine the motion; however, the symmetry leading to Carter's constant still exists. Carter's constant for Schwarzschild space is
$$C = p_\theta^2 + \frac{p_\phi^2}{\tan^2\theta} = L^2 - L_z^2.$$
To see how this is related to the angular momentum two-form in spherical coordinates where and , where and and where and similarly for , we have
.
Since and represent an orthonormal basis, the Hodge dual of in an orthonormal basis is
consistent with although here and are with respect to proper time. Its norm is
.
Further since and , upon substitution we get
.
In the Schwarzschild case, all components of the angular momentum vector are conserved, so both
$L^2$ and $L_z$ are conserved, hence $C = L^2 - L_z^2$ is clearly conserved. For Kerr, $L_z$ is conserved but $L_x$ and $L_y$ are not; nevertheless $C$ is conserved.
The other form of Carter's constant is
$$K = C + L_z^2 = L^2,$$
since here $a = 0$. This is also clearly conserved. In the Schwarzschild case both $C = 0$ and $L_z = 0$ correspond to radial orbits, while $C = 0$ with $L_z \neq 0$ corresponds to orbits confined to the equatorial plane of the coordinate system, i.e. $\theta = \pi/2$ for all times.
See also
Kerr metric
Kerr–Newman metric
Boyer–Lindquist coordinates
Hamilton–Jacobi equation
Euler's three-body problem
References
Black holes
Conservation laws | Carter constant | [
"Physics",
"Astronomy"
] | 845 | [
"Physical phenomena",
"Black holes",
"Physical quantities",
"Equations of physics",
"Conservation laws",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Stellar phenomena",
"Astronomical objects",
"Symmetry",
"Physics theorems"
] |
40,181,992 | https://en.wikipedia.org/wiki/DIN%20EN%20ISO%209712 | DIN EN ISO 9712:2012 is a certification standard issued by the German institute for standardization (Deutsches Institut für Normung). It certifies personnel working in non-destructive testing. This standard evaluates and documents the competence of personnel whose tasks require knowledge of non-destructive tests. The certification process is performed by authorized independent certification bodies, such as Sector Cert, DQS, TÜV, DEKRA etc. These bodies can apply for accreditation at the German accreditation body (Deutsche Akkreditierungsstelle GmbH, DAkkS).
Certification process
To be accepted for the certification exam, applicants have to:
attend a training course
provide a proof of good eyesight
fulfill the required practical experience in non-destructive testing
After the exam has been successfully passed and the full industry experience can be proved, the certificate itself can be applied for. The certification stays valid for five years and then has to be renewed. While renewal only requires a new application form, recertification after ten years involves a further examination.
Replacement of DIN EN 473 and DIN EN ISO 9712
DIN EN ISO 9712:2012 was introduced in January 2013. DIN EN 473, as well as the former version of DIN EN ISO 9712 (DIN EN ISO 9712:2005), were replaced. Despite many similarities, the new version of DIN EN ISO 9712 contains numerous alterations:
For example, the updated standard includes new testing methods, such as infrared thermography or testing with strain gauges.
The required practical experience in non-destructive testing before certification has also been introduced with the new ISO 9712.
Furthermore, written electronic tests can be taken online now.
References
Nondestructive testing
9712 EN ISO
09712
Product certification | DIN EN ISO 9712 | [
"Materials_science"
] | 351 | [
"Nondestructive testing",
"Materials testing"
] |
40,183,726 | https://en.wikipedia.org/wiki/C24H38O3 |
The molecular formula C24H38O3 (molar mass: 374.557 g/mol) may refer to:
Androstanolone valerate, or dihydrotestosterone pentanoate
Canbisol
Molecular formulas | C24H38O3 | [
"Physics",
"Chemistry"
] | 69 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
40,184,186 | https://en.wikipedia.org/wiki/C22H29N3O2 |
The molecular formula C22H29N3O2 (molar mass: 367.48 g/mol, exact mass: 367.2260 u) may refer to:
O-1238
18-Methylaminocoronaridine (18-MAC)
Molecular formulas | C22H29N3O2 | [
"Physics",
"Chemistry"
] | 77 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
44,841,714 | https://en.wikipedia.org/wiki/Time-dependent%20variational%20Monte%20Carlo | The time-dependent variational Monte Carlo (t-VMC) method is a quantum Monte Carlo approach to study the dynamics of closed, non-relativistic quantum systems in the context of the quantum many-body problem. It is an extension of the variational Monte Carlo method, in which a time-dependent pure quantum state is encoded by some variational wave function, generally parametrized as
$$\Psi(x, t) = \exp\left( \sum_k a_k(t)\, O_k(x) \right),$$
where the complex-valued $a_k(t)$ are time-dependent variational parameters, $x$ denotes a many-body configuration and $O_k(x)$ are time-independent operators that define the specific ansatz. The time evolution of the parameters $a_k(t)$ can be found upon imposing a variational principle to the wave function. In particular, one can show that the optimal parameters for the evolution satisfy at each time the equation of motion
$$i \sum_{k'} \langle O_k O_{k'} \rangle_t^{c}\, \dot{a}_{k'}(t) = \langle O_k \mathcal{H} \rangle_t^{c},$$
where $\mathcal{H}$ is the Hamiltonian of the system, $\langle A B \rangle_t^{c} = \langle A B \rangle_t - \langle A \rangle_t \langle B \rangle_t$ are connected averages, and the quantum expectation values are taken over the time-dependent variational wave function, i.e., $\langle \cdots \rangle_t = \frac{\langle \Psi(t) | \cdots | \Psi(t) \rangle}{\langle \Psi(t) | \Psi(t) \rangle}$.
In analogy with the Variational Monte Carlo approach and following the Monte Carlo method for evaluating integrals, we can interpret
$$p(x, t) = \frac{\left| \Psi(x, t) \right|^2}{\int \left| \Psi(x, t) \right|^2 \, dx}$$
as a probability distribution function over the multi-dimensional space spanned by the many-body configurations $x$. The Metropolis–Hastings algorithm is then used to sample exactly from this probability distribution and, at each time $t$, the quantities entering the equation of motion are evaluated as statistical averages over the sampled configurations. The trajectories of the variational parameters $a_k(t)$ are then found upon numerical integration of the associated differential equation.
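The sampling step can be illustrated on a toy single-parameter ansatz; everything below (the ansatz, the operator $O(x) = -x^2$, the step size) is an invented minimal example, not the general many-body machinery:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_psi(x, a):
    """Toy one-parameter ansatz: log Psi(x) = a * O(x) with O(x) = -x^2."""
    return a * (-x**2)

def metropolis_sample(a, n_samples=20000, step=1.0):
    """Sample configurations x from p(x) ~ |Psi(x)|^2 via Metropolis-Hastings."""
    x = 0.0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        x_new = x + step * rng.normal()
        # acceptance ratio |Psi(x_new)|^2 / |Psi(x)|^2 = exp(2 Re[d log Psi])
        if rng.random() < np.exp(2 * (log_psi(x_new, a) - log_psi(x, a)).real):
            x = x_new
        samples[i] = x
    return samples

a = 0.5  # variational parameter (its real part sets the width here)
xs = metropolis_sample(a)
o = -xs**2  # local value of the operator O on each sampled configuration
# <O> and the connected average <O O>_c = <O^2> - <O>^2 enter the equation of motion.
print(o.mean(), (o**2).mean() - o.mean() ** 2)
```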
References
Quantum mechanics
Quantum Monte Carlo | Time-dependent variational Monte Carlo | [
"Chemistry"
] | 301 | [
"Quantum Monte Carlo",
"Quantum chemistry"
] |
44,842,803 | https://en.wikipedia.org/wiki/Spherical%20neutron%20polarimetry | Spherical neutron polarimetry (SNP) is a form of neutron polarimetry that measures the polarization of neutrons both before and after scattering. It uses controlled magnetic fields to manipulate the spin of the neutrons; the field regions are kept separate by superconducting screens exploiting the Meissner effect, allowing the polarization to be measured.
References
Neutron-related techniques
Polarization (waves) | Spherical neutron polarimetry | [
"Physics"
] | 75 | [
"Particle physics stubs",
"Polarization (waves)",
"Particle physics",
"Astrophysics"
] |
44,847,430 | https://en.wikipedia.org/wiki/Interatomic%20potential | Interatomic potentials are mathematical functions to calculate the potential energy of a system of atoms with given positions in space. Interatomic potentials are widely used as the physical basis of molecular mechanics and molecular dynamics simulations in computational chemistry, computational physics and computational materials science to explain and predict materials properties. Examples of quantitative properties and qualitative phenomena that are explored with interatomic potentials include lattice parameters, surface energies, interfacial energies, adsorption, cohesion, thermal expansion, and elastic and plastic material behavior, as well as chemical reactions.
Functional form
Interatomic potentials can be written as a series expansion of
functional terms that depend on the position of one, two, three, etc.
atoms at a time. Then the total potential of the system can
be written as
$$V_\mathrm{TOT} = \sum_i^N V_1(\vec r_i) + \sum_{i,j}^N V_2(\vec r_i, \vec r_j) + \sum_{i,j,k}^N V_3(\vec r_i, \vec r_j, \vec r_k) + \cdots$$
Here $V_1$ is the one-body term, $V_2$ the two-body term, $V_3$ the
three-body term, $N$ the number of atoms in the system,
$\vec r_i$ the position of atom $i$, etc. $i$, $j$ and $k$ are indices
that loop over atom positions.
Note that in case the pair potential is given per atom pair, in the two-body
term the potential should be multiplied by 1/2 as otherwise each bond is counted
twice, and similarly the three-body term by 1/6. Alternatively,
the summation of the pair term can be restricted to cases $j > i$
and similarly for the three-body term to $k > j > i$, if
the potential form is such that it is symmetric with respect to exchange
of the $j$ and $k$ indices (this may not be the case for potentials
for multielemental systems).
The one-body term is only meaningful if the atoms are in an external
field (e.g. an electric field). In the absence of external fields,
the potential should not depend on the absolute position of
atoms, but only on the relative positions. This means
that the functional form can be rewritten as a function
of interatomic distances $r_{ij} = |\vec r_i - \vec r_j|$
and angles between the bonds
(vectors to neighbours) $\theta_{ijk}$.
Then, in the absence of external forces, the general
form becomes
$$V_\mathrm{TOT} = \sum_{i,j}^N V_2(r_{ij}) + \sum_{i,j,k}^N V_3(r_{ij}, r_{ik}, \theta_{ijk}) + \cdots$$
In the three-body term $V_3$ the
interatomic distance $r_{jk}$ is not needed
since the three terms $r_{ij}$, $r_{ik}$, $\theta_{ijk}$
are sufficient to give the relative positions of three atoms $i$, $j$, $k$ in three-dimensional space. Any terms of order higher than
2 are also called many-body potentials.
In some interatomic potentials the many-body interactions are
embedded into the terms of a pair potential (see discussion on
EAM-like and bond order potentials below).
In principle the sums in the expressions run over all atoms.
However, if the range of the interatomic potential is finite,
i.e. the potentials vanish above
some cutoff distance $r_\mathrm{cut}$,
the summing can be restricted to atoms within the cutoff
distance of each other. By also using a cellular method
for finding the neighbours, the MD algorithm can be
an O(N) algorithm. Potentials with an infinite
range can be summed up efficiently by Ewald summation
and its further developments.
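The cutoff-and-neighbour-list bookkeeping described above can be sketched compactly. This generic illustration uses a cubic box, ignores periodic boundary conditions for brevity, and is not tied to any particular MD code:

```python
import numpy as np
from collections import defaultdict
from itertools import product

def neighbor_pairs_cell_list(positions, box, r_cut):
    """Find all pairs (i, j), i < j, closer than r_cut using a cell list.
    Cost is O(N) for roughly uniform density, versus O(N^2) for brute force."""
    n_cells = max(1, int(box // r_cut))
    size = box / n_cells
    cells = defaultdict(list)
    for i, p in enumerate(positions):
        cells[tuple((p // size).astype(int))].append(i)
    pairs = []
    for cell, members in cells.items():
        for d in product((-1, 0, 1), repeat=3):  # the 27 surrounding cells
            other = tuple(c + dc for c, dc in zip(cell, d))
            for i in members:
                for j in cells.get(other, ()):
                    if i < j and np.linalg.norm(positions[i] - positions[j]) < r_cut:
                        pairs.append((i, j))
    return pairs

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 10.0, size=(500, 3))
print(len(neighbor_pairs_cell_list(pos, 10.0, 2.5)))
```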
Force calculation
The forces acting between atoms can be obtained by differentiation of
the total energy with respect to atom positions. That is,
to get the force on atom $i$ one should take the three-dimensional
derivative (gradient) of the potential $V_\mathrm{TOT}$ with respect to the position of atom $i$:
$$\vec F_i = -\nabla_{\vec r_i} V_\mathrm{TOT}.$$
For two-body potentials this gradient reduces, thanks to the
symmetry with respect to the interatomic distance $r_{ij}$ in the potential form, to straightforward
differentiation with respect to the interatomic distances
$r_{ij}$. However, for many-body
potentials (three-body, four-body, etc.) the differentiation
becomes considerably more complex
since the potential may no longer be symmetric with respect to index exchange.
In other words, the energy
of atoms that are not direct neighbours of $i$ can depend on the position $\vec r_i$
because of angular and other many-body terms, and hence contribute to the gradient
$\nabla_{\vec r_i} V_\mathrm{TOT}$.
Classes of interatomic potentials
Interatomic potentials come in many different varieties, with
different physical motivations. Even for single well-known elements such as silicon,
a wide variety of potentials quite different in functional form and motivation have been developed.
The true interatomic interactions
are quantum mechanical in nature, and there is no known
way in which the true interactions described by
the Schrödinger equation or Dirac equation for
all electrons and nuclei could be cast into an analytical
functional form. Hence all analytical interatomic
potentials are by necessity approximations.
Over time interatomic potentials have largely grown more complex and more accurate, although this is not strictly true. This has included both increased descriptions of physics, as well as added parameters. Until recently, all interatomic potentials could be described as "parametric", having been developed and optimized with a fixed number of (physical) terms and parameters. New research focuses instead on non-parametric potentials which can be systematically improvable by using complex local atomic neighbor descriptors and separate mappings to predict system properties, such that the total number of terms and parameters are flexible. These non-parametric models can be significantly more accurate, but since they are not tied to physical forms and parameters, there are many potential issues surrounding extrapolation and uncertainties.
Parametric potentials
Pair potentials
The arguably simplest widely used interatomic interaction model is the Lennard-Jones potential
$$V_\mathrm{LJ}(r) = 4 \varepsilon \left[ \left( \frac{\sigma}{r} \right)^{12} - \left( \frac{\sigma}{r} \right)^{6} \right],$$
where $\varepsilon$ is the depth of the potential well
and $\sigma$ is the distance at which the potential crosses zero.
The attractive term proportional to $1/r^6$ in the potential comes from the scaling of van der Waals forces, while the repulsive $1/r^{12}$ term is much more approximate (conveniently the square of the attractive term). On its own, this potential is quantitatively accurate only for noble gases and has been extensively studied in the past decades, but is also widely used for qualitative studies and in systems where dipole interactions are significant, particularly in chemistry force fields to describe intermolecular interactions - especially in fluids.
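A minimal sketch of this pair potential and the scalar force $-dV/dr$ derived from it, in reduced units (a generic illustration, not a production implementation):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair energy V(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6)
    and the corresponding scalar force F = -dV/dr (positive = repulsive)."""
    sr6 = (sigma / r) ** 6
    energy = 4.0 * epsilon * (sr6**2 - sr6)
    force = 24.0 * epsilon * (2.0 * sr6**2 - sr6) / r
    return energy, force

r_min = 2 ** (1 / 6)  # the potential minimum sits at r = 2^(1/6) * sigma
print(lennard_jones(r_min))  # energy -epsilon, force ~ 0 at the minimum
```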
Another simple and widely used pair potential is the
Morse potential, which consists simply of a sum of two exponentials,
$$V_\mathrm{M}(r) = D_e \left( e^{-2 a (r - r_e)} - 2 e^{-a (r - r_e)} \right).$$
Here $D_e$ is the equilibrium bond energy and
$r_e$ the bond distance. The Morse
potential has been applied to studies of molecular vibrations and solids, and also inspired the functional form of more accurate potentials such as the bond-order potentials.
Ionic materials are often described by a sum of a
short-range repulsive term, such as the
Buckingham pair potential, and a long-range Coulomb potential
giving the ionic interactions between the ions forming the material. The short-range
term for ionic materials can also be of many-body character
.
Pair potentials have some inherent limitations, such as the inability
to describe all 3 elastic constants of
cubic metals or correctly describe both the cohesive energy and the vacancy formation energy. Therefore, quantitative molecular dynamics simulations
are carried out with a variety of many-body potentials.
Repulsive potentials
For very short interatomic separations, important in radiation material science,
the interactions can be described quite accurately with screened Coulomb potentials, which have the general form
$$V(r_{ij}) = \frac{1}{4 \pi \varepsilon_0} \frac{Z_1 Z_2 e^2}{r_{ij}}\, \varphi(r_{ij} / a).$$
Here $\varphi(x) \to 1$ when $x \to 0$. $Z_1$ and $Z_2$ are the charges of the interacting nuclei, and $a$ is the so-called screening parameter.
A widely used popular screening function is the "Universal ZBL" one.
and more accurate ones can be obtained from all-electron quantum chemistry calculations.
In binary collision approximation simulations this kind of potential can be used
to describe the nuclear stopping power.
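The "Universal ZBL" screening function has a standard four-exponential parametrization. The coefficients in the sketch below are the commonly quoted Ziegler–Biersack–Littmark values, reproduced from the standard parametrization rather than from this article, so they should be verified against the original reference before any real use:

```python
import math

# Commonly quoted ZBL universal screening parametrization (verify before use).
A = (0.18175, 0.50986, 0.28022, 0.02817)
B = (3.19980, 0.94229, 0.40290, 0.20162)
E2 = 14.399645   # e^2 / (4 pi eps0) in eV*Angstrom
BOHR = 0.529177  # Bohr radius in Angstrom

def zbl_potential(r_angstrom, z1, z2):
    """Screened Coulomb potential V(r) = (Z1*Z2*e^2 / r) * phi(r/a), in eV."""
    a = 0.8854 * BOHR / (z1**0.23 + z2**0.23)  # universal screening length
    x = r_angstrom / a
    phi = sum(ai * math.exp(-bi * x) for ai, bi in zip(A, B))
    return z1 * z2 * E2 / r_angstrom * phi

print(zbl_potential(0.5, 14, 14))  # Si-Si at 0.5 Angstrom separation, in eV
```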
Many-body potentials
The Stillinger-Weber potential is a potential that has
two-body and three-body terms of the standard form
where the three-body term describes how the potential energy changes with bond bending.
It was originally developed for pure Si, but has been extended to many other
elements and compounds
and also formed the basis for other Si potentials.
Metals are very commonly described with what can be called
"EAM-like" potentials, i.e. potentials that share
the same functional form as the embedded atom model.
In these potentials, the total potential energy is written
$$E_\mathrm{tot} = \sum_i F_i\!\left( \sum_{j \ne i} \rho(r_{ij}) \right) + \frac{1}{2} \sum_{i \ne j} V_2(r_{ij}),$$
where $F_i$ is a so-called embedding function
(not to be confused with the force $\vec F_i$) that is a function of the sum of the so-called electron density
$\rho(r_{ij})$.
$V_2$ is a pair potential that usually is purely repulsive. In the original
formulation the electron
density function was obtained
from true atomic electron densities, and the embedding function
was motivated from density-functional theory as the energy needed
to 'embed' an atom into the electron density.
.
However, many other potentials used for metals share the same functional
form but motivate the terms differently, e.g. based
on tight-binding theory
or other motivations
.
EAM-like potentials are usually implemented as numerical tables.
A collection of tables is available at the interatomic
potential repository at NIST.
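A minimal sketch of evaluating an EAM-form energy may make the structure above concrete; the functional choices below (repulsive pair term, exponentially decaying density, square-root embedding reminiscent of Finnis–Sinclair-type models) are illustrative stand-ins, not a fitted potential:

```python
import numpy as np

def eam_energy(positions, pair, density, embed, r_cut):
    """Total EAM-form energy: pairwise terms plus F(rho_i) per atom, where
    rho_i accumulates an 'electron density' contribution from each neighbour."""
    n = len(positions)
    e_pair, rho = 0.0, np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < r_cut:
                e_pair += pair(r)
                rho[i] += density(r)
                rho[j] += density(r)
    return e_pair + sum(embed(ri) for ri in rho)

# Illustrative (not fitted) functional choices:
pair = lambda r: (1.0 / r) ** 8        # purely repulsive pair term
density = lambda r: np.exp(-2.0 * r)   # exponentially decaying density
embed = lambda rho: -np.sqrt(rho)      # nonlinear embedding gives many-body character

pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(eam_energy(pos, pair, density, embed, r_cut=3.0))
```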
Covalently bonded materials are often described by
bond order potentials, sometimes also called
Tersoff-like or Brenner-like potentials.
These have in general a form that resembles a pair potential:
$$V_{ij}(r_{ij}) = V_\mathrm{repulsive}(r_{ij}) + b_{ijk}\, V_\mathrm{attractive}(r_{ij}),$$
where the repulsive and attractive parts are simple exponential
functions similar to those in the Morse potential. However, the strength is modified by the environment of the atom $i$ via the bond-order term $b_{ijk}$. If implemented without
However, the strength is modified by the environment of the atom via the term. If implemented without
an explicit angular dependence, these potentials
can be shown to be mathematically equivalent to
some varieties of EAM-like potentials
Thanks to this equivalence, the bond-order potential formalism has been implemented also for many metal-covalent mixed materials.
EAM potentials have also been extended to describe covalent bonding by adding angular-dependent terms to the electron density function $\rho$, in what is called the modified embedded atom method (MEAM).
Force fields
A force field is the collection of parameters to describe the physical interactions between atoms or physical units (up to ~10⁸) using a given energy expression. The term force field characterizes the collection of parameters for a given interatomic potential (energy function) and is often used within the computational chemistry community. The force field parameters make the difference between good and poor models. Force fields are used for the simulation of metals, ceramics, molecules, chemistry, and biological systems, covering the entire periodic table and multiphase materials. Today's performance is among the best for solid-state materials, molecular fluids, and for biomacromolecules, whereby biomacromolecules were the primary focus of force fields from the 1970s to the early 2000s. Force fields range from relatively simple and interpretable fixed-bond models (e.g. Interface force field, CHARMM, and COMPASS) to explicitly reactive models with many adjustable fit parameters (e.g. ReaxFF) and machine learning models.
Non-parametric potentials
It should first be noted that non-parametric potentials are often referred to as "machine learning" potentials. While the descriptor/mapping forms of non-parametric models are closely related to machine learning in general and their complex nature make machine learning fitting optimizations almost necessary, differentiation is important in that parametric models can also be optimized using machine learning.
Current research in interatomic potentials involves using systematically improvable, non-parametric mathematical forms and increasingly complex machine learning methods. The total energy is then written as a sum of atomic contributions
$$E_\mathrm{tot} = \sum_i E(\mathbf{q}_i),$$
where $\mathbf{q}_i$ is a mathematical representation of the atomic environment surrounding the atom $i$, known as the descriptor. $E$ is a machine-learning model that provides a prediction for the energy of atom $i$ based on the descriptor output. An accurate machine-learning potential requires both a robust descriptor and a suitable machine learning framework. The simplest descriptor is the set of interatomic distances from atom $i$ to its neighbours, yielding a machine-learned pair potential. However, more complex many-body descriptors are needed to produce highly accurate potentials. It is also possible to use a linear combination of multiple descriptors with associated machine-learning models. Potentials have been constructed using a variety of machine-learning methods, descriptors, and mappings, including neural networks, Gaussian process regression, and linear regression.
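A toy example of the simplest case, a machine-learned pair potential, can be sketched with a Gaussian-basis distance descriptor and linear (ridge) regression. The training data here are synthetic: a Morse-like curve stands in for quantum reference energies, and all names and parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def descriptor(r, centers, width=0.3):
    """Gaussian basis expansion of an interatomic distance -- the simplest
    descriptor, yielding a machine-learned pair potential."""
    return np.exp(-((r - centers) ** 2) / (2 * width**2))

# Hypothetical training data: distances and reference pair energies
# (generated from a Morse-like target playing the role of quantum data).
centers = np.linspace(0.8, 3.0, 25)
r_train = rng.uniform(0.8, 3.0, 200)
e_train = (1 - np.exp(-2.0 * (r_train - 1.2))) ** 2 - 1.0

X = np.stack([descriptor(r, centers) for r in r_train])
# Ridge regression in closed form: w = (X^T X + lam I)^-1 X^T e
lam = 1e-6
w = np.linalg.solve(X.T @ X + lam * np.eye(len(centers)), X.T @ e_train)

r_test = 1.2
print(descriptor(r_test, centers) @ w)  # should be close to the target value -1.0
```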
A non-parametric potential is most often trained to total energies, forces, and/or stresses obtained from quantum-level calculations, such as density functional theory, as with most modern potentials. However, the accuracy of a machine-learning potential can be converged to be comparable with the underlying quantum calculations, unlike analytical models. Hence, they are in general more accurate than traditional analytical potentials, but they are correspondingly less able to extrapolate. Further, owing to the complexity of the machine-learning model and the descriptors, they are computationally far more expensive than their analytical counterparts.
Non-parametric, machine learned potentials may also be combined with parametric, analytical potentials, for example to include known physics such as the screened Coulomb repulsion, or to impose physical constraints on the predictions.
Potential fitting
Since the interatomic potentials are approximations, they by necessity all involve
parameters that need to be adjusted to some reference values. In simple
potentials such as the Lennard-Jones and Morse ones, the parameters are interpretable and can be set to match e.g. the equilibrium bond length and bond strength
of a dimer molecule or the surface energy of a solid
. Lennard-Jones potential can typically describe the lattice parameters, surface energies, and approximate mechanical properties. Many-body
potentials often contain tens or even hundreds of adjustable parameters with limited interpretability and no compatibility with common interatomic potentials for bonded molecules.
Such parameter sets can be fit to a larger set of experimental data, or materials
properties derived from less reliable data such as from density-functional theory. For solids, a many-body potential
can often describe the lattice constant of the equilibrium crystal structure, the cohesive energy, and linear elastic constants, as well as basic point defect properties of all the elements and stable compounds well, although deviations in surface energies often exceed 50%.
Non-parametric potentials in turn contain hundreds or even thousands of independent parameters to fit. For any but the simplest model forms, sophisticated optimization and machine learning methods are necessary for useful potentials.
The aim of most potential functions and fitting is to make the potential
transferable, i.e. that it can describe materials properties that are clearly
different from those it was fitted to (for examples of potentials explicitly aiming for this,
see e.g.). Key aspects here are the correct representation of chemical bonding, validation of structures and energies, as well as interpretability of all parameters. Full transferability and interpretability is reached with the Interface force field (IFF). An example of partial transferability, a review of interatomic potentials
of Si describes that Stillinger-Weber and Tersoff III potentials for Si can describe several (but not all) materials properties they were not fitted to.
The NIST interatomic potential repository provides a collection of fitted interatomic potentials, either as fitted parameter values or numerical
tables of the potential functions. The OpenKIM project also provides a repository of fitted potentials, along with collections of validation tests and a software framework for promoting reproducibility in molecular simulations using interatomic potentials.
Machine-learned interatomic potentials
Since the 1990s, machine learning programs have been employed to construct interatomic potentials, mapping atomic structures to their potential energies. These are generally referred to as 'machine learning potentials' (MLPs) or as 'machine-learned interatomic potentials' (MLIPs). Such machine learning potentials help fill the gap between highly accurate but computationally intensive simulations like density functional theory and computationally lighter, but much less precise, empirical potentials. Early neural networks showed promise, but their inability to systematically account for interatomic energy interactions limited their applications to smaller, low-dimensional systems, keeping them largely within the confines of academia. However, with continuous advancements in artificial intelligence technology, machine learning methods have become significantly more accurate, positioning machine learning as a significant player in potential fitting.
Modern neural networks have revolutionized the construction of highly accurate and computationally light potentials by integrating theoretical understanding of materials science into their architectures and preprocessing. Almost all are local, accounting for all interactions between an atom and its neighbor up to some cutoff radius. These neural networks usually intake atomic coordinates and output potential energies. Atomic coordinates are sometimes transformed with atom-centered symmetry functions or pair symmetry functions before being fed into neural networks. Encoding symmetry has been pivotal in enhancing machine learning potentials by drastically constraining the neural networks' search space.
Conversely, message-passing neural networks (MPNNs), a form of graph neural networks, learn their own descriptors and symmetry encodings. They treat molecules as three-dimensional graphs and iteratively update each atom's feature vectors as information about neighboring atoms is processed through message functions and convolutions. These feature vectors are then used to directly predict the final potentials. In 2017, the first-ever MPNN model, a deep tensor neural network, was used to calculate the properties of small organic molecules. Advancements in this technology led to the development of Matlantis in 2022, which commercially applies machine learning potentials for new materials discovery. Matlantis, which can simulate 72 elements, handle up to 20,000 atoms at a time, and execute calculations up to 20 million times faster than density functional theory with almost indistinguishable accuracy, showcases the power of machine learning potentials in the age of artificial intelligence.
Another class of machine-learned interatomic potential is the Gaussian approximation potential (GAP), which combines compact descriptors of local atomic environments with Gaussian process regression to machine learn the potential energy surface of a given system. To date, the GAP framework has been used to successfully develop a number of MLIPs for various systems, including for elemental systems such as carbon, silicon, and tungsten, as well as for multicomponent systems such as Ge2Sb2Te5 and austenitic stainless steel, Fe7Cr2Ni.
Reliability of interatomic potentials
Classical interatomic potentials often exceed the accuracy of simplified quantum mechanical methods such as density functional theory at a million times lower computational cost. The use of interatomic potentials is recommended for the simulation of nanomaterials, biomacromolecules, and electrolytes from atoms up to millions of atoms at the 100 nm scale and beyond. As a limitation, electron densities and quantum processes at the local scale of hundreds of atoms are not included. When of interest, higher level quantum chemistry methods can be locally used.
The robustness of a model at different conditions other than those used in the fitting process is often measured in terms of transferability of the potential.
See also
Computational chemistry
Computational materials science
Molecular dynamics
Force field (chemistry)
References
External links
NIST interatomic potential repository
NIST JARVIS-FF
Open Knowledgebase of Interatomic Models (OpenKIM)
Condensed matter physics
Computational physics
Materials science
Quantum mechanical potentials | Interatomic potential | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,843 | [
"Applied and interdisciplinary physics",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Computational physics",
"Quantum mechanical potentials",
"Condensed matter physics",
"nan",
"Matter"
] |
26,140,105 | https://en.wikipedia.org/wiki/Break-in%20%28mechanical%20run-in%29 | Break-in or breaking in, also known as run-in or running in, is the procedure of conditioning a new piece of equipment by giving it an initial period of running, usually under light load, but sometimes under heavy load or normal load. It is generally a process of moving parts wearing against each other to produce the last small bit of size and shape adjustment that will settle them into a stable relationship for the rest of their working life.
One of the most common examples of break-in is engine break-in for petrol engines and diesel engines.
Engine break-in
A new engine is broken in by following specific driving guidelines during the first few hours of its use. The focus of breaking in an engine is on the contact between the piston rings of the engine and the cylinder wall. There is no universal preparation or set of instructions for breaking in an engine. Most importantly, experts disagree on whether it is better to start engines on high or low power to break them in. While there are still consequences to an unsuccessful break-in, they are harder to quantify on modern engines than on older models. In general, people no longer break in the engines of their own vehicles after purchasing a car or motorcycle, because the process is done in production (citation needed). It is still common, even today, to find that an owner's manual recommends gentle use at first (often specified as the first 500 or 1000 kilometres or miles). But it is usually only normal use without excessive demands that is specified, as opposed to light/limited use. For example, the manual will specify that the car be driven normally, but not in excess of the highway speed limit.
Goal
The goal of modern engine break-ins is the settling of piston rings into an engine's cylinder wall. A cylinder wall is not perfectly smooth but has a deliberate slight roughness to help oil adhesion. As the engine is powered up, the piston rings between the pistons and cylinder wall will begin to seal against the wall's small ridges.
Additionally, older engine designs had flat lifters pushed by the camshaft lobes. A lifter needs to spin during operation to avoid excessive wear to the camshaft lobe. At idle speeds on a new engine, poor machining tolerances could prevent the lifter from spinning and destroy the camshaft. After about 20 minutes of wear, or "self-machining", at higher engine speeds, the lifters would typically be able to spin freely.
In the past, the engine break-in period was very important to the overall life and durability of the engine. The break-in period required has changed over the years with improved piston ring materials and designs. In reference to small engines, the break-in period now (5–10 hours) is short in comparison with that of engines of the past. Aluminum cylinder bore engine piston rings break-in faster than those used on cast iron cylinder bores.
Preparation
There are important preparations which must be made before the actual process of running the engine. The break-in can take place either in the vehicle or on an engine stand. Each engine has specific preparation needs of its own due to factors such as the engine model, the vehicle it belongs to, and conflicting expert instructions. For example, each engine should be lubricated and run on oil specified by its designers, which can be found in a manual.
Process
The main area of controversy among engine break-in instructions is whether to run the engine slowly or quickly to initiate the process. Those who promote raising the power settings steadily will recommend changing the engine setting from low to high powers as to not work the engine too hard and create excessive wear on the cylinder wall (which would require the pistons to be removed and wall fixed). Other experts disagree and believe that to start the engine at a high power is the best way to effectively set in the pistons. The following are examples of how the two processes can be carried out:
Start high power
Start with revolutions per minute (rpm) between 2500 and 4000, and run the engine for about 20 minutes while watching that the oil pressure does not climb dangerously high. After changing oil and checking that the engine functions, drive using lower power settings. A high power setting is relative to the vehicle type, so half as many rpm may be appropriate for a car with a smaller cylinder bore.
Start low power
Revolutions per minute should be around 1500 rpm. Run for about half an hour while checking the oil pressure and making sure the engine's coolant does not boil over. Once this initial step is completed, drive at varying speeds on the road (or stand) by accelerating between speeds of 30 and 50 miles per hour.
Consequences
The following are consequences of a bad engine break-in:
Oil will gather in the cylinder wall, and a vehicle will use much more of it than necessary.
If a ring does not set into the grooves of the cylinder wall but creates friction against them each time an engine runs, the cylinder wall will be worn out.
Unsuccessful setting of piston rings into a cylinder wall will result in the necessity of new engine parts, or the entire engine depending on how extensive the damage is.
Camshaft lobes wear down and are destroyed on flat type lifters in older engine designs.
Modern versus older break-in regimens
For many kinds of equipment (with automotive engines being the prime example), the time it takes to complete break-in procedures has decreased significantly from a number of days to a few hours, for several reasons.
The main reason is that the factories in which they are produced are now capable of better machining and assembly. For example, it is easier to hold tighter tolerances now, and the average surface finish of a new cylinder wall has improved. Manufacturers decades ago were capable of such accuracy and precision, but not with as low a unit cost or with as much ease. Therefore, the average engine made today resembles, in some technical respects, the top-end custom work of back then. Engine design has changed and most engines use roller lifters not flat lifters. For some equipment, break-in is now done at the factory, obviating end-user break-in. This is advantageous for several reasons. It is a selling point with customers who don't want to have to worry about break-in and want full performance "right out of the box". And it also aligns with the fact that compliance rates are always uncertain in the hands of end users. As with medical compliance or regulatory compliance, an authority can give all the instructions it wants, but there is no guarantee that the end user will follow them.
The other reason for shorter break-in regimens today is that a greater amount of science has been applied to the understanding of break-in, and this has led to the realization that some of the old, long, painstaking break-in regimens were based on specious reasoning. People developed elaborate theories on what was needed and why, and it was hard to sift the empirical evidence in trying to test or confirm the theories. Anecdotal evidence and confirmation bias definitely played at least some part. Today engineers can confidently advise users not to put too much stock in old theories of long, elaborate break-in regimens. Some users will not give credence to the engineers and will stick to their own ideas anyway; but their careful break-in beliefs are still harmless and serve roughly like a placebo in allowing them to assure themselves that they've maximized the equipment's working lifespan through their due diligence. The useful side effect of a "break-in at slower speeds" for vehicles is operator familiarization. An overly exuberant operator crashing the new vehicle hurts people, reputation and sales.
See also
Burn-in of electronic and other equipment
Break-in for shoes
References
Motor vehicle maintenance
Mechanical engineering | Break-in (mechanical run-in) | [
"Physics",
"Engineering"
] | 1,604 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
26,140,673 | https://en.wikipedia.org/wiki/Belt%20friction | Belt friction is a term describing the friction forces between a belt and a surface, such as a belt wrapped around a bollard. When a force applies a tension to one end of a belt or rope wrapped around a curved surface, the frictional force between the two surfaces increases with the amount of wrap about the curved surface, and only part of that force (or resultant belt tension) is transmitted to the other end of the belt or rope. Belt friction can be modeled by the Belt friction equation.
In practice, the theoretical tension acting on the belt or rope calculated by the belt friction equation can be compared to the maximum tension the belt can support. This helps a designer of such a system determine how many times the belt or rope must be wrapped around a curved surface to prevent it from slipping. Mountain climbers and sailing crews demonstrate a working knowledge of belt friction when accomplishing tasks with ropes, pulleys, bollards and capstans.
Equation
The equation used to model belt friction, assuming the belt has no mass and its material is of fixed composition, is:
T1 = T2·e^(μs·β)
where T1 is the tension of the pulling side, T2 is the tension of the resisting side, μs is the static friction coefficient, which has no units, and β is the angle, in radians, formed by the first and last spots the belt touches the pulley, with the vertex at the center of the pulley.
The tension on the pulling side of the belt can thus exceed the resisting-side tension by a factor that grows exponentially with the belt angle (e.g. when the belt is wrapped around the pulley numerous times).
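A minimal sketch in plain Python makes the exponential advantage concrete; the load, friction coefficient, and number of wraps below are assumed values, not figures from the article:

```python
import math

def holding_tension(load_tension, mu, wrap_angle_rad):
    """Tension needed on the resisting side to hold a load on the pulling
    side, per the belt friction equation T1 = T2 * e^(mu * beta)."""
    return load_tension / math.exp(mu * wrap_angle_rad)

# Example: a 2000 N load, assumed friction coefficient 0.3, and three
# full turns around a capstan (beta = 3 * 2*pi radians).
beta = 3 * 2 * math.pi
print(holding_tension(2000.0, 0.3, beta))   # ~7 N of hand force suffices
```

Each additional full wrap multiplies the holding advantage by the same factor e^(2πμ), which is why a few turns around a bollard let a sailor hold a large load.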
Generalization for a rope lying on an arbitrary orthotropic surface
If a rope is laying in equilibrium under tangential forces on a rough orthotropic surface then three following conditions (all of them) are satisfied:
1. No separation – normal reaction is positive for all points of the rope curve:
, where is a normal curvature of the rope curve.
2. Dragging coefficient of friction and angle are satisfying the following criteria for all points of the curve
3. Limit values of the tangential forces:
The forces at both ends of the rope and are satisfying the following inequality
with ,
where is a geodesic curvature of the rope curve, is a curvature of a rope curve, is a coefficient of friction in the tangential direction.
If then .
This generalization was obtained by A. Konyukhov.
Friction coefficient
There are certain factors that help determine the value of the friction coefficient. These determining factors are:
Belting material used – The age of the material also plays a part: worn, older material may become rougher or smoother, changing the sliding friction.
Construction of the drive-pulley system – This involves strength and stability of the material used, like the pulley, and how greatly it will oppose the motion of the belt or rope.
Conditions under which the belt and pulleys are operating – The friction between the belt and pulley may decrease substantially if the belt happens to be muddy or wet, as it may act as a lubricant between the surfaces. This also applies to extremely dry or warm conditions which will evaporate any water naturally found in the belt, nominally making friction greater.
Overall design of the setup – The setup involves the initial conditions of the construction, such as the angle which the belt is wrapped around and geometry of the belt and pulley system.
Applications
An understanding of belt friction is essential for sailing crews and mountain climbers. Their professions require being able to understand the amount of weight a rope with a certain tension capacity can hold versus the amount of wraps around a pulley. Too many revolutions around a pulley make it inefficient to retract or release rope, and too few may cause the rope to slip. Misjudging the ability of a rope and capstan system to maintain the proper frictional forces may lead to failure and injury.
See also
Capstan equation
Frictional contact mechanics
References
Mechanics | Belt friction | [
"Physics",
"Engineering"
] | 813 | [
"Mechanics",
"Mechanical engineering"
] |
26,143,098 | https://en.wikipedia.org/wiki/Rubanisation | Rubanisation is a model of human settlement in which the city and the countryside are considered as one space instead of two. It is informed by the belief that treating the rural and the urban as two distinct realms is inconsistent with social and environmental justice.
Background
In rubanisation, reverse migration back to the village is encouraged and made possible through the availability of viable choices; attention then returns to repairing the city devastated by unjust accumulation. Focusing on the problems of existing mega-cities is only a stop-gap solution. The argument is that in the present mode of development, the countryside has been largely neglected as cities become 'the exclusive focus of development,' compelling those in the rural areas to migrate to the city in search of better opportunities. This has resulted in a massive population explosion in most cities in the developing world, which manifests itself in the growing presence of slums. In the case of developed societies, small towns and villages have been losing population to the lure of the big cities for the excitement that they offer. Rubanisation postulates that unless the problem of rural poverty, which 'still remains the main cause for mass rural-urban migration,' is solved, and people given a real choice in deciding between rural and urban living, the problems of urbanisation remain intractable.
"Urbanism has been blind to the plight of the countryside for too long, and the tide is turning as we face the global financial crisis and climate change. And so the dominance of urbanism as an ideology must give way to a new economy of distributed happiness for all, to be found through social justice and a change in culture, in which an appreciation of community and knowledge for their own sakes and love of nature are prime."
References
Human migration
Human settlement
Environmental design
Human habitats
Urban planning | Rubanisation | [
"Engineering"
] | 362 | [
"Environmental design",
"Urban planning",
"Design",
"Architecture"
] |
26,149,056 | https://en.wikipedia.org/wiki/IEC%2062351 | IEC 62351 is a standard developed by WG15 of IEC TC57. This is developed for handling the security of TC 57 series of protocols including IEC 60870-5 series, IEC 60870-6 series, IEC 61850 series, IEC 61970 series & IEC 61968 series. The different security objectives include authentication of data transfer through digital signatures, ensuring only authenticated access, prevention of eavesdropping, prevention of playback and spoofing, and intrusion detection.
Standard details
IEC 62351-1 — Introduction to the standard
IEC 62351-2 — Glossary of terms
IEC 62351-3 — Security for any profiles including TCP/IP (for illustration, see the TLS sketch following this list).
TLS Encryption
Node Authentication by means of X.509 certificates
Message Authentication
IEC 62351-4 — Security for any profiles including MMS (e.g., ICCP-based IEC 60870-6, IEC 61850, etc.).
Authentication for MMS
TLS (RFC 2246) is inserted between RFC 1006 and RFC 793 to provide transport layer security
IEC 62351-5 — Security for any profiles including IEC 60870-5 (e.g., DNP3 derivative)
TLS for TCP/IP profiles and encryption for serial profiles.
IEC 62351-6 — Security for IEC 61850 profiles.
VLAN use is made as mandatory for GOOSE
RFC 2030 to be used for SNTP
IEC 62351-7 — Security through network and system management.
Defines Management Information Bases (MIBs) that are specific to the power industry, to handle network and system management through SNMP-based methods.
IEC 62351-8 — Role-based access control.
Covers the access control of users and automated agents to data objects in power systems by means of role-based access control (RBAC).
IEC 62351-9 — Key Management
Describes the correct and safe usage of safety-critical parameters, e.g. passwords, encryption keys.
Covers the whole life cycle of cryptographic information (enrollment, creation, distribution, installation, usage, storage and removal).
Methods for algorithms using asymmetric cryptography
Handling of digital certificates (public / private key)
Setup of the PKI environment with X.509 certificates
Certificate enrollment by means of SCEP / EST, while allowing the use of other enrollment protocols
Certificate revocation by means of CRL / OCSP
A secure distribution mechanism based on GDOI and the IKEv1 protocol is presented for the usage of symmetric keys, e.g. session keys.
IEC 62351-10 — Security Architecture
Explanation of security architectures for the entire IT infrastructure
Identifying critical points of the communication architecture, e.g. substation control center, substation automation
Appropriate security mechanisms for these requirements, e.g. data encryption, user authentication
Applicability of well-proven standards from the IT domain, e.g. VPN tunnel, secure FTP, HTTPS
IEC 62351-11 — Security for XML Files
Embedding of the original XML content into an XML container
Date of issue and access control for XML data
X.509 signature for authenticity of XML data
Optional data encryption
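As an illustration of the transport-layer protections specified in IEC 62351-3 (TLS encryption with X.509-based node authentication), the following Python sketch configures a mutually authenticated TLS client using the standard ssl module. The host name, port, and certificate file names are placeholders, and this is a generic TLS example under those assumptions, not a conforming IEC 62351 implementation:

```python
import socket
import ssl

# Require TLS with certificate validation in both directions, as
# IEC 62351-3 mandates X.509-based node authentication.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations("ca.pem")              # trusted CA (placeholder path)
ctx.load_cert_chain("client.pem", "client.key")  # our own certificate/key

# Port 102 is the OSI transport port commonly used by MMS; illustrative only.
with socket.create_connection(("substation.example", 102)) as raw:
    with ctx.wrap_socket(raw, server_hostname="substation.example") as tls:
        print(tls.version(), tls.getpeercert()["subject"])
```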
See also
IEC TC 57
List of IEC technical committees
External links
Application of the IEC 62351 at IPCOMM GmbH
Report about the implementation of IEC 62351-7
62351
Electric power
Computer network security | IEC 62351 | [
"Physics",
"Technology",
"Engineering"
] | 704 | [
"Cybersecurity engineering",
"Physical quantities",
"Computer standards",
"Computer networks engineering",
"IEC standards",
"Power (physics)",
"Electric power",
"Computer network security",
"Electrical engineering"
] |
4,009,257 | https://en.wikipedia.org/wiki/Kramers%E2%80%93Wannier%20duality | The Kramers–Wannier duality is a symmetry in statistical physics. It relates the free energy of a two-dimensional square-lattice Ising model at a low temperature to that of another Ising model at a high temperature. It was discovered by Hendrik Kramers and Gregory Wannier in 1941. With the aid of this duality Kramers and Wannier found the exact location of the critical point for the Ising model on the square lattice.
Similar dualities establish relations between free energies of other statistical models. For instance, in 3 dimensions the Ising model is dual to an Ising gauge model.
Intuitive idea
The 2-dimensional Ising model exists on a lattice, which is a collection of squares in a chessboard pattern. With the finite lattice, the edges can be connected to form a torus. In theories of this kind, one constructs an involutive transform. For instance, Lars Onsager suggested that the star-triangle transformation could be used for the triangular lattice. Now the dual of the discrete torus is itself. Moreover, the dual of a highly disordered system (high temperature) is a well-ordered system (low temperature). This is analogous to the way the Fourier transform takes a signal that is broad in one domain (large standard deviation) to one that is narrow in the other (small standard deviation). So one has essentially the same theory with an inverse temperature.
When one raises the temperature in one theory, one lowers the temperature in the other. If there is only one phase transition, it will be at the point at which they cross, at which the temperatures are equal. Because the 2D Ising model goes from a disordered state to an ordered state, there is a near one-to-one mapping between the disordered and ordered phases.
The theory has been generalized, and is now blended with many other ideas. For instance, the square lattice is replaced by a circle, random lattice, nonhomogeneous torus, triangular lattice, labyrinth, lattices with twisted boundaries, chiral Potts model, and many others.
One of the consequences of Kramers–Wannier duality is an exact correspondence in the spectrum of excitations on each side of the critical point. This was recently demonstrated via THz spectroscopy in Kitaev chains.
Derivation
We define first the variables. In the two-dimensional square lattice Ising model the number of horizontal and vertical links are taken to be equal. The couplings of the spins in the two directions are different, and one sets and with .
The low temperature expansion of the spin partition function for (K*,L*)
obtained from the standard expansion
is
,
the factor 2 originating from a spin-flip symmetry for each .
Here the sum over stands for summation over closed polygons on the lattice resulting in the graphical correspondence from the sum over spins with values .
By using the following transformation to variables , i.e.
one obtains
This yields a mapping relation between the low-temperature expansion and the high-temperature expansion, described as duality (here Kramers–Wannier duality): tanh K* = e^(−2L) and tanh L* = e^(−2K). With the help of the identity sinh 2x = 2·tanh x/(1 − tanh²x), these hyperbolic tangent relations defining K* and L* can be written more symmetrically as sinh 2K*·sinh 2L = 1 and sinh 2L*·sinh 2K = 1.
With the free energy per site in the thermodynamic limit
the Kramers–Wannier duality gives
In the isotropic case where K = L, if there is a critical point at K = Kc then there is another at K = K*c. Hence, in the case of there being a unique critical point, it would be located at K = K* = K*c, implying sinh 2Kc = 1, yielding
Kc = ½·ln(1 + √2) ≈ 0.4407.
The result can also be written as kTc/J = 2/ln(1 + √2) ≈ 2.269, and is obtained again below.
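The self-dual point can be checked numerically. The following plain-Python sketch (illustrative only) verifies that the duality map tanh K* = e^(−2K) fixes Kc = ½·ln(1 + √2) and swaps the high- and low-temperature regimes:

```python
import math

def dual(K):
    """Duality map for the isotropic square-lattice Ising model."""
    return math.atanh(math.exp(-2.0 * K))

Kc = 0.5 * math.log(1.0 + math.sqrt(2.0))   # ~0.4407
print(math.sinh(2 * Kc))                    # 1.0: the self-duality condition
print(dual(Kc) - Kc)                        # ~0: Kc is the fixed point
print(dual(0.2) > 0.2, dual(0.8) < 0.8)     # duality swaps weak and strong coupling
```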
Kramers-Wannier duality in other contexts
The Kramers–Wannier duality appears also in other contexts. We consider here particularly the two-dimensional theory of a scalar field. In this case a more convenient variable than is
With this expression one can construct the self-dual quantity
In field theory contexts the quantity is called correlation length. Next set
This function is the beta function of renormalization theory. Now suppose there is a value of for which , i.e. . The zero of the beta function is usually related to a symmetry - but only if the zero is unique. The solution of yields (obtained with MAPLE)
.
Only the second solution is real and gives the critical value of Kramers and Wannier as
.
See also
Ising model
S-duality
Z N model
References
External links
Statistical mechanics
Exactly solvable models
Lattice models | Kramers–Wannier duality | [
"Physics",
"Materials_science"
] | 921 | [
"Statistical mechanics",
"Condensed matter physics",
"Lattice models",
"Computational physics"
] |
4,009,827 | https://en.wikipedia.org/wiki/Cheeger%20bound | In mathematics, the Cheeger bound is a bound of the second largest eigenvalue of the transition matrix of a finite-state, discrete-time, reversible stationary Markov chain. It can be seen as a special case of Cheeger inequalities in expander graphs.
Let X be a finite set and let P(x, y) be the transition probability for a reversible Markov chain on X. Assume this chain has stationary distribution π.
Define Q(x, y) = π(x)·P(x, y), and for A, B ⊆ X define
Q(A, B) = Σ over x ∈ A, y ∈ B of Q(x, y).
Define the constant Φ as
Φ = min over S ⊆ X with π(S) ≤ 1/2 of Q(S, Sᶜ)/π(S).
The operator P, acting on the space of functions from X to ℝ and defined by
(Pφ)(x) = Σ over y of P(x, y)·φ(y),
has eigenvalues λ₁ ≥ λ₂ ≥ … ≥ λₙ. It is known that λ₁ = 1. The Cheeger bound is a bound on the second largest eigenvalue λ₂.
Theorem (Cheeger bound): 1 − 2Φ ≤ λ₂ ≤ 1 − Φ²/2.
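For a concrete check, the following NumPy sketch computes λ₂ directly and compares it against both bounds; the weighted triangle graph and all numbers are arbitrary assumptions used only for illustration:

```python
import numpy as np
from itertools import chain, combinations

# A small reversible chain: random walk on a weighted triangle graph.
W = np.array([[0., 2., 1.],
              [2., 0., 1.],
              [1., 1., 0.]])
P = W / W.sum(axis=1, keepdims=True)   # transition matrix
pi = W.sum(axis=1) / W.sum()           # stationary distribution (reversible)

Q = pi[:, None] * P                    # Q(x, y) = pi(x) P(x, y)

# Cheeger constant: minimize Q(S, S^c)/pi(S) over S with pi(S) <= 1/2.
n = len(pi)
ratios = []
for S in chain.from_iterable(combinations(range(n), k) for k in range(1, n)):
    S = list(S)
    if pi[S].sum() <= 0.5:
        Sc = [j for j in range(n) if j not in S]
        ratios.append(Q[np.ix_(S, Sc)].sum() / pi[S].sum())
phi = min(ratios)

lam = np.sort(np.linalg.eigvals(P).real)[::-1]    # eigenvalues, descending
print(1 - 2 * phi <= lam[1] <= 1 - phi**2 / 2)    # True: the bound holds
```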
See also
Stochastic matrix
Cheeger constant
Conductance
References
Probabilistic inequalities
Stochastic processes
Statistical inequalities | Cheeger bound | [
"Mathematics"
] | 174 | [
"Theorems in statistics",
"Statistical inequalities",
"Theorems in probability theory",
"Probabilistic inequalities",
"Inequalities (mathematics)"
] |
4,010,566 | https://en.wikipedia.org/wiki/Melamine%20foam | Melamine foam is a foam-like material consisting of a melamine-formaldehyde condensate. It is the active component of a number of abrasive cleaner sponges, notably the Magic Eraser.
It is also used as thermal insulation and as a soundproofing material.
Properties
The open-cell foam is microporous and its polymeric substance is very hard, so that when used for cleaning it works like extremely fine sandpaper, getting into tiny grooves and pits in the object being cleaned.
On a larger scale, the material feels soft because the reticulated foam bubbles interconnect. Its structure is a 3D network of very hard strands, in contrast to the array of separate bubbles in a material such as styrofoam.
Being microporous, it also effectively absorbs sound waves.
Being open-cell, it entrains countless air bubbles, giving it low thermal conductivity and thereby making it an effective insulator.
Cleaning
In the early 21st century, it was discovered that melamine foam is an effective abrasive cleaner. Rubbing with a slightly moistened foam may remove otherwise "uncleanable" external markings from surfaces. For example, melamine foam can remove crayon, marker pen, and grease from painted walls and wood finishings, plastic-adhering paints from treated wooden tables, and adhesive residue and grime from hubcaps. If the surface being cleaned is not sufficiently hard, it may be finely scratched by the melamine material. Similarly to a pencil eraser, the foam wears away during use, leaving behind a slight residue which can be rinsed off.
Other uses
Naturally lightweight, melamine foam is also used as insulation for pipes and ductwork, and as a soundproofing material for studios, sound stages, auditoriums, and the like. One advantage of melamine foam over other soundproofing materials is that it is considered non-flammable. Melamine foam's fire rating is Class A/Class 1 in the United States and CAN/ULC-S102 for Canada. If heated past its service temperature, the foam shrinks and collapses. These properties suit it as the main sound and thermal insulation material for Shinkansen bullet trains.
Environmental impact
Recent research has highlighted that melamine sponges contribute significantly to microplastic pollution. A study published in Environmental Science & Technology found that these sponges release over a trillion microplastic fibers globally each month due to wear and tear. These fibers can contaminate water systems and enter the food chain, posing environmental risks. The study suggests that making denser sponges and using alternative cleaning methods could mitigate this issue.
See also
Melamine resin
References
External links
BASF Story about Mr. Clean Magic Eraser
Basotect
VIXUM
Re: spot cleaning walls in gallery
Dangerous Chemicals in Mr. Clean Magic Eraser Snopes.com article debunking rumour about supposed dangerous chemicals in Magic Eraser
Cleaning tools
BASF
Insulators
Artificial materials
Abrasives | Melamine foam | [
"Physics"
] | 611 | [
"Materials",
"Matter",
"Artificial materials"
] |
4,011,691 | https://en.wikipedia.org/wiki/Mercury%20telluride | Mercury telluride (HgTe) is a binary chemical compound of mercury and tellurium. It is a semi-metal related to the II-VI group of semiconductor materials. Alternative names are mercuric telluride and mercury(II) telluride.
HgTe occurs in nature as the mineral form coloradoite.
Physical properties
All properties are at standard temperature and pressure unless stated otherwise. The lattice parameter is about 0.646 nm in the cubic crystalline form. The bulk modulus is about 42.1 GPa. The thermal expansion coefficient is about 5.2×10⁻⁶/K. The static dielectric constant is 20.8 and the dynamic dielectric constant is 15.1. Thermal conductivity is low, at 2.7 W/(m·K). HgTe bonds are weak, leading to low hardness values, around 2.7×10⁷ kg/m².
Doping
N-type doping can be achieved with elements such as boron, aluminium, gallium, or indium. Iodine and iron will also dope n-type. HgTe is naturally p-type due to mercury vacancies. P-type doping is also achieved by introducing zinc, copper, silver, or gold.
Topological insulation
Mercury telluride was the first topological insulator discovered, in 2007. Topological insulators cannot support an electric current in the bulk, but electronic states confined to the surface can serve as charge carriers.
Chemistry
HgTe bonds are weak. Their enthalpy of formation, around −32 kJ/mol, is less than a third of the value for the related compound cadmium telluride. HgTe is easily etched by acids, such as hydrobromic acid.
Growth
Bulk growth is from a mercury and tellurium melt in the presence of a high mercury vapour pressure. HgTe can also be grown epitaxially, for example, by sputtering or by metalorganic vapour phase epitaxy.
Nanoparticles of mercury telluride can be obtained via cation exchange from cadmium telluride nanoplatelets.
See also
Cadmium telluride
Mercury selenide
Mercury cadmium telluride
References
External links
Thermophysical properties database at Germany's Chemistry Information Centre, Berlin
Mercury(II) compounds
Tellurides
II-VI semiconductors
Zincblende crystal structure
Semimetals | Mercury telluride | [
"Physics",
"Chemistry",
"Materials_science"
] | 498 | [
"Inorganic compounds",
"Semiconductor materials",
"Materials",
"II-VI semiconductors",
"Condensed matter physics",
"Semimetals",
"Matter"
] |
4,012,438 | https://en.wikipedia.org/wiki/SageMath | SageMath (previously Sage or SAGE, "System for Algebra and Geometry Experimentation") is a computer algebra system (CAS) with features covering many aspects of mathematics, including algebra, combinatorics, graph theory, group theory, differentiable manifolds, numerical analysis, number theory, calculus and statistics.
The first version of SageMath was released on 24 February 2005 as free and open-source software under the terms of the GNU General Public License version 2, with the initial goals of creating an "open source alternative to Magma, Maple, Mathematica, and MATLAB". The originator and leader of the SageMath project, William Stein, was a mathematician at the University of Washington.
SageMath uses a syntax resembling Python's, supporting procedural, functional and object-oriented constructs.
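A short hypothetical session illustrates that syntax; `var`, `diff`, `integrate` and `factor` are standard Sage functions, and the outputs shown as comments are what such a session would be expected to print:

```python
# Sage input (Sage's syntax is Python with mathematical extensions,
# such as ^ for exponentiation via the Sage preparser).
x = var('x')                # declare a symbolic variable
f = x^2 * exp(-x)
diff(f, x)                  # -x^2*e^(-x) + 2*x*e^(-x)
integrate(f, x, 0, oo)      # 2
factor(2^64 - 1)            # 3 * 5 * 17 * 257 * 641 * 65537 * 6700417
```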
Development
Stein realized when designing Sage that there were many open-source mathematics software packages already written in different languages, namely C, C++, Common Lisp, Fortran and Python.
Rather than reinventing the wheel, Sage (which is written mostly in Python and Cython) integrates many specialized CAS software packages into a common interface, for which a user needs to know only Python. However, Sage contains hundreds of thousands of unique lines of code adding new functions and creating the interfaces among its components.
SageMath is developed by both students and professionals. Its development is supported by both volunteer work and grants. However, it was not until 2016 that the first full-time Sage developer was hired (funded by an EU grant). The same year, Stein described his disappointment with a lack of academic funding and credentials for software development, citing it as the reason for his decision to leave his tenured academic position to work full-time on the project in a newly founded company, SageMath, Inc.
Achievements
2007: first prize in the scientific software division of Les Trophées du Libre, an international competition for free software.
2012: one of the projects selected for the Google Summer of Code.
2013: ACM/SIGSAM Jenks Prize.
Performance
Both binaries and source code are available for SageMath from the download page. If SageMath is built from source code, many of the included libraries such as OpenBLAS, FLINT, GAP (computer algebra system), and NTL will be tuned and optimized for that computer, taking into account the number of processors, the size of their caches, whether there is hardware support for SSE instructions, etc.
Cython can increase the speed of SageMath programs, as the Python code is converted into C.
Licensing and availability
SageMath is free software, distributed under the terms of the GNU General Public License version 3.
Windows: SageMath 10.0 (May 2023) requires Windows Subsystem for Linux in version 2, which in turn requires Windows to run as a Hyper-V client. SageMath 8.0 (July 2017), with development funded by the OpenDreamKit project, successfully built on Cygwin, and a binary installer for 64-bit versions of Windows was available. Although Microsoft was sponsoring a Windows version of SageMath, prior to 2016 users of Windows had to use virtualization technology such as VirtualBox to run SageMath.
Linux: Linux distributions in which SageMath is available as a package are Fedora, Arch Linux, Debian, Ubuntu and NixOS. In Gentoo, it is available via layman in the "sage-on-gentoo" overlay. The package used by NixOS is available for use on other distributions, due to the distribution-agnostic nature of its package manager, Nix.
Other operating systems: Gentoo prefix also provides Sage on other operating systems.
Software packages contained in SageMath
The philosophy of SageMath is to use existing open-source libraries wherever they exist. Therefore, it uses many libraries from other projects.
See also
CoCalc
Comparison of numerical-analysis software
Comparison of statistical packages
List of computer algebra systems
References
External links
Computer algebra system software for Linux
Computer algebra system software for macOS
Computer algebra system software for Windows
Free and open-source Android software
Free computer algebra systems
Free educational software
Free mathematics software
Free software programmed in Python
Mathematical software
Python (programming language) scientific libraries | SageMath | [
"Mathematics"
] | 887 | [
"Free mathematics software",
"Mathematical software"
] |
4,013,262 | https://en.wikipedia.org/wiki/Homogentisic%20acid | Homogentisic acid (2,5-dihydroxyphenylacetic acid) is a phenolic acid usually found in Arbutus unedo (strawberry-tree) honey. It is also present in the bacterial plant pathogen Xanthomonas campestris pv. phaseoli as well as in the yeast Yarrowia lipolytica where it is associated with the production of brown pigments. It is oxidatively dimerised to form hipposudoric acid, one of the main constituents of the 'blood sweat' of hippopotamuses.
It is less commonly known as melanic acid, the name chosen by William Prout.
Human pathology
Accumulation of excess homogentisic acid and its oxide, named alkapton, results from failure of the enzyme homogentisic acid 1,2-dioxygenase (typically due to a mutation) in the degradative pathway of tyrosine; this condition is known as alkaptonuria.
Intermediate
It is an intermediate in the catabolism of aromatic amino acids such as phenylalanine and tyrosine.
4-Hydroxyphenylpyruvate (produced by transamination of tyrosine) is acted upon by the enzyme 4-hydroxyphenylpyruvate dioxygenase to yield homogentisate. If active and present, the enzyme homogentisate 1,2-dioxygenase further degrades homogentisic acid to yield 4-maleylacetoacetic acid.
References
Phenylacetic acids
Biochemical reactions
Hydroquinones
Hydroxy acids
Phenylethanoids | Homogentisic acid | [
"Chemistry",
"Biology"
] | 347 | [
"Biochemistry",
"Biomolecules by chemical classification",
"Phenylethanoids",
"Biochemical reactions"
] |
4,013,373 | https://en.wikipedia.org/wiki/2%2C2-Dimethylbutane | 2,2-Dimethylbutane, trivially known as neohexane at William Odling's 1876 suggestion, is an organic compound with formula C6H14 or (H3C-)3-C-CH2-CH3. It is therefore an alkane, indeed the most compact and branched of the hexane isomers — the only one with a quaternary carbon and a butane (C4) backbone.
Synthesis
Butlerov's student V. Goryainov originally discovered neohexane in 1872 by cross-coupling of zinc ethyl with tert-butyl iodide.
2,2-Dimethylbutane can be synthesised by the hydroisomerisation of 2,3-dimethylbutane using an acid catalyst.
It can also be synthesised by isomerization of n-pentane in the presence of a catalyst containing combinations of one or more of palladium, platinum, rhodium and rhenium on a matrix of zeolite, alumina, silicon dioxide or other materials. Such reactions create a mixture of final products including isopentane, n-hexane, 3-methylpentane, 2-methylpentane, 2,3-dimethylbutane and 2,2-dimethylbutane. Since the composition of the final mixture is temperature dependent, the desired final component can be obtained by choice of catalyst and by combinations of temperature control and distillations.
Uses
Neohexane is used as a high-octane anti-knock additive in gasoline and in the manufacture of agricultural chemicals. It is also used in a number of commercial, automobile and home maintenance products, such as adhesives, electronic contact cleaners and upholstery polish sprays.
In laboratory settings, it is commonly used as a probe molecule in techniques which study the active sites of metal catalysts. Such catalysts are used in hydrogen-deuterium exchange, hydrogenolysis, and isomerization reactions. It is well suited to this purpose as 2,2-dimethylbutane contains both an isobutyl and an ethyl group.
See also
Methylbutane (isopentane)
2-Methylpentane (isohexane)
References
Alkanes | 2,2-Dimethylbutane | [
"Chemistry"
] | 483 | [
"Organic compounds",
"Alkanes"
] |
4,013,613 | https://en.wikipedia.org/wiki/Coordinative%20definition | A coordinative definition is a postulate which assigns a partial meaning to the theoretical terms of a scientific theory by correlating the mathematical objects of the pure or formal/syntactical aspects of a theory with physical objects in the world. The idea was formulated by the logical positivists and arises out of a formalist vision of mathematics as pure symbol manipulation.
Formalism
In order to get a grasp on the motivations which inspired the development of the idea of coordinative definitions, it is important to understand the doctrine of formalism as it is conceived in the philosophy of mathematics. For the formalists, mathematics, and particularly geometry, is divided into two parts: the pure and the applied. The first part consists in an uninterpreted axiomatic system, or syntactic calculus, in which terms such as point, straight line and between (the so-called primitive terms) have their meanings assigned to them implicitly by the axioms in which they appear. On the basis of deductive rules eternally specified in advance, pure geometry provides a set of theorems derived in a purely logical manner from the axioms. This part of mathematics is therefore a priori but devoid of any empirical meaning, not synthetic in the sense of Kant.
It is only by connecting these primitive terms and theorems with physical objects such as rulers or rays of light that, according to the formalist, pure mathematics becomes applied mathematics and assumes an empirical meaning. The method of correlating the abstract mathematical objects of the pure part of theories with physical objects consists in coordinative definitions.
It was characteristic of logical positivism to consider a scientific theory to be nothing more than a set of sentences, subdivided into the class of theoretical sentences, the class of observational sentences, and the class of mixed sentences. The first class contains terms which refer to theoretical entities, that is to entities not directly observable such as electrons, atoms and molecules; the second class contains terms which denote quantities or observable entities, and the third class consists of precisely the coordinative definitions which contain both types of terms because they connect the theoretical terms with empirical procedures of measurement or with observable entities. For example, the interpretation of "the geodesic between two points" as correspondent to "the path of a light ray in a vacuum" provides a coordinative definition. This is very similar to, but distinct from an operational definition. The difference is that coordinative definitions do not necessarily define theoretical terms in terms of laboratory procedures or experimentation, as operationalism does, but may also define them in terms of observable or empirical entities.
In any case, such definitions (also called bridge laws or correspondence rules) were held to serve three important purposes. In the first place, by connecting the uninterpreted formalism with the observation language, they permit the assignment of synthetic content to theories. In the second, according to whether they express a factual or a purely conventional content, they allow for the subdivision of science into two parts: one factual and independent of human conventions, the other non-empirical and conventional. This distinction is reminiscent of Kant's division of knowledge into content and form. Lastly, they allow for the possibility to avoid certain vicious circles that arise with regard to such matters as the measurement of the speed of light in one direction. As has been pointed out by John Norton with regard to Hans Reichenbach's arguments about the nature of geometry: on the one hand, we cannot know if there are universal forces until we know the true geometry of spacetime, but on the other, we cannot know the true geometry of spacetime until we know whether there are universal forces. Such a circle can be broken by way of coordinative definition.(Norton 1992).
From the point of view of the logical empiricist, in fact, the question of the "true geometry" of spacetime does not arise, given that saving, e.g., Euclidean geometry by introducing universal forces which cause rulers to contract in certain directions, or postulating that such forces are equal to zero, does not mean saving the Euclidean geometry of actual space, but only changing the definitions of the corresponding terms. There are not really two incompatible theories to choose between, in the case of the true geometry of spacetime, for the empiricist (Euclidean geometry with universal forces not equal to zero, or non-Euclidean geometry with universal forces equal to zero), but only one theory formulated in two different ways, with different meanings to attribute to the fundamental terms on the basis of coordinative definitions. However, given that, according to formalism, interpreted or applied geometry does have empirical content, the problem is not resolved on the basis of purely conventionalist considerations and it is precisely the coordinative definitions, which bear the burden of finding the correspondences between mathematical and physical objects, which provide the basis for an empirical choice.
Objection
The problem is that coordinative definitions seem to beg the question. Since they are defined in conventional, non-empirical terms, it is difficult to see how they can resolve empirical questions. It would seem that the result of using coordinative definitions is simply to shift the problem of the geometric description of the world, for example, into a need to explain the mysterious "isomorphic coincidences" between the conventions given by the definitions and the structure of the physical world.
Even in the simple case of defining "the geodesic between two points" as the empirical phrase "a ray of light in a vacuum", the correspondence between mathematical and empirical is left unexplained.
References
Norton, J. The hole Argument in Proceedings of the 1988 Biennial Meeting of the Philosophy of Science Association. vol 2. pp. 55–56.
Further reading
Boniolo, Giovanni and Dorato, Mauro. Dalla Relatività galileiana alla relatività generale ("From Galilean relativity to general relativity") in Filosofia della Fisica ed. Giovanni Boniolo.
Reichenbach, Hans. The Philosophy of Space and Time, tr. Italian as La Filosofia dello Spazio e del Tempo. Feltrinelli. Milan. 1977.
Philosophy of science
Definition
Logical positivism | Coordinative definition | [
"Mathematics"
] | 1,289 | [
"Mathematical logic",
"Logical positivism"
] |
4,014,228 | https://en.wikipedia.org/wiki/Filamentation | Filamentation is the anomalous growth of certain bacteria, such as Escherichia coli, in which cells continue to elongate but do not divide (no septa formation). The cells that result from elongation without division have multiple chromosomal copies.
In the absence of antibiotics or other stressors, filamentation occurs at a low frequency in bacterial populations (4–8% short filaments and 0–5% long filaments in 1- to 8-hour cultures). The increased cell length can protect bacteria from protozoan predation and neutrophil phagocytosis by making ingestion of cells more difficult. Filamentation is also thought to protect bacteria from antibiotics, and is associated with other aspects of bacterial virulence such as biofilm formation.
The number and length of filaments within a bacterial population increases when the bacteria are exposed to different physical, chemical and biological agents (e.g. UV light, DNA synthesis-inhibiting antibiotics, bacteriophages). This is termed conditional filamentation. Some of the key genes involved in filamentation in E. coli include sulA, minCD and damX.
Filament formation
Antibiotic-induced filamentation
Some peptidoglycan synthesis inhibitors (e.g. cefuroxime, ceftazidime) induce filamentation by inhibiting the penicillin binding proteins (PBPs) responsible for crosslinking peptidoglycan at the septal wall (e.g. PBP3 in E. coli and P. aeruginosa). Because the PBPs responsible for lateral wall synthesis are relatively unaffected by cefuroxime and ceftazidime, cell elongation proceeds without any cell division and filamentation is observed.
DNA synthesis-inhibiting and DNA damaging antibiotics (e.g. metronidazole, mitomycin C, the fluoroquinolones, novobiocin) induce filamentation via the SOS response. The SOS response inhibits septum formation until the DNA can be repaired, this delay stopping the transmission of damaged DNA to progeny. Bacteria inhibit septation by synthesizing protein SulA, an FtsZ inhibitor that halts Z-ring formation, thereby stopping recruitment and activation of PBP3. If bacteria are deprived of the nucleobase thymine by treatment with folic acid synthesis inhibitors (e.g. trimethoprim), this also disrupts DNA synthesis and induces SOS-mediated filamentation. Direct obstruction of Z-ring formation by SulA and other FtsZ inhibitors (e.g. berberine) induces filamentation too.
Some protein synthesis inhibitors (e.g. kanamycin), RNA synthesis inhibitors (e.g. bicyclomycin) and membrane disruptors (e.g. daptomycin, polymyxin B) cause filamentation too, but these filaments are much shorter than the filaments induced by the above antibiotics.
Stress-induced filamentation
Filamentation is often a consequence of environmental stress. It has been observed in response to temperature shocks, low water availability, high osmolarity, extreme pH, and UV exposure. UV light damages bacterial DNA and induces filamentation via the SOS response. Starvation can also cause bacterial filamentation. For example, if bacteria are deprived of the nucleobase thymine, this disrupts DNA synthesis and induces SOS-mediated filamentation.
Nutrient-induced filamentation
Several macronutrients and biomolecules can cause bacterial cells to filament, including the amino acids glutamine, proline and arginine, and some branched-chain amino acids. Certain bacterial species, such as Paraburkholderia elongata, will also filament as a result of a tendency to accumulate phosphate in the form of polyphosphate, which can chelate metal cofactors needed by division proteins. In addition, filamentation is induced by nutrient-rich conditions in the intracellular pathogen Bordetella atropi. This occurs via the highly conserved UDP-glucose pathway. UDP-glucose biosynthesis and sensing suppress bacterial cell division, with the ensuing filamentation allowing B. atropi to spread to neighboring cells.
Intrinsic dysbiosis-induced filamentation
Filamentation can also be induced by other pathways affecting thymidylate synthesis. For instance, partial loss of dihydrofolate reductase (DHFR) activity causes reversible filamentation. DHFR has a critical role in regulating the amount of tetrahydrofolate, which is essential for purine and thymidylate synthesis. DHFR activity can be inhibited by mutations or by high concentrations of the antibiotic trimethoprim (see antibiotic-induced filamentation above).
Overcrowding of the periplasm or envelope can also induce filamentation in Gram-negative bacteria by disrupting normal divisome function.
Filamentation and biotic interactions
Several examples of filamentation that result from biotic interactions between bacteria and other organisms or infectious agents have been reported. Filamentous cells are resistant to ingestion by bacterivores, and environmental conditions generated during predation can trigger filamentation. Filamentation can also be induced by signalling factors produced by other bacteria. In addition, Agrobacterium spp. filament in proximity to plant roots, and E. coli filaments when exposed to plant extracts. Lastly, bacteriophage infection can result in filamentation via the expression of proteins that inhibit divisome assembly.
See also
Bacterial morphological plasticity
Filamentous bacteriophage
Filamentous cyanobacteria
Segmented filamentous bacteria
References
Cellular processes
Microbiology | Filamentation | [
"Chemistry",
"Biology"
] | 1,233 | [
"Microbiology",
"Cellular processes",
"Microscopy"
] |
4,015,299 | https://en.wikipedia.org/wiki/Nominal%20Pipe%20Size | Nominal Pipe Size (NPS) is a North American set of standard sizes for pipes used for high or low pressures and temperatures. "Nominal" refers to pipe in non-specific terms and identifies the diameter of the hole with a non-dimensional number (for example – 2-inch nominal steel pipe" consists of many varieties of steel pipe with the only criterion being a outside diameter). Specific pipe is identified by pipe diameter and another non-dimensional number for wall thickness referred to as the Schedule (Sched. or Sch., for example – "2-inch diameter pipe, Schedule 40"). NPS is often incorrectly called National Pipe Size, due to confusion with the American standard for pipe threads, "national pipe straight", which also abbreviates as "NPS". The European and international designation equivalent to NPS is DN (diamètre nominal/nominal diameter/Nennweite), in which sizes are measured in millimetres, see ISO 6708. The term NB (nominal bore) is also frequently used interchangeably with DN.
In March 1927 the American Standards Association authorized a committee to standardize the dimensions of wrought steel and wrought iron pipe and tubing. At that time only a small selection of wall thicknesses were in use: standard weight (STD), extra-strong (XS), and double extra-strong (XXS), based on the iron pipe size (IPS) system of the day. However these three sizes did not fit all applications. Also, in 1939, it was hoped that the designations of STD, XS, and XXS would be phased out by schedule numbers, however those original terms are still in common use today (although sometimes referred to as standard, extra-heavy (XH), and double extra-heavy (XXH), respectively). Since the original schedules were created, there have been many revisions and additions to the tables of pipe sizes based on industry use and on standards from API, ASTM, and others.
Stainless steel pipes, which were coming into more common use in the mid 20th century, permitted the use of thinner pipe walls with much less risk of failure due to corrosion. By 1949 thinner schedules 5S and 10S, which were based on the pressure requirements modified to the nearest BWG number, had been created, and other "S" sizes followed later. Due to their thin walls, the smaller "S" sizes can not be threaded together according to ASME code, but must be fusion welded, brazed, roll grooved, or joined with press fittings.
Application
Based on the NPS and schedule of a pipe, the pipe outside diameter (OD) and wall thickness can be obtained from reference tables such as those below, which are based on ASME standards B36.10M and B36.19M. For example, NPS 14 Sch 40 has an OD of 14 inches (355.60 mm) and a wall thickness of 0.437 inches (11.13 mm). However, the NPS and OD values are not always equal, which can create confusion.
For NPS ⅛ to 12, the NPS and OD values are different. For example, the OD of an NPS 12 pipe is actually 12.75 inches (323.85 mm). To find the actual OD for each NPS value, refer to the tables below. (Note that for tubing, the size indicates actual dimensions, not nominal.)
For NPS 14 and up, the NPS and OD values are equal. In other words, an NPS 14 pipe is actually 14 inches (355.6 mm) OD.
The reason for the discrepancy for NPS ⅛ to 12 inches is that these NPS values were originally set to give the same inside diameter (ID) based on wall thicknesses standard at the time. However, as the set of available wall thicknesses evolved, the ID changed and NPS became only indirectly related to ID and OD.
For a given NPS, the OD stays fixed and the wall thickness increases with schedule. For a given schedule, the OD increases with NPS while the wall thickness stays constant or increases. Using equations and rules in ASME B31.3 Process Piping, it can be shown that pressure rating decreases with increasing NPS and constant schedule.
Some specifications use pipe schedules called standard wall (STD), extra strong (XS), and double extra strong (XXS), although these actually belong to an older system called iron pipe size (IPS). The IPS number is the same as the NPS number. STD is identical to SCH 40S, and 40S is identical to 40 for NPS ⅛ to NPS 10, inclusive. XS is identical to SCH 80S, and 80S is identical to 80 for NPS ⅛ to NPS 8, inclusive. XXS wall is thicker than schedule 160 up to NPS 6 inclusive, and schedule 160 is thicker than XXS wall for NPS 8 and larger.
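Because NPS is a name rather than a measurement, software that works with pipe sizes typically resolves it through lookup tables. A minimal Python sketch follows; the excerpted values are commonly tabulated figures for three sizes, and a real implementation would load the full ASME B36.10M table rather than this fragment:

```python
# Outside diameter (inches) keyed by NPS, and wall thickness (inches)
# keyed by (NPS, schedule); a tiny excerpt for illustration only.
OD = {"2": 2.375, "12": 12.75, "14": 14.0}
WALL = {("2", "40"): 0.154, ("12", "40"): 0.406, ("14", "40"): 0.437}

def inside_diameter(nps, schedule):
    """ID = OD - 2 * wall thickness, all in inches."""
    return OD[nps] - 2 * WALL[(nps, schedule)]

print(inside_diameter("2", "40"))    # ~2.067 in for NPS 2 Sch 40
```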
Blockage or ball test
When a pipe is welded or bent the most common method to inspect blockages, misalignment, ovality, and weld bead dimensional conformity is to pass a round ball through the pipe coil or circuit. If the inner pipe dimension is to be measured then the weld bead should be subtracted, if welding is applicable. Typically, the clearance tolerance for the ball must not exceed . Allowable ovality of any pipe is measured on the inside dimension of the pipe, normally 5% to 10% ovality can be accepted. If no other test is conducted to verify ovality, or blockages, this test must be seen as a standard requirement. A flow test can not be used in lieu of a blockage or ball test. See pipe dimensional table, Specification ASME B36.10M or B36.19M for pipe dimensions per schedule.
Stainless steel pipe is most often available in standard weight sizes (noted by the S designation; for example, NPS Sch 10S). However stainless steel pipe can also be available in other schedules.
Both polyvinyl chloride pipe (PVC) and chlorinated polyvinyl chloride pipe (CPVC) are made in NPS sizes.
NPS tables for selected sizes
NPS to NPS
DN does not exactly correspond to a size in millimeters, because ISO 6708 defines it as being a dimensionless specification only indirectly related to a diameter. The ISO 6708 sizes provide a metric name for existing inch sizes, resulting in a 1:1 correlation between NPS and DN sizes. ISO 6708 does not include values for "DN 6" or "DN 8"; however, ASME B36.10M lists "DN 6" and "DN 8". Also, the European Standard EN 12516-1 (Industrial valves – Shell design strength – Part 1: Tabulation method for steel valve shells) specifies the dimensions "DN 6" and "DN 8", equivalent to NPS ⅛ and NPS ¼ respectively.
Tolerance: The tolerance on pipe OD is +1/64 (0.0156) inch (0.40 mm), −1/32 (0.0312) inch (0.79 mm).
As per ASME B36.10M-2018, pipe wall thicknesses are rounded to the nearest 0.01 mm when converting from inches to millimetres.
NPS 4 to NPS 9
NPS 10 to NPS 24
NPS 26 to NPS 36
Additional sizes (NPS)
See also
British standard pipe thread sizes
Copper tubing sizes
Pipe thread sizes
National pipe thread sizes
Pipe (fluid conveyance)
Pipe sizes
Standard dimension ratio
Notes
References
Bibliography
External links
Notes on Pipe—PVC Pipe weights and max PSI
Quick calculator to determine standard pipe dimensions For Carbon Steel and Stainless Steel pipes as per ANSI. (Requires Membership)
Piping
Mechanical standards
Customary units of measurement in the United States | Nominal Pipe Size | [
"Chemistry",
"Engineering"
] | 1,604 | [
"Mechanical standards",
"Building engineering",
"Chemical engineering",
"Mechanical engineering",
"Piping"
] |
38,869,556 | https://en.wikipedia.org/wiki/IAU%20%281976%29%20System%20of%20Astronomical%20Constants | The International Astronomical Union at its XVIth General Assembly in Grenoble in 1976, accepted (Resolution No. 1) a whole new consistent set of astronomical constants recommended for reduction of astronomical observations, and for computation of ephemerides. It superseded the IAU's previous recommendations of 1964 (see IAU (1964) System of Astronomical Constants), became in effect in the Astronomical Almanac from 1984 onward, and remained in use until the introduction of the IAU (2009) System of Astronomical Constants. In 1994 the IAU recognized that the parameters became outdated, but retained the 1976 set for sake of continuity, but also recommended to start maintaining a set of "current best estimates".
this "sub group for numerical standards" had published a list, which included new constants (like those for relativistic time scales).
The system of constants was prepared by Commission 4 on ephemerides led by P. Kenneth Seidelmann (after whom asteroid 3217 Seidelmann is named).
At the time, a new standard epoch (J2000.0) was accepted; followed later by a new reference system with fundamental catalogue (FK5), and expressions for precession of the equinoxes,
and in 1979 by new expressions for the relation between Universal Time and sidereal time, and in 1979 and 1980 by a theory of nutation. There were no reliable rotation elements for most planets, but a joint working group on Cartographic Coordinates and Rotational Elements was installed to compile recommended values.
Units
The IAU(1976) system is based on the astronomical system of units:
The astronomical unit of time is the day (D) of 86,400 SI seconds, which is close to the mean solar day of civil clock time.
The astronomical unit of mass is the mass of the Sun (S).
The astronomical unit of length is known as the astronomical unit (A or au), which in the IAU (1976) system is defined as the length for which the gravitational constant, more specifically the Gaussian gravitational constant k expressed in the astronomical units (i.e. k² has units A³S⁻¹D⁻²), takes the value of 0.01720209895. This astronomical unit is approximately the mean distance between the Earth and the Sun. The value of k is the angular velocity in radians per day (i.e. the daily mean motion) of an infinitesimally small mass that moves around the Sun in a circular orbit at a distance of 1 AU.
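The defining property of k can be illustrated directly: since k is the daily mean motion of the idealized orbit, the orbital period is 2π/k days, the so-called Gaussian year. A quick check in Python:

```python
import math

k = 0.01720209895          # Gaussian gravitational constant (radians per day)

# Mean motion of k rad/day on a circular orbit at 1 AU implies a period
# of 2*pi/k days -- the "Gaussian year", close to one sidereal year.
period_days = 2 * math.pi / k
print(period_days)          # ~365.2568983 days
```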
Table of constants
Other quantities for use in the preparation of ephemerides
References
External links
IAU Commission 4
Astronomy
Physical constants
Constants | IAU (1976) System of Astronomical Constants | ["Physics", "Astronomy", "Mathematics"] | 549 | ["Physical quantities", "Quantity", "Astrophysics", "Physical constants", "nan", "Astronomical sub-disciplines"] |
38,870,173 | https://en.wikipedia.org/wiki/Feature%20learning | In machine learning (ML), feature learning or representation learning is a set of techniques that allow a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task.
Feature learning is motivated by the fact that ML tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data, such as image, video, and sensor data, have not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms.
Feature learning can be either supervised, unsupervised, or self-supervised:
In supervised feature learning, features are learned using labeled input data. Labeled data includes input-label pairs where the input is given to the model, and it must produce the ground truth label as the output. This can be leveraged to generate feature representations with the model which result in high label prediction accuracy. Examples include supervised neural networks, multilayer perceptrons, and dictionary learning.
In unsupervised feature learning, features are learned with unlabeled input data by analyzing the relationship between points in the dataset. Examples include dictionary learning, independent component analysis, matrix factorization, and various forms of clustering.
In self-supervised feature learning, features are learned using unlabeled data like unsupervised learning, however input-label pairs are constructed from each data point, enabling learning the structure of the data through supervised methods such as gradient descent. Classical examples include word embeddings and autoencoders. Self-supervised learning has since been applied to many modalities through the use of deep neural network architectures such as convolutional neural networks and transformers.
Supervised
Supervised feature learning is learning features from labeled data. The data label allows the system to compute an error term, the degree to which the system fails to produce the label, which can then be used as feedback to correct the learning process (reduce/minimize the error). Approaches include:
Supervised dictionary learning
Dictionary learning develops a set (dictionary) of representative elements from the input data such that each data point can be represented as a weighted sum of the representative elements. The dictionary elements and the weights may be found by minimizing the average representation error (over the input data), together with L1 regularization on the weights to enable sparsity (i.e., the representation of each data point has only a few nonzero weights).
Supervised dictionary learning exploits both the structure underlying the input data and the labels for optimizing the dictionary elements. For example, this supervised dictionary learning technique applies dictionary learning on classification problems by jointly optimizing the dictionary elements, weights for representing data points, and parameters of the classifier based on the input data. In particular, a minimization problem is formulated, where the objective function consists of the classification error, the representation error, an L1 regularization on the representing weights for each data point (to enable sparse representation of data), and an L2 regularization on the parameters of the classifier.
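For intuition, the unsupervised core of this formulation (representation error plus an L1 sparsity penalty on the weights) can be run directly with scikit-learn. The data, dictionary size, and penalty below are illustrative assumptions rather than values from the text; the supervised variant would add the classifier terms to this objective.

```python
# A minimal sparse-coding sketch: learn dictionary atoms and sparse weights
# by minimizing reconstruction error with an L1 penalty on the weights.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))            # 200 data points, 16 features

# alpha is the L1 penalty that makes each code sparse.
dl = DictionaryLearning(n_components=32, alpha=1.0, max_iter=100, random_state=0)
codes = dl.fit_transform(X)               # sparse weights, shape (200, 32)

X_hat = codes @ dl.components_            # each row: weighted sum of atoms
print("mean nonzeros per code:", np.count_nonzero(codes, axis=1).mean())
print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```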
Neural networks
Neural networks are a family of learning algorithms that use a "network" consisting of multiple layers of inter-connected nodes. It is inspired by the animal nervous system, where the nodes are viewed as neurons and edges are viewed as synapses. Each edge has an associated weight, and the network defines computational rules for passing input data from the network's input layer to the output layer. A network function associated with a neural network characterizes the relationship between input and output layers, which is parameterized by the weights. With appropriately defined network functions, various learning tasks can be performed by minimizing a cost function over the network function (weights).
Multilayer neural networks can be used to perform feature learning, since they learn a representation of their input at the hidden layer(s) which is subsequently used for classification or regression at the output layer. The most popular network architecture of this type is the Siamese network.
Unsupervised
Unsupervised feature learning is learning features from unlabeled data. The goal of unsupervised feature learning is often to discover low-dimensional features that capture some structure underlying the high-dimensional input data. When the feature learning is performed in an unsupervised way, it enables a form of semisupervised learning where features learned from an unlabeled dataset are then employed to improve performance in a supervised setting with labeled data. Several approaches are introduced in the following.
K-means clustering
K-means clustering is an approach for vector quantization. In particular, given a set of n vectors, k-means clustering groups them into k clusters (i.e., subsets) in such a way that each vector belongs to the cluster with the closest mean. The problem is computationally NP-hard, although suboptimal greedy algorithms have been developed.
K-means clustering can be used to group an unlabeled set of inputs into k clusters, and then use the centroids of these clusters to produce features. These features can be produced in several ways. The simplest is to add k binary features to each sample, where each feature j has value one iff the jth centroid learned by k-means is the closest to the sample under consideration. It is also possible to use the distances to the clusters as features, perhaps after transforming them through a radial basis function (a technique that has been used to train RBF networks). Coates and Ng note that certain variants of k-means behave similarly to sparse coding algorithms.
In a comparative evaluation of unsupervised feature learning methods, Coates, Lee and Ng found that k-means clustering with an appropriate transformation outperforms the more recently invented auto-encoders and RBMs on an image classification task. K-means also improves performance in the domain of NLP, specifically for named-entity recognition; there, it competes with Brown clustering, as well as with distributed word representations (also known as neural word embeddings).
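The centroid-based feature constructions described above are short enough to sketch directly; the dataset, the number of clusters, and the RBF width below are illustrative assumptions:

```python
# K-means feature generation: one-hot closest-centroid features and
# RBF-transformed distance features, concatenated into one feature matrix.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))

k = 8
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

dists = km.transform(X)                 # distance of each sample to each centroid
onehot = (dists.argmin(axis=1)[:, None] == np.arange(k)).astype(float)
rbf = np.exp(-1.0 * dists ** 2)         # RBF transformation of the distances

features = np.hstack([onehot, rbf])     # (500, 2k) learned feature matrix
```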
Principal component analysis
Principal component analysis (PCA) is often used for dimension reduction. Given an unlabeled set of n input data vectors, PCA generates p (which is much smaller than the dimension of the input data) right singular vectors corresponding to the p largest singular values of the data matrix, where the kth row of the data matrix is the kth input data vector shifted by the sample mean of the input (i.e., subtracting the sample mean from the data vector). Equivalently, these singular vectors are the eigenvectors corresponding to the p largest eigenvalues of the sample covariance matrix of the input vectors. These p singular vectors are the feature vectors learned from the input data, and they represent directions along which the data has the largest variations.
PCA is a linear feature learning approach since the p singular vectors are linear functions of the data matrix. The singular vectors can be generated via a simple algorithm with p iterations. In the ith iteration, the projection of the data matrix on the (i−1)th singular vector is subtracted, and the ith singular vector is found as the right singular vector corresponding to the largest singular value of the residual data matrix.
PCA has several limitations. First, it assumes that the directions with large variance are of most interest, which may not be the case. PCA only relies on orthogonal transformations of the original data, and it exploits only the first- and second-order moments of the data, which may not well characterize the data distribution. Furthermore, PCA can effectively reduce dimension only when the input data vectors are correlated (which results in a few dominant eigenvalues).
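Since the learned features are just the top right singular vectors of the mean-centered data matrix, the whole procedure is a few lines of numpy; the shapes and the choice p = 5 below are illustrative:

```python
# PCA via SVD: the p feature vectors are the right singular vectors of
# the mean-centered data matrix, as described above.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))

Xc = X - X.mean(axis=0)                 # subtract the sample mean from each row
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

p = 5
components = Vt[:p]                     # top-p right singular vectors (features)
Z = Xc @ components.T                   # low-dimensional representation
explained = (s[:p] ** 2) / np.sum(s ** 2)   # variance captured by each feature
```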
Local linear embedding
Local linear embedding (LLE) is a nonlinear learning approach for generating low-dimensional neighbor-preserving representations from (unlabeled) high-dimension input. The approach was proposed by Roweis and Saul (2000). The general idea of LLE is to reconstruct the original high-dimensional data using lower-dimensional points while maintaining some geometric properties of the neighborhoods in the original data set.
LLE consists of two major steps. The first step is for "neighbor-preserving", where each input data point Xi is reconstructed as a weighted sum of K nearest neighbor data points, and the optimal weights are found by minimizing the average squared reconstruction error (i.e., the difference between an input point and its reconstruction) under the constraint that the weights associated with each point sum to one. The second step is for "dimension reduction", looking for vectors in a lower-dimensional space that minimize the representation error using the optimized weights from the first step. Note that in the first step, the weights are optimized with fixed data, which can be solved as a least squares problem. In the second step, lower-dimensional points are optimized with fixed weights, which can be solved via sparse eigenvalue decomposition.
The reconstruction weights obtained in the first step capture the "intrinsic geometric properties" of a neighborhood in the input data. It is assumed that original data lie on a smooth lower-dimensional manifold, and the "intrinsic geometric properties" captured by the weights of the original data are also expected to be on the manifold. This is why the same weights are used in the second step of LLE. Compared with PCA, LLE is more powerful in exploiting the underlying data structure.
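Both steps are implemented in scikit-learn's LocallyLinearEmbedding; the swiss-roll dataset and parameter values below are illustrative:

```python
# Minimal LLE usage: neighbor weights, then a 2-D neighbor-preserving embedding.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1000, random_state=0)    # 3-D manifold data
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
Y = lle.fit_transform(X)                                  # 2-D representation
```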
Independent component analysis
Independent component analysis (ICA) is a technique for forming a data representation using a weighted sum of independent non-Gaussian components. The assumption of non-Gaussianity is imposed because the weights cannot be uniquely determined when all the components follow a Gaussian distribution.
Unsupervised dictionary learning
Unsupervised dictionary learning does not utilize data labels and exploits the structure underlying the data for optimizing dictionary elements. An example of unsupervised dictionary learning is sparse coding, which aims to learn basis functions (dictionary elements) for data representation from unlabeled input data. Sparse coding can be applied to learn overcomplete dictionaries, where the number of dictionary elements is larger than the dimension of the input data. Aharon et al. proposed the K-SVD algorithm for learning a dictionary of elements that enables sparse representation.
Multilayer/deep architectures
The hierarchical architecture of the biological neural system inspires deep learning architectures for feature learning by stacking multiple layers of learning nodes. These architectures are often designed based on the assumption of distributed representation: observed data is generated by the interactions of many different factors on multiple levels. In a deep learning architecture, the output of each intermediate layer can be viewed as a representation of the original input data. Each level uses the representation produced by the previous, lower level as input, and produces new representations as output, which are then fed to higher levels. The input at the bottom layer is raw data, and the output of the final, highest layer is the final low-dimensional feature or representation.
Restricted Boltzmann machine
Restricted Boltzmann machines (RBMs) are often used as a building block for multilayer learning architectures. An RBM can be represented by an undirected bipartite graph consisting of a group of binary hidden variables, a group of visible variables, and edges connecting the hidden and visible nodes. It is a special case of the more general Boltzmann machine with the constraint of no intra-layer connections. Each edge in an RBM is associated with a weight. The weights together with the connections define an energy function, based on which a joint distribution of visible and hidden nodes can be devised. Based on the topology of the RBM, the hidden (visible) variables are independent, conditioned on the visible (hidden) variables. Such conditional independence facilitates computations.
An RBM can be viewed as a single layer architecture for unsupervised feature learning. In particular, the visible variables correspond to input data, and the hidden variables correspond to feature detectors. The weights can be trained by maximizing the probability of visible variables using Hinton's contrastive divergence (CD) algorithm.
In general, training RBMs by solving the maximization problem tends to result in non-sparse representations. Sparse RBM was proposed to enable sparse representations. The idea is to add a regularization term to the objective function of the data likelihood, which penalizes the deviation of the expected hidden variables from a small constant. RBMs have also been used to obtain disentangled representations of data, where interesting features map to separate hidden units.
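The contrastive-divergence update mentioned above is compact enough to sketch in numpy; the unit counts, learning rate, and random stand-in data below are illustrative assumptions:

```python
# One CD-1 update for an RBM with binary units: a positive phase driven by
# the data, one Gibbs step for the negative phase, then a gradient step.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 16, 8, 0.1
W = 0.01 * rng.normal(size=(n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

v0 = (rng.random((32, n_visible)) < 0.5).astype(float)   # batch of binary data

# Positive phase: hidden probabilities and samples given the data.
ph0 = sigmoid(v0 @ W + b_h)
h0 = (rng.random(ph0.shape) < ph0).astype(float)

# Negative phase: one Gibbs step (reconstruct v, then recompute h).
pv1 = sigmoid(h0 @ W.T + b_v)
v1 = (rng.random(pv1.shape) < pv1).astype(float)
ph1 = sigmoid(v1 @ W + b_h)

# CD-1 approximation to the gradient of the data log-likelihood.
W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
b_v += lr * (v0 - v1).mean(axis=0)
b_h += lr * (ph0 - ph1).mean(axis=0)
```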
Autoencoder
An autoencoder consisting of an encoder and a decoder is a paradigm for deep learning architectures. An example is provided by Hinton and Salakhutdinov where the encoder uses raw data (e.g., image) as input and produces feature or representation as output and the decoder uses the extracted feature from the encoder as input and reconstructs the original input raw data as output. The encoder and decoder are constructed by stacking multiple layers of RBMs. The parameters involved in the architecture were originally trained in a greedy layer-by-layer manner: after one layer of feature detectors is learned, its outputs are fed in as the visible variables for training the next RBM. Current approaches typically apply end-to-end training with stochastic gradient descent methods. Training can be repeated until some stopping criteria are satisfied.
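A minimal end-to-end autoencoder of the encoder/decoder kind described above can be written in a few lines of PyTorch; the layer sizes, random stand-in data, and hyperparameters are illustrative assumptions:

```python
# A small autoencoder trained end-to-end with SGD: the encoder compresses
# the input to a 32-dimensional code, the decoder reconstructs the input.
import torch
from torch import nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))
model = nn.Sequential(encoder, decoder)

opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)                 # stand-in batch of flattened images
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), x)         # reconstruct the input from the code
    loss.backward()
    opt.step()

features = encoder(x)                   # 32-dimensional learned representation
```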
Self-supervised
Self-supervised representation learning is learning features by training on the structure of unlabeled data rather than relying on explicit labels for an information signal. This approach has enabled the combined use of deep neural network architectures and larger unlabeled datasets to produce deep feature representations. Training tasks typically fall under the classes of either contrastive, generative or both. Contrastive representation learning trains representations for associated data pairs, called positive samples, to be aligned, while pairs with no relation, called negative samples, are contrasted. A larger portion of negative samples is typically necessary in order to prevent catastrophic collapse, which is when all inputs are mapped to the same representation. Generative representation learning tasks the model with producing the correct data to either match a restricted input or reconstruct the full input from a lower dimensional representation.
A common setup for self-supervised representation learning of a certain data type (e.g. text, image, audio, video) is to pretrain the model using large datasets of general context, unlabeled data. Depending on the context, the result of this is either a set of representations for common data segments (e.g. words) which new data can be broken into, or a neural network able to convert each new data point (e.g. image) into a set of lower dimensional features. In either case, the output representations can then be used as an initialization in many different problem settings where labeled data may be limited. Specialization of the model to specific tasks is typically done with supervised learning, either by fine-tuning the model / representations with the labels as the signal, or freezing the representations and training an additional model which takes them as an input.
Many self-supervised training schemes have been developed for use in representation learning of various modalities, often first showing successful application in text or image before being transferred to other data types.
Text
Word2vec is a word embedding technique which learns to represent words through self-supervision over each word and its neighboring words in a sliding window across a large corpus of text. The model has two possible training schemes to produce word vector representations, one generative and one contrastive. The first is word prediction given each of the neighboring words as an input. The second is training on the representation similarity for neighboring words and representation dissimilarity for random pairs of words. A limitation of word2vec is that only the pairwise co-occurrence structure of the data is used, and not the ordering or entire set of context words. More recent transformer-based representation learning approaches attempt to solve this with word prediction tasks. GPTs pretrain on next word prediction using prior input words as context, whereas BERT masks random tokens in order to provide bidirectional context.
Other self-supervised techniques extend word embeddings by finding representations for larger text structures such as sentences or paragraphs in the input data. Doc2vec extends the generative training approach in word2vec by adding an additional input to the word prediction task based on the paragraph it is within, and is therefore intended to represent paragraph level context.
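As a concrete illustration, gensim provides one widely used word2vec implementation; the toy corpus below is illustrative, and the parameter names assume a recent (4.x) gensim release:

```python
# Training skip-gram word vectors on a tiny illustrative corpus.
from gensim.models import Word2Vec

sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "log"],
]

# sg=1 selects skip-gram (predict neighbors from the word); sg=0 selects CBOW.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)
vec = model.wv["cat"]                       # 50-dimensional word representation
print(model.wv.most_similar("cat", topn=3))
```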
Image
The domain of image representation learning has employed many different self-supervised training techniques, including transformation, inpainting, patch discrimination and clustering.
Examples of generative approaches are Context Encoders, which trains an AlexNet CNN architecture to generate a removed image region given the masked image as input, and iGPT, which applies the GPT-2 language model architecture to images by training on pixel prediction after reducing the image resolution.
Many other self-supervised methods use siamese networks, which generate different views of the image through various augmentations that are then aligned to have similar representations. The challenge is avoiding collapsing solutions where the model encodes all images to the same representation. SimCLR is a contrastive approach which uses negative examples in order to generate image representations with a ResNet CNN. Bootstrap Your Own Latent (BYOL) removes the need for negative samples by encoding one of the views with a slow moving average of the model parameters as they are being modified during training.
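The contrastive objective used by SimCLR-style methods (often called NT-Xent or InfoNCE) can be sketched directly; the temperature, batch size, and random stand-in representations below are illustrative assumptions:

```python
# NT-Xent contrastive loss: representations of two augmented views of the
# same image are pulled together; all other batch items act as negatives.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, d) representations of two views of the same N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit norm
    sim = z @ z.T / temperature                          # cosine similarities
    n = z1.shape[0]
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    # The positive for row i is its other view: i+n (first half) or i-n.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(32, 128), torch.randn(32, 128))
```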
Graph
The goal of many graph representation learning techniques is to produce an embedded representation of each node based on the overall network topology. node2vec extends the word2vec training technique to nodes in a graph by using co-occurrence in random walks through the graph as the measure of association. Another approach is to maximize mutual information, a measure of similarity, between the representations of associated structures within the graph. An example is Deep Graph Infomax, which uses contrastive self-supervision based on mutual information between the representation of a “patch” around each node, and a summary representation of the entire graph. Negative samples are obtained by pairing the graph representation with either representations from another graph in a multigraph training setting, or corrupted patch representations in single graph training.
Video
With analogous results in masked prediction and clustering, video representation learning approaches are often similar to image techniques but must utilize the temporal sequence of video frames as an additional learned structure. Examples include VCP, which masks video clips and trains to choose the correct one given a set of clip options, and Xu et al., who train a 3D-CNN to identify the original order given a shuffled set of video clips.
Audio
Self-supervised representation techniques have also been applied to many audio data formats, particularly for speech processing. Wav2vec 2.0 discretizes the audio waveform into timesteps via temporal convolutions, and then trains a transformer on masked prediction of random timesteps using a contrastive loss. This is similar to the BERT language model, except as in many SSL approaches to video, the model chooses among a set of options rather than over the entire word vocabulary.
Multimodal
Self-supervised learning has also been used to develop joint representations of multiple data types. Approaches usually rely on some natural or human-derived association between the modalities as an implicit label, for instance video clips of animals or objects with characteristic sounds, or captions written to describe images. CLIP produces a joint image-text representation space by training to align image and text encodings from a large dataset of image-caption pairs using a contrastive loss. MERLOT Reserve trains a transformer-based encoder to jointly represent audio, subtitles and video frames from a large dataset of videos through 3 joint pretraining tasks: contrastive masked prediction of either audio or text segments given the video frames and surrounding audio and text context, along with contrastive alignment of video frames with their corresponding captions.
Multimodal representation models are typically unable to assume direct correspondence of representations in the different modalities, since the precise alignment can often be noisy or ambiguous. For example, the text "dog" could be paired with many different pictures of dogs, and correspondingly a picture of a dog could be captioned with varying degrees of specificity. This limitation means that downstream tasks may require an additional generative mapping network between modalities to achieve optimal performance, such as in DALLE-2 for text to image generation.
Dynamic Representation Learning
Dynamic representation learning methods generate latent embeddings for dynamic systems such as dynamic networks. Since particular distance functions are invariant under particular linear transformations, different sets of embedding vectors can actually represent the same or similar information. Therefore, for a dynamic system, a temporal difference in its embeddings may be explained by misalignment of embeddings due to arbitrary transformations and/or by actual changes in the system. Generally speaking, temporal embeddings learned via dynamic representation learning methods should therefore be inspected for any spurious changes and be aligned before subsequent dynamic analyses.
See also
Automated machine learning (AutoML)
Deep learning
Geometric feature learning
Feature detection (computer vision)
Feature extraction
Word embedding
Vector quantization
Variational autoencoder
References
Machine learning | Feature learning | ["Engineering"] | 4,430 | ["Artificial intelligence engineering", "Machine learning"] |
38,879,444 | https://en.wikipedia.org/wiki/TP53-inducible%20glycolysis%20and%20apoptosis%20regulator | The TP53-inducible glycolysis and apoptosis regulator (TIGAR) also known as fructose-2,6-bisphosphatase TIGAR is an enzyme that in humans is encoded by the C12orf5 gene.
TIGAR is a recently discovered enzyme that primarily functions as a regulator of glucose breakdown in human cells. In addition to its role in controlling glucose degradation, TIGAR activity can allow a cell to carry out DNA repair, and the degradation of its own organelles. Finally, TIGAR can protect a cell from death. Since its discovery in 2005 by Kuang-Yu Jen and Vivian G. Cheung, TIGAR has become of particular interest to the scientific community thanks to its active role in many cancers. Normally, TIGAR manufactured by the body is activated by the p53 tumour suppressor protein after a cell has experienced a low level of DNA damage or stress. In some cancers, TIGAR has fallen under the control of other proteins. The hope is that future research into TIGAR will provide insight into new ways to treat cancer.
This gene is regulated as part of the p53 tumor suppressor pathway and encodes a protein with sequence similarity to the bisphosphate domain of the glycolytic enzyme that degrades fructose-2,6-bisphosphate. The protein functions by blocking glycolysis and directing the pathway into the pentose phosphate shunt. Expression of this protein also protects cells from DNA damaging reactive oxygen species and provides some protection from DNA damage-induced apoptosis. The 12p13.32 region that includes this gene is paralogous to the 11q13.3 region.
Gene
In humans the TIGAR gene, known as C12orf5, is found on chromosome 12 at 12p13.32 and consists of 6 exons. The C12orf5 mRNA is 8237 base pairs in length.
Discovery
Jen and Cheung first discovered the c12orf5 gene whilst using computer based searches to find novel p53-regulated genes that were switched on in response to ionizing radiation. They published their research in Cancer Research in 2005.
Later a study focused wholly on the structure and function of the c12orf5 gene was published in Cell by Karim Bensaad et al., in which c12orf5 was given the name TIGAR in honour of its apparent function.
Expression
TIGAR transcription is rapidly activated by the p53 tumour suppressor protein in response to low levels of cellular stress, such as that caused by exposure to low doses of UV. However, under high levels of cellular stress TIGAR expression decreases. P53, a transcription factor, can bind two sites within the human TIGAR gene to activate expression. One site is found within the first intron, and binds p53 with high affinity. The second is found just prior to the first exon, binds p53 with low affinity, and is conserved between mice and humans.
TIGAR expression can be regulated by other non-p53 mechanisms in tumour cell lines.
Structure
TIGAR is approximately 30 kDa and has a tertiary structure that is similar to the histidine phosphatase fold. The core of TIGAR is made up of an α-β-α sandwich, which consists of a six-stranded β sheet surrounded by 4 α helices. Additional α helices and a long loop are built around the core to give the full enzyme. TIGAR has an active site that is structurally similar to that of PhoE (a bacterial phosphatase enzyme) and functionally similar to that of fructose-2,6-bisphosphatase.
The bis-phosphatase-like active site of TIGAR is positively charged, and catalyses the removal of phosphate groups from other molecules. In contrast to Fructose-2,6-Bisphosphatase, TIGAR's active site is open and accessible like that of PhoE. The site contains 3 crucial amino acids (2 histidines and 1 glutamic acid) that are involved in the phosphatase reaction. These 3 residues are known collectively as a catalytic triad, and are found in all enzymes belonging to the phosphoglyceromutase branch of the histidine phosphatase superfamily. One of the histidine residues is electrostatically bound to a negatively charged phosphate. A second phosphate is bound elsewhere in the active site.
Function
TIGAR activity can have multiple cellular effects. TIGAR acts as a direct regulator of fructose-2,6-bisphosphate levels and hexokinase 2 activity, and this can lead indirectly to many changes within the cell in a chain of biochemical events. TIGAR is a fructose-2,6-bisphosphatase activated as part of the p53 response; its activity results in inhibition of glucose transporter expression and regulation of hexokinase and phosphoglycerate mutase expression. TIGAR also inhibits phosphofructokinase (PFK) by lowering the level of fructose-2,6-bisphosphate; as a result, glycolysis is inhibited and the pentose phosphate pathway is promoted.
Fructose-2,6-bisphosphate regulation
TIGAR decreases cellular fructose-2,6-bisphosphate levels. It catalyses the removal of a phosphate group from fructose-2,6-bisphosphate (F-2,6-BP):
Fructose-2,6-bisphosphate → Fructose-6-phosphate (F-6-P) + phosphate
F-2,6-BP is an allosteric regulator of cellular glucose metabolism pathways. Ordinarily F-2,6-BP binds to and increases the activity of phosphofructokinase 1. Phosphofructokinase-1 catalyses the addition of a phosphate to F-6-P to form Fructose-1,6-bisphosphate (F-1,6-BP). This is an essential step in the glycolysis pathway, which forms the first part of aerobic respiration in mammals. F-2,6-BP also binds to and decreases the activity of fructose-1,6-bisphosphatase. Fructose-1,6-bisphosphatase catalyses the removal of phosphate from F-1,6-BP to form F-6-P. This reaction is part of the gluconeogenesis pathway, which synthesizes glucose, and is the reverse of glycolysis. When TIGAR decreases F-2,6-BP levels, phosphofructokinase becomes less active whilst fructose-1,6-bisphosphatase activity increases. Fructose-6-phosphate levels build up, which has multiple effects inside the cell:
The rate of glycolysis decreases
The rate of gluconeogenesis increases
Excess fructose-6-phosphate is converted to glucose-6-phosphate in an isomerization reaction
Excess glucose-6-phosphate enters the pentose phosphate pathway. This ultimately leads to the removal of reactive oxygen species (ROS) in the cell
The removal of ROS helps to prevent apoptosis (cell suicide), and may also reduce build-up of DNA damage over time.
DNA damage response and cell cycle arrest
TIGAR can act to prevent a cell progressing through the stages of its growth and division cycle by decreasing cellular ATP levels. This is known as cell cycle arrest. This function of TIGAR forms part of the p53 mediated DNA damage response where, under low levels of cellular stress, p53 initiates cell cycle arrest to allow the cell time for repair. Under high levels of cellular stress, p53 initiates apoptosis instead.
In non-resting cells, the cell cycle consists of the G0 → G1 → S → G2 → M phases, and is tightly regulated at checkpoints between the phases. If the cell has undergone stress, certain proteins are expressed that will prevent the specific sequence of macromolecular interactions at the checkpoint required for progression to the next phase.
TIGAR activity can prevent cells progressing into S phase through a checkpoint known in humans as the restriction point. At the very start of G1 phase, a protein called retinoblastoma (Rb) exists in an un-phosphorylated state. In this state, Rb binds to a protein transcription factor E2F and prevents E2F from activating transcription of proteins essential for S-phase. During a normal cell cycle, as G1 progresses, Rb will become phosphorylated in a specific set of sequential steps by proteins called cyclin dependent kinases (cdks) bound to cyclin proteins. The specific complexes that phosphorylate Rb are cyclin D-cdk4 and cyclin E-cdk2.
When Rb has been phosphorylated many times, it dissociates from E2F. E2F is free to activate expression of S-phase genes. TIGAR can indirectly prevent a cell passing through the Restriction Point by keeping Rb unphosphorylated.
When expressed, TIGAR decreases cellular ATP levels through its phosphatase activity. Less ATP is available for Rb phosphorylation, so Rb remains un-phosphorylated and bound to E2F, which cannot activate S phase genes. Expression of cyclin D, ckd4, cyclin E and cdk2 decreases when TIGAR is active, due to a lack of ATP essential for their transcription and translation. This TIGAR activity serves to arrest cells in G1.
Activity of hexokinase 2
Under low oxygen conditions known as hypoxia, a small amount of TIGAR travels to the mitochondria and increases the activity of hexokinase 2 (HK2) by binding to it.
During hypoxia, a protein called Hif1α is activated and causes TIGAR to re-localise from the cytoplasm to the outer mitochondrial membrane. Here, HK2 is bound to an anion channel in the outer mitochondrial membrane called VDAC. TIGAR binds hexokinase 2 and increases its activity by an as yet unknown mechanism.
Hexokinase 2 (HK2) carries out the following reaction:
Glucose + ATP → Glucose-6-phosphate + ADP
HK2 is believed to maintain the mitochondrial membrane potential by keeping ADP levels high. It also prevents apoptosis in several ways: it reduces mitochondrial ROS levels, and it prevents apoptosis-causing protein Bax from creating a channel with VDAC. This stops cytochrome C protein passing out through VDAC into the cytoplasm where it triggers apoptosis via a caspase protein cascade.
TIGAR does not re-localise to the mitochondria and bind HK2 under normal cellular conditions, or if the cell is starved of glucose. Re-localisation to the mitochondria does not require TIGAR's phosphatase domain. Instead 4 amino acids at the C-terminal end of TIGAR are essential.
Protection from apoptosis
Increased expression of TIGAR protects cells from oxidative-stress induced apoptosis by decreasing the levels of ROS. TIGAR can indirectly reduce ROS in two distinctive ways. The intracellular environment of the cell will determine which of these two modes of TIGAR action is more prevalent in the cell at any one time.
The fructose-2,6-bisphosphatase activity of TIGAR reduces ROS by increasing the activity of the Pentose Phosphate Pathway (PPP). Glucose-6-phosphate builds up due to de-phosphorylation of F-2,6-BP by TIGAR and enters the PPP.
This causes the PPP to generate more nicotinamide adenine dinucleotide phosphate (NADPH). NADPH is a carrier of electrons that is used by the cell as a reducing agent in many anabolic reactions. NADPH produced by the PPP passes electrons to an oxidized glutathione molecule (GSSG) to form reduced glutathione (GSH).
GSH becomes the reducing agent, and passes electrons on to the ROS hydrogen peroxide to form harmless water in the reaction:
2 GSH + H2O2 → GSSG + 2 H2O
The decrease in H2O2 as a result of TIGAR activity protects against apoptosis.
TIGAR also reduces ROS by increasing the activity of HK2. HK2 reduces ROS levels indirectly by keeping ADP levels at the outer mitochondrial membrane high. If ADP levels fall, the rate of respiration decreases and causes the electron transport chain to become over-reduced with excess electrons. These excess electrons pass to oxygen and form ROS.
The action of the TIGAR/HK2 complex only protects cells from apoptosis under low oxygen conditions. Under normal or glucose starved conditions, TIGAR mediated protection from apoptosis comes from its bis-phosphatase activity alone.
TIGAR cannot prevent apoptosis via death pathways that are independent from ROS and p53. In some cells, TIGAR expression can push cells further towards apoptosis.
Interleukin 3 (IL-3) is a growth factor that can bind to receptors on a cell's surface and tells the cell to survive and grow. When IL-3 dependent cell lines are deprived of IL-3 they die due to decreased uptake and metabolism of glucose. When TIGAR is overexpressed in IL-3 deprived cells the rate of glycolysis decreases further which enhances the apoptosis rate.
Autophagy
Autophagy is when a cell digests some of its own organelles by lysosomal degradation. Autophagy is employed to remove damaged organelles, or under starvation conditions to provide additional nutrients. Normally, autophagy occurs via the TSC–mTOR pathway, but it can be induced by ROS. TIGAR, even at very low levels, inhibits autophagy by decreasing ROS levels. The mechanism by which TIGAR does this is independent of the mTOR pathway, but the exact details are unknown.
Possible roles in cancer
TIGAR can promote development or inhibition of several cancers depending on the cellular context. TIGAR can have some effect on three characteristics of cancer; the ability to evade apoptosis, uncontrolled cell division, and altered metabolism. Many cancer cells have altered metabolism where the rate of glycolysis and anaerobic respiration are very high whilst oxidative respiration is low, which is called the Warburg Effect (or aerobic glycolysis). This allows cancer cells to survive under low oxygen conditions, and use molecules from respiratory pathways to synthesise amino acids and nucleic acids to maintain rapid growth.
In glioma, a type of brain cancer, TIGAR can be overexpressed, where it has oncogene-like effects. In this case, TIGAR acts to maintain energy levels for increased growth by increasing respiration (conferring altered metabolism), and also protects glioma cells against hypoxia-induced apoptosis by decreasing ROS (conferring evasion of apoptosis). TIGAR is also overexpressed in some breast cancers.
In multiple myeloma, TIGAR expression is linked to the activity of MUC-1. MUC-1 is an oncoprotein that is overexpressed in multiple myeloma and protects these cells from ROS-induced apoptosis by maintaining TIGAR activity. When MUC-1 activity is removed, levels of TIGAR decline and cells undergo ROS-induced apoptosis.
In a type of head and neck cancer known as nasopharyngeal cancer, the onco-protein kinase c-Met maintains TIGAR expression. TIGAR increases glycolytic rate and NADPH levels which allows the cancer cells to maintain fast growth rates.
However, TIGAR may also have an inhibitory effect on cancer development by preventing cellular proliferation through its role in p53 -mediated cell cycle arrest.
References
Further reading
Apoptosis
Glycolysis
Human proteins | TP53-inducible glycolysis and apoptosis regulator | ["Chemistry"] | 3,398 | ["Carbohydrate metabolism", "Glycolysis", "Apoptosis", "Signal transduction"] |
38,880,080 | https://en.wikipedia.org/wiki/Lu%20Jeu%20Sham | Lu Jeu Sham (Chinese: 沈呂九) (born April 28, 1938) is an American physicist. He is best known for his work with Walter Kohn on the Kohn–Sham equations.
Biography
Lu Jeu Sham's family was from Fuzhou, Fujian, but he was born in British Hong Kong on April 28, 1938. He graduated from the Pui Ching Middle School in 1955 and then traveled to England for his higher education. He received his Bachelor of Science in mathematics (1st class honours) from Imperial College, University of London, in 1960 and his PhD in physics from the University of Cambridge in 1963. From 1963 to 1966 he worked with Prof. W. Kohn as a postdoctoral fellow at the University of California, San Diego. From 1966 to 1967 Sham was an assistant professor of physics at the University of California, Irvine, and from 1967 to 1968 he was a Reader at Queen Mary College, University of London. He joined the faculty of the University of California, San Diego, in 1968, where he was a professor in the Department of Physics, eventually serving as department head. He is now a UCSD professor emeritus.
Sham was elected to the National Academy of Sciences in 1998.
Scientific contributions
Sham is noted for his work on density functional theory (DFT) with Walter Kohn, which resulted in the Kohn–Sham equations of DFT. The Kohn–Sham method is widely used in materials science. Kohn received a Nobel Prize in Chemistry in 1998 for the Kohn–Sham equations and other work related to DFT.
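For reference, the Kohn–Sham construction replaces the interacting many-electron problem with auxiliary single-particle equations in an effective potential; a standard textbook statement (in atomic units) is:

```latex
\left( -\tfrac{1}{2}\nabla^{2} + v_{\mathrm{eff}}(\mathbf{r}) \right) \phi_{i}(\mathbf{r})
  = \varepsilon_{i}\,\phi_{i}(\mathbf{r}),
\qquad
n(\mathbf{r}) = \sum_{i=1}^{N} |\phi_{i}(\mathbf{r})|^{2},
```

where the effective potential

```latex
v_{\mathrm{eff}}(\mathbf{r}) = v_{\mathrm{ext}}(\mathbf{r})
  + \int \frac{n(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, \mathrm{d}^{3}r'
  + v_{\mathrm{xc}}(\mathbf{r})
```

collects the external, Hartree, and exchange-correlation terms, and the equations are solved self-consistently.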
Sham's other research interests include condensed matter physics and optical control of electron spins in semiconductor nanostructures for quantum information processing.
Honors and awards
Member of the US National Academy of Sciences (1998)
Member of Academia Sinica (1998)
Fellow of American Association for the Advancement of Science (2011)
Fellow of the American Physical Society (1977)
Fellow of Optica (formerly OSA) (2009)
The Willis E. Lamb Award for Laser Science and Quantum Optics (2004)
The MRS Materials Theory Award (2019)
Humboldt Foundation Award (1978)
Guggenheim Fellowship (1983)
References
External links
Interview of Lu Sham by David Zierler on October 22, 2020, Niels Bohr Library & Archives, American Institute of Physics, College Park, MD USA
1938 births
Living people
University of California, San Diego faculty
Hong Kong physicists
Alumni of Imperial College London
Alumni of the University of Cambridge
Computational chemists
Fellows of the American Physical Society
American scientists of Asian descent | Lu Jeu Sham | ["Chemistry"] | 511 | ["Computational chemistry", "Theoretical chemists", "Computational chemists"] |
31,801,718 | https://en.wikipedia.org/wiki/Lamina%20emergent%20mechanism | Lamina emergent mechanisms (LEMs) are more commonly referred to as "pop-up mechanisms", as seen in pop-up books; LEM is the technical term for such mechanisms. LEMs are a subset of compliant mechanisms fabricated from planar materials (lamina) whose motion emerges from the fabrication plane. LEMs use compliance, that is, the deflection of flexible members, to achieve motion.
Background
Ortho-planar mechanisms are an earlier concept similar to LEMs. Well-known LEMs include pop-up books, flat-folding origami mechanisms, origami stents, and deployable mechanisms. Research on LEMs also overlaps with deployable structures, origami, kirigami, compliant mechanisms, microelectromechanical systems, packaging engineering, robotics, paper engineering, developable mechanisms, and more.
References
External links
Compliant Mechanism Research Group at BYU
Motion Structure research at Oxford
Rigid Origami Structure research by Tomohiro Tachi
Metamorphic Mechanism research at King's College London
Robotic Origami Folding research at Carnegie Mellon
Mechanisms (engineering)
Robotics hardware
Paper folding | Lamina emergent mechanism | ["Mathematics", "Engineering"] | 240 | ["Robotics hardware", "Robotics engineering", "Recreational mathematics", "Mechanical engineering", "Paper folding", "Mechanisms (engineering)"] |
31,802,451 | https://en.wikipedia.org/wiki/Reaction%20bonded%20silicon%20carbide | Reaction bonded silicon carbide, also known as siliconized silicon carbide or SiSiC, is a type of silicon carbide that is manufactured by a chemical reaction between porous carbon or graphite and molten silicon. Because of the leftover traces of silicon, reaction bonded silicon carbide is often referred to as siliconized silicon carbide, or by its abbreviation SiSiC.
If bulk silicon carbide is produced by sintering of silicon carbide powder, it usually contains traces of chemicals called sintering aids, which are added to support the sintering process by allowing lower sintering temperatures. This type of silicon carbide is often referred to as sintered silicon carbide, or abbreviated to SSiC.
The silicon carbide powder used is obtained from silicon carbide produced as described in the article silicon carbide.
References
Ceramic materials
Inorganic silicon compounds
Materials science
Chemical engineering | Reaction bonded silicon carbide | ["Physics", "Chemistry", "Materials_science", "Engineering"] | 185 | ["Ceramic engineering", "Applied and interdisciplinary physics", "Inorganic compounds", "Chemical engineering", "Materials science", "Inorganic compound stubs", "Ceramic materials", "nan", "Inorganic silicon compounds"] |
31,804,992 | https://en.wikipedia.org/wiki/Luminescent%20solar%20concentrator | A luminescent solar concentrator (LSC) is a device for concentrating radiation, solar radiation in particular, to produce electricity. Luminescent solar concentrators operate on the principle of collecting radiation over a large area, converting it by luminescence (specifically by fluorescence) and directing the generated radiation into relatively small photovoltaic solar cells at the edges.
Design
Initial designs typically comprised parallel thin, flat layers of alternating luminescent and transparent materials, placed to gather incoming radiation on their (broader) faces and emit concentrated radiation around their (narrower) edges. Commonly the device would direct the concentrated radiation onto solar cells to generate electric power.
Other configurations (such as doped or coated optical fibers, or contoured stacks of alternating layers) may better fit particular applications.
Structure and principles of operation
The layers in the stack may be separate parallel plates or alternating strata in a solid structure. In principle, if the effective input area is sufficiently large relative to the effective output area, the output would be of correspondingly higher irradiance than the input, as measured in watts per square metre. The concentration factor is the ratio between output and input irradiance of the whole device.
For example, imagine a square glass sheet (or stack) 200 mm on a side, 5 mm thick. Its input area (e.g. the surface of one single face of the sheet oriented toward the energy source) is 10 times greater than the output area (e.g. the surface of four open sides) - 40000 square mm (200x200) as compared to 4000 square mm (200x5x4). To a first approximation, the concentration factor of such an LSC is proportional to the area of the input surfaces divided by the area of the edges multiplied by the efficiency of diversion of incoming light towards the output area. Suppose that the glass sheet could divert incoming light from the face towards the edges with an efficiency of 50%. The hypothetical sheet of glass in our example would give an output irradiance of light 5 times greater than that of the incident light, producing a concentration factor of 5.
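Written compactly, the concentration factor in this example is:

```latex
C = \eta \, \frac{A_{\text{in}}}{A_{\text{out}}}
  = 0.5 \times \frac{200 \times 200\ \text{mm}^2}{4 \times (200 \times 5)\ \text{mm}^2}
  = 0.5 \times 10 = 5 .
```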
Similarly, a graded refractive index optic fibre 1 square mm in cross section, and 1 metre long, with a luminescent coating might prove useful.
Concentration factor versus efficiency
The concentration factor interacts with the efficiency of the device to determine overall output.
The concentration factor is the ratio between the incoming and emitted irradiance. If the input irradiance is 1 kW/m2 and the output irradiance is 10 kW/m2, that would provide a concentration factor of 10.
The efficiency is the ratio between the incoming radiant flux (measured in watts) and the outgoing wattage, or the fraction of the incoming energy that the device can deliver as usable output energy (not the same as light or electricity, some of which might not be usable). In the previous example, half the received wattage is re-emitted, implying efficiency of 50%.
Most devices (such as solar cells) for converting the incoming energy to useful output are relatively small and costly, and they work best at converting directional light at high intensities and a narrow frequency range, whereas input radiation tends to be at diffuse frequencies, of relatively low irradiance and saturation. Concentration of the input energy accordingly is one option for efficiency and economy.
Luminescence
The above description covers a wider class of concentrators (for example simple optical concentrators) than just luminescent solar concentrators. The essential attribute of LSCs is that they incorporate luminescent materials that absorb incoming light with a wide frequency range, and re-emit the energy in the form of light in a narrow frequency range. The narrower the frequency range, (i.e. the higher the saturation) the simpler a photovoltaic cell can be designed to convert it to electricity.
Suitable optical designs trap light emitted by the luminescent material in all directions, redirecting it so that little escapes the photovoltaic converters. Redirection techniques include internal reflection, refractive index gradients and where suitable, diffraction. In principle such LSCs can use light from cloudy skies and similar diffuse sources that are of little use for powering conventional solar cells or for concentration by conventional optical reflectors or refractive devices.
The luminescent component might be a dopant in the material of some or all of the transparent medium, or it might be in the form of luminescent thin films on the surfaces of some of the transparent components.
Theory of luminescent solar concentrators
Various articles have discussed the theory of internal reflection of fluorescent light so as to provide concentrated emission at the edges, both for doped glasses and for organic dyes incorporated into bulk polymers. When transparent plates are doped with fluorescent materials, effective design requires that the dopants should absorb most of the solar spectrum, re-emitting most of the absorbed energy as long-wave luminescence. In turn, the fluorescent components should be transparent to the emitted wavelengths. Meeting those conditions allows the transparent matrix to convey the radiation to the output area. Control of the internal path of the luminescence could rely on repeated internal reflection of the fluorescent light, and refraction in a medium with a graded refractive index.
Theoretically, about 75–80% of the luminescence could be trapped by total internal reflection in a plate with a refractive index roughly equal to that of typical window glass (a worked estimate is given after the list below). Somewhat better efficiency could be achieved by using materials with higher refractive indices. Such an arrangement using a device with a high concentration factor should offer impressive economies in the investment in photovoltaic cells to produce a given amount of electricity. Under ideal conditions the calculated overall efficiency of such a system, in the sense of the amount of energy leaving the photovoltaic cell divided by the energy falling on the plate, should be about 20%.
This takes into account:
the absorption of light by poorly transparent materials in the transparent medium,
the efficiency of light conversion by the luminescent components,
the escape of luminescence beyond the critical angle and
gross efficiency (which is the ratio of the average energy emitted to the average energy absorbed).
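The trapped fraction quoted above follows from escape-cone geometry: assuming isotropic emission inside a slab of refractive index n, light within two cones of half-angle equal to the critical angle escapes through the faces, and the retained fraction is

```latex
P_{\text{trap}} = \sqrt{1 - \frac{1}{n^{2}}},
\qquad
n = 1.5 \;\Rightarrow\; P_{\text{trap}} = \sqrt{1 - 1/2.25} \approx 0.75,
```

consistent with the 75–80% figure for glass-like media (edge losses are neglected in this estimate).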
Practical prospects and challenges
The relative merits of various functional components and configurations are major concerns, in particular:
Organic dyes offer wider ranges of frequencies and more flexibility in choice of frequencies emitted and re-absorbed than rare earth compounds and other inorganic luminescent agents.
Doping organic polymers is generally practical with organic luminescent agents, whereas doping with stable inorganic luminescent agents usually is not practical except in inorganic glasses.
Luminescent agents configured as bulk doping of a transparent medium have merits that differ from those of thin films deposited on a clear medium.
Various trapping media present varying combinations of durability, transparency, compatibility with other materials and refractive index. Inorganic glass and organic polymer media comprise the two main classes of interest.
Photonic systems create band gaps that trap radiation.
Identifying materials that re-emit more input light as useful luminescence with negligible self-absorption is crucial. Attainment of that ideal depends on tuning the relevant electronic excitation energy levels to differ from the emission levels in the luminescent medium.
Alternatively the luminescent materials can be configured into thin films that emit light into transparent passive media that can efficiently conduct towards the output.
The sensitivity of solar cells must match the maximal emission spectrum of the luminescent colorants.
Increasing the probability of transition from the ground state to the excited state of surface plasmons increases efficiency.
Luminescent solar concentrators could be used to integrate solar-harvesting devices into building façades in cities.
Advances
Transparent luminescent solar concentrators
In 2013, researchers at Michigan State University demonstrated the first visibly transparent luminescent solar concentrators. These devices were composed of phosphorescent metal halide nanocluster (or Quantum dot) blends that exhibit massive Stokes shift (or downconversion) and which selectively absorb ultraviolet and emit near-infrared light, allowing for selective harvesting, improved reabsorption efficiency, and non-tinted transparency in the visible spectrum.
The following year, these researchers demonstrated near-infrared harvesting visibly transparent luminescent solar concentrators by utilizing luminescent organic salt derivatives. These devices exhibit a clear visible transparency similar to that of glass and a power conversion efficiency close to 0.5%. In this configuration efficiencies of over 10% are possible due to the large fraction of photon flux in the near-infrared spectrum.
Quantum dots
LSCs based on cadmium selenide/zinc sulfide (CdSe/ZnS) and cadmium selenide/cadmium sulfide (CdSe/CdS) quantum dots (QDs) with an induced large separation between emission and absorption bands (called a large Stokes shift) were announced in 2007 and 2014, respectively.
Light absorption is dominated by an ultra-thick outer shell of CdS, while emission occurs from the inner core of a narrower-gap CdSe. The separation of light-absorption and light-emission functions between the two parts of the nanostructure results in a large spectral shift of emission with respect to absorption, which greatly reduces re-absorption losses. The QDs were incorporated into large slabs (sized in tens of centimeters) of poly(methyl methacrylate) (PMMA). The active particles were about one hundred angstroms across.
Spectroscopic measurements indicated virtually no re-absorption losses on distances of tens of centimeters. Photon harvesting efficiencies were approximately 10%. Despite their high transparency, the fabricated structures showed significant enhancement of solar flux with the concentration factor of more than four.
See also
Concentrated photovoltaics
Solar cells
Solar cell research
Surface plasmons
Thin films
References
Further reading
Strong emitting sol–gel materials based on interaction of luminescence dyes and lanthanide complexes with silver nanoparticles
Theoretical and experimental analysis of photonic structures for fluorescent concentrators with increased efficiencies
Optimized excitation energy transfer in a three-dye luminescent solar concentrator
High-Efficiency Organic Solar Concentrators for Photovoltaics
Efficiency limits of photovoltaic fluorescent collectors
A luminescent solar concentrator with 7.1% power conversion efficiency
Maximising the light output of a Luminescent Solar Concentrator
Characterization and reduction of reabsorption losses in luminescent solar concentrators
Controlling Light Emission in Luminescent Solar Concentrators Through Use of Dye Molecules Aligned in a Planar Manner by Liquid Crystals
The effect of photonic structures on the light guiding efficiency of fluorescent concentrators
Increasing the efficiency of fluorescent concentrator systems
Strongly modified [2,2′-bipyridyl]-3,3′-diol (BP(OH)2): a system undergoing excited state intramolecular proton transfer as a photostabilizer of polymers and as a solar energy collector
Plasmon-controlled fluorescence: a new paradigm in fluorescence spectroscopy
Innovative materials based on sol–gel technology
Organic–Inorganic Sol–Gel Composites Incorporating Semiconductor Nanocrystals for Optical Gain Applications
External links
Other authors:
Solar energy
Energy conversion
Nanoelectronics | Luminescent solar concentrator | ["Materials_science"] | 2,321 | ["Nanotechnology", "Nanoelectronics"] |
31,808,529 | https://en.wikipedia.org/wiki/Bethe%E2%80%93Feynman%20formula | The Bethe–Feynman efficiency formula, a simple method for calculating the yield of a fission bomb, was developed in 1942 and first derived in 1943. Aspects of the formula are speculated to be secret restricted data.
Related formula
a = internal energy per gram
b = growth rate
c = sphere radius
A numerical coefficient would then be included to create the Bethe–Feynman formula—increasing accuracy by more than an order of magnitude.
where γ is the thermodynamic exponent of a photon gas, a second term gives the prompt energy density of the fuel, α is V (neutron velocity) divided by λ (total reaction mean free path), R is the critical radius, and δ is the excess supercritical radius.
See also
Richard Feynman
Hans Bethe
Robert Serber
References
Nuclear physics
Richard Feynman | Bethe–Feynman formula | ["Physics"] | 166 | ["Nuclear and atomic physics stubs", "Nuclear physics"] |
36,010,837 | https://en.wikipedia.org/wiki/Computed%20tomography%20imaging%20spectrometer | The computed tomography imaging spectrometer (CTIS) is a snapshot imaging spectrometer which can produce the full three-dimensional (i.e. spatial and spectral) hyperspectral datacube of a scene.
History
The CTIS was conceived separately by Takayuki Okamoto and Ichirou Yamaguchi at Riken (Japan), and by F. Bulygin and G. Vishnakov in Moscow (Russia). The concept was subsequently further developed by Michael Descour, at the time a PhD student at the University of Arizona, under the direction of Prof. Eustace Dereniak.
The first research experiments based on CTIS imaging were conducted in the field of molecular biology. Several improvements of the technology have been proposed since then, in particular regarding the hardware: dispersive elements providing more information on the datacube, and enhanced calibration of the system. The enhancement of the CTIS was also fueled by the general development of larger image sensors. For academic purposes, CTIS, although not as widely used as other spectrometers, has been employed in applications ranging from the military to ophthalmology and astronomy.
Image formation
Optical layout
The optical layout of a CTIS instrument is shown on the left part of the top image. A field stop is placed at the image plane of an objective lens, after which a lens collimates the light before it passes through a disperser (such as a grating or a prism). Finally, a re-imaging lens maps the dispersed image of the field stop onto a large-format detector array.
Resulting image
The information that the CTIS acquires can be seen as the three-dimensional datacube of the scene. Of course, this cube does not exist in physical space as mechanical objects do, but this representation helps to gain intuition on what the image is capturing: As seen in the figure on the right, the shapes on the image can be considered as projections (in a mechanical sense) of the datacube.
The central projection, called the 0th order of diffraction, is the sum of the datacube along the spectral axis (hence, this projection acts as a panchromatic camera). In the image of the "5" on the right, one can clearly read the number in the central projection, but with no information regarding the spectrum of the light.
All the other projections result from "looking" at the cube obliquely and hence contain a mixture of spatial and spectral information. From a discrete point of view, where the datacube is considered as a set of spectral slices (as in the figure above, where two such slices are represented in purple and red), one can understand these projections as a partial spread of the stack of slices, similar to a magician spreading a deck of cards for an audience member to pick one. It is important to note that, for typical spectral dispersions and typical sensor sizes, the spectral information of a given slice heavily overlaps with that of neighboring slices. In the "5" image, one can see in the side projections that the number is not clearly readable (loss of spatial information), but that some spectral information is available (i.e. some wavelengths appear brighter than others). Hence, the image contains multiplexed information regarding the datacube.
The number and layout of the projections depend on the type of diffracting element employed. In particular, more than one order of diffraction can be captured.
Datacube reconstruction
The resulting image contains all of the information of the datacube. It is necessary to carry out a reconstruction algorithm to convert this image back in the 3D spatio-spectral space. Hence, the CTIS is a computational imaging system.
Link to X-ray computed tomography
Conceptually, one can consider each of the projections of the datacube in a manner analogous to the X-ray projections measured by medical X-ray computed tomography instruments used to estimate the volume distribution within a patient's body.
Hence, the most widely used algorithms for CTIS reconstruction are the same as those used in the X-ray CT field. In particular, the algorithm used by Descour is taken directly from a seminal work in X-ray CT reconstruction. Since then, slightly more elaborate techniques have been employed, in the same way (though not to the same extent) that X-ray CT reconstruction has improved since the 1980s.
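As an illustration, the reconstruction problem can be written as g = Hf, where f is the vectorized datacube, g is the detector image, and H is a system matrix obtained by calibrating the instrument. The sketch below is a minimal multiplicative (EM/MART-style) update of the kind used in CT reconstruction; the function name and the assumption of a dense, pre-calibrated H are illustrative, not a description of Descour's exact implementation.

```python
import numpy as np

def ctis_reconstruct(g, H, n_iter=50):
    """Illustrative multiplicative (EM-style) estimate of the datacube.

    g : measured detector image, flattened to 1-D
    H : calibrated system matrix mapping the vectorized datacube
        to the detector (g ~ H @ f)
    """
    f = np.ones(H.shape[1])                    # non-negative initial guess
    sensitivity = H.T @ np.ones_like(g)        # column sums of H
    for _ in range(n_iter):
        proj = H @ f                           # forward-project the estimate
        ratio = g / np.maximum(proj, 1e-12)    # measured vs. predicted
        f *= (H.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return f                                   # reshape to (x, y, wavelength) externally
```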
Difficulties
Compared to the X-ray CT field, CTIS reconstruction is notoriously more difficult. In particular, the number of projections resulting from a CTIS acquisition is typically far smaller than in X-ray CT. This results in a blurrier reconstruction, following the projection-slice theorem. Moreover, unlike X-ray CT, where projections are acquired all around the patient, the CTIS, like all imaging systems, only acquires the scene from a single point of view, and hence many projection angles are unobtainable.
References
External links
A fast reconstruction algorithm for computed tomography imaging spectrometer (CTIS) is documented in the paper: Larz White, W. Bryan Bell, Ryan Haygood, "Accelerating computed tomographic imaging spectrometer reconstruction using a parallel algorithm exploiting spatial shift-invariance", Opt. Eng. 59(5), 055110 (2020).
Spectrometers
Tomography | Computed tomography imaging spectrometer | [
"Physics",
"Chemistry"
] | 1,109 | [
"Spectrometers",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
36,011,992 | https://en.wikipedia.org/wiki/Octadecylphosphonic%20acid | Octadecylphosphonic acid (C18H39O3P) is a chemical compound used in thermal paper for receipts, adding machines and tickets.
References
Phosphonic acids | Octadecylphosphonic acid | [
"Chemistry"
] | 39 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
33,362,511 | https://en.wikipedia.org/wiki/List%20of%20sequenced%20plant%20genomes | This list of sequenced plant genomes contains plant species known to have publicly available complete genome sequences that have been assembled, annotated and published. Unassembled genomes are not included, nor are organelle only sequences. For all kingdoms, see the list of sequenced genomes.
See also List of sequenced algae genomes.
Bryophytes
Vascular plants
Lycophytes
Ferns
Gymnosperms
Angiosperms
Amborellales
Chloranthales
Magnoliales
Eudicots
Proteales
Ranunculales
Trochodendrales
Caryophyllales
Rosids
Asterids
Monocots
Grasses
Other non-grasses
Press releases announcing sequencing
This list includes species that do not meet the criteria of the first paragraph of this article (nearly full, high-quality sequences that are published, assembled, and publicly available): their sequences were announced in press releases or websites, but not in a data-rich publication in a refereed peer-review journal with a DOI.
Corchorus olitorius (Jute mallow), fibre plant 2017
Corchorus capsularis 2017
Fraxinus excelsior, European ash (2013 draft)
See also
List of sequenced eukaryotic genomes
List of sequenced animal genomes
List of sequenced archaeal genomes
List of sequenced bacterial genomes
List of sequenced fungi genomes
List of sequenced plastomes
List of sequenced protist genomes
External links
http://plabipd.de/timeline_view.ep
http://genomevolution.org/wiki/index.php/Sequenced_plant_genomes
https://phytozome.jgi.doe.gov/pz/portal.html
https://bioinformatics.psb.ugent.be/plaza/
References
Biology-related lists
Plant | List of sequenced plant genomes | [
"Engineering",
"Biology"
] | 381 | [
"Lists of sequenced genomes",
"DNA sequencing",
"Genetic engineering",
"Genome projects"
] |
33,365,207 | https://en.wikipedia.org/wiki/Servo%20bandwidth | Servo bandwidth is the maximum trackable sinusoidal frequency of amplitude A, with tracking achieved before an error of 10% of A is reached. The servo bandwidth indicates the capability of the servo to follow rapid changes in the commanded input. It is usually specified as a frequency in hertz or radians per second.
Explanation
The bandwidth of a system is generally defined to be the frequency at which the system's output amplitude is 1/√2 (−3 dB) times the signal amplitude. But if we apply the same logic to servo systems, it is difficult to analyze and develop a system to a sufficiently accurate specification. This is because of ambiguity with regard to the frequency at which the amplitude should go to 1/√2.
A simple and sound definition can be sought regarding this.
Let us say we want to design a position servo control system with following specifications:
Bandwidth: 10 Hz
Allowed amplitude range : ± 50°
The above definition is not enough to design a practical control system. The definitions above have inherent problems with regard to what amplitude the manufacturer should take to design the servo with 10 Hz bandwidth.
If one manufacturer takes the amplitude to be ±20° with a rise time of 0.025 s for this amplitude (a 10 Hz sinusoid), and another manufacturer takes the amplitude to be ±50°, the acceleration requirements calculated by the two will be very different.
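The discrepancy can be made concrete. For a sinusoidal command θ(t) = A sin(2πft), the peak angular acceleration is A(2πf)², so the required acceleration scales linearly with the assumed amplitude. The short calculation below uses the two amplitudes above; it is a sketch for illustration, not part of any standard.

```python
import math

def peak_acceleration(amplitude_deg, freq_hz):
    """Peak angular acceleration (deg/s^2) of theta(t) = A*sin(2*pi*f*t)."""
    omega = 2 * math.pi * freq_hz      # angular frequency in rad/s
    return amplitude_deg * omega ** 2  # |theta''| peaks at A * omega^2

for amplitude in (20, 50):             # the two assumptions from the text
    a = peak_acceleration(amplitude, 10)
    print(f"+/-{amplitude} deg at 10 Hz -> about {a:,.0f} deg/s^2")
# +/-20 deg needs ~79,000 deg/s^2, while +/-50 deg needs ~197,000 deg/s^2.
```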
This leads us to understand that specifying servo bandwidth alone, with no amplitude specification, is almost useless. Also, defining the bandwidth as per the normal bandwidth definition does not help (there is the same ambiguity with regard to the frequency at which the amplitude should go to 1/√2).
See also
Servomechanism
References
Wave mechanics
Control theory | Servo bandwidth | [
"Physics",
"Mathematics"
] | 322 | [
"Physical phenomena",
"Applied mathematics",
"Control theory",
"Classical mechanics",
"Waves",
"Wave mechanics",
"Dynamical systems"
] |
33,365,410 | https://en.wikipedia.org/wiki/Commercial%20modular%20construction | Commercial Modular Buildings are code-compliant, non-residential structures that are 60% to 90% completed offsite in a factory-controlled environment. They are then transported or shipped to a final destination where the modules are then erected onto a concrete foundation to form a finished building. The word "modular" does not describe a building type or style; it simply describes a means of construction.
The commercial modular construction industry comprises two distinct divisions:
Permanent Modular Construction (PMC) – modular units built offsite for assembly onsite to create a permanent facility not intended to be relocated. They are comparable to buildings built strictly onsite in terms of quality, life span, and materials used for construction.
Relocatable Buildings – modular units built offsite for assembly onsite that can be partially or completely reused and relocated at future building sites.
Benefits
A primary benefit of modular construction is its fast delivery. Due to the simultaneous process of creating modules in a factory at the same time site work is occurring, modular buildings can be constructed in up to half the time as buildings built completely onsite. This allows the buildings to be occupied sooner and allows owners to see a faster return on investment.
Modular construction thus has the potential to shorten project design and engineering time and improve construction productivity. The installation of modular buildings is cost-effective, safe, and environmentally friendly. Modular prefabricated units can be used not only in low-rise construction but also in multi-story and high-rise construction.
To save the most time and money and maximize the efficiency of the modular construction process, it must be implemented at the beginning of the design-build process.
Other advantages of modular construction are design customization and sustainability. The latter is a factor that stands out in the use of modular construction in reference to corporate social responsibility, both in relation to waste minimization, the use of renewable energy and the use of sustainable building materials (such as bamboo and recycled steel and wood) that reduce environmental pollution. Given that the construction sector is one of the planet's main carbon emitters, these are aspects that are increasingly in demand by those in charge of construction projects, as well as by architects and designers.
According to the UK group WRAP, (Waste and Resources Action Programme) up to a 90% reduction in materials can be achieved through the use of modular construction. Materials minimized include: wood pallets, shrink wrap, cardboard, plasterboard, timber, concrete, bricks, and cement.
Uses
Modular builders provide all types of building space, from small temporary units to complex, multi-story permanent buildings. The most commonly served markets are education, healthcare, general office, retail and commercial housing.
Some common industrial uses may include: Application Rooms, Laser Rooms, Equipment Enclosures, Environmental Rooms, Maintenance Rooms, or Storage and Security Rooms. Commercial applications may include Offices, Reception Areas, Conference and Meeting Rooms, Copy Centers and Mail Rooms, Shipping and Receiving Rooms, Lunch Rooms and Cafeterias, Break Rooms, Dark Rooms, Training Rooms, and Storage Rooms.
Modular Architecture
The use of modular architecture, especially in large-scale facilities such as hotels or shopping malls, allows for a high-quality building result with excellent value for money. Within modular architecture, the use of building information models and 3D modeling design technology makes it easy to take full advantage of the benefits of commercial modular construction, as does the automation of factory processes with robotic fabrication, which provides a result well adapted to the needs and requirements of each project.
See also
References
External links
Modular Building Institute - international trade association for commercial modular construction
Construction
Modularity
Building
Buildings and structures | Commercial modular construction | [
"Engineering"
] | 742 | [
"Construction",
"Building",
"Buildings and structures",
"Architecture"
] |
33,365,528 | https://en.wikipedia.org/wiki/Water%20year | A water year (also called hydrological year, discharge year or flow year) is a term commonly used in hydrology to describe a time period of 12 months for which precipitation totals are measured. Its beginning differs from the calendar year because part of the precipitation that falls in late autumn and winter accumulates as snow and does not drain until the following spring or summer's snowmelt. The goal is to ensure that as much as possible of the surface runoff during the water year is attributable to the precipitation during the same water year.
Due to meteorological and geographical factors, the definition of the water year varies. The United States Geological Survey (USGS) defines it as the period between October 1 of one year and September 30 of the next, as late September to early October is the time when many drainage areas in the US have their lowest streamflow and the most consistent groundwater levels.
The water year is designated by the calendar year in which it ends, so the 2025 water year started on October 1, 2024, and will end on September 30, 2025.
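Under the USGS convention, assigning a date to its water year is a one-line computation; a minimal sketch:

```python
from datetime import date

def usgs_water_year(d: date) -> int:
    """USGS water year: October 1 of year N-1 through September 30 of year N."""
    return d.year + 1 if d.month >= 10 else d.year

assert usgs_water_year(date(2024, 10, 1)) == 2025
assert usgs_water_year(date(2025, 9, 30)) == 2025
```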
One way to identify a water year is to find the successive 12-month period that most consistently, year after year, gives the highest correlation between precipitation and streamflow and negligible changes in storage (i.e., soil water and snow). Usually, the time when the variation of storage from year to year is smallest is the time with the minimum storage level and minimum flow. However, practical considerations also affect water year definitions. For example, in Canada the water year starts in October, apparently to coincide with the US one, although better measurement conditions exist in winter.
To accommodate the regional and climatic variations, some researchers use a per-gauge local water year that starts in the month with the lowest average streamflow.
Classification
Water year types (or indices) are used to present historical hydrological data in a simplified form. These indices help to categorize similar water years for the planning of rule-based water operations. A typical set includes: very dry year, dry year, normal year, wet year, very wet year. The years are characterized by setting numerical thresholds for the water runoff in the water year; a minimal threshold-based sketch is given after the list below. The methods of calculation (and the set of types) naturally vary by region, and therefore many indices exist, for example:
Palmer Drought Severity Index (PDSI). Proposed by W. C. Palmer in 1965, PDSI has been extensively used in the US since then;
Standardized Precipitation Index (SPI) was proposed by McKee et al. in 1993;
Reclamation Drought Index;
deciles.
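As a concrete illustration of such threshold-based typing, the sketch below assigns decile-style types from the percentile of each water year's runoff within a historical record. It is illustrative only: the runoff figures are hypothetical and the equal percentile bins are an assumption, not any agency's published method.

```python
import numpy as np

# hypothetical annual runoff totals (arbitrary units), keyed by water year
runoff = {2018: 410.0, 2019: 530.0, 2020: 295.0, 2021: 240.0, 2022: 360.0}

TYPES = ["very dry", "dry", "normal", "wet", "very wet"]

def water_year_type(year, record):
    """Type a water year by the percentile of its runoff within the record."""
    values = np.sort(np.array(list(record.values())))
    pct = np.searchsorted(values, record[year], side="left") / len(values)
    return TYPES[min(int(pct * 5), 4)]  # five equal percentile bins

print({y: water_year_type(y, runoff) for y in sorted(runoff)})
# {2018: 'wet', 2019: 'very wet', 2020: 'dry', 2021: 'very dry', 2022: 'normal'}
```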
Many practically used indices were created ad hoc. For example, the California River Indices are weighted averages of estimates of the spring melt, the runoff for the rest of the year, and the result for the previous year, calculated separately for a few river basins, to classify the water year as wet, above normal, below normal, dry, or critical ("normal" years in California are extremely rare). These California indices were not created "through a systematic statistical analysis of historic basin conditions and river flows".
All indices by nature reflect the historic values and therefore cannot capture the variations in climate that are known to cause the distribution of water year types to be non-stationary in time.
Uses
Examples of how water year is used:
Used to compare precipitation from one water year to another.
Used to define a period of examination for hydrologic modeling purposes.
Used in reports by the United States Geological Survey (USGS) as a term that deals with surface-water supply.
The end of the water year is used by the CoCoRaHS project as an opportunity for observers to audit and verify data for their site.
See also
Seasonal year
References
Sources
Hydrology | Water year | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 765 | [
"Hydrology",
"Environmental engineering"
] |
33,373,776 | https://en.wikipedia.org/wiki/Radiation%20effect | Radiation effect refers to the physical and chemical property changes of materials induced by radiation. One such phenomenon is acute radiation syndrome, caused by exposure to ionizing radiation.
Examples
Bleaching of linen
Formation of latent image in photography
Embrittlement of optically transparent polymers such as Lucite.
See also
Radiation sensitivity
Radiation Effects and Defects in Solids (journal)
References
External links | Radiation effect | [
"Physics",
"Materials_science",
"Engineering"
] | 76 | [
"Physical phenomena",
"Materials science",
"Radiation",
"Condensed matter physics",
"Radiation effects"
] |
30,855,468 | https://en.wikipedia.org/wiki/Retinoblastoma%20protein | The retinoblastoma protein (protein name abbreviated Rb or pRb; gene name abbreviated Rb, RB or RB1) is a tumor suppressor protein that is dysfunctional in several major cancers. One function of pRb is to prevent excessive cell growth by inhibiting cell cycle progression until a cell is ready to divide. When the cell is ready to divide, pRb is phosphorylated, inactivating it, and the cell cycle is allowed to progress. It is also a recruiter of several chromatin remodeling enzymes such as methylases and acetylases.
pRb belongs to the pocket protein family, whose members have a pocket for the functional binding of other proteins. Should an oncogenic protein, such as those produced by cells infected by high-risk types of human papillomavirus, bind and inactivate pRb, this can lead to cancer. The RB gene may have been responsible for the evolution of multicellularity in several lineages of life including animals.
Name and genetics
In humans, the protein is encoded by the RB1 gene located on chromosome 13—more specifically, 13q14.1-q14.2. If both alleles of this gene are mutated in a retinal cell, the protein is inactivated and the cells grow uncontrollably, resulting in development of retinoblastoma cancer, hence the "RB" in the name 'pRb'. Thus most pRb knock-outs occur in retinal tissue when UV radiation-induced mutation inactivates all healthy copies of the gene, but pRb knock-out has also been documented in certain skin cancers in patients from New Zealand where the amount of UV radiation is significantly higher.
Two forms of retinoblastoma were noticed: a bilateral, familial form and a unilateral, sporadic form. Sufferers of the former were over six times more likely to develop other types of cancer later in life, compared to individuals with sporadic retinoblastoma. This highlighted the fact that mutated pRb could be inherited and lent support for the two-hit hypothesis. This states that only one working allele of a tumour suppressor gene is necessary for its function (the mutated gene is recessive), and so both need to be mutated before the cancer phenotype will appear. In the familial form, a mutated allele is inherited along with a normal allele. In this case, should a cell sustain only one mutation in the other RB gene, all pRb in that cell would be ineffective at inhibiting cell cycle progression, allowing cells to divide uncontrollably and eventually become cancerous. Furthermore, as one allele is already mutated in all other somatic cells, the future incidence of cancers in these individuals is observed with linear kinetics. The working allele need not undergo a mutation per se, as loss of heterozygosity (LOH) is frequently observed in such tumours.
However, in the sporadic form, both alleles would need to sustain a mutation before the cell can become cancerous. This explains why sufferers of sporadic retinoblastoma are not at increased risk of cancers later in life, as both alleles are functional in all their other cells. Future cancer incidence in sporadic pRb cases is observed with polynomial kinetics, not exactly quadratic as expected because the first mutation must arise through normal mechanisms, and then can be duplicated by LOH to result in a tumour progenitor.
RB1 orthologs have also been identified in most mammals for which complete genome data are available.
RB/E2F-family proteins repress transcription.
Structure denotes function
pRb is a multifunctional protein with many binding and phosphorylation sites. Although its common function is seen as binding and repressing E2F targets, pRb is likely a multifunctional protein as it binds to at least 100 other proteins.
pRb has three major structural components: a carboxy-terminus, a "pocket" subunit, and an amino-terminus. Within each domain, there are a variety of protein binding sites, as well as a total of 15 possible phosphorylation sites. Generally, phosphorylation causes interdomain locking, which changes pRb's conformation and prevents binding to target proteins. Different sites may be phosphorylated at different times, giving rise to many possible conformations and likely many functions/activity levels.
Cell cycle suppression
pRb restricts the cell's ability to replicate DNA by preventing its progression from the G1 (first gap phase) to S (synthesis phase) phase of the cell division cycle. pRb binds and inhibits E2 promoter-binding–protein-dimerization partner (E2F-DP) dimers, which are transcription factors of the E2F family that push the cell into S phase. By keeping E2F-DP inactivated, RB1 maintains the cell in the G1 phase, preventing progression through the cell cycle and acting as a growth suppressor. The pRb-E2F/DP complex also attracts a histone deacetylase (HDAC) protein to the chromatin, reducing transcription of S phase promoting factors, further suppressing DNA synthesis.
pRb attenuates protein levels of known E2F Targets
pRb has the ability to reversibly inhibit DNA replication through transcriptional repression of DNA replication factors. pRb is able to bind to transcription factors in the E2F family and thereby inhibit their function. When pRb is chronically activated, it leads to the downregulation of the necessary DNA replication factors. Within 72–96 hours of active pRb induction in A2-4 cells, the target DNA replication factor proteins—MCMs, RPA34, DBF4, RFCp37, and RFCp140—all showed decreased levels. Along with decreased levels, there was a simultaneous and expected inhibition of DNA replication in these cells. This process, however, is reversible. Following induced knockout of pRb, cells treated with cisplatin, a DNA-damaging agent, were able to continue proliferating, without cell cycle arrest, suggesting pRb plays an important role in triggering chronic S-phase arrest in response to genotoxic stress.
One such example of E2F-regulated genes repressed by pRb are cyclin E and cyclin A. Both of these cyclins are able to bind to Cdk2 and facilitate entry into the S phase of the cell cycle. Through the repression of expression of cyclin E and cyclin A, pRb is able to inhibit the G1/S transition.
Repression mechanisms of E2Fs
There are at least three distinct mechanisms in which pRb can repress transcription of E2F-regulated promoters. Though these mechanisms are known, it is unclear which are the most important for the control of the cell cycle.
E2Fs are a family of proteins whose binding sites are often found in the promoter regions of genes for cell proliferation or progression of the cell cycle. E2F1 to E2F5 are known to associate with proteins in the pRb family of proteins, while E2F6 and E2F7 are independent of pRb. Broadly, the E2Fs are split into activator E2Fs and repressor E2Fs, though their roles are somewhat flexible on occasion. The activator E2Fs are E2F1, E2F2 and E2F3, while the repressor E2Fs are E2F4, E2F5 and E2F6. The activator E2Fs, along with E2F4, bind exclusively to pRb. pRb is able to bind to the activation domain of the activator E2Fs, which blocks their activity, repressing transcription of the genes controlled by that E2F promoter.
Blocking of pre-initiation complex assembly
The preinitiation complex (PIC) assembles in a stepwise fashion on the promoter of genes to initiate transcription. The TFIID binds to the TATA box in order to begin the assembly of the TFIIA, recruiting other transcription factors and components needed in the PIC. Data suggests that pRb is able to repress transcription by both pRb being recruited to the promoter as well as having a target present in TFIID.
The presence of pRb may change the conformation of the TFIIA/IID complex into a less active version with a decreased binding affinity. pRb can also directly interfere with their association as proteins, preventing TFIIA/IID from forming an active complex.
Modification of chromatin structure
pRb acts as a recruiter that allows for the binding of proteins that alter chromatin structure onto the site E2F-regulated promoters. Access to these E2F-regulated promoters by transcriptional factors is blocked by the formation of nucleosomes and their further packing into chromatin. Nucleosome formation is regulated by post-translational modifications to histone tails. Acetylation leads to the disruption of nucleosome structure. Proteins called histone acetyltransferases (HATs) are responsible for acetylating histones and thus facilitating the association of transcription factors on DNA promoters. Deacetylation, on the other hand, leads to nucleosome formation and thus makes it more difficult for transcription factors to sit on promoters. Histone deacetylases (HDACs) are the proteins responsible for facilitating nucleosome formation and are therefore associated with transcriptional repressors proteins.
pRb interacts with the histone deacetylases HDAC1 and HDAC3. pRb binds to HDAC1 in its pocket domain in a region that is independent to its E2F-binding site. pRb recruitment of histone deacetylases leads to the repression of genes at E2F-regulated promoters due to nucleosome formation. Some genes activated during the G1/S transition such as cyclin E are repressed by HDAC during early to mid-G1 phase. This suggests that HDAC-assisted repression of cell cycle progression genes is crucial for the ability of pRb to arrest cells in G1. To further add to this point, the HDAC-pRb complex is shown to be disrupted by cyclin D/Cdk4 which levels increase and peak during the late G1 phase.
Senescence induced by pRb
Senescence in cells is a state in which cells are metabolically active but are no longer able to replicate. pRb is an important regulator of senescence in cells and since this prevents proliferation, senescence is an important antitumor mechanism. pRb may occupy E2F-regulated promoters during senescence. For example, pRb was detected on the cyclin A and PCNA promoters in senescent cells.
S-phase arrest
Cells respond to stress in the form of DNA damage, activated oncogenes, or sub-par growing conditions, and can enter a senescence-like state called "premature senescence". This allows the cell to prevent further replication during periods of damaged DNA or general unfavorable conditions. DNA damage in a cell can induce pRb activation. pRb's role in repressing the transcription of cell cycle progression genes leads to the S phase arrest that prevents replication of damaged DNA.
Activation and inactivation
When it is time for a cell to enter S phase, complexes of cyclin-dependent kinases (CDK) and cyclins phosphorylate pRb, allowing E2F-DP to dissociate from pRb and become active. When E2F is free it activates factors like cyclins (e.g. cyclin E and cyclin A), which push the cell through the cell cycle by activating cyclin-dependent kinases, and a molecule called proliferating cell nuclear antigen, or PCNA, which speeds DNA replication and repair by helping to attach polymerase to DNA.
Inactivation
Since the 1990s, pRb was known to be inactivated via phosphorylation. Until recently, the prevailing model was that Cyclin D-Cdk4/6 progressively phosphorylated it from its unphosphorylated to its hyperphosphorylated state (14+ phosphorylations). However, it was recently shown that pRb only exists in three states: un-phosphorylated, mono-phosphorylated, and hyper-phosphorylated. Each has a unique cellular function.
Before the development of 2D IEF, only hyper-phosphorylated pRb was distinguishable from all other forms, i.e. un-phosphorylated pRb resembled mono-phosphorylated pRb on immunoblots, and pRb was described as being either in its active "hypo-phosphorylated" state or its inactive "hyperphosphorylated" state. However, with 2D IEF, it is now known that pRb is un-phosphorylated in G0 cells and mono-phosphorylated in early G1 cells, prior to hyper-phosphorylation after the restriction point in late G1.
pRb mono phosphorylation
When a cell enters G1, Cyclin D- Cdk4/6 phosphorylates pRb at a single phosphorylation site. No progressive phosphorylation occurs because when HFF cells were exposed to sustained cyclin D- Cdk4/6 activity (and even deregulated activity) in early G1, only mono-phosphorylated pRb was detected. Furthermore, triple knockout, p16 addition, and Cdk 4/6 inhibitor addition experiments confirmed that Cyclin D- Cdk 4/6 is the sole phosphorylator of pRb.
Throughout early G1, mono-phosphorylated pRb exists as 14 different isoforms (the 15th phosphorylation site is not conserved in primates in which the experiments were performed). Together, these isoforms represent the "hypo-phosphorylated" active pRb state that was thought to exist. Each isoform has distinct preferences to associate with different exogenous expressed E2Fs.
A recent report showed that mono-phosphorylation controls pRb's association with other proteins and generates functional distinct forms of pRb. All different mono-phosphorylated pRb isoforms inhibit E2F transcriptional program and are able to arrest cells in G1-phase. Importantly, different mono-phosphorylated forms of pRb have distinct transcriptional outputs that are extended beyond E2F regulation.
Hyper-phosphorylation
After a cell passes the restriction point, Cyclin E-Cdk2 hyper-phosphorylates all mono-phosphorylated isoforms. While the exact mechanism is unknown, one hypothesis is that binding to the C-terminus tail opens the pocket subunit, allowing access to all phosphorylation sites. This process is hysteretic and irreversible, and it is thought that accumulation of mono-phosphorylated pRb induces it. The bistable, switch-like behavior of pRb can thus be modeled as a bifurcation point (figure: control of pRb function by phosphorylation).
Presence of un-phosphorylated pRb drives cell cycle exit and maintains senescence. At the end of mitosis, PP1 dephosphorylates hyper-phosphorylated pRb directly to its un-phosphorylated state. Furthermore, when cycling C2C12 myoblast cells differentiated (by being placed into a differentiation medium), only un-phosphorylated pRb was present. Additionally, these cells had a markedly decreased growth rate and concentration of DNA replication factors (suggesting G0 arrest).
This function of un-phosphorylated pRb gives rise to a hypothesis for the lack of cell cycle control in cancerous cells: Deregulation of Cyclin D - Cdk 4/6 phosphorylates un-phosphorylated pRb in senescent cells to mono-phosphorylated pRb, causing them to enter G1. The mechanism of the switch for Cyclin E activation is not known, but one hypothesis is that it is a metabolic sensor. Mono-phosphorylated pRb induces an increase in metabolism, so the accumulation of mono-phosphorylated pRb in previously G0 cells then causes hyper-phosphorylation and mitotic entry. Since any un-phosphorylated pRb is immediately phosphorylated, the cell is then unable to exit the cell cycle, resulting in continuous division.
DNA damage to G0 cells activates Cyclin D - Cdk 4/6, resulting in mono-phosphorylation of un-phosphorylated pRb. Then, active mono-phosphorylated pRb causes repression of E2F-targeted genes specifically. Therefore, mono-phosphorylated pRb is thought to play an active role in DNA damage response, so that E2F gene repression occurs until the damage is fixed and the cell can pass the restriction point. As a side note, the discovery that damages causes Cyclin D - Cdk 4/6 activation even in G0 cells should be kept in mind when patients are treated with both DNA damaging chemotherapy and Cyclin D - Cdk 4/6 inhibitors.
Activation
During the M-to-G1 transition, pRb is then progressively dephosphorylated by PP1, returning to its growth-suppressive hypophosphorylated state.
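The transitions described in this section can be summarized as a small state machine. The sketch below encodes only the kinase and phosphatase steps named above; the representation itself is an illustration, not an established model implementation.

```python
# pRb phosphorylation states and the enzymatic transitions described above
TRANSITIONS = {
    ("un-phosphorylated", "Cyclin D-Cdk4/6"): "mono-phosphorylated",
    ("mono-phosphorylated", "Cyclin E-Cdk2"): "hyper-phosphorylated",
    ("hyper-phosphorylated", "PP1"): "un-phosphorylated",
}

def step(state, enzyme):
    """Apply one enzymatic event; unlisted combinations leave pRb unchanged."""
    return TRANSITIONS.get((state, enzyme), state)

state = "un-phosphorylated"        # G0: active, growth-suppressive pRb
for enzyme in ("Cyclin D-Cdk4/6",  # early G1: single phosphorylation
               "Cyclin E-Cdk2",    # after the restriction point
               "PP1"):             # end of mitosis: reset
    state = step(state, enzyme)
print(state)  # -> un-phosphorylated
```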
pRb family proteins are components of the DREAM complex composed of DP, E2F4/5, RB-like (p130/p107), and MuvB (Lin9:Lin37:Lin52:RbAbP4:Lin54). The DREAM complex is assembled in G0/G1 and maintains quiescence by assembling at the promoters of >800 cell-cycle genes and mediating transcriptional repression. Assembly of DREAM requires DYRK1A (a Ser/Thr kinase)-dependent phosphorylation of the MuvB core component Lin52 at serine 28. This mechanism is crucial for recruitment of p130/p107 to the MuvB core and thus DREAM assembly.
Consequences of pRb loss
Consequences of loss of pRb function is dependent on cell type and cell cycle status, as pRb's tumor suppressive role changes depending on the state and current identity of the cell.
In G0 quiescent stem cells, pRb is proposed to maintain G0 arrest although the mechanism remains largely unknown. Loss of pRb leads to exit from quiescence and an increase in the number of cells without loss of cell renewal capacity. In cycling progenitor cells, pRb plays a role at the G1, S, and G2 checkpoints and promotes differentiation. In differentiated cells, which make up the majority of cells in the body and are assumed to be in irreversible G0, pRb maintains both arrest and differentiation.
Loss of pRb therefore exhibits multiple different responses within different cells that ultimately all could result in cancer phenotypes. For cancer initiation, loss of pRb may induce cell cycle re-entry in both quiescent and post-mitotic differentiated cells through dedifferentiation. In cancer progression, loss of pRb decreases the differentiating potential of cycling cells, increases chromosomal instability, prevents induction of cellular senescence, promotes angiogenesis, and increases metastatic potential.
Although most cancers rely on glycolysis for energy production (Warburg effect), cancers due to pRb loss tend to upregulate oxidative phosphorylation. The increased oxidative phosphorylation can increase stemness, metastasis, and (when enough oxygen is available) cellular energy for anabolism.
In vivo, it is still not entirely clear how, and in which cell types, cancer initiation occurs with loss of pRb alone, but it is clear that the pRb pathway is altered in a large number of human cancers. In mice, loss of pRb is sufficient to initiate tumors of the pituitary and thyroid glands, and mechanisms of initiation for these hyperplasias are currently being investigated.
Non-canonical roles
The classic view of pRb's role as a tumor suppressor and cell cycle regulator developed through research investigating mechanisms of interactions with E2F family member proteins. Yet, more data generated from biochemical experiments and clinical trials reveal other functions of pRb within the cell unrelated (or indirectly related) to tumor suppression.
Functional hyperphosphorylated pRb
In proliferating cells, certain pRb conformations (when the RxL motif is bound by protein phosphatase 1, or when pRb is acetylated or methylated) are resistant to CDK phosphorylation and retain other functions throughout cell cycle progression, suggesting that not all pRb in the cell is devoted to guarding the G1/S transition.
Studies have also demonstrated that hyperphosphorylated pRb can specifically bind E2F1 and form stable complexes throughout the cell cycle to carry out unique unexplored functions, a surprising contrast from the classical view of pRb releasing E2F factors upon phosphorylation.
In summary, many new findings about pRb's resistance to CDK phosphorylation are emerging in pRb research and shedding light on novel roles of pRb beyond cell cycle regulation.
Genome stability
pRb is able to be localize to sites of DNA breaks during the repair process and assist in non-homologous end joining and homologous recombination through complexing with E2F1. Once at the breaks, pRb is able to recruit regulators of chromatin structure such as the DNA helicase transcription activator BRG1. pRb has been shown to also be able to recruit protein complexes such as condensin and cohesin to assist in the structural maintenance of chromatin.
Such findings suggest that in addition to its tumor suppressive role with E2F, pRb is also distributed throughout the genome to aid in important processes of genome maintenance such as DNA break-repair, DNA replication, chromosome condensation, and heterochromatin formation.
Regulation of metabolism
pRb has also been implicated in regulating metabolism through interactions with components of cellular metabolic pathways. RB1 mutations can cause alterations in metabolism, including reduced mitochondrial respiration, reduced activity in the electron transport chain, and changes in flux of glucose and/or glutamine. Particular forms of pRb have been found to localize to the outer mitochondrial membrane and directly interacts with Bax to promote apoptosis.
As a drug target
pRb Reactivation
While the frequency of alterations of the RB gene is substantial for many human cancer types, including lung, esophageal, and liver cancers, alterations in upstream regulatory components of pRb such as CDK4 and CDK6 have been the main targets for potential therapeutics to treat cancers with dysregulation in the RB pathway. This focus has resulted in the recent development and FDA clinical approval of three small-molecule CDK4/6 inhibitors (Palbociclib (IBRANCE, Pfizer Inc. 2015), Ribociclib (KISQALI, Novartis. 2017), and Abemaciclib (VERZENIO, Eli Lilly. 2017)) for the treatment of specific breast cancer subtypes. However, recent clinical studies finding limited efficacy, high toxicity, and acquired resistance of these inhibitors suggest the need to further elucidate mechanisms that influence CDK4/6 activity, as well as to explore other potential targets downstream in the pRb pathway, to reactivate pRb's tumor suppressive functions. Treatment of cancers by CDK4/6 inhibitors depends on the presence of pRb within the cell for therapeutic effect, limiting their usage to cancers where RB is not mutated and pRb protein levels are not significantly depleted.
Direct pRb reactivation in humans has not been achieved. However, in murine models, novel genetic methods have allowed for in vivo pRb reactivation experiments. pRb loss induced in mice with oncogenic KRAS-driven tumors of lung adenocarcinoma negates the requirement of MAPK signal amplification for progression to carcinoma, promotes loss of lineage commitment, and accelerates the acquisition of metastatic competency. Reactivation of pRb in these mice rescues the tumors towards a less metastatic state, but does not completely stop tumor growth due to a proposed rewiring of MAPK pathway signaling, which suppresses pRb through a CDK-dependent mechanism.
Pro-apoptotic effects of pRb loss
Besides trying to re-activate the tumor suppressive function of pRb, one other distinct approach to treat dysregulated pRb pathway cancers is to take advantage of certain cellular consequences induced by pRb loss. It has been shown that E2F stimulates expression of pro-apoptotic genes in addition to G1/S transition genes, however, cancer cells have developed defensive signaling pathways that protect themselves from death by deregulated E2F activity. Development of inhibitors of these protective pathways could thus be a synthetically lethal method to kill cancer cells with overactive E2F.
In addition, it has been shown that the pro-apoptotic activity of p53 is restrained by the pRb pathway, such that pRb deficient tumor cells become sensitive to p53 mediated cell death. This opens the door to research of compounds that could activate p53 activity in these cancer cells and induce apoptosis and reduce cell proliferation.
Regeneration
While the loss of a tumor suppressor such as pRb leading to uncontrolled cell proliferation is detrimental in the context of cancer, it may be beneficial to deplete or inhibit suppressive functions of pRb in the context of cellular regeneration. Harnessing the proliferative abilities of cells induced to a controlled "cancer-like" state could aid in repairing damaged tissues and delay aging phenotypes. This idea remains to be thoroughly explored as a potential cellular injury and anti-aging treatment.
Cochlea
The retinoblastoma protein is involved in the growth and development of mammalian hair cells of the cochlea, and appears to be related to the cells' inability to regenerate. Embryonic hair cells require pRb, among other important proteins, to exit the cell-cycle and stop dividing, which allows maturation of the auditory system. Once wild-type mammals have reached adulthood, their cochlear hair cells become incapable of proliferation. In studies where the gene for pRb is deleted in mice cochlea, hair cells continue to proliferate in early adulthood. Though this may seem to be a positive development, pRb-knockdown mice tend to develop severe hearing loss due to degeneration of the organ of Corti. For this reason, pRb seems to be instrumental for completing the development of mammalian hair cells and keeping them alive. However, it is clear that without pRb, hair cells have the ability to proliferate, which is why pRb is known as a tumor suppressor. Temporarily and precisely turning off pRb in adult mammals with damaged hair cells may lead to propagation and therefore successful regeneration. Suppressing function of the retinoblastoma protein in the adult rat cochlea has been found to cause proliferation of supporting cells and hair cells. pRb can be downregulated by activating the sonic hedgehog pathway, which phosphorylates the proteins and reduces gene transcription.
Neurons
Disrupting pRb expression in vitro, either by gene deletion or knockdown of pRb short interfering RNA, causes dendrites to branch out farther. In addition, Schwann cells, which provide essential support for the survival of neurons, travel with the neurites, extending farther than normal. The inhibition of pRb supports the continued growth of nerve cells.
Interactions
pRb is known to interact with more than 300 proteins, some of which are listed below:
Abl gene
Androgen receptor
Apoptosis-antagonizing transcription factor
ARID4A
Aryl hydrocarbon receptor
BRCA1
BRF1
C-jun
C-Raf
CDK9
CUTL1
Cyclin A1
Cyclin D1
Cyclin T2
DNMT1
E2F1
E2F2
E4F1
EID1
ENC1
FRK
HBP1
HDAC1
HDAC3
Histone deacetylase 2
Insulin
JARID1A
Large tumor antigen
LIN9
MCM7
MORF4L1
MRFAP1
MyoD
NCOA6
PA2G4
Peroxisome proliferator-activated receptor gamma
PIK3R3
Plasminogen activator inhibitor-2
Polymerase (DNA directed), alpha 1
PRDM2
PRKRA
Prohibitin
Promyelocytic leukemia protein
RBBP4
RBBP7
RBBP8
RBBP9
SNAPC1
SKP2
SNAPC3
SNW1
SUV39H1
TAF1
THOC1
TRAP1
TRIP11
UBTF
USP4
Detection
Several methods for detecting the RB1 gene mutations have been developed including a method that can detect large deletions that correlate with advanced stage retinoblastoma.
See also
p53 - involved in the DNA repair support function of pRb
Transcription coregulator
Retinoblastoma
References
Further reading
External links
GeneReviews/NIH/NCBI/UW entry on Retinoblastoma
Retinoblastoma Genetics
Drosophila Retinoblastoma-family protein - The Interactive Fly
Drosophila Retinoblastoma-family protein 2 - The Interactive Fly
Evolutionary Homologs Retinoblastoma-family proteins - The Interactive Fly
There is a diagram of the pRb-E2F interactions here.
DNA replication
Gene expression
Transcription coregulators
Transcription factors
Tumor suppressor genes | Retinoblastoma protein | [
"Chemistry",
"Biology"
] | 6,325 | [
"Genetics techniques",
"Gene expression",
"Signal transduction",
"DNA replication",
"Molecular genetics",
"Induced stem cells",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Transcription factors"
] |
30,856,302 | https://en.wikipedia.org/wiki/Veterinary%20ethics | Veterinary ethics is a system of moral principles that apply values and judgments to the practice of veterinary medicine. As a scholarly discipline, veterinary ethics encompasses its practical application in clinical settings as well as work on its history, philosophy, theology, and sociology. Veterinary ethics combines veterinary professional ethics and the subject of animal ethics. The subject of veterinary ethics can be interpreted as an extension of the critical thinking skills necessary to make decisions in veterinary care that support the profession's responsibilities to animal kind and mankind. Five main topics frame the practical study of veterinary ethics. The first is history, which describes how these ethics came to be and how they have changed with the modernization of the veterinary industry. The second is the relation of veterinary ethics to human medical ethics, with which it shares many values. Third are the principles of these ethics, which are updated regularly by the AVMA. Fourth are the key topics of veterinary ethics, which describe what these ethics cover. Last is how these ethics are incorporated into everyday practice, and how they affect those employed in the industry.
History
Animal welfare as a subject has been studied in great depth. It largely looks at the ways in which an animal may suffer in particular circumstances, or how their lives may be enriched. Animal ethics is another well-documented subject, and philosophers since Aristotle, have commented on its importance. Often referred to as “the animal problem,’ the questions that seem to be asked in this field are at their foundation trying to determine what the morally relevant difference is between animals and humans, and if there is no difference how do we justify treating animals a certain way, and if there is a difference then what is it about this difference that allows us to treat animals in a certain way.
Veterinary ethics is a modern subject that does not have a defined start point. As it combines the study of animal welfare and animal ethics as its root and uses information from this as data for its deliberations it could be said to have a long history, however as an academic discipline it is only recently that works have been published on the topic.
The two academics who have written on veterinary ethics for the longest time are Bernard Rollin (Colorado State University) and Jerrold Tannenbaum (University of California, Davis); they can be seen as the founders of veterinary ethics as a subject. More recently, emergency veterinarian Jessica Fragola wrote in 2022 about the ethics of animal triage, with pressures on veterinarians having been exacerbated by staffing shortages that resulted from the Covid pandemic, coupled with growth in spending on veterinary care and on pet insurance. Currently, most veterinary schools teach veterinary ethics, often combined in teaching with animal welfare or with law.
Relation with medical ethics
The subject is very similar to that of human medical ethics, in that the study of the relationship between the doctor and the patient relates closely to that of the veterinary surgeon and animal owner. However, the subject differs greatly in the consideration of the uses of animals - while a doctor's duty may be to preserve life at nearly all cost, the veterinary surgeon needs to adapt their attitude to health and longevity of life to the purpose of the animal (e.g., farm animals).
Much of what is understood in the field of professionalism and professional responsibilities in confidentiality, preserving autonomy, beneficence, truth-telling, whistleblowing, informed consent, and communication is largely lifted from the research done in the medical profession. The difference between human patients and animal patients does not interfere with the professional discussion between doctors and human patients and vets with their clients.
Another major difference between veterinary ethics and human medical ethics is the interplay with law. Human medical ethics has driven changes in the law and, to a lesser degree, vice versa. Largely involving cases of human rights a wide-ranging variety of high-profile legal challenges in many countries have involved the use of ethics to encourage changes in law (for example, assisted suicide, abortion, duty of care, rights to refuse treatment). Veterinary ethics does not have such a strong interplay. It is rare to have an animal-based legal challenge reach high into the legal system. Cases involving challenges to professionalism and duty of care are largely dealt with via the veterinary governing bodies.
The veterinary profession remains largely self-regulating across the world (e.g., by the RCVS in the United Kingdom and AVMA in the United States). This has caused some controversy as to why the veterinary profession remains one of the few remaining self-regulating professions. Bernard Rollin wrote on the difficulty in keeping public confidence while remaining self-regulating; trust and impartiality are critical, but most important is the need for a profession to be self-sacrificial by putting the client's needs above that of the profession or professional.
“Every profession—be it medicine, law, or agriculture—is given freedom by the social ethic to pursue its aims. In return, society basically says to professions it does not understand well enough to regulate, “You regulate yourselves the way we would regulate you if we understood what you do, which we don’t. But we will know if you don’t self-regulate properly and then we will [hammer you with draconian rules and] regulate you, despite our lack of understanding.”
Principles
The American Veterinary Medical Association (AVMA) regularly reviews and updates its principles of ethics. The AVMA Judicial Council ensures the principles are current. Much like the human medical code, veterinarians are expected to "adhere to a progressive code of ethical conduct". Overall there are eight main principles, covering areas such as competence, animal welfare, the veterinarian-client-patient relationship, standards of professionalism, honesty, compliance with the law, continuing education, acting within boundaries of competence, and the betterment of public health.
Incorporation into everyday practice
One of the most important reasons veterinary ethics is taught to veterinarians is to prepare them for the responsibility and judgment that the cases they encounter will demand. Veterinary ethics prepares veterinarians and veterinary staff for adequate and professional conversations with clients and other professionals. Requirements such as signatures on treatment plans and invoices are also a result of these ethics, as legal cases have become involved. Another important subject these ethics prepare veterinary staff for is discussing with clients the recommended treatment for the client's pet. This is very important training that veterinarians go through while in school: determining the best treatment plans and outcomes, while communicating them so that clients understand, is the only way to practice veterinary medicine.
Although these ethics provide a safe environment for the animals being treated, the work environment for veterinarians and staff is not always what is desired. The COVID-19 pandemic (as mentioned above) brought a significant number of new patients to hospitals all over the world, while schools produce staff at a rate that cannot keep up with the growth in the animal population. The large growth of an insufficiently staffed industry has led many hospitals to overwork staff, leading to burnout. Industry statistics indicate that from the beginning of the pandemic in January 2020 to the time when businesses began opening again, financial growth reached up to 11%. Significant growth requires additional work: keeping up with patient flow, cleaning, and managing. With few individuals meeting employment requirements, it has been hard for the industry to keep up.
Key topics
Key topics within veterinary ethics include:
Complementary and alternative medicine
Confidentiality
Cosmetic interventions
Euthanasia
Informed consent
Negligence
Non-therapeutic mutilations
Professionalism and professional regulation
Religious influences
Research ethics
Selective breeding
Triage
See also
Universities Federation for Animal Welfare
References
Animal ethics
Bioethics
Veterinary medicine | Veterinary ethics | [
"Technology"
] | 1,572 | [
"Bioethics",
"Ethics of science and technology"
] |
30,857,444 | https://en.wikipedia.org/wiki/Joint%20encoding | In audio engineering, joint encoding is the joining of several channels of similar information during encoding in order to obtain higher quality, a smaller file size, or both.
Joint stereo
The term joint stereo has become prominent as the Internet has allowed for the transfer of relatively low bit rate, acceptable-quality audio with modest Internet access speeds. Joint stereo refers to any number of encoding techniques used for this purpose. Two forms are described here, both of which are implemented in various ways with different codecs, such as MP3, AAC and Ogg Vorbis.
Intensity stereo coding
This form of joint stereo uses a technique known as joint frequency encoding, which functions on the principle of sound localization. Human hearing is predominantly less acute at perceiving the direction of certain audio frequencies. By exploiting this characteristic, intensity stereo coding can reduce the data rate of an audio stream with little or no perceived change in apparent quality.
More specifically, the dominance of inter-aural time differences (ITD) for sound localization by humans is only present for lower frequencies. That leaves inter-aural amplitude differences (IAD) as the dominant location indicator for higher frequencies (the cutoff being ~2 kHz). The idea of intensity stereo coding is to merge the lower spectrum into just one channel (thus reducing overall differences between channels) and to transmit a little side information about how to pan certain frequency regions to recover the IAD cues. ITD is not lost completely in this scheme, however: the shape of the ear makes it such that the ITD can be recovered from IAD if the sound comes from free space, e.g. played through loudspeakers.
This type of coding does not perfectly reconstruct the original audio because of the loss of information which results in the simplification of the stereo image and can produce perceptible compression artifacts. However, for very low bit rates this type of coding usually yields a gain in perceived quality of the audio. It is supported by many audio compression formats (including MP3, AAC, Vorbis and Opus) but not always by every encoder.
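A minimal sketch of the idea, operating on per-band magnitude spectra: bands below a cutoff are kept as true stereo, while higher bands are merged into one channel plus a pan value. The band split, the averaging downmix, and the single pan parameter per band are simplifying assumptions for illustration, not the exact scheme of any particular codec.

```python
import numpy as np

def is_encode(left_bands, right_bands, cutoff):
    """Keep true stereo below `cutoff`; merge higher bands plus a pan value."""
    merged, pans = [], []
    for i, (l, r) in enumerate(zip(left_bands, right_bands)):
        if i < cutoff:
            merged.append((l, r))                 # low band kept as L/R
            pans.append(None)
        else:
            el, er = np.sum(np.abs(l)), np.sum(np.abs(r))
            merged.append(0.5 * (l + r))          # single merged band
            pans.append(el / (el + er + 1e-12))   # left-energy fraction
    return merged, pans

def is_decode(merged, pans):
    left, right = [], []
    for band, pan in zip(merged, pans):
        if pan is None:
            l, r = band                           # true-stereo band
        else:                                     # re-pan the merged band
            l, r = 2.0 * pan * band, 2.0 * (1.0 - pan) * band
        left.append(l)
        right.append(r)
    return left, right
```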
M/S stereo coding
M/S stereo coding transforms the left and right channels into a mid channel and a side channel. The mid channel is the sum of the left and right channels, or M = L + R. The side channel is the difference of the left and right channels, or S = L − R. Unlike intensity stereo coding, M/S coding is a special case of transform coding, and retains the audio perfectly without introducing artifacts. Lossless codecs such as FLAC or Monkey's Audio use M/S stereo coding because of this characteristic.
To reconstruct the original signal, the mid and side channels are added or subtracted and the result halved: L = (M + S)/2 and R = (M − S)/2.
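Because the transform is a simple invertible sum and difference, a round trip makes the lossless property concrete. The sketch below uses floating-point samples for clarity; real codecs handle integer samples and scaling more carefully.

```python
import numpy as np

def ms_encode(left, right):
    return left + right, left - right          # M = L + R, S = L - R

def ms_decode(mid, side):
    return (mid + side) / 2, (mid - side) / 2  # L = (M+S)/2, R = (M-S)/2

L = np.array([0.1, 0.5, -0.3])
R = np.array([0.1, 0.4, -0.2])
M, S = ms_encode(L, R)
L_out, R_out = ms_decode(M, S)
assert np.allclose(L, L_out) and np.allclose(R, R_out)  # perfect round trip
```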
This form of coding is also sometimes known as matrix stereo and is used in many different forms of audio processing and recording equipment. It is not limited to digital systems and can even be created with passive audio transformers or analog amplifiers. One example of the use of M/S stereo is in FM stereo broadcasting, where the sum signal (L + R) modulates the carrier wave and the difference signal (L − R) modulates a subcarrier. This enables backwards compatibility with mono equipment, which requires only the mid channel. Another example of M/S stereo is the stereophonic microgroove record. Lateral motion of the stylus represents the sum of the two channels and vertical motion represents the difference between the channels; two perpendicular coils mechanically decode the channels.
M/S is also a common microphone technique for the production of stereo recordings.
M/S encoding does not strictly require that the left and right channels use the same weight. In Opus CELT, M/S encoding is combined with an angle parameter, so that different weights can be used to maximize de-correlation.
A similar form of joining multiple channels is seen in the ambisonics implementation of Opus 1.3. A matrix may be used to mix the spherical harmonic channels together, reducing redundancy.
Parametric stereo
Parametric stereo is similar to intensity stereo, except that parameters beyond the intensity difference are used. In the MPEG-4 (HE-AAC) version, the intensity difference and time delay difference are used, allowing all bands to be used without hurting localization. HE-AAC also adds "correlation" information, which replicates ambience by synthesizing some difference between channels.
Binaural cue coding (BCC) is the HE-AAC PS technique extended to many input channels, all downmixing to one. The same ILD, ITD, and IC parameters are used. MPEG Surround is similar to BCC, but allows downmixing to multiple channels, and does not seem to use ITD.
Joint frequency encoding
Joint frequency encoding is an encoding technique used in audio data compression to reduce the data rate.
The idea is to merge a given frequency range of multiple sound channels together so that the resulting encoding will preserve the sound information of that range not as a bundle of separate channels but as one homogeneous data stream. This will destroy the original channel separation permanently, as the information cannot be accurately reconstructed, but will greatly lessen the amount of required storage space. Only some forms of joint stereo use the joint frequency encoding technique, such as intensity stereo coding.
Implementations
When used within the MP3 compression process, joint stereo normally employs multiple techniques, and can switch between them for each MPEG frame. Typically, a modern encoder's joint stereo mode uses M/S stereo for some frames and L/R stereo for others, whichever method yields the best result. Encoders use different algorithms to determine when to switch and how much space to allocate to each channel; quality can suffer if the switching is too frequent or if the side channel doesn't get enough bits. With some encoding software, it is possible to force the use of M/S stereo for all frames, mimicking the joint stereo mode of some early encoders like Xing. Within the LAME encoder, this is known as forced joint stereo.
As with MP3, Ogg Vorbis stereo files can employ either L/R stereo or joint stereo. When using joint stereo, both M/S stereo and intensity stereo methods may be used. As opposed to MP3 where M/S stereo (when used) is applied before quantization, an Ogg Vorbis encoder applies M/S stereo to samples in the frequency domain after quantization, making application of M/S stereo a lossless step. After this step, any frequency area can be converted to intensity stereo by removing the corresponding part of the M/S signal's side channel. Ogg Vorbis' floor function will take care of the required left-right panning. Opus similarly has support for all three options in the CELT layer; the SILK layer is M/S-only.
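The per-frame decision can be illustrated with a hypothetical heuristic; real encoders rely on psychoacoustic models rather than this simple energy ratio, so the rule and the 0.1 threshold below are assumptions for illustration only.

import numpy as np

def choose_stereo_mode(left_frame, right_frame, threshold=0.1):
    """Pick M/S when the side channel carries little of the frame's energy."""
    mid = (left_frame + right_frame) / 2.0
    side = (left_frame - right_frame) / 2.0
    e_mid, e_side = np.sum(mid**2), np.sum(side**2)
    # Highly correlated channels concentrate energy in mid, so M/S leaves
    # the encoder more bits for the perceptually important material.
    return "M/S" if e_side / (e_mid + e_side + 1e-12) < threshold else "L/R"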
Notes
References
External links
Jürgen Herre, Fraunhofer IIS. From Joint Stereo to Spatial Audio Coding - Recent Progress and Standardization. October 2004, Paper 157, DAFx'04 7th International Conference of Digital Audio Effects.
Audio engineering | Joint encoding | [
"Engineering"
] | 1,439 | [
"Electrical engineering",
"Audio engineering"
] |
30,860,168 | https://en.wikipedia.org/wiki/BZIP%20domain | The Basic Leucine Zipper Domain (bZIP domain) is found in many DNA binding eukaryotic proteins. One part of the domain contains a region that mediates sequence specific DNA binding properties and the leucine zipper that is required to hold together (dimerize) two DNA binding regions. The DNA binding region comprises a number of basic amino acids such as arginine and lysine. Proteins containing this domain are transcription factors.
bZIP transcription factors
bZIP transcription factors are found in all eukaryotes and form one of the largest families of dimerizing TFs. An evolutionary study from 2008 revealed that 4 bZIP genes were encoded by the genome of the most recent common ancestor of all plants. Interactions between bZIP transcription factors are numerous and complex and play important roles in cancer development in epithelial tissues, steroid hormone synthesis by cells of endocrine tissues, factors affecting reproductive functions, and several other phenomena that affect human health.
bZIP domain containing proteins
AP-1 fos/jun heterodimer that forms a transcription factor
Jun-B transcription factor
CREB cAMP response element transcription factor
OPAQUE2 (O2) transcription factor of the 22-kD zein gene that encodes a class of storage proteins in the endosperm of maize (Zea mays) kernels
NFE2L2 or Nrf2
Bzip Maf transcription factors
Human proteins containing this domain
ATF1; ATF2; ATF4; ATF5; ATF6; ATF7; BACH1; BACH2;
BATF; BATF2; CEBPA; CEBPB; CEBPD; CEBPE; CEBPG; CEBPZ; CREB1; CREB3; CREB3L1; CREB3L2; CREB3L3; CREB3L4;
CREB5; CREBL1; CREM; E4BP4; FOSL1; FOSL2; JUN; JUNB; JUND; MAFA; MAFB; MAFF; MAFG; NRL; C-MAF; MAFK;
NFE2; NFE2L2; NFE2L3; SNFT; XBP1
References
External links
bZIP domain entry in the SMART database
bZIP family at PlantTFDB: Plant Transcription Factor Database
Plant bZIP transcription factors
Protein domains
DNA-binding proteins | BZIP domain | [
"Biology"
] | 513 | [
"Protein domains",
"Protein classification"
] |
44,569,825 | https://en.wikipedia.org/wiki/Skin%20friction%20drag | Skin friction drag is a type of aerodynamic or hydrodynamic drag, which is resistant force exerted on an object moving in a fluid. Skin friction drag is caused by the viscosity of fluids and is developed from laminar drag to turbulent drag as a fluid moves on the surface of an object. Skin friction drag is generally expressed in terms of the Reynolds number, which is the ratio between inertial force and viscous force.
Total drag can be decomposed into a skin friction drag component and a pressure drag component, where pressure drag includes all other sources of drag including lift-induced drag. In this conceptualisation, lift-induced drag is an artificial abstraction, part of the horizontal component of the aerodynamic reaction force. Alternatively, total drag can be decomposed into a parasitic drag component and a lift-induced drag component, where parasitic drag is all components of drag except lift-induced drag. In this conceptualisation, skin friction drag is a component of parasitic drag.
Flow and effect on skin friction drag
Laminar flow over a body occurs when layers of the fluid move smoothly past each other in parallel lines. In nature, this kind of flow is rare. As the fluid flows over an object, it applies frictional forces to the surface of the object which works to impede forward movement of the object; the result is called skin friction drag. Skin friction drag is often the major component of parasitic drag on objects in a flow.
The flow over a body may begin as laminar. As a fluid flows over a surface, shear stresses within the fluid slow additional fluid particles, causing the boundary layer to grow in thickness. At some point along the flow direction, the flow becomes unstable and becomes turbulent. Turbulent flow has a fluctuating and irregular pattern of flow which is made obvious by the formation of vortices. While the turbulent layer grows, the laminar layer thickness decreases. This leaves a thinner laminar sublayer that brings faster-moving fluid closer to the surface, which, relative to fully laminar flow, increases the magnitude of the friction force as fluid flows over the object.
Skin friction coefficient
Definition
The skin friction coefficient is defined as:
c_f = τ_w / (½ ρ∞ U∞²)
where:
c_f is the skin friction coefficient.
ρ∞ is the density of the free stream (far from the body's surface).
U∞ is the free stream speed, which is the velocity magnitude of the fluid in the free stream.
τ_w is the skin shear stress on the surface.
½ ρ∞ U∞² is the dynamic pressure of the free stream.
The skin friction coefficient is a dimensionless skin shear stress which is nondimensionalized by the dynamic pressure of the free stream. The skin friction coefficient is defined at any point of a surface that is subjected to the free stream. It will vary at different positions. A fundamental fact in aerodynamics states that
c_f,laminar < c_f,turbulent.
This immediately implies that laminar skin friction drag is smaller than turbulent skin friction drag, for the same inflow.
The skin friction coefficient is a strong function of the Reynolds number Re: as Re increases, c_f decreases.
Laminar flow
Blasius solution
c_f = 0.664 / √(Re_x)
where:
Re_x = ρ U∞ x / μ, which is the Reynolds number based on x.
x is the distance from the reference point at which a boundary layer starts to form.
The above relation, derived from the Blasius boundary layer solution, assumes constant pressure throughout the boundary layer and a thin boundary layer. It shows that the skin friction coefficient decreases as the Reynolds number (Re_x) increases.
Transitional flow
The Computational Preston Tube Method (CPM)
CPM, suggested by Nitsche, estimates the skin shear stress of transitional boundary layers by fitting a near-wall velocity-profile equation (not reproduced here) to a measured velocity profile of a transitional boundary layer. The Kármán constant κ and the skin shear stress τ_w are determined numerically during the fitting process. In this fit:
y is the distance from the wall.
u is the speed of the flow at a given y.
κ is the Kármán constant, which in transitional boundary layers is lower than 0.41, the value for turbulent boundary layers.
A+ is the Van Driest constant, which is set to 26 in both transitional and turbulent boundary layers.
p+ is a pressure parameter based on the streamwise pressure gradient dp/dx, where p is the pressure and x is the coordinate along the surface where a boundary layer forms.
Turbulent flow
Prandtl's one-seventh-power law
c_f = 0.027 / Re_x^(1/7)
The above equation, derived from Prandtl's one-seventh-power law, provides a reasonable approximation of the skin friction coefficient of low-Reynolds-number turbulent boundary layers. Compared to laminar flows, the skin friction coefficient of turbulent flows decreases more slowly as the Reynolds number increases.
Skin friction drag
A total skin friction drag force can be calculated by integrating skin shear stress on the surface of a body.
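As a sketch of that integration, the following Python code estimates the one-sided skin friction drag of a flat plate by integrating tau_w(x) = c_f(x) q∞ along the chord, using the Blasius and one-seventh-power correlations quoted above; the flow values and the transition Reynolds number of 5×10^5 are illustrative textbook assumptions, not values from this article.

import numpy as np

rho, U, mu = 1.225, 30.0, 1.81e-5       # sea-level air at 30 m/s (assumed)
plate_len, width = 2.0, 1.0             # flat plate dimensions in metres
Re_transition = 5e5                     # assumed laminar-turbulent transition
q_inf = 0.5 * rho * U**2                # free-stream dynamic pressure

def cf_local(x):
    """Local skin friction coefficient at distance x from the leading edge."""
    Re_x = rho * U * x / mu
    if Re_x < Re_transition:
        return 0.664 / np.sqrt(Re_x)            # Blasius (laminar)
    return 0.027 / Re_x ** (1.0 / 7.0)          # one-seventh-power law (turbulent)

xs = np.linspace(1e-4, plate_len, 20_000)
tau_w = np.array([cf_local(x) * q_inf for x in xs])
# Trapezoidal integration of shear stress over the wetted area.
drag = width * np.sum(0.5 * (tau_w[1:] + tau_w[:-1]) * np.diff(xs))
print(f"one-sided skin friction drag ~ {drag:.2f} N")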
Relationship between skin friction and heat transfer
From an engineering point of view, calculating skin friction is useful in estimating not only the total frictional drag exerted on an object but also the convective heat transfer rate on its surface. This relationship is well developed in the concept of the Reynolds analogy, which links two dimensionless parameters: the skin friction coefficient (Cf), which is a dimensionless frictional stress, and the Nusselt number (Nu), which indicates the magnitude of convective heat transfer. Turbine blades, for example, require analysis of heat transfer in their design process, since they are exposed to high-temperature gas that can damage them with the heat. Here, engineers calculate the skin friction on the surface of turbine blades to predict the heat transfer occurring through the surface.
Effects of skin friction drag
A 1974 NASA study found that for subsonic aircraft, skin friction drag is the largest component of drag, causing about 45% of the total drag. For supersonic and hypersonic aircraft, the figures are 35% and 25% respectively.
A 1992 NATO study found that for a typical civil transport aircraft, skin friction drag accounted for almost 48% of total drag, followed by induced drag at 37%.
Reducing skin friction drag
There are two main techniques for reducing skin friction drag: delaying the boundary layer transition, and modifying the turbulence structures in a turbulent boundary layer.
One method to modify the turbulence structures in a turbulent boundary layer is the use of riblets. Riblets are small grooves in the surface of the aircraft, aligned with the direction of flow. Tests on an Airbus A320 found riblets caused a drag reduction of almost 2%. Another method is the use of large eddy break-up (LEBU) devices. However, some research into LEBU devices has found a slight increase in drag.
See also
Parasitic drag
Pressure drag
References
Fundamentals of Flight by Richard Shepard Shevell
Drag (physics) | Skin friction drag | [
"Chemistry"
] | 1,312 | [
"Drag (physics)",
"Fluid dynamics"
] |
44,571,186 | https://en.wikipedia.org/wiki/Balance%20point%20temperature | The building balance point temperature is the outdoor air temperature when the heat gains of the building are equal to the heat losses. Internal heat sources due to electric lighting, mechanical equipment, body heat, and solar radiation may offset the need for additional heating although the outdoor temperature may be below the thermostat set-point temperature.
The building balance point temperature is the base temperature necessary to calculate heating degree day to anticipate the annual energy demand to heat a building. The balance point temperature is a consequence of building design and function rather than outdoor weather conditions.
Mathematical definition
The balance point temperature is mathematically defined as (Equation 1):
tbalance = tThermostat − (QIHG + QSOL) / Ubldg
Where:
tbalance is the balance point outdoor air temperature, given in °C (°F).
tThermostat is the building thermostat set-point temperature, given in °C (°F).
QIHG is the internal heat generation rate per unit floor area due to occupancy, electric lighting and mechanical equipment, given in W/m2 (Btu/s/ft2). This internal heat generation is not constant due to variability in occupancy, lighting, and equipment operation schedule but is largely considered constant to a first order approximation.
QSOL is the building heat gain per unit floor area due to solar radiation, given in W/m2 (Btu/s/ft2). This heat gain is not constant due to solar variability with time of day and year but is largely considered constant to a first order approximation. In winter, it is reasonable to assume QSOL=0.
Ubldg is the rate of heat transfer across the building envelope per degree temperature difference between outdoor and indoor temperature and per unit floor area, given in W/K/m2 (Btu/s/°F/ft2). This heat transfer can vary due to variations of fresh air ventilation rate but is largely considered constant to a first order approximation.
This equation is simplified by assuming steady state heat transfer between the building and the environment and only provides an approximate building balance point temperature. The 2013 ASHRAE Handbook – Fundamentals, Chapter F18 provides more rigorous methodologies to calculate the heating loads in a nonresidential buildings. The ASHRAE heat balance method, for example, fully delineates the heat transfer through the inner and outer boundaries of the building wall by incorporating radiative (e.g. sun, indoor surfaces), convective (e.g. indoor and outdoor air), and conductive (e.g. inner to outer boundary) modes of heat transfer.
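Read directly as code, Equation 1 becomes the following sketch; the inputs are illustrative, not measurements from any real building.

def balance_point_temp(t_thermostat, q_ihg, q_sol, u_bldg):
    """Equation 1: heat gains in W/m2 of floor area, losses in W/K/m2."""
    return t_thermostat - (q_ihg + q_sol) / u_bldg

# 21 C set-point, 12 W/m2 internal gains, winter (QSOL = 0),
# envelope conductance of 1.5 W/K per m2 of floor area -> 13 C.
print(balance_point_temp(t_thermostat=21.0, q_ihg=12.0, q_sol=0.0, u_bldg=1.5))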
Determination Methods
In real-world scenarios, the balance point may be determined in one of two ways. In the energy signature method, a plot is created mapping energy consumption against mean outdoor temperature. The point on the chart at which weather-independent and weather-dependent electricity or gas demand intersect is the balance point temperature. This method only works if large quantities of data on the building energy use are available, preferably on a daily resolution.
In the performance line method multiple plots of energy consumption against heating degree days (HDD) and cooling degree days (CDD) are created, using a range of balance point temperatures to calculate the degree days. Best-fit second-order polynomials of the form y = ax² + bx + c are then applied to the plots, which show various levels of curvature across the range of the data depending on the accuracy of the balance point temperature. In plots with overly high balance point temperatures the coefficient a is positive, resulting in an upward curve, while plots with low balance point temperatures curve downward due to a negative a. The plot in which a is closest to zero represents the most accurate balance point temperature. This method may be applied to buildings in which the availability of energy use data is less granular, perhaps only available on a weekly or monthly basis.
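A simplified Python sketch of the performance line method is given below: it computes HDD from daily mean temperatures for each candidate balance point, fits y = ax² + bx + c to energy versus HDD, and keeps the candidate whose a is closest to zero. The weekly data layout and the candidate grid are assumptions.

import numpy as np

def hdd(daily_mean_temps, base):
    """Heating degree days for one period against a base temperature."""
    return float(np.sum(np.maximum(base - np.asarray(daily_mean_temps), 0.0)))

def best_balance_point(period_temps, period_energy, candidates):
    """period_temps: arrays of daily means per period; period_energy: kWh per period."""
    best_base, best_a = None, np.inf
    for base in candidates:
        x = np.array([hdd(t, base) for t in period_temps])
        a, b, c = np.polyfit(x, period_energy, 2)   # y = a*x**2 + b*x + c
        if abs(a) < abs(best_a):
            best_base, best_a = base, a             # flattest curvature wins
    return best_base

# e.g. best_balance_point(weeks_of_temps, weekly_kwh, np.arange(10.0, 20.0, 0.5))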
Building characteristics
A building's thermal characteristics may be described as either internally load dominated or envelope load dominated, each having a characteristic balance point temperature.
Internally load dominated buildings have high internal heat gains from occupants, lighting and equipment. These buildings are usually compact with a low surface-area-to-volume ratio and few exterior walls in each room. The high internal heat gains allow the building to not be strongly affected by outdoor conditions. Large office spaces, schools and auditoriums are typical examples of internally load dominated buildings, where the balance point temperature is comparatively low.
Envelope load dominated buildings have significant heat loss through the building envelope. These buildings have a high surface-area-to-volume ratio with many exterior walls in each room. Outdoor conditions strongly affect these buildings due to a lack of internal heat gains. Residences, small office buildings and schools are typical examples of skin load dominated buildings, where the balance point temperature is comparatively high.
Solar gains can hamper internal load dominated buildings, contributing to overheating, while helping skin dominated buildings that lose heat due to poor envelope performance. Therefore, architects and building designers must strategically control solar gains based on the building characteristics.
Degree days
The concepts of degree days and balance point temperature are interconnected. By summing the differences between the balance point temperature and the outdoor temperature over a period of time, the resultant value is degree-time. Use of daily mean temperature data in the summation results in degree days, although degree hours or even degree minutes may be possible depending upon the granularity of the data used. The degree day is often further broken down into heating degree days (HDD), in which energy will need to be spent to heat the space, and cooling degree days (CDD), in which the space will need cooling (either through an input of energy or by natural means). This is achieved by counting any positive difference between the balance point temperature and the outdoor air temperature as HDD, and either discarding the remaining data or considering them to be CDD. Although degree days are calculated based on recorded energy use in the building, the balance point temperature of the building determines whether a building will annually have more HDD or CDD. A low balance point temperature (relative to the local climate) indicates that the building will be more likely to need additional cooling, while a high balance point temperature indicates that it is more likely to need heating. Ideally, a building should be designed such that the balance point temperature is as near as possible to the average outdoor temperature of the local climate, which will minimize both the CDD and HDD.
Modeling
Balance point temperature is frequently used in modeling as a base by which to calculate the energy demand of buildings due to various stressors. This is achieved by calculating HDD or CDD based on the balance point, and extending these results to estimate energy use. A sensitivity analysis can also be conducted based on the effects of changing the balance point temperature, which may demonstrate the effect on a model of altering internal loads or envelope conditions of a building.
References
Heating, ventilation, and air conditioning
Temperature | Balance point temperature | [
"Physics",
"Chemistry"
] | 1,384 | [
"Scalar physical quantities",
"Temperature",
"Thermodynamic properties",
"Physical quantities",
"SI base quantities",
"Intensive quantities",
"Thermodynamics",
"Wikipedia categories named after physical quantities"
] |
44,575,324 | https://en.wikipedia.org/wiki/1050%20aluminium%20alloy | 1050 aluminium alloy is an aluminium-based alloy in the "commercially pure" wrought family (1000 or 1xxx series). As a wrought alloy, it is not used in castings. Instead, it is usually formed by extrusion or rolling. It is commonly used in the electrical and chemical industries, on account of having high electrical conductivity, corrosion resistance, and workability. 1050 alloy is also sometimes used for the manufacture of heat sinks, since it has a higher thermal conductivity than other alloys. It has low mechanical strength compared to more significantly alloyed metals. It can be strengthened by cold working, but not by heat treatment.
Alternate names and designations include Al99.5, 3.0255, and A91050. It is described in the following standards:
ASTM B 491: Standard Specification for Aluminium and Aluminium-Alloy Extruded Round Tubes for General-Purpose Applications
ISO 6361: Wrought Aluminium and Aluminium Alloy Sheets, Strips and Plates
Chemical composition
The alloy composition of 1050 aluminium is:
Aluminium: 99.5% min
Copper: 0.05% max
Iron: 0.4% max
Magnesium: 0.05% max
Manganese: 0.05% max
Silicon: 0.25% max
Titanium: 0.03% max
Vanadium: 0.05% max
Zinc: 0.05% max
References
Aluminium alloy table
Aluminium alloys | 1050 aluminium alloy | [
"Chemistry"
] | 287 | [
"Alloys",
"Aluminium alloys"
] |
44,575,747 | https://en.wikipedia.org/wiki/Oil%20sands%20tailings%20ponds%20%28Canada%29 | Oil sands tailings ponds are engineered dam and dyke systems used to capture oil sand tailings. Oil sand tailings contain a mixture of salts, suspended solids and other dissolvable chemical compounds such as acids, benzene, hydrocarbons residual bitumen, fine silts and water. Large volumes of tailings are a byproduct of bitumen extraction from the oil sands and managing these tailings is one of the most difficult environmental challenges facing the oil sands industry. An October 2021 Alberta Energy Regulator (AER) report said that in 2020 the tailings ponds increased by another 90 million cubic meters and contained 1.36 billion cubic metres of fluids.
Location
In Canada there are three major oil sand deposits, primarily located in the province of Alberta, with some also located in the neighbouring province of Saskatchewan. They are known as Athabasca Oil Sands, Cold Lake oil sands, and Peace River oil sands. The Athabasca Oil Sands Region (AOSR) has 19 tailings ponds.
Components of oil sands tailings ponds
Oil sand tailings, or oil sands process-affected water (OSPW), have a highly variable composition and a complex mixture of compounds. In his oft-cited 2008 journal article, E. W. Allen wrote that typically tailings ponds consist of c. 75% water, c. 25% sand, silt and clay, c. 2% residual bitumen, as well as dissolved salts, organics, and minerals. Although many of the components of TPW "occur naturally in adjacent landscapes, the mining process increases their concentrations", for example, sodium, chloride, sulphate, bicarbonate, and ammonia. Citing research from 1978 onwards, Allen included naphthenic acids (NAs), bitumen, asphaltenes, cresols, phenols, humic and fulvic acids, benzene, phthalates, toluene, and polycyclic aromatic hydrocarbons (PAHs) in the list of organic compounds in TPW. Allen names aromatic hydrocarbons [including polycyclic aromatic hydrocarbons (PAHs), benzene, phenols and toluene], naphthenic acids (NAs) and dissolved solids as those that are most harmful to humans, fish, and birds. As well as toxic metals considered to be priority pollutants, such as chromium, arsenic, nickel, cadmium, copper, lead, and zinc, OSPW also contains "common, low-toxicity metals" including titanium, aluminum, molybdenum, iron, and vanadium. Exposure to particulate matter (PM) containing polycyclic aromatic hydrocarbons has been seen to have higher cytotoxicity than PM containing heavy metals.
The concentrations of chemicals are harmful to fish, and oil on the surface of the ponds is harmful to birds.
The lack of knowledge and identification of individual compounds has become a major hindrance to the handling and monitoring of oil sands tailings. A better understanding of the chemical makeup, including naphthenic acids, may make it possible to monitor rivers for leachate and also to remove toxic components. The identification of individual acids had for many years proved impossible, but a breakthrough in analysis in 2011 began to reveal what is in the oil sands tailings ponds. Theoretically, as much as ninety percent of the water in the tailings could be reused for further oil extraction.
Size and scope
According to an October 2021 Alberta Energy Regulator (AER) report, in spite of a decrease in oil production, oil sands tailings ponds grew by another 90 million cubic meters in 2020 and contained 1.36 billion cubic metres of fluids. This represents a surface comparable to "1.7 times the size of Vancouver".
In 2008 tailings ponds held 732 billion litres of tailings. By 2009, as tailing ponds continued to proliferate and volumes of fluid tailings increased, the Energy Resources Conservation Board of Alberta issued Directive 074 to force oil companies to manage tailings based on aggressive criteria.
By 2013, the Government of Alberta reported the total area covered by tailings ponds. According to a Calgary Herald article, by September 2017, the tailings ponds held c. "1.2 trillion litres of contaminated water".
Cost of clean-up
A 2018 joint investigation by the Toronto Star, Global News, National Observer, and four journalism schools—Concordia University, Ryerson University, University of Regina and University of British Columbia—revealed that the estimated liability for the clean-up cost for "oilsands mining operations facilities" was about $130 billion. The investigation, which resulted in the news coverage series The Price of Oil, was undertaken by "the largest ever collaboration of journalists in Canada". The investigation revealed that the security collected from companies to cover the costs of shutting down and cleaning up mining sites, including tailings ponds and pipelines, was $1.4 billion, while the previously calculated liability was $27.80 billion. The clean-up of tailings ponds, which "have sprawled to cover an area the size of Kelowna", represents a "significant part of the liability." The journalists working on the Price of Oil series were told by experts that the liabilities in the oilsands, mainly tailings ponds, represent almost 50% of the $130 billion in the AER mining category, the total estimated liability.
Documents released through freedom of information legislation, as requested for the joint investigation of Alberta Energy Regulator internal documents, included Rob Wadsworth's speaking notes for a February 28, 2018, presentation to the Petroleum History Society in Calgary. Wadsworth warned that "the true costs of cleaning up the oil sands" could be $260 billion, and a significant part of the costs includes the clean-up of toxic tailings ponds. In his outline of the financial liabilities in Alberta's oil patch, Wadsworth, who was the AER vice president of closure and liability, said that with the rules in place in 2018, fossil fuel companies could put off setting aside enough money to cover the costs of cleaning up their sites until their business could "no longer afford to pay anything". He warned that even though weaknesses in the flawed programs were known, there was no "proactive change to the liability programs." Until about 2018, the "implications of our flawed system had not been realized". He cautioned that if the industry did not respond, it would be the public that felt the impact, and called on industry representatives to retain the liabilities so they are "not passed on to Albertans". In response to the report, then Environment Minister of Alberta Shannon Phillips said that Wadsworth's estimates represented a "worst-case scenario" in which the industry shuts down overnight.
Syncrude Tailings Dam
The Syncrude Tailings Dam or Mildred Lake Settling Basin (MLSB) is an embankment dam that was, by volume of construction material, the largest earth structure in the world in 2001. It is located north of Fort McMurray, Alberta, Canada, at the northern end of the Mildred Lake lease owned by Syncrude Canada Ltd. The dam and the tailings artificial lake within it are constructed and maintained as part of ongoing operations by Syncrude in extracting oil from the Athabasca Oil Sands. Other tailings dams constructed and operated in the same area by Syncrude include the Southwest Sand Storage (SWSS), which is the third largest dam in the world by volume of construction material after the Tarbela Dam. The MLSB, which is the oldest tailings pond in the Athabasca Oil Sands Region (AOSR), was found in a 2018 report published in the Atmospheric Chemistry and Physics journal to be "responsible for the majority of tailings ponds emissions of methane."
On 31 December 2018, Syncrude was fined $2.75 million after pleading guilty under the federal Migratory Birds Convention Act (MBCA) and Alberta's Environmental Protection and Enhancement Act in relation to the deaths of 31 great blue herons in August 2015 at the MLSB. At the time, the MLSB inactive sump "was not covered by Syncrude's waterfowl protection plan to deter birds from landing at tailings areas". Doreen Cole, who has been Managing Director of Syncrude Canada since December 2017, said: "We immediately took steps to bring all these areas on our Mildred Lake and Aurora sites into our waterfowl protection plan. We're committed to being a responsible operator and this has strengthened our resolve to reduce the impact of our operations on wildlife." On 22 October 2010 Syncrude was found guilty under the provincial and federal Acts and was fined $3-million, which at that time represented the "largest environmental penalty in Alberta history." In 2008, 1,606 ducks died in Syncrude's tailings ponds, which at that time covered an area of 12 square kilometres, because "cannons, effigies and other deterrents", intended to deter migratory birds, had not been deployed. Syncrude's trial lawyer at that time, Robert White, had urged his client to challenge the guilty verdict, but a Syncrude spokeswoman said that they would plead guilty and pay the fine as, "At Syncrude, we're eager to move forward. The incident haunted us and we regret that it ever happened."
Horizon tailings dam
As of 2010, according to the Mature Fine Tailings Inventory compiled from mine operator tailings plans submitted in October 2009, Canadian Natural Resources Limited's (CNRL) Horizon mine held mature fine tailings (MFT) in its tailings ponds. However, COSIA argues that CNRL's Horizon External Tailings Facility (ETF) is a relatively young pond with a configuration that minimizes the "Pond Centre (PC) depositional environment". It is a "side hill" facility with a three-sided dyke impounding fluid against the natural ground that rises away from the containment dyke.
Regulations and oversight
From its establishment in January 2008, until it was disbanded in 2013, the Edmonton, Alberta-based Energy Resources Conservation Board (ERCB)—an independent, quasi-judicial agency of the Government of Alberta—regulated Alberta's energy resource industry, which included oil sands tailings ponds. Board members included engineers, geologists, technicians, economists, and other professionals. The ERCB was created to replace the Alberta Energy and Utilities Board (EUB) and the Alberta Utilities Commission. The ERCB's first major publication was the December 2008 Directive 073: Requirements for Inspection and Compliance of Oil Sands Mining and Processing Plant Operations in the Oil Sands Mining Area, which was based on the Oil Sands Conservation Act (OSCA), Oil Sands Conservation Regulation (OSCR), Informational Letter (IL) 96-07: EUB/AEP Memorandum of Understanding on the Regulation of Oil Sands Development, IL 94-19: Dam Safety Accord, the Agreement Between Alberta Employment, Immigration and Industry and the Alberta Energy and Utilities Board Respecting the Coordination of Services for Coal and Oil Sands Mine Projects (EII/EUB MOU), requirements set out in approval conditions for each oil sands mining and processing plant scheme, the operator's ERCB-approved S-23 production accounting manual, Interim Directive (ID) 2001-07: Operating Criteria: Resource Recovery Requirements for Oil Sands Mine and Processing Plants, ID 2001-03: Sulphur Recovery Guidelines for the Province of Alberta, and Directive 019: ERCB Compliance Assurance—Enforcement.
In 2009, the ERCB published an industry wide directive—Directive 074—which was the first of its kind. Directive 074 set out the "industry-wide requirements for tailings management," requiring "operators to commit resources to research, develop, and implement fluid tailings reduction technologies and to commit to tailings management and progressive reclamation as operational priorities that are integrated with mine planning and bitumen production activities."
In 2012, the Government of Alberta set up a Tailings Management Framework (TMF) to complement and expand Directive 074's policies to "ensure that fluid fine tailings are reclaimed as quickly as possible and that current inventories are reduced."
The ERCB report entitled 2012 Tailings Management Assessment Report: Oil Sands Mining Industry cautioned that oil sands operators failed to convert their tailings ponds into deposits suitable for reclamation in a timely fashion, as proposed in their project applications. "The volume of fluid tailings, and the area required to hold fluid tailings, continued to grow, and the reclamation of tailings ponds was further delayed."
The Government of Alberta released the 2012 "Tailings Management Framework for Mineable Oil Sands" as part of Alberta's Progressive Reclamation Strategy for the oil sands to ensure that tailings are reclaimed as quickly as possible.
The ERCB's 2013 "Tailings Management Framework for Mineable Oil Sands" challenged a "key plank" of the Conservative provincial government under Premier Alison Redford, who served from October 2011 until her resignation on 23 March 2014. During the tenure of the Redford cabinet, the province was promoting "Alberta as a responsible energy producer." The government had pledged that the "turbid tailings ponds containing the byproducts of bitumen production will soon be a thing of the past." In April 2013, Premier Redford undertook a trade mission to Washington, D.C., in which she said that "tailings ponds [will] disappear from Alberta's landscape in the very near future." She said that there would be new environmental rules that would force "companies who do use mines and tailings" to "completely halt the growth of fluid tailings ponds by 2016."
In 2013, the Alberta government replaced the ERCB with the newly created Alberta Energy Regulator (AER), with Jim Ellis, as CEO. The AER's mandate included overseeing the "development of hydrocarbon resources over their entire life cycle", which included "allocating and conserving water resources, and managing public lands." The AER was also tasked with "protecting the environment while providing economic benefits for all Albertans."
In March 2015, in response to the ERCB's "Tailings Management Framework for Mineable Oil Sands", AER suspended Directive 074: Tailings Performance Criteria and Requirements for Oil Sands Mining Schemes.
In May 2016, the Court of Queen's Bench of Alberta (ABQB) in 2016 ABQB 278, "confirmed that the federal Bankruptcy and Insolvency Act supersedes the provincial requirements that companies must clean up wells." "[B]ankrupt companies can avoid their liabilities and leave them as a public obligation."
Directive 85 was issued on 14 July 2016 by the Alberta Energy Regulator, following "consultations with First Nations, local communities, environmental groups and industry itself". Directive 85 sets out new guidelines and a phased-in approach for oil sands producers' management of their tailings ponds. Under Directive 85, "fluid tailings" must be "ready to reclaim" within ten years of the closing of an oil sands mine.
On 25 April 2017, the Court of Appeal of Alberta (ABCA) dismissed the AER and OWA's appeal in a landmark decision, affirming the May 2016 decision of the Court of Queen's Bench of Alberta in favour of Redwater Energy Corporation's receiver, Grant Thornton Limited, in Redwater's bankruptcy proceedings. The ABCA found that Grant Thornton Limited "entitled to disclaim Redwater's non-producing oil wells and sell its producing ones".
In July 2019, the AER announced their Decision 2019 ABAER 006: Syncrude Canada Ltd. Mildred Lake Extension Project and Mildred Lake Tailings Management Plan, with a 289-page report. Syncrude had submitted its request regarding Mineral Surface Lease MSL352 on 30 June 2017. The AER decision allows Syncrude to use more public lands to develop oil sands on oil sands leases 17 and 22, under section 20 of the Public Lands Act, with a number of conditions related to relevant laws, including the Oil Sands Conservation Act (OSCA), the Environmental Protection and Enhancement Act (EPEA), the Water Act, and the Public Lands Act. The AER found that Syncrude's Mildred Lake Extension (MLX) project was in the "public interest."
The AER found that Mildred Lake Extension Project (MLX) did not meet Directive 085: Fluid Tailings Management for Oil Sands Mining Projects requirements. Syncrude has until January 2023 to submit an "updated Tailings Management Plan" that aligns with Tailings Management Framework for the Mineable Oil Sands (TMF). The "TMF under the Lower Athabasca Regional Plan (LARP) provides direction to the AER and industry on the management of fluid tailings during and after mine operation. AER Directive 085, under the Oil Sands Conservation Act (OSCA), "sets out requirements for managing fluid tailings for oil sands mining projects."
AER on cost of clean-up of tailings ponds
In an AER presentation in February 2018, the AER's "vice-president of closure and liability" said that "based on "a hypothetical worst-case scenario", the cleanup cost would be $260-billion based on "internal AER calculations". The oil industry's "accumulated environmental liability" estimate of $58.65 billion was the amount that the AER had publicly reported. Of that cost, "tailings ponds make up the largest but unknown portion of this AER estimate".
On 15 February 2018 the Supreme Court of Canada held a hearing centering on Alberta's lower courts' findings in favour of Redwater Energy's creditors, to determine "if Canada's bankruptcy laws are in conflict with Alberta's regulatory regime – and if those federal laws are paramount to the province's environmental rules".
By February, 2018, there were 1,800 abandoned or orphan wells—sites that had been licensed by AER with combined liabilities of over $110 million. From 2014 to 2018 the industry-led organization's Orphan Well Association's (OWA) inventory, increased from 1,200 to over 3,700.
In late February 2018, CBC News and CP reported that Sequoia Resources Ltd, an oil firm that had purchased "licences for 2,300 wells" in 2016 from Perpetual Energy Inc., had notified AER that it was ceasing operations "imminently" and was unable to maintain "almost 200 facilities and nearly 700 pipeline segments". Sequoia Resources Ltd had defaulted on its "municipal tax payments" and could not reclaim its properties. According to The Star, Sequoia Resources Ltd then filed for bankruptcy protection in March "without decommissioning and cleaning up 4,000 wells, pipelines and other facilities", as is required of all oil companies.
On 7 August 2018 PricewaterhouseCoopers, the trustee for Chinese investors who purchased Sequoia Resources Ltd in 2016, launched a lawsuit against Perpetual Energy Inc. in an "unprecedented bid to void" the 2016 sale of Perpetual Energy Inc.'s subsidiary called Perpetual Energy Operating Corp. (PEOC) now known as Sequoia Resources Ltd to Chinese investors. An article in The Globe and Mail said that this appears to be the "first attempt by a bankruptcy trustee in Alberta to have a previous oil and gas transaction unwound." It could "introduce major new risks to the [oil and gas] industry’s ability to buy and sell assets and could also deliver a severe blow to Perpetual." The lawsuit alleges that Perpetual and its CEO Susan Riddell Rose "knew the deal would sink the buyer". Perpetual says that "the claim is without merit".
In a public statement released on 8 August 2018, AER CEO Jim Ellis, who had been CEO since AER's creation in 2013, took the "unusual step" of admitting that the Sequoia "situation has exposed a gap in the system" that needed to be fixed and "raised questions" about how to proceed in the future.
On 1 November 2018 AER CEO Jim Ellis apologized for failing to report "that cleaning up after the province's oil and gas industry would cost $260 billion". On 2 November he announced his retirement as CEO.
At Canadian federal government level, in May 2023, passing of Bill S-5 to update the Canadian Environmental Protection Act was reported as having slowed because of Liberal Party support for an amendment proposed by the New Democratic Party. The amendment stipulated that "... the federal government has the power to compel the production of information about tailings ponds".
Reduction and reclamation
The Alberta Energy Regulator has confirmed that no tailings have ever been certified reclaimed to date. In fact, across the entire oil sands region, only one square km of the total area disturbed by mining operations has ever been certified reclaimed.
Suncor invested $1.2 billion in their Tailings Reduction Operations (TRO) method, which treats mature fine tails (MFT) from tailings ponds with a chemical flocculant, an anionic polyacrylamide commonly used in water treatment plants to improve removal of total organic content (TOC), to speed their drying into more easily reclaimable matter. Mature tailings dredged from a pond bottom in suspension were mixed with a polymer flocculant and spread over a "beach" with a shallow grade, where the tailings would dewater and dry under ambient conditions. The dried MFT can then be reclaimed in situ or moved to another location for final reclamation. Suncor hoped this would reduce the time for water reclamation from tailings to weeks rather than years, with the recovered water being recycled into the oil sands plant. Suncor claimed the mature fine tailings process would reduce the number of tailings ponds and shorten the time to reclaim a tailings pond from 40 years at present to 7–10 years, with land rehabilitation continuously following 7 to 10 years behind the mining operations. For the reporting periods from 2010 to 2012, Suncor had a lower-than-expected fines capture performance from this technology.
Syncrude used the older composite tailings (CT) technology to capture fines at its Mildred Lake project. Syncrude had a lower-than-expected fines capture performance in 2011/2012 but exceeded expectations in 2010/2011. Shell used atmospheric fines drying (AFD) technology, which combines "fluid tailings and flocculants and deposits the mixture in a sloped area to allow the water to drain and the deposit to dry", and had a lower-than-expected fines capture performance.
Suncor's Wapisiw Lookout
By 2010 Suncor had transformed their first tailings pond, Pond One, into Wapisiw Lookout, the first reclaimed settling basin in the oil sands. In 2007 the area was a 220-hectare pond of toxic effluent but several years later there was firm land planted with black spruce and trembling aspen. Wapisiw Lookout represents only one percent of tailings ponds in 2011 but Pond One was the first effluent pond in the oil sands industry in 1967 and was used until 1997. By 2011 only 65 square kilometres were cleaned up and about one square kilometre was certified by Alberta as a self-sustaining natural environment. Wapisiw Lookout has not yet been certified. Closure operations of Pond One began in 2007. The jello-like mature fine tails (MFT) were pumped and dredged out of the pond and relocated to another tailings pond for long-term storage and treatment. The MFT was then replaced with 30 million tonnes clean sand and then topsoil that had been removed from the site in the 1960s. The 1.2 million cubic meters of topsoil over the surface, to a depth of 50 centimetres, was placed on top of the sand in the form of hummocks and swales. It was then planted with reclamation plants.
This often-cited example of reclamation is challenged by environmental groups, who point out that the pond is not reclaimed, as the actual harmful tailings fluids were just moved somewhere else. Indeed, the pond's content was drained, and the tailings fluids were transported to other ponds. The pond was then filled with coarser materials and vegetation was added on top. The site is not usable or accessible to the public, and the peatland was not restored.
Syncrude's Sandhill Fen project
In 2008 Syncrude Canada Ltd. began construction of the Sandhill Fen project, a 57-hectare research watershed, creating a mix of forest and wetland, on top of sand-capped composite tailings at its former 60-metre-deep East Mine.
End Pit Lakes
The Pembina Institute suggested that the huge investments by many companies in Canadian oil sands were leading to increased production and an excess of bitumen with no place to store it. It added that by 2022 a month's output of waste-water could result in an 11-foot-deep toxic reservoir the size of New York City's Central Park [840.01 acres (339.94 ha; 3.399 km²)].
The oil sands industry may build a series of up to thirty lakes by pumping water into old mine pits when excavation has finished, leaving toxic effluent at their bottoms and letting biological processes restore them to health. It is less expensive to fill abandoned open pit mines with water instead of dirt. In 2012, the Cumulative Environmental Management Association (CEMA) published a definition of End Pit Lakes (EPLs). Syncrude Ltd. has a full-scale, operational EPL known as Base Mine Lake, which has been under reclamation since 2012. Originally West-In Pit, a tailings pond, Base Mine Lake contains 45 meters of fluid tailings weighted down by 5 meters of water, compressing the tailings so much that the water cap is now 12 meters in depth. The addition of a water cap and alum successfully cleared the water of turbidity. Algae and invertebrates populate the water column, and the bacterial communities are distinct from those of tailings ponds. However, as an oil sheen does remain on the surface of the lake, Syncrude does not plan to connect this EPL with natural water bodies, or release any water from the site.
Research
In March 2012 an alliance of oil companies called Canada’s Oil Sands Innovation Alliance (COSIA) was launched with a mandate to share research and technology to decrease the negative environmental impact of oil sands production focusing on tailings ponds, greenhouse gases, water and land. Almost all the water used to produce crude oil using steam methods of production ends up in tailings ponds. Recent enhancements to this method include Tailings Oil Recovery (TOR) units which recover oil from the tailings, Diluent Recovery Units to recover naphtha from the froth, Inclined Plate Settlers (IPS) and disc centrifuges. These allow the extraction plants to recover well over 90% of the bitumen in the sand.
In January 2013, scientists from Queen's University published a report analyzing lake sediments in the Athabasca region over the past fifty years. They found that levels of polycyclic aromatic hydrocarbons (PAHs) had increased as much as 23-fold since bitumen extraction began in the 1960s. Levels of carcinogenic, mutagenic, and teratogenic PAHs were substantially higher than guidelines for lake sedimentation set by the Canadian Council of Ministers of the Environment in 1999. The team discovered that the contamination spread farther than previously thought.
For metal concentrations, some studies of metal contaminants in Peace-Athabasca Delta lake sediments in 2014 to 2018 showed "... little to no evidence of recent oil sands-derived metals enrichment in sediment of lakes" in the region.
Emissions
According to the 2018 study by Baray et al., ninety-six per cent of methane emissions in the AOSR came from the Mildred Lake Settling Basin, the Syncrude Mildred Lake West In-Pit (WIP) pond and Suncor Energy OSG's Ponds 2–3 (P23). MLSB "was found to be responsible for over 70% of tailings ponds emissions of methane (CH4)." The study collected data on emission rates of CH4 from the "five major facilities in the AOSR: Syncrude Mildred Lake (SML), Suncor Energy OSG (SUN), Canadian Natural Resources Limited Horizon (CNRL), Shell Albian Muskeg River and Jackpine (SAJ) and Syncrude Aurora (SAU)."
See also
Tailings
Notes
References
Further reading
Bituminous sands
Petroleum industry
Unconventional oil
Dams in Alberta
Mining in Alberta
Regional Municipality of Wood Buffalo
Tailings dams
Environmental issues in Alberta | Oil sands tailings ponds (Canada) | [
"Chemistry",
"Technology",
"Engineering"
] | 5,962 | [
"Bituminous sands",
"Unconventional oil",
"Mining engineering",
"Mining equipment",
"Petroleum industry",
"Petroleum",
"Hazardous waste",
"Chemical process engineering",
"Tailings dams",
"Asphalt"
] |
34,924,672 | https://en.wikipedia.org/wiki/WeNMR | WeNMR is a worldwide e-Infrastructure for NMR spectroscopy and structural biology. It is the largest virtual Organization in the life sciences and is supported by EGI.
Goals
WeNMR aims at bringing together complementary research teams in the structural biology and life science area into a virtual research community at a worldwide level and provide them with a platform integrating and streamlining the computational approaches necessary for NMR and SAXS data analysis and structural modelling. Access to the infrastructure is provided through a portal integrating commonly used software and GRID technology.
Services
There are about 2 dozen computational NMR services available that can be divided into:
Processing: MDD NMR
Assignment: Auto Assign • MARS • UNIO
Analysis: TALOS+ • AnisoFIT • MaxOcc • iCing
Structure Calculation: CS-ROSETTA • CYANA • UNIO • Xplor-NIH
Molecular Dynamics: AMBER • GROMACS
Modelling: 3D-DART • HADDOCK
Tools: Format Converter • SHIFTX2 • Antechamber • PREDITOR • RCI • UPLABEL
Associated activities
Critical Assessment of Automated Structure Determination of Proteins from NMR Data (CASD-NMR) is hosted by WeNMR. The first CASD-NMR paper, describing the results achieved in the 2009-2010 round, has been published in Structure.
WeNMR facilitated the creation of an archive of 9000+ validation reports on Protein Data Bank structures in NRG-CING.
WeNMR closely collaborates with the ESFRI project Instruct for integrated structural biology.
History
The three-year WeNMR project started in November 2010 as the natural successor of the eNMR project. Financial support was provided by the European Community grants 213010 (eNMR) and 261572 (WeNMR) in the 7th Framework Programme (e-Infrastructure RI-261571).
Partners
References
External links
The WeNMR VRC portal and website
WeNMR in the EU FP7 CORDIS database
Structural biology | WeNMR | [
"Chemistry",
"Biology"
] | 409 | [
"Biochemistry",
"Structural biology"
] |
34,927,770 | https://en.wikipedia.org/wiki/Convective%20mixing | In fluid dynamics, convective mixing is the vertical transport of a fluid and its properties. In many important ocean and atmospheric phenomena, convection is driven by density differences in the fluid, e.g. the sinking of cold, dense water in polar regions of the world's oceans; and the rising of warm, less-dense air during the formation of cumulonimbus clouds and hurricanes.
See also
Atmospheric convection
Bénard cells
Churchill–Bernstein equation
Double diffusive convection
Heat transfer
Heat conduction
Thermal radiation
Heat pipe
Laser-heated pedestal growth
Nusselt number
Thermomagnetic convection
References
Notes
Further reading
Convection | Convective mixing | [
"Physics",
"Chemistry"
] | 129 | [
"Transport phenomena",
"Physical phenomena",
"Convection",
"Thermodynamics",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
34,930,324 | https://en.wikipedia.org/wiki/Terminal%20amine%20isotopic%20labeling%20of%20substrates | Terminal amine isotopic labeling of substrates (TAILS) is a method in quantitative proteomics that identifies the protein content of samples based on N-terminal fragments of each protein (N-terminal peptides) and detects differences in protein abundance among samples.
Like other methods based on N-terminal peptides, this assay uses trypsin to break proteins into fragments and separates the N-terminal peptides (the fragments containing the N-termini of the original proteins) from the other fragments (internal tryptic peptides). TAILS isolates the N-terminal peptides by identifying and removing the internal tryptic peptides. This negative selection allows the TAILS method to detect all N-termini in the given samples. Alternative methods that rely on the free amino group of the N-terminus to identify the N-terminal peptides cannot detect some N-termini because they are "naturally blocked" (i.e. the natural protein does not have a free amino group).
The TAILS method has a number of applications including the identification of new substrates and proteases (including those that have an unknown and broad specificity) and as a way to define the termini of proteins that enables protein annotation. TAILS can also be used to link proteases with a variety of defined biological pathways in diseases such as cancer, in order to gain a clearer understanding of the substrates and proteases involved in the disease state.
Method
TAILS is a 2D or 3D proteomics-based assay for the labeling and isolation of N-terminal peptides, developed by a group at the University of British Columbia. The TAILS method is designed for comparison of multiple protease-treated samples and control proteomes. Samples can be derived from a variety of sources including tissue, fibroblasts, cancer cells and fluid effusions.
This assay isolates the N-terminal peptides by removing the internal tryptic peptides via ultrafiltration leaving the labeled mature N-terminal and neo-N-Terminal peptides to be analyzed by tandem mass spectrometry (MS/MS). This negative selection allows the TAILS method to detect all N-termini in the given samples. Alternative methods that rely on the free amino group of the N-terminus to isolate the N-terminal peptides cannot detect naturally blocked N-termini because they do not have a free amino group.
TAILS requires only a small sample of peptide for experimentation (100–300 µg), can be used with proteases which have unknown or broad specificity, and supports a variety of methods for sample labeling. However, it identifies ~50% of proteins by two or more different and unique peptides (one for the original mature N-terminus and/or one or more neo-N-terminal peptides from cleavage sites) that do not represent independent biological events and thus cannot be averaged for quantification. It also has difficulty validating results for single-peptide-based N-terminome analysis.
The following steps are for the dimethylation-TAILS assay, comparing a control sample (exhibiting normal proteolytic activity) and a treated sample (which in this example exhibits an additional proteolytic activity).
Proteome-wide proteolysis occurring in both the treated and control samples with additional proteolytic activity in the treated sample.
Inactivation of the proteases and protein denaturation and reduction.
Labelling with stable isotopes. This allows peptides that originated in the control sample to be distinguished from those that originated in the treated sample so their relative abundance can be compared. In this example, the labelling is applied by reductive dimethylation of the primary amines, using heavy isotopes for the treated samples and light isotopes for the controls. This reaction is catalyzed by sodium cyanoborohydride and attaches the labeled methyl groups to lysine amines and the free (α)-amino groups at the N-termini of the proteins and protease cleavage products.
Blocking of reactive amino groups. This allows the internal tryptic peptides to be identified later in the process because they will be the only peptides with reactive amino groups. In this example the labelling reaction (reductive dimethylation) also blocks the reactive amino groups.
Pooling. The two labeled proteomes are now mixed. This ensures that the samples are treated identically at all subsequent steps allowing the relative quantities of the proteins in the two samples to be more accurately measured.
Trypsinization. This breaks each protein into fragments. The labeled N-termini of the original proteins remain blocked, while the new internal tryptic peptides have a free N-terminus.
Negative selection. A hyperbranched polyglycerol aldehyde (HPG) polymer that binds free amines is added to the sample and reacts with the newly generated internal tryptic peptides through their free N-termini. As in step 3 above, this reaction is catalyzed by sodium cyanoborohydride. The dimethylated, isotopically labeled mature N-terminal and neo-(new-)N-terminal peptides are unreactive and remain unbound, so they can be separated from the polymer-bound internal tryptic peptides by ultrafiltration.
The unbound eluate is highly enriched in N-terminal and neo-N-terminal peptides.
The eluted sample is then analyzed and quantified by MS/MS.
The final step in TAILS involves bioinformatics. A hierarchical substrate winnowing process discriminates true protease substrates from background proteolysis products and non-cleaved proteins using peptide isotope quantification and defined bioinformatic search criteria.
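To illustrate the quantification idea behind this winnowing, the following is a minimal sketch in Python, assuming hypothetical peptide intensities and an arbitrary ratio cutoff; it is not the published TAILS analysis pipeline, only the core heavy/light-ratio logic.

```python
# A minimal sketch (hypothetical data) of the isotope-ratio winnowing idea in
# dimethylation-TAILS: peptides enriched in the heavy (protease-treated) channel
# are flagged as candidate neo-N-termini generated by the added protease.

import math

# Hypothetical MS/MS quantification: peptide -> (light intensity, heavy intensity)
peptides = {
    "AGLPK": (9.5e5, 1.0e6),   # ratio ~1: background proteolysis in both samples
    "SDFVR": (5.0e4, 1.6e6),   # heavy-enriched: candidate substrate cleavage
    "MKTAY": (1.2e6, 4.0e4),   # light-enriched: lost upon protease treatment
}

RATIO_CUTOFF = 3.0  # illustrative threshold; real studies set this statistically

for seq, (light, heavy) in peptides.items():
    log2_ratio = math.log2(heavy / light)
    if log2_ratio >= math.log2(RATIO_CUTOFF):
        call = "candidate neo-N-terminus (protease-generated)"
    elif log2_ratio <= -math.log2(RATIO_CUTOFF):
        call = "depleted in treated sample"
    else:
        call = "unchanged (background proteolysis or mature N-terminus)"
    print(f"{seq}: log2(H/L) = {log2_ratio:+.2f} -> {call}")
```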
Types
The types of TAILS differ in the methods used to block and label the amino groups of the proteins and protease cleavage products. These amino groups include lysine amines and the free α-amino groups of the N-termini of the proteins.
The dimethylation-TAILS procedure is a chemical-labeling-based procedure performed in one step using amine-reactive isotopic reagents. The labeling of two samples uses either 12CH2-formaldehyde (light) or 13CD2-formaldehyde (heavy), with sodium cyanoborohydride as the catalyst. The advantage of this method is that it is robust, efficient, and cost-effective. The labeling procedure for the controls and protease-treated samples must be carried out separately before they can be pooled, and it is limited to two samples per experiment, which may be a disadvantage if multiple samples need to be studied simultaneously.
Stable isotope labeling with amino acids in cell culture (SILAC) is a procedure that can be done in vivo. This procedure can be used in all cell culture laboratories and is a routinely used labeling technique. This metabolic labeling enables inhibition of a given protease in biological samples and analysis of ex vivo processing. An advantage of this metabolic labeling method over chemical labeling is that it allows reliable, fast, and efficient discrimination of the cell-derived proteins under investigation from contaminants such as serum proteins. SILAC-TAILS can be used for analysis of up to five multiplexed samples. SILAC is not suitable for clinically relevant human samples that cannot be metabolically labeled. SILAC is an expensive method and may not be a feasible option for most laboratories.
The isobaric tag for relative and absolute quantification (iTRAQ) method, or iTRAQ-TAILS, enables the quantification of multiple samples simultaneously. This method can simultaneously analyze 4–8 samples in multiplex experiments using four- and eight-plex iTRAQ reagents. It provides high-accuracy identification and quantification of samples and allows for more reproducible analysis of sample replicates. Like other iTRAQ methods, iTRAQ-TAILS requires a MALDI mass spectrometer and costly iTRAQ reagents.
Alternative methods
There are several alternative approaches to studying N termini and proteolysis products.
Acetylation of amines followed by tryptic digestion and biotinylation of free N-terminal peptides uses a chemical label (acetylation) on free lysines and N-termini. The blocked N-termini are then negatively selected. However, naturally blocked N-termini cannot be distinguished from experimentally acetylated ones. This method does not use isotopic labeling, so it is difficult to quantify the findings. It is also hard to distinguish between experimental and background proteolysis products.
Lysine guanidination followed by biotinylation of N-termini uses a chemical to block lysine residues and tag free N-termini. The tagged free N-termini are then selected. The downside of this method is that the findings cannot be applied to a statistical model using non-cleaved peptides, because naturally blocked N-termini are not captured. Since it does not involve isotopic labeling, the results cannot be quantified. The cleavage site also has to be known in advance for labeling.
Subtiligase biotinylation of N-termini uses enzymatic labeling of N-terminal peptides but does not use lysine-blocking chemicals. Without lysine blocking, many of the cleaved N-terminal peptides will be too short for identification. The results can be highly dependent on the properties of subtiligase and may therefore be biased. This method does not capture naturally blocked N-termini, and because it does not use isotopic labeling, it is difficult to quantify findings.
iTRAQ labeling of N-termini uses iTRAQ reagents to label the N-termini. Neo-N-terminal peptides are then selected in silico. The downside of this technique is that a MALDI mass spectrometer is needed and the required iTRAQ reagents are costly. This method does not capture naturally blocked N-termini. The whole process requires 50–100 mg of peptide sample.
Combined fractional diagonal chromatography (COFRADIC) allows different labeling for naturally blocked N-termini and protease-generated neo-N-termini. All blocked N-termini are negatively selected. However, the process requires many chemical processing, chromatography, and mass spectrometry steps. Optimal separation depends on amino acid modifications, such as methionine oxidation, not occurring during handling. This method requires 150 MS/MS analyses per sample, but samples can be pooled for mass spectrometry, reducing the number of analyses. This technique is suitable for proteases with unknown or broad specificity.
See also
SILAC
iTRAQ
References
Proteomics
Mass spectrometry | Terminal amine isotopic labeling of substrates | [
"Physics",
"Chemistry"
] | 2,258 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
34,930,586 | https://en.wikipedia.org/wiki/Genome%20editing | Genome editing, or genome engineering, or gene editing, is a type of genetic engineering in which DNA is inserted, deleted, modified or replaced in the genome of a living organism. Unlike early genetic engineering techniques that randomly insert genetic material into a host genome, genome editing targets the insertions to site-specific locations. The basic mechanism involved in genetic manipulations through programmable nucleases is the recognition of target genomic loci and binding by an effector DNA-binding domain (DBD), the introduction of double-strand breaks (DSBs) in the target DNA by restriction endonucleases (FokI and Cas), and the repair of DSBs through homology-directed repair (HDR) or non-homologous end joining (NHEJ).
History
Genome editing was pioneered in the 1990s, before the advent of the common current nuclease-based gene editing platforms, but its use was limited by low editing efficiencies. Genome editing with engineered nucleases, i.e. all three major classes of these enzymes—zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs) and engineered meganucleases—was selected by Nature Methods as the 2011 Method of the Year. The CRISPR-Cas system was selected by Science as 2015 Breakthrough of the Year.
Four families of engineered nucleases have been used: meganucleases, zinc finger nucleases (ZFNs), transcription activator-like effector-based nucleases (TALENs), and the clustered regularly interspaced short palindromic repeats (CRISPR/Cas9) system. Nine genome editors were available.
In 2018, the common methods for such editing used engineered nucleases, or "molecular scissors". These nucleases create site-specific double-strand breaks (DSBs) at desired locations in the genome. The induced double-strand breaks are repaired through nonhomologous end-joining (NHEJ) or homologous recombination (HR), resulting in targeted mutations ('edits').
In May 2019, lawyers in China reported, in light of the purported creation by Chinese scientist He Jiankui of the first gene-edited humans (see Lulu and Nana controversy), the drafting of regulations stipulating that anyone manipulating the human genome with gene-editing techniques, like CRISPR, would be held responsible for any related adverse consequences. A cautionary perspective on the possible blind spots and risks of CRISPR and related biotechnologies has recently been discussed, focusing on the stochastic nature of cellular control processes.
The University of Edinburgh Roslin Institute engineered pigs resistant to a virus that causes porcine reproductive and respiratory syndrome, which costs US and European pig farmers $2.6 billion annually.
In February 2020, a US trial showed that CRISPR gene editing had been safely used on 3 cancer patients. In 2020, Sicilian Rouge High GABA, a tomato that makes more of an amino acid said to promote relaxation, was approved for sale in Japan.
In 2021, England (not the rest of the UK) planned to remove restrictions on gene-edited plants and animals, moving from European Union-compliant regulation to rules closer to those of the US and some other countries. An April 2021 European Commission report found "strong indications" that the current regulatory regime was not appropriate for gene editing. Later in 2021, researchers announced a CRISPR alternative, labeled obligate mobile element–guided activity (OMEGA) proteins including IscB, IsrB and TnpB as endonucleases found in transposons, and guided by small ωRNAs.
Background
Genetic engineering as a method of introducing new genetic elements into organisms has existed since the 1970s. One drawback of this technology has been the random nature with which the DNA is inserted into the host's genome, which can impair or alter other genes within the organism. However, several methods have since been developed that target inserted genes to specific sites within an organism's genome. These methods have also enabled the editing of specific sequences within a genome and have reduced off-target effects. This can be used for research purposes, by targeting mutations to specific genes, and in gene therapy. By inserting a functional gene into an organism and targeting it to replace the defective one, it could be possible to cure certain genetic diseases.
Gene targeting
Homologous recombination
Early methods to target genes to certain sites within a genome of an organism (called gene targeting) relied on homologous recombination (HR). By creating DNA constructs that contain a template that matches the targeted genome sequence it is possible that the HR processes within the cell will insert the construct at the desired location. Using this method on embryonic stem cells led to the development of transgenic mice with targeted genes knocked out. It has also been possible to knock in genes or alter gene expression patterns. In recognition of their discovery of how homologous recombination can be used to introduce genetic modifications in mice through embryonic stem cells, Mario Capecchi, Martin Evans and Oliver Smithies were awarded the 2007 Nobel Prize in Physiology or Medicine.
Conditional targeting
If a vital gene is knocked out it can prove lethal to the organism. In order to study the function of such genes, site-specific recombinases (SSRs) were used. The two most common types are the Cre-LoxP and Flp-FRT systems. Cre recombinase is an enzyme that removes DNA by site-specific recombination between binding sequences known as LoxP sites. The Flp-FRT system operates in a similar way, with the Flp recombinase recognising FRT sequences. By crossing an organism containing the recombinase sites flanking the gene of interest with an organism that expresses the SSR under the control of tissue-specific promoters, it is possible to knock out or switch on genes only in certain cells. These techniques were also used to remove marker genes from transgenic animals. Further modifications of these systems allowed researchers to induce recombination only under certain conditions, allowing genes to be knocked out or expressed at desired times or stages of development.
Process
Double strand break repair
A common form of genome editing relies on the concept of DNA double-strand break (DSB) repair mechanics. There are two major pathways that repair DSBs: non-homologous end joining (NHEJ) and homology-directed repair (HDR). NHEJ uses a variety of enzymes to directly join the DNA ends, while the more accurate HDR uses a homologous sequence as a template for regeneration of missing DNA sequences at the break point. This can be exploited by creating a vector with the desired genetic elements within a sequence that is homologous to the flanking sequences of a DSB. This results in the desired change being inserted at the site of the DSB. While HDR-based gene editing is similar to homologous recombination-based gene targeting, the rate of recombination is increased by at least three orders of magnitude.
Engineered nucleases
The key to genome editing is creating a DSB at a specific point within the genome. Commonly used restriction enzymes are effective at cutting DNA, but generally recognize and cut at multiple sites. To overcome this challenge and create site-specific DSBs, four distinct classes of nucleases have been discovered and bioengineered to date: zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), meganucleases, and the clustered regularly interspaced short palindromic repeats (CRISPR/Cas9) system.
Meganucleases
Meganucleases, discovered in the late 1980s, are enzymes in the endonuclease family which are characterized by their capacity to recognize and cut large DNA sequences (from 14 to 40 base pairs). The most widespread and best known meganucleases are the proteins in the LAGLIDADG family, which owe their name to a conserved amino acid sequence.
Meganucleases, found commonly in microbial species, have the unique property of having very long recognition sequences (>14 bp), making them naturally very specific. However, there is virtually no chance of finding the exact meganuclease required to act on a chosen specific DNA sequence. To overcome this challenge, mutagenesis and high-throughput screening methods have been used to create meganuclease variants that recognize unique sequences. Others have been able to fuse various meganucleases and create hybrid enzymes that recognize a new sequence. Yet others have attempted to alter the DNA-interacting amino acids of the meganuclease to design sequence-specific meganucleases, in a method named rationally designed meganuclease. Another approach involves using computer models to try to predict as accurately as possible the activity of the modified meganucleases and the specificity of the recognized nucleic acid sequence.
A large bank containing several tens of thousands of protein units has been created. These units can be combined to obtain chimeric meganucleases that recognize the target site, thereby providing research and development tools that meet a wide range of needs (fundamental research, health, agriculture, industry, energy, etc.). These include the industrial-scale production of two meganucleases able to cleave the human XPC gene; mutations in this gene result in Xeroderma pigmentosum, a severe monogenic disorder that predisposes the patients to skin cancer and burns whenever their skin is exposed to UV rays.
Meganucleases have the benefit of causing less toxicity in cells than methods such as Zinc finger nuclease (ZFN), likely because of more stringent DNA sequence recognition; however, the construction of sequence-specific enzymes for all possible sequences is costly and time-consuming, as one is not benefiting from combinatorial possibilities that methods such as ZFNs and TALEN-based fusions utilize.
Zinc finger nucleases
As opposed to meganucleases, the concept behind ZFNs and TALEN technology is based on a non-specific DNA cutting catalytic domain, which can then be linked to specific DNA sequence recognizing peptides such as zinc fingers and transcription activator-like effectors (TALEs). The first step to this was to find an endonuclease whose DNA recognition site and cleaving site were separate from each other, a situation that is not the most common among restriction enzymes. Once this enzyme was found, its cleaving portion could be separated which would be very non-specific as it would have no recognition ability. This portion could then be linked to sequence recognizing peptides that could lead to very high specificity.
Zinc finger motifs occur in several transcription factors. The zinc ion, found in 8% of all human proteins, plays an important role in the organization of their three-dimensional structure. In transcription factors, it is most often located at the protein-DNA interaction sites, where it stabilizes the motif. The C-terminal part of each finger is responsible for the specific recognition of the DNA sequence.
The recognized sequences are short, made up of around 3 base pairs, but by combining 6 to 8 zinc fingers whose recognition sites have been characterized, it is possible to obtain specific proteins for sequences of around 20 base pairs. It is therefore possible to control the expression of a specific gene. It has been demonstrated that this strategy can be used to promote a process of angiogenesis in animals. It is also possible to fuse a protein constructed in this way with the catalytic domain of an endonuclease in order to induce a targeted DNA break, and therefore to use these proteins as genome engineering tools.
The method generally adopted for this involves associating two DNA-binding proteins – each containing 3 to 6 specifically chosen zinc fingers – with the catalytic domain of the FokI endonuclease, which needs to dimerize to cleave double-strand DNA. The two proteins recognize two DNA sequences that are a few nucleotides apart. Linking the two zinc finger proteins to their respective sequences brings the two FokI domains closer together. FokI requires dimerization to have nuclease activity, and this means the specificity increases dramatically, as each nuclease partner recognizes a unique DNA sequence. To enhance this effect, FokI nucleases have been engineered that can only function as heterodimers.
Several approaches are used to design specific zinc finger nucleases for the chosen sequences. The most widespread involves combining zinc-finger units with known specificities (modular assembly). Various selection techniques, using bacteria, yeast or mammal cells have been developed to identify the combinations that offer the best specificity and the best cell tolerance. Although the direct genome-wide characterization of zinc finger nuclease activity has not been reported, an assay that measures the total number of double-strand DNA breaks in cells found that only one to two such breaks occur above background in cells treated with zinc finger nucleases with a 24 bp composite recognition site and obligate heterodimer FokI nuclease domains.
The heterodimer-functioning nucleases avoid the possibility of unwanted homodimer activity and thus increase the specificity of the DSB. Although the nuclease portions of both ZFN and TALEN constructs have similar properties, the difference between these engineered nucleases lies in their DNA recognition peptide. ZFNs rely on Cys2-His2 zinc fingers and TALEN constructs on TALEs. Both of these DNA-recognizing peptide domains are naturally found in combinations in their proteins. Cys2-His2 zinc fingers typically occur in repeats that are 3 bp apart and are found in diverse combinations in a variety of nucleic acid interacting proteins such as transcription factors. Each finger of the zinc finger domain is not completely independent, however, and the binding capacity of one finger is impacted by its neighbors. TALEs, on the other hand, are found in repeats with a one-to-one recognition ratio between the amino acids and the recognized nucleotide pairs. Because both zinc fingers and TALEs occur in repeated patterns, different combinations can be tried to create a wide variety of sequence specificities. Zinc fingers are more established in these terms, and approaches such as modular assembly (where zinc fingers correlated with a triplet sequence are attached in a row to cover the required sequence), OPEN (low-stringency selection of peptide domains vs. triplet nucleotides followed by high-stringency selection of peptide combinations vs. the final target in bacterial systems), and bacterial one-hybrid screening of zinc finger libraries, among other methods, have been used to make site-specific nucleases.
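To make the modular-assembly idea concrete, here is a minimal sketch in Python. The module names and triplet-to-finger table are invented for illustration; real assemblies draw on experimentally validated archives (such as the Zinc Finger Consortium's) and must account for the context effects between neighboring fingers noted above.

```python
# A simplified illustration of modular assembly: a hypothetical lookup table of
# pre-characterized zinc-finger modules, each recognizing one DNA triplet, is
# concatenated to cover a 9 bp half-site. All module identifiers are invented.

FINGER_MODULES = {  # triplet -> hypothetical finger module identifier
    "GAA": "ZF-QSSNLVR",
    "GCG": "ZF-RSDDLVR",
    "TGG": "ZF-RSDHLTT",
}

def assemble_zfn_array(half_site: str) -> list[str]:
    """Split a target half-site into 3 bp triplets and look up a
    characterized finger module for each triplet."""
    if len(half_site) % 3 != 0:
        raise ValueError("half-site length must be a multiple of 3")
    triplets = [half_site[i:i + 3] for i in range(0, len(half_site), 3)]
    try:
        return [FINGER_MODULES[t] for t in triplets]
    except KeyError as missing:
        raise ValueError(f"no characterized module for triplet {missing}") from None

print(assemble_zfn_array("GAAGCGTGG"))
# ['ZF-QSSNLVR', 'ZF-RSDDLVR', 'ZF-RSDHLTT']
```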
Zinc finger nucleases are research and development tools that have already been used to modify a range of genomes, in particular by the laboratories in the Zinc Finger Consortium. The US company Sangamo BioSciences uses zinc finger nucleases to carry out research into the genetic engineering of stem cells and the modification of immune cells for therapeutic purposes. Modified T lymphocytes are currently undergoing phase I clinical trials to treat a type of brain tumor (glioblastoma) and in the fight against AIDS.
TALEN
Transcription activator-like effector nucleases (TALENs) are specific DNA-binding proteins that feature an array of 33- or 34-amino-acid repeats. TALENs are artificial restriction enzymes designed by fusing the DNA-cutting domain of a nuclease to TALE domains, which can be tailored to specifically recognize a unique DNA sequence. These fusion proteins serve as readily targetable "DNA scissors" for gene editing applications, enabling targeted genome modifications such as sequence insertion, deletion, repair and replacement to be performed in living cells. The DNA-binding domains, which can be designed to bind any desired DNA sequence, come from TAL effectors, DNA-binding proteins secreted by plant-pathogenic Xanthomonas spp. TAL effectors consist of repeated domains, each of which contains a highly conserved sequence of 34 amino acids and recognizes a single DNA nucleotide within the target site. The nuclease can create double-strand breaks at the target site that can be repaired by error-prone non-homologous end-joining (NHEJ), resulting in gene disruptions through the introduction of small insertions or deletions. Each repeat is conserved, with the exception of the so-called repeat variable di-residues (RVDs) at amino acid positions 12 and 13. The RVDs determine the DNA sequence to which the TALE will bind. This simple one-to-one correspondence between the TALE repeats and the corresponding DNA sequence makes the process of assembling repeat arrays to recognize novel DNA sequences straightforward. These TALEs can be fused to the catalytic domain of a DNA nuclease, FokI, to generate a transcription activator-like effector nuclease (TALEN). The resultant TALEN constructs combine specificity and activity, effectively generating engineered sequence-specific nucleases that bind and cleave DNA sequences only at pre-selected sites. The TALEN target recognition system is based on an easy-to-predict code. TAL nucleases are specific to their target due in part to the length of their 30+ base-pair binding site. TALEN editing can be performed within a 6-base-pair range of any single nucleotide in the entire genome.
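The one-to-one RVD code lends itself to a very small illustration. The sketch below uses the four canonical RVD assignments (NI→A, HD→C, NG→T, NN→G); the example array is invented, and real designs must also account for the thymine that typically precedes natural TALE binding sites.

```python
# A sketch of the TALE "one repeat, one base" code: decoding an ordered list of
# repeat variable di-residues (RVDs) into the DNA sequence the array should bind.
# NN also tolerates adenine; natural TALE target sites are typically preceded
# by a 5' thymine that is not encoded by any RVD.

RVD_CODE = {"NI": "A", "HD": "C", "NG": "T", "NN": "G"}

def tale_target(rvds: list[str]) -> str:
    """Translate an N- to C-terminal RVD array into its 5'->3' DNA target."""
    return "".join(RVD_CODE[rvd] for rvd in rvds)

# Example: a short, purely illustrative repeat array
print(tale_target(["NG", "HD", "NI", "NN", "NG", "HD"]))  # -> TCAGTC
```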
TALEN constructs are used in a similar way to designed zinc finger nucleases, and have three advantages in targeted mutagenesis: (1) DNA binding specificity is higher, (2) off-target effects are lower, and (3) construction of DNA-binding domains is easier.
CRISPR
CRISPRs (Clustered Regularly Interspaced Short Palindromic Repeats) are genetic elements that bacteria use as a kind of acquired immunity to protect against viruses. They consist of short sequences that originate from viral genomes and have been incorporated into the bacterial genome. Cas (CRISPR associated proteins) process these sequences and cut matching viral DNA sequences. By introducing plasmids containing Cas genes and specifically constructed CRISPRs into eukaryotic cells, the eukaryotic genome can be cut at any desired position.
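As an illustration of how a Cas9 cut site is chosen, the following sketch scans a DNA string for 20-nucleotide protospacers followed by the NGG protospacer adjacent motif (PAM) used by the common SpCas9 enzyme. The sequence and function name are illustrative; practical guide design additionally scores off-target matches across the genome and scans both strands.

```python
# A minimal sketch of SpCas9 target-site selection: find every 20 nt
# protospacer immediately followed by an NGG PAM on the plus strand.

import re

def find_cas9_sites(dna: str, spacer_len: int = 20) -> list[tuple[int, str, str]]:
    """Return (position, protospacer, PAM) for every NGG PAM with enough
    upstream sequence to host a full-length protospacer."""
    sites = []
    for m in re.finditer(r"(?=([ACGT]GG))", dna):  # lookahead finds overlapping PAMs
        pam_start = m.start(1)
        if pam_start >= spacer_len:
            spacer = dna[pam_start - spacer_len:pam_start]
            sites.append((pam_start - spacer_len, spacer, m.group(1)))
    return sites

demo = "ATGCTAGCTAGGCTAGCTAGCATCGATCGAGGTTACG"  # illustrative sequence
for pos, spacer, pam in find_cas9_sites(demo):
    print(f"protospacer at {pos}: {spacer} | PAM {pam}")
```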
Editing by nucleobase modification (Base editing)
One of the earliest methods for efficiently editing nucleic acids, employing nucleobase-modifying enzymes directed by nucleic acid guide sequences, was first described in the 1990s and has seen a resurgence more recently. This method has the advantage that it does not require breaking the genomic DNA strands, and thus avoids the random insertions and deletions associated with DNA strand breakage. It is only appropriate for precise editing requiring single-nucleotide changes and has been found to be highly efficient for this type of editing.
ARCUT
ARCUT stands for artificial restriction DNA cutter, a technique developed by Komiyama. This method uses pseudo-complementary peptide nucleic acid (pcPNA) to identify the cleavage site within the chromosome. Once pcPNA specifies the site, excision is carried out by a chemical mixture of cerium (Ce) and EDTA, which performs the splicing function.
Precision and efficiency of engineered nucleases
The meganuclease method of gene editing is the least efficient of the methods mentioned above. Due to the nature of its DNA-binding element and the cleaving element, it is limited to recognizing one potential target every 1,000 nucleotides. ZFN was developed to overcome the limitations of meganucleases. The number of possible targets ZFN can recognize was increased to one in every 140 nucleotides. However, both methods are unpredictable because their DNA-binding elements affect each other. As a result, high degrees of expertise and lengthy and costly validation processes are required.
TALE nucleases, being the most precise and specific method, yield higher efficiency than the previous two methods. This efficiency is achieved because the DNA-binding element consists of an array of TALE subunits, each capable of recognizing a specific DNA nucleotide independently of the others, resulting in a higher number of target sites with high precision. New TALE nucleases take about one week and a few hundred dollars to create, with specific expertise in molecular biology and protein engineering.
CRISPR nucleases have slightly lower precision than TALE nucleases. This is caused by the need for a specific nucleotide sequence at one end of the target site in order for the guide RNA to direct the double-strand break that CRISPR induces. CRISPR has been shown to be the quickest and cheapest method, costing less than two hundred dollars and a few days of time. It also requires the least expertise in molecular biology, as the design lies in the guide RNA instead of the proteins. One major advantage that CRISPR has over the ZFN and TALEN methods is that it can be directed to target different DNA sequences using its ~80 nt CRISPR sgRNAs, while both ZFN and TALEN methods require construction and testing of the proteins created for targeting each DNA sequence.
Because off-target activity of an active nuclease would have potentially dangerous consequences at the genetic and organismal levels, the precision of meganucleases, ZFNs, CRISPR, and TALEN-based fusions has been an active area of research. While variable figures have been reported, ZFNs tend to have more cytotoxicity than TALEN methods or RNA-guided nucleases, while TALEN and RNA-guided approaches tend to have the greatest efficiency and fewer off-target effects. Based on the maximum theoretical distance between DNA binding and nuclease activity, TALEN approaches result in the greatest precision.
Multiplex Automated Genomic Engineering (MAGE)
The methods available to scientists and researchers wanting to study genomic diversity and all possible associated phenotypes were very slow, expensive, and inefficient. Prior to this new revolution, researchers would have to perform single-gene manipulations and tweak the genome one small section at a time, observe the phenotype, and start the process over with a different single-gene manipulation. Therefore, researchers at the Wyss Institute at Harvard University designed MAGE, a powerful technology that improves the process of in vivo genome editing. It allows quick and efficient manipulations of a genome, all happening in a machine small enough to fit on a small kitchen table. Those mutations combine with the variation that naturally occurs during cell mitosis, creating billions of cellular mutations.
Chemically synthesized single-stranded DNA (ssDNA) and a pool of oligonucleotides are introduced at targeted areas of the cell, thereby creating genetic modifications. The cyclical process involves transformation of ssDNA (by electroporation) followed by outgrowth, during which bacteriophage homologous recombination proteins mediate annealing of ssDNAs to their genomic targets. Experiments targeting selective phenotypic markers are screened and identified by plating the cells on differential media. Each cycle ultimately takes 2.5 hours to process, with additional time required to grow isogenic cultures and characterize mutations. By iteratively introducing libraries of mutagenic ssDNAs targeting multiple sites, MAGE can generate combinatorial genetic diversity in a cell population. There can be up to 50 genome edits, from single nucleotide base pairs to whole genomes or gene networks, performed simultaneously with results in a matter of days.
MAGE experiments can be divided into three classes, characterized by varying degrees of scale and complexity: (i) many target sites, single genetic mutations; (ii) single target site, many genetic mutations; and (iii) many target sites, many genetic mutations. An example of the third class was demonstrated in 2009, when Church and colleagues were able to program Escherichia coli to produce five times the normal amount of lycopene, an antioxidant normally found in tomato seeds and linked to anti-cancer properties. They applied MAGE to optimize the 1-deoxy-D-xylulose 5-phosphate (DXP) metabolic pathway in Escherichia coli to overproduce isoprenoid lycopene. It took them about 3 days and just over $1,000 in materials. The ease, speed, and cost efficiency with which MAGE can alter genomes can transform how industries approach the manufacturing and production of important compounds in the bioengineering, bioenergy, biomedical engineering, synthetic biology, pharmaceutical, agricultural, and chemical industries.
Applications
As of 2012 efficient genome editing had been developed for a wide range of experimental systems ranging from plants to animals, often beyond clinical interest, and was becoming a standard experimental strategy in research labs. The recent generation of rat, zebrafish, maize and tobacco ZFN-mediated mutants and the improvements in TALEN-based approaches testify to the significance of the methods, and the list is expanding rapidly. Genome editing with engineered nucleases will likely contribute to many fields of life sciences from studying gene functions in plants and animals to gene therapy in humans. For instance, the field of synthetic biology which aims to engineer cells and organisms to perform novel functions, is likely to benefit from the ability of engineered nuclease to add or remove genomic elements and therefore create complex systems. In addition, gene functions can be studied using stem cells with engineered nucleases.
Listed below are some specific tasks this method can carry out:
Targeted gene mutation
Gene therapy
Creating chromosome rearrangement
Study gene function with stem cells
Transgenic animals
Endogenous gene labeling
Targeted transgene addition
Targeted gene modification in animals
The combination of recent discoveries in genetic engineering, particularly gene editing, and the latest improvements in bovine reproduction technologies (e.g. in vitro embryo culture) allows for genome editing directly in fertilised oocytes using synthetic, highly specific endonucleases. RNA-guided endonucleases, such as the clustered regularly interspaced short palindromic repeats-associated Cas9 (CRISPR/Cas9), are a new tool, further increasing the range of methods available. In particular, CRISPR/Cas9-engineered endonucleases allow the use of multiple guide RNAs for simultaneous knockouts (KO) in one step by cytoplasmic direct injection (CDI) into mammalian zygotes.
Furthermore, gene editing can be applied to certain types of fish in aquaculture such as Atlantic salmon. Gene editing in fish is currently experimental, but the possibilities include growth, disease resistance, sterility, controlled reproduction, and colour. Selecting for these traits can allow for a more sustainable environment and better welfare for the fish.
AquAdvantage salmon is a genetically modified Atlantic salmon developed by AquaBounty Technologies. The growth hormone-regulating gene in the Atlantic salmon is replaced with the growth hormone-regulating gene from the Pacific Chinook salmon and a promoter sequence from the ocean pout.
Thanks to the parallel development of single-cell transcriptomics, genome editing and new stem cell models we are now entering a scientifically exciting period where functional genetics is no longer restricted to animal models but can be performed directly in human samples. Single-cell gene expression analysis has resolved a transcriptional road-map of human development from which key candidate genes are being identified for functional studies. Using global transcriptomics data to guide experimentation, the CRISPR based genome editing tool has made it feasible to disrupt or remove key genes in order to elucidate function in a human setting.
Targeted gene modification in plants
Genome editing using meganucleases, ZFNs, and TALEN provides a new strategy for genetic manipulation in plants and is likely to assist in the engineering of desired plant traits by modifying endogenous genes. For instance, site-specific gene addition in major crop species can be used for 'trait stacking', whereby several desired traits are physically linked to ensure their co-segregation during the breeding processes. Progress in such cases has recently been reported in Arabidopsis thaliana and Zea mays. In Arabidopsis thaliana, using ZFN-assisted gene targeting, two herbicide-resistance genes (tobacco acetolactate synthase SuRA and SuRB) were introduced to SuR loci, with as many as 2% of transformed cells carrying mutations. In Zea mays, disruption of the target locus was achieved by ZFN-induced DSBs and the resulting NHEJ. ZFN was also used to drive a herbicide-tolerance gene expression cassette (PAT) into the targeted endogenous locus IPK1 in this case. Such genome modification observed in the regenerated plants has been shown to be inheritable and was transmitted to the next generation. A potentially successful example of the application of genome editing techniques in crop improvement can be found in banana, where scientists used CRISPR/Cas9 editing to inactivate the endogenous banana streak virus in the B genome of banana (Musa spp.) to overcome a major challenge in banana breeding.
In addition, TALEN-based genome engineering has been extensively tested and optimized for use in plants. TALEN fusions have also been used by a U.S. food ingredient company, Calyxt, to improve the quality of soybean oil products and to increase the storage potential of potatoes.
Several optimizations need to be made in order to improve editing plant genomes using ZFN-mediated targeting. There is a need for reliable design and subsequent test of the nucleases, the absence of toxicity of the nucleases, the appropriate choice of the plant tissue for targeting, the routes of induction of enzyme activity, the lack of off-target mutagenesis, and a reliable detection of mutated cases.
A common delivery method for CRISPR/Cas9 in plants is Agrobacterium-based transformation. T-DNA is introduced directly into the plant genome by a T4SS mechanism. Cas9 and gRNA-based expression cassettes are cloned into Ti plasmids, which are transformed into Agrobacterium for plant application. To improve Cas9 delivery in live plants, viruses are being used for more effective transgene delivery.
Research
Gene therapy
The ideal gene therapy practice is that which replaces the defective gene with a normal allele at its natural location. This is advantageous over a virally delivered gene, as there is no need to include the full coding and regulatory sequences when only a small proportion of the gene needs to be altered, as is often the case. The expression of partially replaced genes is also more consistent with normal cell biology than that of full genes carried by viral vectors.
The first clinical use of TALEN-based genome editing was in the treatment of CD19+ acute lymphoblastic leukemia in an 11-month-old child in 2015. Modified donor T cells were engineered to attack the leukemia cells, to be resistant to Alemtuzumab, and to evade detection by the host immune system after introduction.
Extensive research has been done in cells and animals using CRISPR-Cas9 to attempt to correct genetic mutations which cause genetic diseases such as Down syndrome, spina bifida, anencephaly, and Turner and Klinefelter syndromes.
In February 2019, medical scientists working with Sangamo Therapeutics, headquartered in Richmond, California, announced the first ever "in body" human gene editing therapy to permanently alter DNA - in a patient with Hunter syndrome. Clinical trials by Sangamo involving gene editing using Zinc Finger Nuclease (ZFN) are ongoing.
Eradicating diseases
Researchers have used CRISPR-Cas9 gene drives to modify genes associated with sterility in A. gambiae, the vector for malaria. This technique has further implications in eradicating other vector borne diseases such as yellow fever, dengue, and Zika.
The CRISPR-Cas9 system can be programmed to modulate the population of any bacterial species by targeting clinical genotypes or epidemiological isolates. It can selectively favor beneficial bacterial species over harmful ones by eliminating pathogens, which gives it an advantage over broad-spectrum antibiotics.
Antiviral applications for therapies targeting human viruses such as HIV, herpes, and hepatitis B virus are under research. CRISPR can be used to target the virus or the host to disrupt genes encoding the virus cell-surface receptor proteins. In November 2018, He Jiankui announced that he had edited two human embryos, to attempt to disable the gene for CCR5, which codes for a receptor that HIV uses to enter cells. He said that twin girls, Lulu and Nana, had been born a few weeks earlier. He said that the girls still carried functional copies of CCR5 along with disabled CCR5 (mosaicism) and were still vulnerable to HIV. The work was widely condemned as unethical, dangerous, and premature.
In January 2019, scientists in China reported the creation of five identical cloned gene-edited monkeys, using the same cloning technique that was used with Zhong Zhong and Hua Hua – the first ever cloned monkeys - and Dolly the sheep, and the same gene-editing Crispr-Cas9 technique allegedly used by He Jiankui in creating the first ever gene-modified human babies Lulu and Nana. The monkey clones were made in order to study several medical diseases.
Prospects and limitations
In the future, an important goal of research into genome editing with engineered nucleases must be the improvement of the safety and specificity of the nucleases' action. For example, improving the ability to detect off-target events can improve our ability to learn about ways of preventing them. In addition, the zinc fingers used in ZFNs are seldom completely specific, and some may cause a toxic reaction. However, the toxicity has been reported to be reduced by modifications to the cleavage domain of the ZFN.
In addition, research by Dana Carroll into modifying the genome with engineered nucleases has shown the need for better understanding of the basic recombination and repair machinery of DNA. In the future, a possible method to identify secondary targets would be to capture broken ends from cells expressing the ZFNs and to sequence the flanking DNA using high-throughput sequencing.
Because of the ease of use and cost-efficiency of CRISPR, extensive research is currently being done on it. There are now more publications on CRISPR than ZFN and TALEN despite how recent the discovery of CRISPR is. Both CRISPR and TALEN are favored to be the choices to be implemented in large-scale productions due to their precision and efficiency.
Genome editing also occurs as a natural process without artificial genetic engineering. The agents competent to edit genetic codes are viruses and subviral RNA agents.
Although GEEN has higher efficiency than many other methods in reverse genetics, it is still not highly efficient; in many cases less than half of the treated populations obtain the desired changes. For example, when one is planning to use the cell's NHEJ to create a mutation, the cell's HDR systems will also be at work correcting the DSB with lower mutational rates.
Traditionally, mice have been the most common choice for researchers as a host of a disease model. CRISPR can help bridge the gap between this model and human clinical trials by creating transgenic disease models in larger animals such as pigs, dogs, and non-human primates. Using the CRISPR-Cas9 system, the programmed Cas9 protein and the sgRNA can be directly introduced into fertilized zygotes to achieve the desired gene modifications when creating transgenic models in rodents. This allows bypassing of the usual cell targeting stage in generating transgenic lines, and as a result, it reduces generation time by 90%.
One potential that CRISPR brings with its effectiveness is the application of xenotransplantation. In previous research trials, CRISPR demonstrated the ability to target and eliminate endogenous retroviruses, which reduces the risk of transmitting diseases and reduces immune barriers. Eliminating these problems improves donor organ function, which brings this application closer to a reality.
In plants, genome editing is seen as a viable solution to the conservation of biodiversity. Gene drives are a potential tool to alter the reproductive rate of invasive species, although there are significant associated risks.
Human enhancement
Many transhumanists see genome editing as a potential tool for human enhancement. Australian biologist and Professor of Genetics David Andrew Sinclair notes that "the new technologies with genome editing will allow it to be used on individuals (...) to have (...) healthier children" (designer babies). According to a September 2016 report by the Nuffield Council on Bioethics, in the future it may be possible to enhance people with genes from other organisms or wholly synthetic genes to, for example, improve night vision and sense of smell. George Church has compiled a list of potential genetic modifications for possibly advantageous traits such as less need for sleep, cognition-related changes that protect against Alzheimer's disease, disease resistances and enhanced learning abilities, along with some of the associated studies and potential negative effects.
The American National Academy of Sciences and National Academy of Medicine issued a report in February 2017 giving qualified support to human genome editing. They recommended that clinical trials for genome editing might one day be permitted once answers have been found to safety and efficiency problems "but only for serious conditions under stringent oversight."
Risks
In the 2016 Worldwide Threat Assessment of the US Intelligence Community statement, United States Director of National Intelligence James R. Clapper named genome editing as a potential weapon of mass destruction, stating that genome editing conducted by countries with regulatory or ethical standards "different from Western countries" probably increases the risk of the creation of harmful biological agents or products. According to the statement, given the broad distribution, low cost, and accelerated pace of development of this technology, its deliberate or unintentional misuse might lead to far-reaching economic and national security implications. For instance, technologies such as CRISPR could be used to make "killer mosquitoes" that cause plagues that wipe out staple crops.
According to a September 2016 report by the Nuffield Council on Bioethics, the simplicity and low cost of tools to edit the genetic code will allow amateurs, or "biohackers", to perform their own experiments, posing a potential risk from the release of genetically modified bugs. The review also found that the risks and benefits of modifying a person's genome, and having those changes pass on to future generations, are so complex that they demand urgent ethical scrutiny. Such modifications might have unintended consequences which could harm not only the child but also their future children, as the altered gene would be in their sperm or eggs. In 2001, Australian researchers Ronald Jackson and Ian Ramshaw were criticized for publishing a paper in the Journal of Virology that explored the potential control of mice, a major pest in Australia, by infecting them with an altered mousepox virus that would cause infertility; critics argued that the published information could lead to the manufacture of biological weapons by potential bioterrorists, who might use the knowledge to create vaccine-resistant strains of other pox viruses, such as smallpox, that could affect humans. Furthermore, there are additional concerns about the ecological risks of releasing gene drives into wild populations.
Nobel prize
In 2007, the Nobel Prize in Physiology or Medicine was awarded to Mario Capecchi, Martin Evans and Oliver Smithies "for their discoveries of principles for introducing specific gene modifications in mice by the use of embryonic stem cells."
In 2020, the Nobel Prize in Chemistry was awarded to Emmanuelle Charpentier and Jennifer Doudna for "the development of a method for genome editing".
See also
CRISPR/Cpf1
RNA editing
Epigenome editing
Prime editing
Transposons as a genetic tool
Germinal choice technology
NgAgo, a ssDNA-guided Argonaute endonuclease
References
"WHO launches global registry on human genome editing." PharmaBiz, 31 Aug. 2019. Gale General OneFile, Accessed 27 Apr. 2020.
Further reading | Genome editing | [
"Engineering",
"Biology"
] | 8,232 | [
"Genetics techniques",
"Genetic engineering",
"Genome editing"
] |
34,932,939 | https://en.wikipedia.org/wiki/MicroRNA%20sequencing | MicroRNA sequencing (miRNA-seq), a type of RNA-Seq, is the use of next-generation sequencing or massively parallel high-throughput DNA sequencing to sequence microRNAs, also called miRNAs. miRNA-seq differs from other forms of RNA-seq in that input material is often enriched for small RNAs. miRNA-seq allows researchers to examine tissue-specific expression patterns, disease associations, and isoforms of miRNAs, and to discover previously uncharacterized miRNAs. Evidence that dysregulated miRNAs play a role in diseases such as cancer has positioned miRNA-seq to potentially become an important tool in the future for diagnostics and prognostics as costs continue to decrease. Like other miRNA profiling technologies, miRNA-Seq has both advantages (sequence-independence, coverage) and disadvantages (high cost, infrastructure requirements, run length, and potential artifacts).
Introduction
MicroRNAs (miRNAs) are a family of small ribonucleic acids, 21-25 nucleotides in length, that modulate protein expression through transcript degradation, inhibition of translation, or sequestering transcripts. The first miRNA to be discovered, lin-4, was found in a genetic mutagenesis screen to identify molecular elements controlling post-embryonic development of the nematode Caenorhabditis elegans. The lin-4 gene encoded a 22 nucleotide RNA with conserved complementary binding sites in the 3’-untranslated region of the lin-14 mRNA transcript and downregulated LIN-14 protein expression. miRNAs are now thought to be involved in the regulation of many developmental and biological processes, including haematopoiesis (miR-181 in Mus musculus), lipid metabolism (miR-14 in Drosophila melanogaster) and neuronal development (lsy-6 in Caenorhabditis elegans). These discoveries necessitated development of techniques able to identify and characterize miRNAs, such as miRNA-seq.
History
MicroRNA sequencing (miRNA-seq) was developed to take advantage of next-generation sequencing or massively parallel high-throughput sequencing technologies in order to find novel miRNAs and their expression profiles in a given sample. miRNA sequencing in and of itself is not a new idea; initial methods utilized Sanger sequencing. Sequencing preparation involved creating libraries by cloning DNA reverse transcribed from endogenous small RNAs of 21–25 bp, size-selected by column and gel electrophoresis. However, this method is costly in terms of time and resources, as each clone has to be individually amplified and prepared for sequencing. It also inadvertently favors miRNAs that are highly expressed. Next-generation sequencing eliminates the need for the sequence-specific hybridization probes required in DNA microarray analysis as well as the laborious cloning required by the Sanger sequencing method. Additionally, next-generation sequencing platforms used in miRNA-seq facilitate the sequencing of large pools of small RNAs in a single sequencing run.
miRNA-seq can be performed using a variety of sequencing platforms. The first analysis of small RNAs using miRNA-seq methods examined approximately 1.4 million small RNAs from the model plant Arabidopsis thaliana using Lynx Therapeutics' Massively Parallel Signature Sequencing (MPSS) platform. This study demonstrated the potential of novel, high-throughput sequencing technologies for the study of small RNAs, and it showed that genomes generate large numbers of small RNAs, with plants as particularly rich sources. Later studies used other sequencing technologies, such as a study in C. elegans which identified 18 novel miRNA genes as well as a new class of nematode small RNAs termed 21U-RNAs. Another study, comparing small RNA profiles of human cervical tumours and normal tissue, utilized the Illumina Genome Analyzer to identify 64 novel human miRNA genes as well as 67 differentially expressed miRNAs. The Applied Biosystems SOLiD sequencing platform has also been used to examine the prognostic value of miRNAs in detecting human breast cancer.
Methods
Small RNA Preparation
Sequence library construction can be performed using a variety of different kits depending on the high-throughput sequencing platform being employed. However, there are several common steps for small RNA sequencing preparation.
Total RNA Isolation
In a given sample, all the RNA is extracted and isolated using a guanidinium isothiocyanate/phenol/chloroform (GITC/phenol) method or a commercial reagent such as Trizol (Invitrogen). A starting quantity of 50–100 μg of total RNA (1 g of tissue typically yields 1 mg of total RNA) is usually required for gel purification and size selection. Quality control of the RNA is also performed, for example by running an RNA chip on the Caliper LabChip GX (Caliper Life Sciences).
Size Fractionation of small RNAs by Gel Electrophoresis
Isolated RNA is run on a denaturing polyacrylamide gel. An imaging method, such as radioactive 5’-32P-labeled oligonucleotides run alongside a size ladder, is used to identify a section of the gel containing RNA of the appropriate size, reducing the amount of material ultimately sequenced. This step does not necessarily have to be carried out before the ligation and reverse transcription steps outlined below.
Ligation
The ligation step adds DNA adaptors to both ends of the small RNAs, which act as primer-binding sites during reverse transcription and PCR amplification. An adenylated single-stranded DNA 3’ adaptor, followed by a 5’ adaptor, is ligated to the small RNAs using a ligating enzyme such as T4 RNA ligase 2. The adaptors are designed to capture small RNAs with a 5’ phosphate group, characteristic of microRNAs, rather than RNA degradation products with a 5’ hydroxyl group.
Reverse Transcription and PCR Amplification
This step converts the small adaptor ligated RNAs into cDNA clones used in the sequencing reaction. There are many commercial kits available that will carry out this step using some form of reverse transcriptase. PCR is then carried out to amplify the pool of cDNA sequences. Primers designed with unique nucleotide tags can also be used in this step to create ID tags in pooled library multiplex sequencing.
Sequencing
The actual RNA sequencing varies significantly depending on the platform used. Three common next-generation sequencing approaches are pyrosequencing on the 454 Life Sciences platform, polymerase-based sequencing-by-synthesis on the Illumina platform, and sequencing by ligation on the ABI SOLiD platform.
Data Analysis
Central to miRNA-seq data analysis is the ability to 1) obtain miRNA abundance levels from sequence reads, 2) discover novel miRNAs, 3) determine differentially expressed miRNAs, and 4) identify their associated mRNA gene targets.
miRNA Alignment & Abundance Quantification
miRNAs may be preferentially expressed in certain cell types, tissues, stages of development, or in particular disease states such as cancer. Since deep sequencing (miRNA-seq) generates millions of reads from a given sample, it allows miRNAs to be profiled, whether by quantifying their absolute abundance or by discovering their variants (known as isomiRs). Note that because sequence reads are on average longer than the average miRNA (17–25 nt), the 3’ and 5’ ends of a miRNA should be found on the same read.
There are several miRNA abundance quantification algorithms. Their general steps are as follows (a minimal sketch follows the list):
After sequencing, the raw sequence reads are filtered based on quality. The adaptor sequences are also trimmed off the raw sequence reads.
The resulting reads are then formatted into a FASTA file in which the copy number and sequence are recorded for each unique tag.
Sequences that may represent E. coli contamination are identified by a BLAST search against an E. coli database and are removed from the analysis.
Each of the remaining sequences is aligned against a miRNA sequence database (such as miRBase). To account for imperfect DICER processing, a 6 nt overhang on the 3’ end and a 3 nt overhang on the 5’ end are allowed.
The reads that do not align to the miRNA database are then loosely aligned to miRNA precursors to detect miRNAs that might carry mutations or those that have gone through RNA editing.
The read counts for each miRNA are then normalized to the total number of mapped miRNAs to report the abundance of each miRNA.
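As promised above, here is a compact sketch of the counting and normalization steps, using an invented two-entry reference in place of a real database such as miRBase and simple exact/prefix matching in place of a full aligner.

```python
# A compact sketch of the counting and normalization steps above
# (adaptor-trimmed reads in, reads-per-million out). Sequences illustrative.

from collections import Counter

reference = {  # mature miRNA name -> sequence (illustrative stand-in for miRBase)
    "miR-A": "TGAGGTAGTAGGTTGTATAGTT",
    "miR-B": "TAGCTTATCAGACTGATGTTGA",
}

def quantify(trimmed_reads: list[str]) -> dict[str, float]:
    # Collapse identical reads into unique tags with copy numbers.
    tags = Counter(trimmed_reads)
    counts = Counter()
    for tag, copies in tags.items():
        for name, seq in reference.items():
            # Simplified exact/prefix match; real pipelines allow 3' and 5'
            # overhangs and small mismatches for imperfect DICER processing.
            if tag == seq or seq.startswith(tag):
                counts[name] += copies
                break
    total = sum(counts.values())
    # Normalize each miRNA to reads per million mapped miRNA reads.
    return {name: 1e6 * c / total for name, c in counts.items()}

reads = ["TGAGGTAGTAGGTTGTATAGTT"] * 7 + ["TAGCTTATCAGACTGATGTTGA"] * 3
print(quantify(reads))  # {'miR-A': 700000.0, 'miR-B': 300000.0}
```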
Novel miRNA Discovery
Another advantage of miRNA-seq is that it allows the discovery of novel miRNAs that may have eluded traditional screening and profiling methods. There are several novel miRNA discovery algorithms. Their general steps are as follows:
Obtain reads that did not align to known miRNA sequences, and map them to the genome.
RNA Folding Method
For the miRNA sequences where an exact match is found, obtain the genomic sequence including ~100 bp of flanking sequence on either side, and run the RNA through RNA folding software such as the Vienna package.
Folded sequences that lie on one arm of a miRNA hairpin and have a minimum free energy of less than about −25 kcal/mol are shortlisted as putative miRNAs (see the sketch after this list).
The shortlisted sequences are trimmed down to include only the possible precursor sequence and are then refolded to ensure that the precursor was not artificially stabilized by neighbouring sequences.
The resulting folded sequences are considered novel miRNAs if the miRNA sequence falls within one arm of the hairpin and is highly conserved between species.
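A sketch of the hairpin filter follows, assuming the ViennaRNA package's Python bindings (module `RNA`, the "Vienna package" cited above) are installed; the candidate sequence and cutoff handling are illustrative only.

```python
# A sketch of the minimum-free-energy filter in the RNA-folding method:
# fold a candidate precursor and keep it only if the predicted fold is
# stable enough to be a plausible miRNA hairpin.

import RNA  # ViennaRNA Python bindings

MFE_CUTOFF = -25.0  # kcal/mol; folds less stable than this are discarded

def passes_hairpin_filter(candidate: str) -> bool:
    """Fold a candidate precursor and apply the free-energy cutoff."""
    structure, mfe = RNA.fold(candidate)
    print(f"{structure}  ({mfe:.1f} kcal/mol)")
    return mfe < MFE_CUTOFF

# Hypothetical candidate precursor (a let-7-like stem-loop with flanks)
candidate = ("UGGGAUGAGGUAGUAGGUUGUAUAGUUUUAGGGUCACACCCACCACUGGGAGAUAACUAUAC"
             "AAUCUACUGUCUUUCCUA")
print("stable hairpin" if passes_hairpin_filter(candidate) else "rejected")
```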
Star Strand Expression Method (miRdeep)
Novel miRNA sequences are identified based on the characteristic expression pattern that they display due to DICER processing: higher expression of the mature miRNA over the star strand and loop sequences.
Differential Expression Analysis
After the abundances of miRNAs are quantified for each sample, their expression levels can be compared between samples. One can then identify miRNAs that are preferentially expressed at particular time points, or in particular tissues or disease states. After normalizing for the number of mapped reads between samples, a host of statistical tests (like those used in gene expression profiling) can be used to determine differential expression.
Target Prediction
Identifying a miRNA's mRNA targets provides an understanding of the genes, or networks of genes, whose expression it regulates. Public databases provide predictions of miRNA targets. To better distinguish true positive predictions from false positives, miRNA-seq data can be integrated with mRNA-seq data to look for functional miRNA:mRNA pairs. RNA22, TargetScan, miRanda, and PicTar are software designed for this purpose.
The general steps are:
Determine miRNA:mRNA binding pairs by identifying complementarity between the miRNA sequence and the 3’-UTR of the mRNA (a seed-match sketch follows the list).
Determine the degree of conservation of miRNA:mRNA binding pairs across species. Typically, more highly conserved binding pairs are less likely to be false positive predictions.
Observe for evidence of miRNA targeting in mRNA-seq or protein expression data: where the miRNA expression is high, the gene and protein expression of its target gene should be low.
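The complementarity step is often reduced to searching the 3'-UTR for the reverse complement of the miRNA "seed" (nucleotides 2-8). A minimal sketch; the UTR fragment is a made-up example (the miR-21-5p sequence is the annotated mature sequence):

```python
# Find seed-match sites: positions in a 3'-UTR complementary to a miRNA seed.

def seed_match_sites(mirna, utr3):
    """Return 0-based UTR positions matching the reverse complement of nt 2-8."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:8]                                    # nucleotides 2-8
    target = "".join(comp[nt] for nt in reversed(seed))  # reverse complement
    return [i for i in range(len(utr3) - len(target) + 1)
            if utr3[i:i + len(target)] == target]

mirna = "UAGCUUAUCAGACUGAUGUUGA"             # hsa-miR-21-5p mature sequence
utr3 = "AAUAAGCUAGCAUAAGCUAACUGCUUUAAGCUAU"  # hypothetical 3'-UTR fragment
print(seed_match_sites(mirna, utr3))         # -> [1, 11]
```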
Target Validation for Cleaved mRNA Targets
Many miRNAs function to direct cleavage of their mRNA targets; this is particularly true in plants, and thus high-throughput sequencing methods have been developed to take advantage of this property of miRNAs by sequencing the uncapped 3' ends of cleaved or degraded mRNAs. These methods are known as Degradome sequencing or PARE. Validation of target cleavage in specific mRNAs is typically performed using a modified version of 5' Rapid Amplification of cDNA Ends with a gene-specific primer.
Applications
Identification of Novel miRNAs
miRNA-seq has revealed novel miRNAs that had previously eluded traditional miRNA profiling methods. Examples of such findings are in embryonic stem cells, chicken embryos, acute lymphoblastic leukaemia, diffuse large B-cell lymphoma and B-cells, acute myeloid leukemia, and lung cancer.
Disease biomarkers
MicroRNAs are important regulators of almost all cellular processes such as survival, proliferation, and differentiation. Consequently, it is not unexpected that miRNAs are involved in various aspects of cancer through the regulation of onco- and tumor suppressor gene expression. In combination with the development of high-throughput profiling methods, miRNAs have been identified as biomarkers for cancer classification, response to therapy, and prognosis. Additionally, because miRNAs regulate gene expression, they can also reveal perturbations in important regulatory networks that may be driving a particular disorder. Several applications of miRNAs as biomarkers and predictors of disease are given below.
Comparison With Other Methods of miRNA Profiling
The disadvantages of using miRNA-seq over other methods of miRNA profiling are that it is more expensive, generally requires a larger amount of total RNA, involves extensive amplification, and is more time-consuming than microarray and qPCR methods. In addition, miRNA-seq library preparation methods appear to represent the miRNA complement with systematic biases, which prevents accurate determination of absolute miRNA abundance. At the same time, the approach is hybridization-independent and therefore does not require a priori sequence information. Because of this, one can obtain sequences of novel miRNAs and miRNA isoforms (isomiRs), distinguish sequentially similar miRNAs, and identify point mutations.
References
DNA sequencing | MicroRNA sequencing | [
"Chemistry",
"Biology"
] | 2,875 | [
"Molecular biology techniques",
"DNA sequencing"
] |
37,430,358 | https://en.wikipedia.org/wiki/Light%20sheet%20fluorescence%20microscopy | Light sheet fluorescence microscopy (LSFM) is a fluorescence microscopy technique with an intermediate-to-high optical resolution, but good optical sectioning capabilities and high speed. In contrast to epifluorescence microscopy only a thin slice (usually a few hundred nanometers to a few micrometers) of the sample is illuminated perpendicularly to the direction of observation. For illumination, a laser light-sheet is used, i.e. a laser beam which is focused only in one direction (e.g. using a cylindrical lens). A second method uses a circular beam scanned in one direction to create the lightsheet. As only the actually observed section is illuminated, this method reduces the photodamage and stress induced on a living sample. Also the good optical sectioning capability reduces the background signal and thus creates images with higher contrast, comparable to confocal microscopy. Because light sheet fluorescence microscopy scans samples by using a plane of light instead of a point (as in confocal microscopy), it can acquire images at speeds 100 to 1,000 times faster than those offered by point-scanning methods.
This method is used in cell biology and for microscopy of intact, often chemically cleared, organs, embryos, and organisms.
Starting in 1994, light sheet fluorescence microscopy was developed as orthogonal plane fluorescence optical sectioning microscopy or tomography (OPFOS), mainly for large samples, and later as selective/single plane illumination microscopy (SPIM), also with sub-cellular resolution. This introduced an illumination scheme into fluorescence microscopy which had already been used successfully for dark field microscopy under the name ultramicroscopy.
Setup
Basic setup
In this type of microscopy, the illumination is done perpendicularly to the direction of observation. The expanded beam of a laser is focused in only one direction by a cylindrical lens, or by a combination of a cylindrical lens and a microscope objective, as the latter is available in better optical quality and with higher numerical aperture than the former. This way a thin sheet of light or lightsheet is created in the focal region that can be used to excite fluorescence only in a thin slice (usually a few micrometers thin) of the sample.
The fluorescence light emitted from the lightsheet is then collected perpendicularly with a standard microscope objective and projected onto an imaging sensor (usually a CCD, electron-multiplying CCD or CMOS camera). To leave enough space for the excitation optics/lightsheet, an observation objective with a long working distance is used. In most light sheet fluorescence microscopes the detection objective, and sometimes also the excitation objective, is fully immersed in the sample buffer, so usually the sample and excitation/detection optics are embedded in a buffer-filled sample chamber, which can also be used to control environmental conditions (temperature, carbon dioxide level, etc.) during the measurement. Sample mounting in light sheet fluorescence microscopy is described below in more detail.
As both the excitation lightsheet and the focal plane of the detection optics have to coincide to form an image, focusing different parts of the sample can not be done by translating the detection objective, but usually the whole sample is translated and rotated instead.
Extensions of the basic idea
In recent years, several extensions to this scheme have been developed:
The use of two counter-propagating lightsheets helps to reduce typical selective plane illumination microscopy artifacts, like shadowing.
In addition to counter-propagating lightsheets a setup with detection from two opposing sides has been proposed in 2012. This allows measurement of z- and rotation-stacks for a full 3D reconstruction of the sample more rapidly.
The lightsheet can also be created by scanning a normal laser focus up and down. This also allows use of self-reconstructing beams (such as bessel beams or Airy beams) for the illumination which improve the penetration of the lightsheet into thick samples, as the negative effect of scattering on the lightsheet is reduced. These self-reconstructing beams can be modified to counteract intensity losses using attenuation-compensation techniques, further increasing the signal collected from within thick samples.
In oblique plane microscopy (OPM) the detection objective is used to also create the lightsheet: the lightsheet is now emitted from this objective under an angle of about 60°. Additional optics are used to tilt the focal plane used for detection by the same angle.
Light sheet fluorescence microscopy has also been combined with two-photon (2P) excitation, which improves the penetration into thick and scattering samples. 2P excitation at near-infrared wavelengths has been used to replace 1P excitation at blue-visible wavelengths in brain imaging experiments involving responses to visual stimuli.
Selective plane illumination microscopy can also be combined with techniques such as fluorescence correlation spectroscopy, to allow spatially resolved mobility measurements of fluorescing particles (e.g. fluorescent beads, quantum dots or fluorescently labeled proteins) inside living biological samples.
Also a combination of a selective plane illumination microscope with a gated image intensifier camera has been reported that allowed measuring a map of fluorescence lifetimes (fluorescence lifetime imaging, FLIM).
Light sheet fluorescence microscopy was combined with super resolution microscopy techniques to improve its resolution beyond the Abbe limit. Also a combination of stimulated emission depletion microscopy (STED) and selective plane illumination microscopy has been published, that leads to a reduced lightsheet thickness due to the stimulated emission depletion microscopy effect. See also the section on the power of resolution of light sheet fluorescence microscopy below.
Light sheet fluorescence microscopy was modified to be compatible with all objectives, even coverslip-based, oil-immersion objectives with high numerical aperture to increase native spatial resolution and fluorescence detection efficiency. This technique involves tilting the light sheet relative to the detection objective at a precise angle to allow the light sheet to form on the surface of glass coverslips.
Light sheet fluorescence microscopy was combined with adaptive optics techniques in 2012 to improve imaging depth in thick and inhomogeneous samples, down to a depth of 350 µm. A Shack–Hartmann wavefront sensor was positioned in the detection path, and guide stars were used in a closed feedback loop. In his thesis, the author discusses the advantage of having adaptive optics in both the illumination and detection paths of the light sheet fluorescence microscope to correct aberrations induced by the sample.
Sample mounting
The separation of the illumination and detection beampaths in light sheet fluorescence microscopy (except in oblique plane microscopy) creates a need for specialized sample mounting methods. To date most light sheet fluorescence microscopes are built in such a way that the illumination and detection beampath lie in a horizontal plane (see illustrations above), thus the sample is usually hanging from the top into the sample chamber or is resting on a vertical support inside the sample chamber. Several methods have been developed to mount all sorts of samples:
Fixed (and potentially also cleared) samples can be glued to a simple support or holder and can stay in their fixing solution during imaging.
Larger living organisms are usually sedated and mounted in a soft gel cylinder that is extruded from a (glass or plastic) capillary hanging from above into the sample chamber.
Adherent cells can be grown on small glass plates that are hanging in the sample chamber.
Plants can be grown in clear gels containing a growth medium. The gels are cut away at the position of imaging, so they do not reduce the lightsheet and image quality by scattering and absorption.
Liquid samples (e.g. for fluorescence correlation spectroscopy) can be mounted in small bags made of thin plastic foil matching the refractive index of the surrounding immersion medium in the sample chamber.
Some light sheet fluorescence microscopes have been developed where the sample is mounted as in standard microscopy (e.g. cells grow horizontally on the bottom of a petri dish) and the excitation and detection optics are constructed in an upright plane from above. This also allows combining a light sheet fluorescence microscope with a standard inverted microscope and avoids the requirement for specialized sample mounting procedures.
Image properties
Typical imaging modes
Most light sheet fluorescence microscopes are used to produce 3D images of the sample by moving the sample through the image plane. If the sample is larger than the field of view of the image sensor, the sample also has to be shifted laterally. An alternative approach is to move the image plane through the sample to create the image stack.
Long experiments can be carried out, for example with stacks recorded every 10 sec–10 min over the timespan of days. This allows study of changes over time in 3D, or so-called 4D microscopy.
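The data volumes in such 4D experiments grow quickly; a back-of-the-envelope estimate (every parameter below is an illustrative assumption, not a specification of any particular instrument):

```python
# Rough size estimate for a long-term light sheet ("4D") experiment.
width, height = 2048, 2048      # pixels per image plane
bytes_per_pixel = 2             # 16-bit camera
planes_per_stack = 200          # z-planes per 3D stack
stacks_per_hour = 360           # one stack every 10 s
hours = 48                      # two days of imaging

total_bytes = (width * height * bytes_per_pixel
               * planes_per_stack * stacks_per_hour * hours)
print(f"{total_bytes / 1e12:.1f} TB")  # ~29.0 TB of raw data
```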
After the image acquisition the different image stacks are registered to form one single 3D dataset. Multiple views of the sample can be collected, either by interchanging the roles of the objectives or by rotating the sample. Having multiple views can yield more information than a single stack; for example, occlusion of some parts of the sample may be overcome. Multiple views can also improve 3D image resolution by overcoming the poor axial resolution described below.
Some studies also use a selective plane illumination microscope to image only one slice of the sample, but at much higher temporal resolution. This allows, for example, real-time observation of the beating heart of a zebrafish embryo. Together with fast translation stages for the sample, high-speed 3D particle tracking has been implemented.
Power of resolution
The lateral resolution of a selective plane illumination microscope is comparable to that of a standard (epi) fluorescence microscope, as it is determined fully by the detection objective and the wavelength of the detected light (see Abbe limit). E.g. for detection in the green spectral region around 525 nm, a resolution of 250–500 nm can be reached. The axial resolution is worse than the lateral (about a factor of 4), but it can be improved by using a thinner lightsheet in which case nearly isotropic resolution is possible. Thinner light sheets are either thin only in a small region (for Gaussian beams) or else specialized beam profiles such as Bessel beams must be used (besides added complexity, such schemes add side lobes which can be detrimental). Alternatively, isotropic resolution can be achieved by computationally combining 3D image stacks taken from the same sample under different angles. Then the depth-resolution information lacking in one stack is supplied from another stack; for example with two orthogonal stacks the (poor-resolution) axial direction in one stack is a (high-resolution) lateral direction in the other stack.
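For illustration, the numbers above follow from the standard diffraction relations; a short worked sketch with assumed values (NA = 1.0 detection, 488 nm excitation, 2 µm sheet waist):

```latex
% Lateral resolution (Abbe limit) for detection around \lambda = 525\,\mathrm{nm}:
d_{xy} = \frac{\lambda}{2\,\mathrm{NA}}
       = \frac{525\,\mathrm{nm}}{2 \times 1.0} \approx 263\,\mathrm{nm}

% Gaussian lightsheet: a sheet of waist w_0 stays thin only over roughly
% twice the Rayleigh range, which sets the usable field of view:
z_R = \frac{\pi w_0^2}{\lambda_{\mathrm{exc}}}
    = \frac{\pi (2\,\mu\mathrm{m})^2}{488\,\mathrm{nm}} \approx 26\,\mu\mathrm{m}
```

A thinner sheet (smaller w_0) improves axial sectioning but shrinks z_R quadratically, which is the trade-off behind the Bessel-beam and multi-view approaches mentioned above.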
The lateral resolution of light sheet fluorescence microscopy can be improved beyond the Abbe limit by using super resolution microscopy techniques, e.g. by exploiting the fact that single fluorophores can be located with much higher spatial precision than the nominal resolution of the optical system used (see stochastic localization microscopy techniques). In structured-illumination light sheet microscopy, structured illumination techniques have been applied to further improve the optical sectioning capacity of light sheet fluorescence microscopy.
Stripe artifacts
As the illumination typically penetrates the sample from one side, obstacles lying in the way of the lightsheet can disturb its quality by scattering and/or absorbing the light. This typically leads to dark and bright stripes in the images. If parts of the samples have a significantly higher refractive index (e.g. lipid vesicles in cells), they can also lead to a focussing effect resulting in bright stripes behind these structures. To overcome this artifact, the lightsheet can be "pivoted": the lightsheet's direction of incidence is changed rapidly (at an ~1 kHz rate) by a few degrees (~10°), so light also reaches the regions behind the obstacles. Illumination can also be performed with two (pivoted) lightsheets (see above) to further reduce these artifacts.
Alternatively, the Variational Stationary Noise Remover (VSNR) algorithm has been developed and is available as a free Fiji plugin.
History
At the beginning of the 20th century, R. A. Zsigmondy introduced the ultramicroscope as a new illumination scheme into dark-field microscopy. Here sunlight or a white lamp is used to illuminate a precision slit. The slit is then imaged by a condenser lens into the sample to form a lightsheet. Scattering (sub-diffractive) particles can be observed perpendicularly with a microscope. This setup allowed the observation of particles with sizes smaller than the microscope's resolution and led to a Nobel prize for Zsigmondy in 1925.
The first application of this illumination scheme for fluorescence microscopy was published in 1993 by Voie et al. under the name orthogonal-plane fluorescence optical sectioning (OPFOS), for imaging the internal structure of the cochlea. The resolution at that time was limited to 10 µm laterally and 26 µm longitudinally, at a sample size in the millimeter range. The orthogonal-plane fluorescence optical sectioning microscope used a simple cylindrical lens for illumination. Further development and improvement of the selective plane illumination microscope started in 2004. After this publication by Huisken et al. the technique found wide application and is still being adapted to new measurement situations today (see above). Since 2010 a first ultramicroscope with fluorescence excitation and limited resolution, and since 2012 a first selective plane illumination microscope, have been available commercially. During 2012 open source projects also began to appear that freely publish complete construction plans for light sheet fluorescence microscopes and the required software suites.
Applications
Selective plane illumination microscopy/light sheet fluorescence microscopy is often used in developmental biology, where it enables long-time (several days) observations of embryonic development (even with full lineage tree reconstruction). Selective plane illumination microscopy can also be combined with techniques like fluorescence correlation spectroscopy to allow spatially resolved mobility measurements of fluorescing particles (e.g. fluorescent beads, quantum dots or fluorescent proteins) inside living biological samples.
Strongly scattering biological tissue such as brain or kidney has to be chemically fixed and cleared before it can be imaged in a selective plane illumination microscope. Special tissue clearing techniques have been developed for this purpose, e.g. 3DISCO, CUBIC and CLARITY. Depending on the index of refraction of the cleared sample, matching immersion fluids and special long-distance objectives must be used during imaging.
References
Further reading
Review:
Review of different light sheet fluorescence microscopy modalities and results in developmental biology:
Review of light sheet fluorescence microscopy for imaging anatomic structures:
Editorial:
External links
The linked video shows the development of a fruit fly embryo, recorded over 20 hours. Two projections of the full 3D dataset are shown.
The mesoSPIM Initiative. Open-source light-sheet microscopes for imaging cleared tissue.
A practical guide to adaptive light-sheet microscopy
Fluorescence techniques
Cell imaging
Laboratory equipment
Optical microscopy techniques
Articles containing video clips | Light sheet fluorescence microscopy | [
"Chemistry",
"Biology"
] | 3,109 | [
"Cell imaging",
"Fluorescence techniques",
"Microscopy"
] |
37,431,028 | https://en.wikipedia.org/wiki/Oxygen%20diffusion-enhancing%20compound | An oxygen diffusion-enhancing compound is any substance that increases the availability of oxygen in body tissues by influencing the molecular structure of water in blood plasma and thereby promoting the movement (diffusion) of oxygen through plasma. Oxygen diffusion-enhancing compounds have shown promise in the treatment of conditions associated with hypoxia (a lack of oxygen in tissues) and ischemia (a lack of oxygen in the circulating blood supply). Such conditions include hemorrhagic shock, myocardial infarction (heart attack), and stroke.
Types
One of the first substances reported to produce an oxygen diffusion-enhancing effect was crocetin, a carotenoid that occurs naturally in plants such as the saffron crocus (Crocus sativus). Saffron has been used culturally (e.g., as a dye) and medicinally since ancient times.
Trans sodium crocetinate (TSC), a synthetic drug containing the carotenoid structure of trans crocetin has been extensively investigated in animal disease models and in human clinical trials. Clinical trials of TSC have focused on testing the compound's effectiveness in sensitizing hypoxic cancer cells to radiation therapy in patients with glioblastoma, an aggressive form of brain cancer.
TSC, which is being developed by Diffusion Pharmaceuticals, has been shown to enhance the oxygenation of hypoxic tumor tissue and belongs to a subclass of oxygen diffusion-enhancing compounds known as bipolar trans carotenoid salts. Diffusion Pharmaceuticals is currently investigating the use of trans sodium crocetinate in the treatment of COVID-19, acute stroke, and solid cancerous tumors.
Mechanism of action
Oxygen diffusion-enhancing compounds are thought to act by exerting hydrophobic forces that interact with water molecules. These interactions result in greater hydrogen bonding among water molecules, which constitute the majority of the blood plasma medium. As hydrogen bonding increases, the overall molecular structure of water in the plasma becomes more lattice-like, a phenomenon known as structure building. Structure building reduces resistance to the movement of oxygen through plasma via diffusion. Since blood plasma offers the major barrier for oxygen to move from the red blood cells and into the tissues, the more structured character of water imparted by the oxygen diffusion-enhancing compound will enhance movement into tissues.
Computer simulations have shown that TSC specifically can increase the transport of oxygen through water by as much as 30 percent.
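The proposed effect can be framed with Fick's first law of diffusion; in the sketch below, the ~30 percent figure from the simulations is used as an assumed change in effective diffusivity:

```latex
% Fick's first law for oxygen flux through plasma:
J = -D\,\frac{\partial C}{\partial x}
% J: oxygen flux, D: diffusion coefficient of O2 in plasma,
% C: dissolved O2 concentration, x: distance across the plasma layer.

% If structure building raises the effective diffusivity to D' = 1.3\,D,
% the flux for the same concentration gradient rises by the same factor:
J' = -1.3\,D\,\frac{\partial C}{\partial x} = 1.3\,J
```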
References
Oxygen
Diffusion
Chemical compounds
Biochemistry
Drugs by mechanism of action | Oxygen diffusion-enhancing compound | [
"Physics",
"Chemistry",
"Biology"
] | 518 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion",
"Molecules",
"Chemical compounds",
"nan",
"Biochemistry",
"Matter"
] |
37,434,162 | https://en.wikipedia.org/wiki/Born%20reciprocity | In physics, Born reciprocity, also called reciprocal relativity or Born–Green reciprocity, is a principle set up by theoretical physicist Max Born that calls for a duality-symmetry among space and momentum. Born and his co-workers expanded his principle to a framework that is also known as reciprocity theory.
Born noticed a symmetry among configuration space and momentum space representations of a free particle, in that its wave function description is invariant to a change of variables x → p and p → −x. (It can also be worded such as to include scale factors, e.g. invariance to x → ap and p → −bx where a, b are constants.) Born hypothesized that such symmetry should apply to the four-vectors of special relativity, that is, to the four-vector space coordinates

\[ x^\mu = (ct, x, y, z) \]

and the four-vector momentum (four-momentum) coordinates

\[ p^\mu = (E/c, p_x, p_y, p_z) \]
Both in classical and in quantum mechanics, the Born reciprocity conjecture postulates that the transformation x → p and p → −x leaves invariant the Hamilton equations:

\[ \dot{x}_i = \frac{\partial H}{\partial p_i} \quad \text{and} \quad \dot{p}_i = -\frac{\partial H}{\partial x_i} \]
From his reciprocity approach, Max Born conjectured the invariance of a space-time-momentum-energy line element. Born and H. S. Green similarly introduced the notion of an invariant (quantum) metric operator as an extension of the Minkowski metric of special relativity to an invariant metric on phase space coordinates. The metric is invariant under the group of quaplectic transformations.
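One form in which the reciprocity-invariant line element is often written (conventions and signs vary between authors; b is a constant with the dimensions of force that makes the momentum term commensurate with the space-time term):

```latex
d\tau^2 = \eta_{\mu\nu}\,dx^\mu dx^\nu + \frac{1}{b^2}\,\eta_{\mu\nu}\,dp^\mu dp^\nu
% \eta_{\mu\nu}: Minkowski metric.
% Under the reciprocity substitution x -> p/b, p -> -b x the two terms
% exchange roles and the expression is unchanged.
```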
Such a reciprocity as called for by Born can be observed in much, but not all, of the formalism of classical and quantum physics. Born's reciprocity theory was not developed much further, owing to difficulties in its mathematical foundations.
However, Born's idea of a quantum metric operator was later taken up by Hideki Yukawa when developing his nonlocal quantum theory in the 1950s. In 1981, Eduardo R. Caianiello proposed a "maximal acceleration", analogous to the minimal length at the Planck scale, and this concept of maximal acceleration has been expanded upon by others. It has also been suggested that Born reciprocity may be the underlying physical reason for the T-duality symmetry in string theory, and that Born reciprocity may be of relevance to developing a quantum geometry.
Born chose the term "reciprocity" for the reason that in a crystal lattice, the motion of a particle can be described in p-space by means of the reciprocal lattice.
References
Further reading
Equations of physics
Duality theories
Max Born | Born reciprocity | [
"Physics",
"Mathematics"
] | 515 | [
"Mathematical structures",
"Equations of physics",
"Mathematical objects",
"Equations",
"Category theory",
"Duality theories",
"Geometry"
] |
37,435,596 | https://en.wikipedia.org/wiki/Metals%20in%20medicine | Metals in medicine are used in organic systems for diagnostic and treatment purposes. Inorganic elements are also essential for organic life as cofactors in enzymes called metalloproteins. When metals are under or over-abundant in the body, equilibrium must be returned to its natural state via interventional and natural methods.
Toxic metals
Metals can be toxic in high quantities. Either ingestion or faulty metabolic pathways can lead to metal toxicity (metal poisoning). Sources of toxic metals include cadmium from tobacco, arsenic from agriculture and mercury from volcanoes and forest fires. Nature, in the form of trees and plants, is able to trap many toxins and can bring abnormally high levels back into equilibrium. Toxic metal poisoning is usually treated with some type of chelating agent. Heavy metal poisoning, such as from mercury, cadmium, or lead, is particularly pernicious.
Examples of specific types of toxic metals include:
Copper: copper toxicity usually presents as a side effect of low levels of the protein ceruloplasmin, which is normally involved in copper storage; this is referred to as Wilson's disease. Wilson's disease is an autosomal recessive genetic disorder in which a mutation causes the ATPase that transports copper into bile, and ultimately incorporates it into ceruloplasmin, to malfunction.
Plutonium: ever since the nuclear age began, plutonium poisoning has been a potential danger, especially among nuclear reactor employees; inhalation of Pu dust is particularly dangerous due to its intense alpha particle emission. There have been very few cases of plutonium poisoning.
Mercury: mercury is usually ingested from agricultural sources or other environmental sources. Mercury poisoning can lead to neurological disease and kidney failure if left untreated.
Iron: iron toxicity, iron poisoning, or iron overload is well known. Iron tests only very weakly positive in the Ames test for carcinogenicity; however, since it is such a strong catalyst and essential for the production of ATP and consequently of DNA, any excess of soluble iron is toxic, especially over time. Excess iron deposited in tissues, or high levels in the bloodstream, has been linked to a wide range of human diseases, from Alzheimer's disease to malaria. In botany, iron is a severe problem for the irrigation of crops such as rice, maize, or wheat in Sub-Saharan Africa, where subterranean water can contain amounts of iron high enough to poison these crops.
Lead and cadmium: lead poisoning and cadmium poisoning can lead to gastrointestinal, kidney, and neurological dysfunction. The use of unleaded paints and gas has successfully decreased the number of cases of lead heavy metal poisoning.
Nickel, chromium, and cadmium: via metal-DNA interactions, these metals can be carcinogenic.
Nickel: allergies to nickel, particularly from skin to metal contact via jewelry, are common.
Zinc, cadmium, magnesium, chromium: metal fume fever can be caused by inhalation of the fumes of these metals and leads to flu-like symptoms.
Beryllium: The risk of beryllium poisoning is relevant to occupational safety and health for metalworkers or ore millers who work with alloys or ores that contain beryllium in greater than trace amounts. For alloys, this means those alloys in which beryllium is intentionally featured, and for ores, it means those pursued for beryllium or those with nonnegligible beryllium co-occurrence.
Biometals
Homeostasis
Fluid and electrolyte balance, in which fluid balance and electrolyte balance are intertwined homeostatically, is necessary to health in all organisms. It includes reference ranges for cation concentrations of biometals, which in reference to human medicine and veterinary medicine principally includes those for blood serum ion concentrations in humans and in livestock and pets. Derangements in such fluid and electrolyte balance most often occur in the contexts of dehydration, overexertion, and diarrhea, but they also occur in cancers (most especially in paraneoplastic syndromes), parasitism, inborn errors of metabolism, and several other contexts. Some medical specialties deal especially frequently with electrolyte derangements, including internal medicine and endocrinology (especially in chronic conditions) and intensive care medicine (in severe acute conditions).
Metal anemia
Humans need a certain amount of certain metals to function normally. Most of these metals are used as cofactors or prosthetic groups in enzymes, catalyzing specific reactions and serving essential roles. Essential metals for humans include sodium, potassium, magnesium, copper, vanadium, chromium, manganese, iron, cobalt, nickel, zinc, and molybdenum. Anemia symptoms can be caused by the lack of an essential metal, and can be associated with malnourishment or with faulty metabolic processes, usually caused by a genetic defect.
Examples of specific types of metal anemia include:
Iron: common simple anemia (iron deficiency) results in the loss of functional heme proteins (hemoglobin, myoglobin, etc.), which are responsible for oxygen transport or utilization. Pernicious anemia comes from a lack of vitamin B12 (cobalamin, a cobalt complex), which in turn interferes with the function of red blood cells.
Zinc: zinc anemia is mostly due to diet and can result in growth retardation.
Copper: copper anemia in infants results from a poor diet and can cause heart disease.
Metals in diagnosis
Metal complexes in nuclear imaging
Metal ions are often used for diagnostic medical imaging. Metal complexes can be used either for radioisotope imaging (from their emitted radiation) or as contrast agents, for example, in magnetic resonance imaging (MRI). Such imaging can be enhanced by manipulation of the ligands in a complex to create specificity so that the complex will be taken up by a certain cell or organ type.
Examples of metals used for diagnosis include:
Technetium. 99mTc is the most commonly used radioisotope agent for imaging purposes. It has a short half-life, emits only gamma ray photons, and does not emit beta or alpha particles (which are more damaging to surrounding cells), and thus is particularly suitable as an imaging radioisotope.
Gadolinium(III), Iron(III), Manganese(II): For MRI, paramagnetic metals are needed for contrast imaging. Gadolinium(III), iron(III), and manganese(II) are all paramagnetic metals able to alter tissue relaxation times and produce a contrast image.
Gallium-68 is useful as a positron source for Positron emission tomography.
Cobalt(III): 57Cobalt(III) is used with the compound bleomycin (BLM), an antibiotic that is selectively taken up by tumor cells. The use of cobalt results in the best blood-to-tumor distribution ratio, but its half-life is too long to be conducive to imaging. A proposed solution is to attach an EDTA moiety to the terminal thiazole ring of bleomycin, radiolabeled so that the entire complex can be traced. This system could locate tumors accurately, leading to earlier detection and more non-invasive procedures in the future.
Metal objects in MRI imaging
An important contraindication to MRI (magnetic resonance imaging) is having metal objects anywhere near, and most especially inside the field of, the MRI scanner. Not only does this entail that people with implanted metal plates, bone screws (internal fixation), or syndesmotic screws often cannot undergo MRI, it also entails that many everyday objects, including jewelry, belt buckles, wallets, purses, security guards' weapons, and so on, must be kept out of the MRI area.
Metals in treatment
Metals have been used in treatments since ancient times. The Ebers Papyrus from 1500 BC is the first written account of the use of metals for treatment; it describes the use of copper to reduce inflammation and of iron to treat anemia. Sodium vanadate has been used since the early 20th century to treat rheumatoid arthritis. Recently, metals have been used to treat cancer by specifically attacking cancer cells and interacting directly with DNA. The positive charge on most metal ions can interact with the negative charge of the phosphate backbone of DNA. Some metal-containing drugs interact directly with other metals already present in protein active sites, while others use metals to interact with the amino acids having the highest reduction potential.
Examples of metals used in treatment include:
Platinum: Platinum-based compounds have been shown to specifically affect head and neck tumors. These coordination complexes are thought to act by cross-linking DNA in tumor cells.
Gold: Gold salt complexes have been used to treat rheumatoid arthritis. The gold salts are believed to interact with albumin and eventually be taken up by immune cells, triggering anti-mitochondrial effects and eventually cell apoptosis. This is an indirect treatment of arthritis, mitigating the immune response.
Lithium: lithium carbonate (Li2CO3) is used in the prophylaxis of manic-depressive (bipolar) disorder.
Zinc: zinc can be used topically to heal wounds, and Zn2+ has been used to treat herpes virus infections.
Silver: Silver has been used to prevent infection at the burn site for burn wound patients.
Platinum, Titanium, Vanadium, Iron: cisplatin (cis-diamminedichloroplatinum), as well as titanium, vanadium, and iron compounds, have been shown to react with DNA specifically in tumor cells and are used to treat patients with cancer.
Gold, Silver, Copper: Phosphine ligand compounds containing gold, silver, and copper have anti-cancer properties.
Lanthanum: Lanthanum Carbonate often used under the trade-name Fosrenol is used as a phosphate binder in patients with chronic kidney disease.
Bismuth: Bismuth subsalicylate is used as an antacid.
Zirconium: Sodium zirconium cyclosilicate is a potassium binder used in people with chronic kidney disease.
Arsenic: Arsenic trioxide is a chemotherapeutic used to treat acute promyelocytic leukemia
See also
Titanium biocompatibility
Biometal
References
Biomaterials
Metals
Chemicals in medicine | Metals in medicine | [
"Physics",
"Chemistry",
"Biology"
] | 2,162 | [
"Biomaterials",
"Metals",
"Materials",
"Medicinal chemistry",
"Chemicals in medicine",
"Matter",
"Medical technology"
] |
23,251,812 | https://en.wikipedia.org/wiki/Affibody%20molecule | Affibody molecules are small, robust proteins engineered to bind to a large number of target proteins or peptides with high affinity, imitating monoclonal antibodies, and are therefore a member of the family of antibody mimetics. Affibody molecules are used in biochemical research and are being developed as potential new biopharmaceutical drugs. These molecules can be used for molecular recognition in diagnostic and therapeutic applications.
Development
As with other antibody mimetics, the idea behind developing the Affibody molecule was to apply a combinatorial protein engineering approach to a small and robust protein scaffold. The aim was to generate new binders capable of binding specifically to different target proteins with high affinity, while retaining the favorable folding and stability properties, and the ease of bacterial expression, of the parent molecule.
The original Affibody protein scaffold was designed based on the Z domain (the immunoglobulin G binding domain) of protein A. These molecules are a class of scaffold proteins derived from the randomization of 13 amino acids located in two alpha helices involved in the binding activity of the parent protein domain. Lately, amino acids outside the binding surface have also been substituted in the scaffold to create a surface entirely different from the ancestral protein A domain.
In contrast to antibodies, Affibody molecules are composed of alpha helices and lack disulfide bridges. The parent three-helix bundle structure is currently the fastest folding protein structure known. Specific Affibody molecules binding a desired target protein can be “fished out” from pools (libraries) containing billions of different variants, using phage display.
Production
Affibody molecules are based on a three-helix bundle domain, which can be expressed in soluble and proteolytically stable forms in various host cells on its own or via fusion with other protein partners.
They tolerate modification and fold independently when incorporated into fusion proteins. Head-to-tail fusions of Affibody molecules of the same specificity have proven to give avidity effects in target binding, and head-to-tail fusion of Affibody molecules of different specificities makes it possible to obtain bi- or multi-specific affinity proteins. Fusions with other proteins can also be created genetically or by spontaneous isopeptide bond formation. A site for site-specific conjugation can be provided by introducing a single cysteine at a desired position; the engineered protein can then be conjugated to radionuclides such as technetium-99m and indium-111 to visualize receptor-overexpressing tumors.
A number of different Affibody molecules have been produced by chemical synthesis. Since they do not contain cysteines or disulfide bridges, they fold spontaneously and reversibly into the correct three-dimensional structures when the protection groups are removed after synthesis. In some studies, temperatures above the melting temperature have been used, with retained binding properties following return to ambient conditions. Cross-linked variants have been produced as well.
Properties
An Affibody molecule consists of three alpha helices with 58 amino acids and has a molar mass of about 6 kDa. A monoclonal antibody, for comparison, is 150 kDa, and a single-domain antibody, the smallest type of antigen-binding antibody fragment, 12–15 kDa.
Affibody molecules have been shown to withstand high temperatures as well as acidic and alkaline conditions (pH 2.5 or pH 11, respectively).
Binders with affinities down to the sub-nanomolar range have been obtained from native library selections, and binders with picomolar affinity have been obtained following affinity maturation. Affibodies conjugated to weak electrophiles bind their targets covalently. The combination of small size, ease of engineering, and high affinity and specificity makes Affibody molecules a suitable alternative to monoclonal antibodies for both molecular imaging and therapeutic applications, especially for receptor-overexpressing tumors. Compared to antibodies and their fragments, these proteins are characterized by a high rate of extravasation and rapid clearance of non-bound tracer from the circulation as well as from other nonspecific compartments.
Applications
Affibody molecules can be used for protein purification, enzyme inhibition, research reagents for protein capture and detection, diagnostic imaging and targeted therapy. The second-generation Affibody molecule ABY-025 binds selectively to HER2 receptors with picomolar affinity and is in clinical development for tumor diagnosis. An anti-HER2 Affibody molecule fused with an albumin-binding domain (ABD), denoted ABY-027 and labeled with lutetium-177, reduced renal and hepatic uptake of radioactivity in mouse xenografts. Recently, the anti-EGFR Affibody molecule ZEGFR:2377 labeled with technetium-99m was also successfully used to visualize EGFR-expressing tumors in mouse xenografts.
References
External links
Affibody: Official homepage
Antibody mimetics | Affibody molecule | [
"Chemistry"
] | 1,043 | [
"Antibody mimetics",
"Molecular biology"
] |
23,254,182 | https://en.wikipedia.org/wiki/Deficit%20irrigation | Deficit irrigation (DI) is a watering strategy that can be applied by different types of irrigation application methods. The correct application of DI requires thorough understanding of the yield response to water (crop sensitivity to drought stress) and of the economic impact of reductions in harvest. In regions where water resources are restrictive it can be more profitable for a farmer to maximize crop water productivity instead of maximizing the harvest per unit land. The saved water can be used for other purposes or to irrigate extra units of land.
DI is sometimes referred to as incomplete supplemental irrigation or regulated DI.
Definition
Deficit irrigation (DI) has been reviewed and defined as follows:
Deficit irrigation is an optimization strategy in which irrigation is applied during drought-sensitive growth stages of a crop. Outside these periods, irrigation is limited or even unnecessary if rainfall provides a minimum supply of water. Water restriction is limited to drought-tolerant phenological stages, often the vegetative stages and the late ripening period. Total irrigation application is therefore not proportional to irrigation requirements throughout the crop cycle. While this inevitably results in plant drought stress and consequently in production loss, DI maximizes irrigation water productivity, which is the main limiting factor (English, 1990). In other words, DI aims at stabilizing yields and at obtaining maximum crop water productivity rather than maximum yields (Zhang and Oweis, 1999).
Crop water productivity
Crop water productivity (WP) or water use efficiency (WUE) expressed in kg/m³ is an efficiency term, expressing the amount of marketable product (e.g. kilograms of grain) in relation to the amount of input needed to produce that output (cubic meters of water). The water used for crop production is referred to as crop evapotranspiration. This is a combination of water lost by evaporation from the soil surface and transpiration by the plant, occurring simultaneously. Except by modeling, distinguishing between the two processes is difficult. Representative values of WUE for cereals at field level, expressed with evapotranspiration in the denominator, can vary between 0.10 and 4 kg/m3.
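A worked example of the definition, with illustrative figures for a cereal crop:

```latex
% Assume a grain yield of 6 t/ha and a seasonal evapotranspiration of 500 mm.
% 500 mm of water over 1 ha = 0.5\,\mathrm{m} \times 10\,000\,\mathrm{m^2}
%                           = 5000\,\mathrm{m^3}
WP = \frac{6000\ \mathrm{kg}}{5000\ \mathrm{m^3}} = 1.2\ \mathrm{kg/m^3}
% which falls within the 0.10--4 kg/m^3 range quoted above.
```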
Experiences with deficit irrigation
For certain crops, experiments confirm that deficit irrigation (DI) can increase water use efficiency without severe yield reductions. For example for winter wheat in Turkey, planned DI increased yields by 65% as compared to winter wheat under rainfed cultivation, and had double the water use efficiency as compared to rainfed and fully irrigated winter wheat. Similar positive results have been described for cotton. Experiments in Turkey and India indicated that the irrigation water use for cotton could be reduced to up to 60 percent of the total crop water requirement with limited yield losses. In this way, high water productivity and a better nutrient-water balance was obtained.
Certain underutilized and horticultural crops also respond favorably to DI, such as tested at experimental and farmer level for the crop quinoa. Yields could be stabilized at around 1.6 tons per hectare by supplementing irrigation water if rainwater was lacking during the plant establishment and reproductive stages. Applying irrigation water throughout the whole season (full irrigation) reduced the water productivity. Also in viticulture and fruit tree cultivation, DI is practiced.
Scientists affiliated with the Agricultural Research Service (ARS) of the USDA found that conserving water by forcing drought (deficit irrigation) on peanut plants early in the growing season causes early maturation of the plant while still maintaining a sufficient crop yield. Inducing drought through deficit irrigation earlier in the season caused the peanut plants to physiologically "learn" how to adapt to a stressful drought environment, making the plants better able to cope with drought that commonly occurs later in the growing season. Deficit irrigation is beneficial for farmers because it reduces the cost of water and, for certain crops, prevents a loss of crop yield later in the growing season due to drought. In addition to these findings, ARS scientists suggest that deficit irrigation accompanied by conservation tillage would greatly reduce the peanut crop's water requirement.
For other crops, the application of deficit irrigation will result in a lower water use efficiency and yield. This is the case when crops are sensitive to drought stress throughout the complete season, such as maize.
Apart from university research groups and farmers associations, international organizations such as FAO, ICARDA, IWMI and the CGIAR Challenge Program on Water and Food are studying DI.
Reasons for increased water productivity under deficit irrigation
If crops have certain phenological phases in which they are tolerant to water stress, DI can increase the ratio of yield over crop water consumption (evapotranspiration):
by reducing water loss through unproductive evaporation;
by increasing the proportion of marketable yield to the total biomass produced (harvest index);
by increasing the proportion of total biomass production to transpiration through hardening of the crop, although this effect is very limited owing to the conservative relation between biomass production and crop transpiration;
through adequate fertilizer application;
by avoiding bad agronomic conditions during crop growth, such as water logging in the root zone, pests and diseases.
Advantages
The correct application of deficit irrigation for a certain crop:
maximizes the productivity of water, generally with adequate harvest quality;
allows economic planning and stable income due to a stabilization of the harvest in comparison with rainfed cultivation;
decreases the risk of certain diseases linked to high humidity (e.g. fungi) in comparison with full irrigation;
reduces nutrient loss by leaching of the root zone, which results in better groundwater quality and lower fertilizer needs as for cultivation under full irrigation;
improves control over the sowing date and length of the growing period independent from the onset of the rainy season and therefore improves agricultural planning.
Constraints
A number of constraints apply to deficit irrigation:
Exact knowledge of the crop response to water stress is imperative.
There should be sufficient flexibility in access to water during periods of high demand (drought sensitive stages of a crop).
A minimum quantity of water should be guaranteed for the crop, below which DI has no significant beneficial effect.
When facing a below-maximum yield, an individual farmer should consider the benefit to the water users' community as a whole (extra land can be irrigated with the saved water);
Because irrigation is applied more efficiently, the risk for soil salinization is higher under DI as compared to full irrigation.
Modeling
Field experimentation is necessary for the correct application of DI for a particular crop in a particular region. In addition, simulation of the soil water balance and related crop growth (crop water productivity modeling) can be a valuable decision support tool. By conjunctively simulating the effects of different influencing factors (climate, soil, management, crop characteristics) on crop production, models make it possible to (1) better understand the mechanisms behind improved water use efficiency, (2) schedule the necessary irrigation applications during the drought-sensitive crop growth stages, considering the possible variability in climate, (3) test DI strategies for specific crops in new regions, and (4) investigate the effects of future climate scenarios or of altered management practices on crop production.
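As a minimal illustration of the kind of simulation meant here, a single-layer ("bucket") soil water balance is sketched below; the soil parameters, weather series, and linear stress rule are placeholder assumptions, far simpler than models such as AquaCrop:

```python
# Minimal daily soil-water "bucket" model (all parameters are illustrative).
FIELD_CAPACITY = 150.0  # mm of plant-available water the root zone can hold
STRESS_LEVEL = 50.0     # mm below which crop transpiration is reduced

def simulate(storage, days):
    """days: iterable of (rain_mm, irrigation_mm, potential_et_mm) tuples."""
    trace = []
    for rain, irrigation, et_pot in days:
        # Crude stress rule: actual ET drops linearly once the bucket runs low.
        et_act = et_pot * min(1.0, storage / STRESS_LEVEL)
        storage = storage + rain + irrigation - et_act
        storage = max(0.0, min(storage, FIELD_CAPACITY))  # excess drains away
        trace.append((storage, et_act))
    return trace

# One dry week with a single deficit-irrigation event on day 4:
weather = [(0, 0, 6), (0, 0, 6), (2, 0, 5), (0, 20, 6), (0, 0, 6), (0, 0, 5), (0, 0, 6)]
for day, (s, et) in enumerate(simulate(80.0, weather), start=1):
    print(f"day {day}: storage {s:5.1f} mm, actual ET {et:3.1f} mm")
```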
See also
Dryland farming
Irrigation
Irrigation in viticulture
Environmental impact of irrigation
Virtual water
Water crisis
Water footprint
References
External links
AquaCrop: the new crop water productivity model from FAO
The International Water Management Institute
The International Center for Agricultural Research in the Dry Areas
The Food and Agricultural Organization of the United Nations
CGIAR challenge program on Water and Food
European project on deficit irrigation
Agronomy
Hydrology
Biological engineering
Irrigation
Water and the environment
Water supply | Deficit irrigation | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 1,542 | [
"Hydrology",
"Water supply",
"Biological engineering",
"Environmental engineering"
] |
23,258,067 | https://en.wikipedia.org/wiki/Matroid%20intersection | In combinatorial optimization, the matroid intersection problem is to find a largest common independent set in two matroids over the same ground set. If the elements of the matroid are assigned real weights, the weighted matroid intersection problem is to find a common independent set with the maximum possible weight. These problems generalize many problems in combinatorial optimization including finding maximum matchings and maximum weight matchings in bipartite graphs and finding arborescences in directed graphs.
The matroid intersection theorem, due to Jack Edmonds, says that there is always a simple upper bound certificate, consisting of a partitioning of the ground set amongst the two matroids, whose value (sum of respective ranks) equals the size of a maximum common independent set. Based on this theorem, the matroid intersection problem for two matroids can be solved in polynomial time using matroid partitioning algorithms.
Examples
Let G = (U,V,E) be a bipartite graph. One may define a partition matroid MU on the ground set E, in which a set of edges is independent if no two of the edges have the same endpoint in U. Similarly one may define a matroid MV in which a set of edges is independent if no two of the edges have the same endpoint in V. Any set of edges that is independent in both MU and MV has the property that no two of its edges share an endpoint; that is, it is a matching. Thus, the largest common independent set of MU and MV is a maximum matching in G.
Similarly, if each edge has a weight, then the maximum-weight independent set of MU and MV is a Maximum weight matching in G.
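A small sketch of this construction (the graph is illustrative): independence in each partition matroid is just "no repeated endpoint on that side", so a set of edges that is independent in both oracles is exactly a matching.

```python
# Independence oracles for the two partition matroids M_U and M_V over the
# edge set of a bipartite graph; edges are (u, v) pairs. Example data only.

def independent_in_MU(edges):
    """M_U: no two edges share an endpoint in U."""
    endpoints = [u for u, _ in edges]
    return len(endpoints) == len(set(endpoints))

def independent_in_MV(edges):
    """M_V: no two edges share an endpoint in V."""
    endpoints = [v for _, v in edges]
    return len(endpoints) == len(set(endpoints))

def is_matching(edges):
    # Common independent set of M_U and M_V  <=>  matching in the graph.
    return independent_in_MU(edges) and independent_in_MV(edges)

E = [("u1", "v1"), ("u1", "v2"), ("u2", "v2"), ("u3", "v3")]
print(is_matching([E[0], E[2], E[3]]))  # True: a common independent set
print(is_matching([E[0], E[1]]))        # False: u1 repeated, dependent in M_U
```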
Algorithms
There are several polynomial-time algorithms for weighted matroid intersection, with different run-times. The run-times are given in terms of n, the number of elements in the common base-set; r, the maximum between the ranks of the two matroids; T, the number of operations required for a circuit-finding oracle; and k, the number of elements in the intersection (in case we want to find an intersection of a specific size k).
Edmonds' algorithm uses linear programming and polyhedra.
Lawler's algorithm.
Iri and Tomizawa's algorithm
Andras Frank's algorithm uses arithmetic operations.
Orlin and Vande-Vate's algorithm.
Cunningham's algorithm requires operations on general matroids, and operations on linear matroid, for two r-by-n matrices.
Brezovec, Cornuejos and Glover present two algorithms for weighted matroid intersection.
The first algorithm requires that all weights be integers, and finds an intersection of cardinality k.
The second algorithm runs in a different time bound.
Huang, Kakimura and Kamiyama show that the weighted matroid intersection problem can be solved by solving W instances of the unweighted matroid intersection problem, where W is the largest given weight, assuming that all given weights are integral. This algorithm is faster than previous algorithms when W is small. They also present an approximation algorithm that finds an ε-approximate solution by solving instances of the unweighted matroid intersection problem, where r is the smaller rank of the two input matroids.
Ghosh, Gurjar and Raj study the run-time complexity of matroid intersection in the parallel computing model.
Bérczi, Király, Yamaguchi and Yokoi present strongly polynomial-time algorithms for weighted matroid intersection using more restricted oracles.
Extensions
Maximizing weight subject to cardinality
In a variant of weighted matroid intersection, called "(Pk)", the goal is to find a common independent set with the maximum possible weight among all such sets with cardinality k, if such a set exists. This variant, too, can be solved in polynomial time.
Three matroids
The matroid intersection problem becomes NP-hard when three matroids are involved, instead of only two.
One proof of this hardness result uses a reduction from the Hamiltonian path problem in directed graphs. Given a directed graph G with n vertices, and specified nodes s and t, the Hamiltonian path problem is the problem of determining whether there exists a simple path of length n − 1 that starts at s and ends at t. It may be assumed without loss of generality that s has no incoming edges and t has no outgoing edges. Then, a Hamiltonian path exists if and only if there is a set of n − 1 elements in the intersection of three matroids on the edge set of the graph: two partition matroids ensuring that the in-degree and out-degree of the selected edge set are both at most one, and the graphic matroid of the undirected graph formed by forgetting the edge orientations in G, ensuring that the selected edge set has no cycles.
Matroid parity
Another computational problem on matroids, the matroid parity problem, was formulated by Lawler as a common generalization of matroid intersection and non-bipartite graph matching. However, although it can be solved in polynomial time for linear matroids, it is NP-hard for other matroids, and requires exponential time in the matroid oracle model.
Valuated matroids
A valuated matroid is a matroid equipped with a value function v on the set of its bases, with the following exchange property: for any two distinct bases A and B, if a ∈ A \ B, then there exists an element b ∈ B \ A such that both A − a + b and B − b + a are bases, and

\[ v(A) + v(B) \le v(A - a + b) + v(B - b + a). \]
Given a weighted bipartite graph G = (X+Y, E) and two valuated matroids, one on X with base set BX and valuation vX, and one on Y with bases BY and valuation vY, the valuated independent assignment problem is the problem of finding a matching M in G such that MX (the subset of X matched by M) is a base in BX, MY is a base in BY, and subject to this, the sum w(M) + vX(MX) + vY(MY) is maximized, where w(M) denotes the total weight of the matching. The weighted matroid intersection problem is the special case in which the matroid valuations are constant, so we only seek to maximize w(M) subject to MX being a base in BX and MY being a base in BY. Murota presents a polynomial-time algorithm for this problem.
See also
Matroid partitioning - a related problem.
References
Further reading
.
.
..
Combinatorial optimization
Intersection | Matroid intersection | [
"Mathematics"
] | 1,293 | [
"Matroid theory",
"Combinatorics"
] |
24,748,949 | https://en.wikipedia.org/wiki/Colorimeter%20%28chemistry%29 | A colorimeter is a device used in colorimetry that measures the absorbance of particular wavelengths of light by a specific solution. It is commonly used to determine the concentration of a known solute in a given solution by the application of the Beer–Lambert law, which states that the concentration of a solute is proportional to the absorbance.
Construction
The essential parts of a colorimeter are:
a light source (often an ordinary low-voltage filament lamp);
an adjustable aperture;
a set of colored filters;
a cuvette to hold the working solution;
a detector (usually a photoresistor) to measure the transmitted light;
a meter to display the output from the detector.
In addition, there may be:
a voltage regulator, to protect the instrument from fluctuations in mains voltage;
a second light path, cuvette and detector. This enables comparison between the working solution and a "blank", consisting of pure solvent, to improve accuracy.
There are many commercialized colorimeters as well as open source versions with construction documentation for education and for research.
Filters
Changeable optical filters are used in the colorimeter to select the wavelength at which the solute absorbs the most, in order to maximize accuracy. The usual wavelength range is from 400 to 700 nm. If it is necessary to operate in the ultraviolet range, then some modifications to the colorimeter are needed. In modern colorimeters, the filament lamp and filters may be replaced by several light-emitting diodes (LEDs) of different colors.
Cuvettes
In a manual colorimeter the cuvettes are inserted and removed by hand. An automated colorimeter (as used in an AutoAnalyzer) is fitted with a flowcell through which solution flows continuously.
Output
The output from a colorimeter may be displayed by an analogue or digital meter and may be shown as transmittance (a linear scale from 0 to 100%) or as absorbance (a logarithmic scale from zero to infinity). The useful range of the absorbance scale is from 0 to 2 but it is desirable to keep within the range 0–1, because above 1 the results become unreliable due to scattering of light.
In addition, the output may be sent to a chart recorder, data logger, or computer.
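For reference, the two display scales are related by A = −log10(T), so A = 2 − log10(%T); combined with the Beer–Lambert law A = εlc, the meter reading gives the concentration directly. A minimal Python sketch (the molar absorptivity value is a made-up placeholder):

```python
import math

def absorbance_from_percent_T(percent_T):
    """A = -log10(T) = 2 - log10(%T): 100 %T -> A = 0, 1 %T -> A = 2."""
    return 2.0 - math.log10(percent_T)

def concentration(absorbance, epsilon, path_cm=1.0):
    """Beer-Lambert law A = epsilon * l * c, solved for c (mol/L)."""
    return absorbance / (epsilon * path_cm)

A = absorbance_from_percent_T(25.0)
print(round(A, 3))                        # 0.602
print(concentration(A, epsilon=1.2e4))    # placeholder epsilon in L mol^-1 cm^-1
```

Note that A = 1 and A = 2 correspond to only 10% and 1% of the incident light reaching the detector, respectively, which is why readings at the top of the scale become unreliable.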
See also
Spectronic 20
Spectrophotometer
Lovibond Colorimeter
Notes
References
The Nuffield Foundation. 30 March 2003.
"Colour." Encyclopædia Britannica. Encyclopædia Britannica Online. Encyclopædia Britannica Inc. (2011) Accessed 17 November 2011.
"Colorimetry" Encyclopædia Britannica. Encyclopædia Britannica Online. Encyclopædia Britannica Inc. (2011) 17 November 2011.
Orion Colorimetry Theory. The Technical Edge.
Scientific instruments
Color
Optical instruments
Spectroscopy
Laboratory equipment | Colorimeter (chemistry) | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 601 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Measuring instruments",
"Scientific instruments",
"Spectroscopy"
] |
24,750,413 | https://en.wikipedia.org/wiki/Argand%20system | In mathematics, an nth-order Argand system (named after French mathematician Jean-Robert Argand) is a coordinate system constructed around the nth roots of unity. From the origin, n axes extend such that the angle between each axis and the axes immediately before and after it is 360/n degrees. For example, the number line is the 2nd-order Argand system because the two axes extending from the origin represent 1 and −1, the 2nd roots of unity. The complex plane (sometimes called the Argand plane, also named after Argand) is the 4th-order Argand system because the 4 axes extending from the origin represent 1, i, −1, and −i, the 4th roots of unity.
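A quick way to generate the axis directions of an nth-order Argand system is to compute the nth roots of unity; a short Python sketch:

```python
import cmath

def argand_axes(n):
    """Axis directions of an nth-order Argand system: the nth roots of
    unity exp(2*pi*i*k/n), with 360/n degrees between adjacent axes."""
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

print(argand_axes(2))  # ~[1, -1]        (the number line, up to float rounding)
print(argand_axes(4))  # ~[1, i, -1, -i] (the complex/Argand plane)
```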
References
Flanigan, Francis J., Complex Variables: Harmonic and Analytic Functions, Dover, 1983.
Jones, Phillip S., "Argand, Jean-Robert", Dictionary of Scientific Biography, pp. 237–240, Charles Scribner's Sons, 1970.
Mathematical structures | Argand system | [
"Mathematics"
] | 213 | [
"Mathematical structures",
"Mathematical objects"
] |
5,391,037 | https://en.wikipedia.org/wiki/Constant%20of%20motion | In mechanics, a constant of motion is a physical quantity conserved throughout the motion, imposing in effect a constraint on the motion. However, it is a mathematical constraint, the natural consequence of the equations of motion, rather than a physical constraint (which would require extra constraint forces). Common examples include energy, linear momentum, angular momentum and the Laplace–Runge–Lenz vector (for inverse-square force laws).
Applications
Constants of motion are useful because they allow properties of the motion to be derived without solving the equations of motion. In fortunate cases, even the trajectory of the motion can be derived as the intersection of isosurfaces corresponding to the constants of motion. For example, Poinsot's construction shows that the torque-free rotation of a rigid body is the intersection of a sphere (conservation of total angular momentum) and an ellipsoid (conservation of energy), a trajectory that might be otherwise hard to derive and visualize. Therefore, the identification of constants of motion is an important objective in mechanics.
Methods for identifying constants of motion
There are several methods for identifying constants of motion.
The simplest but least systematic approach is the intuitive ("psychic") derivation, in which a quantity is hypothesized to be constant (perhaps because of experimental data) and later shown mathematically to be conserved throughout the motion.
The Hamilton–Jacobi equations provide a commonly used and straightforward method for identifying constants of motion, particularly when the Hamiltonian adopts recognizable functional forms in orthogonal coordinates.
Another approach is to recognize that a conserved quantity corresponds to a symmetry of the Lagrangian. Noether's theorem provides a systematic way of deriving such quantities from the symmetry. For example, conservation of energy results from the invariance of the Lagrangian under shifts in the origin of time, conservation of linear momentum results from the invariance of the Lagrangian under shifts in the origin of space (translational symmetry) and conservation of angular momentum results from the invariance of the Lagrangian under rotations. The converse is also true; every symmetry of the Lagrangian corresponds to a constant of motion, often called a conserved charge or current.
A quantity A(q, p, t) is a constant of the motion if its total time derivative
dA/dt = {A, H} + ∂A/∂t
is zero, which occurs when A's Poisson bracket with the Hamiltonian equals minus its partial derivative with respect to time:
{A, H} = −∂A/∂t
Another useful result is Poisson's theorem, which states that if two quantities A and B are constants of motion, so is their Poisson bracket {A, B}.
A system with n degrees of freedom, and n constants of motion, such that the Poisson bracket of any pair of constants of motion vanishes, is known as a completely integrable system. Such a collection of constants of motion are said to be in involution with each other. For a closed system (Lagrangian not explicitly dependent on time), the energy of the system is a constant of motion (a conserved quantity).
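These criteria are easy to check symbolically. A minimal SymPy sketch for one degree of freedom, using a free-particle Hamiltonian so that the momentum is a constant of motion (an illustrative example, not from the article):

```python
import sympy as sp

q, p, m = sp.symbols("q p m", positive=True)

def poisson_bracket(f, g):
    """Canonical Poisson bracket {f, g} for one degree of freedom."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

H = p**2 / (2 * m)            # free-particle Hamiltonian (no potential)
print(poisson_bracket(p, H))  # 0   -> p is a constant of motion
print(poisson_bracket(q, H))  # p/m -> q is not conserved
```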
In quantum mechanics
An observable quantity Q will be a constant of motion if it commutes with the Hamiltonian, H, and it does not itself depend explicitly on time. This is because
d⟨Q⟩/dt = (1/iħ)⟨[Q, H]⟩ + ⟨∂Q/∂t⟩
where
[Q, H] = QH − HQ
is the commutator relation.
Derivation
Say there is some observable quantity Q = Q(x, p, t) which depends on position, momentum and time, and also that there is a wave function ψ which obeys Schrödinger's equation
iħ ∂ψ/∂t = Hψ
Taking the time derivative of the expectation value of Q requires use of the product rule, and results in
d⟨Q⟩/dt = ⟨∂ψ/∂t|Q|ψ⟩ + ⟨ψ|∂Q/∂t|ψ⟩ + ⟨ψ|Q|∂ψ/∂t⟩ = −(1/iħ)⟨ψ|HQ|ψ⟩ + ⟨∂Q/∂t⟩ + (1/iħ)⟨ψ|QH|ψ⟩
So finally,
d⟨Q⟩/dt = (1/iħ)⟨[Q, H]⟩ + ⟨∂Q/∂t⟩
Comment
For an arbitrary state of a quantum mechanical system, if Q and H commute, i.e. if
[Q, H] = 0
and Q is not explicitly dependent on time, then
d⟨Q⟩/dt = 0
But if ψ is an eigenfunction of the Hamiltonian, then even if
[Q, H] ≠ 0
it is still the case that
d⟨Q⟩/dt = 0
provided Q is independent of time.
Derivation
Since Hψ = Eψ, then
d⟨Q⟩/dt = (1/iħ)⟨ψ|QH − HQ|ψ⟩ = (E/iħ)(⟨ψ|Q|ψ⟩ − ⟨ψ|Q|ψ⟩) = 0
This is the reason why eigenstates of the Hamiltonian are also called stationary states.
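A small numerical illustration of the statement above, assuming nothing beyond standard NumPy: an observable that commutes with H (here simply H²) keeps a constant expectation value under time evolution, while a generic observable does not. The propagator is built exactly from the eigendecomposition of a random Hermitian matrix; this is a toy check, not a general-purpose solver.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
H = (A + A.T) / 2                      # random Hermitian "Hamiltonian"

E, V = np.linalg.eigh(H)
def U(t):                              # exact propagator exp(-iHt), hbar = 1
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

Q_commuting = H @ H                    # [H^2, H] = 0
B = rng.standard_normal((4, 4))
Q_generic = (B + B.T) / 2              # generically [Q, H] != 0

psi0 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi0 /= np.linalg.norm(psi0)

def expval(Q, psi):
    return (psi.conj() @ Q @ psi).real

psi_t = U(1.7) @ psi0
for Q, name in [(Q_commuting, "[Q,H]=0 "), (Q_generic, "[Q,H]!=0")]:
    print(name, expval(Q, psi0), "->", expval(Q, psi_t))
```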
Relevance for quantum chaos
In general, an integrable system has constants of motion other than the energy. By contrast, energy is the only constant of motion in a non-integrable system; such systems are termed chaotic. In general, a classical mechanical system can be quantized only if it is integrable; to date, there is no known consistent method for quantizing chaotic dynamical systems.
Integral of motion
A constant of motion may be defined in a given force field as any function of phase-space coordinates (position and velocity, or position and momentum) and time that is constant throughout a trajectory. A subset of the constants of motion are the integrals of motion, or first integrals, defined as any functions of only the phase-space coordinates that are constant along an orbit. Every integral of motion is a constant of motion, but the converse is not true, because a constant of motion may depend on time. Examples of integrals of motion are the angular momentum vector, L = r × p, and a Hamiltonian without explicit time dependence, such as H(q, p). An example of a function that is a constant of motion but not an integral of motion is the function C(x, t) = x − vt for an object moving at a constant speed v in one dimension.
Dirac observables
In order to extract physical information from gauge theories, one either constructs gauge-invariant observables or fixes a gauge. In a canonical language, this usually means either constructing functions which Poisson-commute on the constraint surface with the gauge-generating first-class constraints, or fixing the flow of the latter by singling out points within each gauge orbit. Such gauge-invariant observables are thus the "constants of motion" of the gauge generators and are referred to as Dirac observables.
References
Classical mechanics | Constant of motion | [
"Physics"
] | 1,165 | [
"Mechanics",
"Classical mechanics"
] |
5,391,903 | https://en.wikipedia.org/wiki/Dithioerythritol | Dithioerythritol (DTE) is a sulfur containing sugar alcohol derived from the corresponding 4-carbon monosaccharide erythrose. It is an epimer of dithiothreitol (DTT). The molecular formula for DTE is C4H10O2S2.
Chemical properties
DTE is a crystalline solid soluble in water and alcohols.
Applications
Like DTT, DTE makes an excellent reducing agent, which can be used for reduction of disulfide bonds. The reduction potential of DTE is the same as for DTT, about −0.33 V. The pKa values of the thiol groups of DTE are 9.0 and 9.9, the second of which is higher than the corresponding values for DTT (9.3 and 9.5). Since reduction of disulfide bonds requires the thiolate (ionized thiol), DTE is less efficient at lower pH compared to DTT.
Reduction with DTE is slower than with DTT. This is presumably because the orientation of the OH groups in its cyclic disulfide-bonded form (oxidized form) is less stable due to greater steric repulsion than their orientation in the disulfide-bonded form of DTT. In the disulfide-bonded form of DTT, these hydroxyl groups are trans to each other, whereas they are cis to each other in DTE.
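The pH dependence described above is straightforward Henderson–Hasselbalch arithmetic: the fraction of a thiol present as the reactive thiolate at a given pH is 1/(1 + 10^(pKa − pH)). A quick sketch using the pKa values quoted above:

```python
def thiolate_fraction(pKa, pH):
    """Henderson-Hasselbalch: fraction of a thiol ionized to thiolate."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

for label, pKas in [("DTE", (9.0, 9.9)), ("DTT", (9.3, 9.5))]:
    fracs = [thiolate_fraction(pKa, pH=7.0) for pKa in pKas]
    print(label, [f"{f:.1e}" for f in fracs])  # tiny fractions at pH 7
```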
References
External links
Thiols
Vicinal diols
Reducing agents | Dithioerythritol | [
"Chemistry"
] | 310 | [
"Organic compounds",
"Thiols",
"Redox",
"Reducing agents"
] |
43,123,622 | https://en.wikipedia.org/wiki/Dynamic%20shear%20rheometer | A dynamic shear rheometer, commonly known as DSR, is used for research and development as well as for quality control in the manufacture of a wide range of materials. Dynamic shear rheometers have been used since 1993 when Superpave was used for characterising and understanding high temperature rheological properties of asphalt binders in both the molten and solid state and is fundamental in order formulate the chemistry and predict the end-use performance of these materials.
This is done by deriving the complex modulus (G*) from the storage modulus (elastic response, G′) and the loss modulus (viscous behaviour, G″), yielding G* as the ratio of stress to strain. It is used to characterize the viscoelastic behavior of asphalt binders at intermediate temperatures.
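For illustration, the moduli are related by |G*| = sqrt(G′² + G″²) with phase angle δ = arctan(G″/G′); Superpave grading uses combinations such as the rutting parameter |G*|/sin δ. A short Python sketch with made-up modulus values:

```python
import math

def complex_modulus(G_storage, G_loss):
    """Return |G*| and the phase angle delta (degrees) from G' and G''."""
    return math.hypot(G_storage, G_loss), math.degrees(math.atan2(G_loss, G_storage))

G_star, delta = complex_modulus(G_storage=1.2e3, G_loss=3.5e3)  # Pa, made-up
print(f"|G*| = {G_star:.0f} Pa, delta = {delta:.1f} deg")
print(f"rutting parameter |G*|/sin(delta) = "
      f"{G_star / math.sin(math.radians(delta)):.0f} Pa")
```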
References
Fluid mechanics
Rheology
Materials testing | Dynamic shear rheometer | [
"Chemistry",
"Materials_science",
"Engineering"
] | 173 | [
"Materials science",
"Materials testing",
"Civil engineering",
"Fluid mechanics",
"Rheology",
"Fluid dynamics"
] |
43,125,367 | https://en.wikipedia.org/wiki/Predictive%20control%20of%20switching%20power%20converters | Predictive controllers rely on optimum control systems theory and aim to solve a cost function minimization problem. Predictive controllers are relatively easy to numerically implement but electronic power converters are non-linear time-varying dynamic systems, so a different approach to predictive must be taken.
Principles of non-linear predictive optimum control
The first step to designing a predictive controller is to derive a detailed direct dynamic model (including non-linearities) of the switching power converter. This model must contain enough detail of the converter dynamics to allow, from initial conditions, a forecast in real time and with negligible error, of the future behavior of the converter.
Sliding mode control of switching power converters chooses a vector to reach sliding mode as fast as possible (high switching frequency).
It would be better to choose a vector that ensures zero error at the end of the sampling period Δt. To find such a vector, a calculation can be made in advance (a prediction).
The converter has a finite number of vectors (states) and is usually non-linear: one approach is to try all vectors and find the one that minimizes the control errors, prior to applying that vector to the converter.
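This enumeration strategy is known as finite-control-set model predictive control. A minimal sketch under stated assumptions: a discretized linear state model x⁺ = A·x + B·u (a real converter model is non-linear, as noted above), one input value per switching state, a quadratic tracking cost, and exhaustive search over the finite vector set; the matrices and the vector table are made-up placeholders.

```python
import numpy as np

# Placeholder discretized model x+ = A @ x + B @ u and finite vector set.
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
B = np.array([[0.05],
              [0.10]])
vectors = [np.array([v]) for v in (-1.0, 0.0, 1.0)]  # e.g. half-bridge levels

def best_vector(x, x_ref):
    """One-step predictive control: predict the next state for every
    admissible switching vector and return the one minimizing the cost."""
    def cost(u):
        e = (A @ x + B @ u) - x_ref      # predicted tracking error
        return float(e @ e)              # quadratic cost
    return min(vectors, key=cost)

x, x_ref = np.array([0.2, -0.1]), np.array([1.0, 0.0])
print(best_vector(x, x_ref))  # vector to apply over the next sampling period
```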
Direct dynamics model-based predictive control (DDMBPC)
Inverse dynamics optimum predictive control (IDOPC)
References
Power supplies
Power electronics
Electric power conversion | Predictive control of switching power converters | [
"Engineering"
] | 285 | [
"Electronic engineering",
"Power electronics"
] |
43,125,749 | https://en.wikipedia.org/wiki/Seismic%20code | Seismic codes or earthquake codes are building codes designed to protect property and life in buildings in case of earthquakes. The need for such codes is reflected in the saying, "Earthquakes don't kill people—buildings do." Or in expanded version, "Earthquakes do not injure or kill people. Poorly built manmade structures injure and kill people".
Seismic codes were created and developed in response to major earthquakes, including the 1755 Lisbon, 1880 Luzon, and 1908 Messina events, which caused devastation in highly populated regions. They are often revised based on knowledge gained from recent earthquakes and research findings, and as such they are constantly evolving. There are many seismic codes used worldwide. Most codes at their root share common fundamental approaches regarding how to design buildings for earthquake effects, but they differ in their technical requirements and have language addressing local geologic conditions, common construction types, historic issues, etc.
Origin
The 1755 Lisbon earthquake (Portugal) resulted in prescriptive rules for building certain kinds of buildings common in the area.
Following the 1908 Messina earthquake (Italy), the Royal Government of Italy established a Geological Committee and an Engineering Committee in early 1909 to study the disaster and recommend earthquake disaster mitigation measures. The Engineering Committee, after studying the lateral load resistance of buildings which survived the earthquake motion, recommended that a seismic ratio (seismic acceleration divided by the gravity acceleration) equal to 1/12 for the first floor and 1/8 for the floors above should be used in the seismic design of buildings. The Committee proposed equivalent vertical forces much larger than the horizontal forces because vertical motion acted as impacts. This is believed to be the first known quantitative recommendation of design seismic forces in the history of seismic codes. The recommendation was adopted in Royal Decree No. 573 of April 29, 1915. The height of buildings was limited to two stories; the first story was to be designed for a horizontal force equal to 1/8 of the second-floor weight, and the second story for 1/6 of the roof weight.
The 1923 Great Kantō earthquake (Japan) and earlier events inspired Japanese engineer Toshikata Sano to develop a lateral force procedure that was officially implemented in the 1924 Japanese Urban Building Law, which directed engineers to design buildings for horizontal forces of about 10% of the weight of the building.
In 1925, the city of Santa Barbara, California, added a building code requirement that structures be designed to withstand horizontal forces, but was nonspecific regarding design loads or procedure. This is considered to be the first explicit policy and legal consideration of the seismic safety of structures in the U.S. The city of Palo Alto, California, led by professors at Stanford, also added similar language to its building code in 1926.
In January 1928, the first edition of the Uniform Building Code (UBC) was published, and included an appendix with non-mandatory matter with §2311 recommending a minimum lateral design force for earthquake resistance of V = 0.075W for buildings on foundations with allowable bearing pressures of 4,000 psf or more, and 0.10 W for all other buildings including those on pile foundations. Building weight (seismic mass) was defined as: W = Dead load + Live load. These provisions were inspired by Japan's newly developed seismic code. The non-mandatory lateral design provisions are not known to have been explicitly adopted by any jurisdiction at the time, but may have been used voluntarily for the design of some buildings.
In response to the 1933 Long Beach earthquake (California), the city of Los Angeles adopted the first earthquake design provisions enforced in the U.S., enacted by City Council under Ordinance No. 72,968 published on September 6, 1933. The requirements included a design lateral base shear V = 0.08 W for regular use buildings, 0.10 W for school buildings and 0.04 W for the portion of a building above a flexible story. Building weight (seismic mass) was defined as W = Dead load + 0.5 Live load (except 1.0 Live for warehouses). Building frames were required to be designed to withstand at least 0.25V independent of any walls.
Immediately after the 1933 Long Beach earthquake, careful analysis of structural failures in that quake by architect Louis John Gill formed the basis for much of the California seismic legislation (Field Act for schools and Riley Act for all buildings). The 1933 Riley Act required all California local governments to have a building department and inspect new construction, mandating that all structures in the state be designed to withstand a horizontal acceleration of 0.02 times the acceleration due to gravity.
Around the world
Mexico
The first Mexico City building code was issued in 1942; since 1966, it contains a complete set of regulations for structural design and has served as a reference for municipalities across the country. In 1976, the code adopted a coherent format for all materials and structural systems, based on limit states design philosophy. In February 2004 a new set of seismic codes was issued.
Spain
In Spain, the seismic code is called the "Norma de Construcción Sismorresistente". (See the article in Spanish Wikipedia)
Turkey
The earliest Turkish seismic codes were published in the 1940s: TS500, Requirements for Design and Construction of Reinforced Concrete Structures, and the Turkish Building Seismic Code. Several revisions with additional, more stringent specifications have been published since. The latest revision was published in 2018 and came into effect the following year. These codes, however, only affect reinforced concrete buildings; historical buildings and coastal and port infrastructure are excluded.
The 1940 seismic code was developed in response to the 1939 Erzincan earthquake, which killed 32,000 people. It drew parallels with Italy's seismic codes at the time. A seismic zonation map was also developed in 1942, which assessed the seismic hazard of all Turkish provinces on three levels: "hazardous", "less hazardous" and "no hazard". The 1948 seismic codes were prepared in consideration of the seismic zone map. A new code was issued in 1961, and in 1963 the seismic zonation map was updated with four hazard levels based on predicted shaking on the Modified Mercalli intensity scale. A fifth hazard level was subsequently added in the 1972 revision.
The 1968 seismic codes introduced additional demands for reinforced concrete component and modern concepts relating to spectral shape and dynamic response. Following the 1972 seismic zonation, the seismic codes were updated in 1975. It included new methods to compute seismic loading on buildings and ductile detailing for reinforced concrete. The zonation map and codes were revised in 1997.
Poor enforcement of seismic codes was a contributing factor to the devastation of the 2023 Turkey–Syria earthquakes, in which over 50,000 people died. There was a high incidence of support-column failure leading to pancake-type collapses, which complicated rescue efforts. In a bid to shore up support going into an election in 2018, the government began to offer amnesties for building-code violations, allowing non-compliant construction to stand, as previous governments had also done. Experts lamented that the practice would turn cities into graveyards.
United States
In the United States, the Federal Emergency Management Agency (FEMA) publishes "Recommended Seismic Provisions for New Buildings and Other Structures. 2015 Edition".
See also
2023 Turkey–Syria earthquake#Criticism of government
References
Building codes
Earthquake engineering | Seismic code | [
"Engineering"
] | 1,473 | [
"Structural engineering",
"Building engineering",
"Civil engineering",
"Building codes",
"Earthquake engineering"
] |
43,127,616 | https://en.wikipedia.org/wiki/Cyclic%20corrosion%20testing | Cyclic Corrosion Testing (CCT) has evolved in recent years, largely within the automotive industry, as a way of accelerating real-world corrosion failures, under laboratory controlled conditions.
As the name implies, the test comprises different climates which are cycled automatically, so the samples under test undergo the same sort of changing environment that would be encountered in the natural world. The intention is to bring about the type of failure that might occur naturally, but more quickly, i.e. in accelerated form. By doing this, manufacturers and suppliers can predict more accurately the service life expectancy of their products.
Until the development of Cyclic Corrosion Testing, the traditional Salt spray test was virtually all that manufacturers could use for this purpose. However, this test was never intended for this purpose. Because the test conditions specified for salt spray testing are not typical of a naturally occurring environment, this type of test cannot be used as a reliable means of predicting the ‘real world’ service life expectancy for the samples under test. The sole purpose of the salt spray test is to compare and contrast results with previous experience to perform a quality audit. So, for example, a spray test can be used to ‘police’ a production process and forewarn of potential manufacturing problems or defects, which might affect corrosion resistance.
To recreate these different environments within an environmental chamber requires much more flexible testing procedures than are available in a standard salt spray chamber.
The lack of correlation between results obtained from traditional salt spray testing and the ‘real world’ atmospheric corrosion of vehicles, left the automotive industry without a reliable test method for predicting the service life expectancy of their products. This was and remains of particular concern in an industry where anti-corrosion warranties have been gradually increasing and now run to several years for new vehicles.
With ever-increasing consumer pressure for improved vehicle corrosion resistance and a few "high-profile" corrosion failures among some vehicle manufacturers – with disastrous commercial consequences – the automotive industry recognized the need for a different type of corrosion test.
Such a test would need to simulate the types of conditions a vehicle might encounter naturally, but recreate and accelerate these conditions, with good repeatability, within the convenience of the laboratory.
CCT is effective for evaluating a variety of corrosion types, including galvanic corrosion and crevice corrosion. One of the earliest introduced cyclic testing machines was the Prohesion cabinet.
Test Stages
Taking results gathered largely from ‘real world’ exposure sites, automotive companies, led originally by the Japanese automobile industry, developed their own Cyclic Corrosion Tests. These have evolved in different ways for different vehicle manufacturers, and such tests still remain largely industry specific, with no truly international CCT standard. However, they all generally require most of the following conditions to be created, in a repeating sequence or ‘cycle’, though not necessarily in the following order:
• A salt spray ‘pollution’ phase. This may be similar to the traditional salt spray test although in some cases direct impingement by the salt solution on the test specimens, or even complete immersion in salt water, is required. However, this ‘pollution’ phase is generally shorter in duration than a traditional salt spray test.
• An air drying phase. Depending on the test, this may be conducted at ambient temperature, or at an elevated temperature, with or without control over the relative humidity and usually by introducing a continuous supply of relatively fresh air around the test samples at the same time. It is generally required that the samples under test should be visibly ‘dry’ at the end of this test phase.
• A condensation humidity ‘wetting’ phase. This is usually conducted at an elevated temperature and generally a high humidity of 95-100%RH. The purpose of this phase is to promote the formation of condensation on the surfaces of the samples under test.
• A controlled humidity/humidity cycling phase. This requires the tests samples to be exposed to a controlled temperature and controlled humidity climate, which can either be constant or cycling between different levels. When cycling between different levels, the rate of change may also be specified.
The above list is not exhaustive, since some automotive companies may also require other climates to be created in sequence as well, for example; sub-zero refrigeration, but it does list the most common requirements.
Test Standards
The list below is not exhaustive, but here are some examples of popular cyclic corrosion test standards:
ACT 1 (Volvo)
ACT 2 (Volvo)
CETP 00.00-L-467 (Ford)
D17 2028 (Renault)
JASO M 609
SAE J 2334
VDA 621-415
See also
Environmental chamber
Salt spray test
List of ASTM standards
List of ISO standards
List of DIN standards
Society of Automotive Engineers
Accelerated aging
Failure causes
Further reading
Cyclic Cabinet Corrosion Testing - Gardner S.Haynes - 1995
ASTM American Society for Testing of Materials. ASTM B 117-11 Standard Practice for Operating Salt Spray (Fog) Apparatus, 2011
Corrosion Testing and Evaluation, Issue 1000 - Robert Baboian, S. W. Dean - ATM International - 1990
Laboratory Corrosion Tests and Standards: A Symposium by ASTM Committee G-1 on Corrosion of Metals - Gardner S. Haynes, Robert Baboian - 1985
Corrosion Basics, An Introduction, L.S. Van Delinder, ed. (Houston, TX: NACE, 1984).
Laboratory Corrosion Tests and Standards, Haynes GS, Baboian R, 1985
References
Corrosion
Environmental testing | Cyclic corrosion testing | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,097 | [
"Reliability engineering",
"Metallurgy",
"Corrosion",
"Electrochemistry",
"Environmental testing",
"Materials degradation"
] |
43,130,739 | https://en.wikipedia.org/wiki/Regional%20Atmospheric%20Modeling%20System | The Regional Atmospheric Modeling System (RAMS) is a set of computer programs that simulate the atmosphere for weather and climate research and for numerical weather prediction (NWP). Other components include a data analysis and a visualization package.
RAMS was developed in the 1980s at Colorado State University (CSU), spearheaded by William R. Cotton and Roger A. Pielke, for mesoscale meteorological modeling. Subsequent development is primarily done by Robert L. Walko and Craig J. Tremback under the supervision of Cotton and Pielke. It is a comprehensive non-hydrostatic model. It is written primarily in Fortran with some C code and it runs best under the Unix operating system. Version 6 was released in 2009.
RAMS is the basis for a system simulating the Martian atmosphere that is named MRAMS.
See also
Downscaling
References
External links
Colorado State University site
ATMET (Atmospheric, Meteorological, and Environmental Technologies) site
RAMS Documentation
Numerical climate and weather models
Physics software | Regional Atmospheric Modeling System | [
"Physics"
] | 199 | [
"Physics software",
"Computational physics"
] |
29,388,093 | https://en.wikipedia.org/wiki/Coenergy | In physics and engineering, Coenergy (or co-energy) is a non-physical quantity, measured in energy units, used in theoretical analysis of energy in physical systems.The concept of co-energy can be applied to many conservative systems (inertial mechanical, electromagnetic, etc.), which can be described by a linear relationship between the input and stored energy.The co-energy analysis techniques cannot be applied to non-linear systems. However, small nonlinearities are often neglected by linearisation of the problems.
Example - magnetic coenergy
Consider a system with a single coil and a non-moving armature (i.e. no mechanical work is done). Hence, all of the electric energy supplied to the device is stored in the magnetic field.
Wf = ∫ e i dt
where e is the voltage, i is the current, and λ is the flux linkage. Since e = dλ/dt, it follows that
Wf = ∫ i dλ
For a general problem the relationship is non-linear (see also magnetic hysteresis).
If there is a finite change in flux linkage from one value to another (e.g. from λ1 to λ2), it can be calculated as:
ΔWf = ∫_{λ1}^{λ2} i dλ
(If the changes are cyclic there will be losses for hysteresis and eddy currents. The additional energy for this would be taken from the input energy, so that the flux linkage to the coil is not affected by the losses and the coil can be treated as an ideal lossless coil. Such system is therefore conservative.)
For calculations, either the flux linkage λ or the current i can be used as the independent variable.
The total energy stored in the system is equal to the area OABO, which is in turn equal to the area OACO; therefore:
Wf = ∫_0^λ i dλ
For linear lossless systems the coenergy is equal in value to the stored energy. The coenergy has no real physical meaning, but it is useful in calculating mechanical forces in electromagnetic systems. To distinguish it from the "real" energy in calculations it is usually marked with an apostrophe.
The total area of the rectangle OCABO is equal to the sum of the two triangles (energy + coenergy), so:
Wf + W′f = λ i
Hence, at a given operating point with current i and flux linkage λ:
W′f = λ i − Wf = ∫_0^i λ di
The self-inductance is defined as flux linkage over current:
L = λ / i
and the energy stored in a coil is:
Wf = λ² / (2L) = ½ L i²
In a magnetic circuit with a movable armature, the inductance will be a function of position x, L = L(x).
Therefore the field energy can be written as a function of two mathematically independent variables λ and x:
Wf(λ, x) = λ² / (2 L(x))
And the coenergy is a function of two independent variables i and x:
W′f(i, x) = ½ L(x) i²
The last two expressions are general equations for energy and coenergy in a magnetostatic system.
Applications of coenergy theory
The concept of coenergy is practically used for instance in finite element analysis for calculations of mechanical forces between magnetized parts.
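A minimal symbolic sketch of such a force calculation, assuming the standard result that at constant current the force on the armature is F = ∂W′f(i, x)/∂x, and using a made-up inductance profile L(x) = L0/(1 + x/g):

```python
import sympy as sp

i, x, L0, g = sp.symbols("i x L_0 g", positive=True)

L = L0 / (1 + x / g)                   # made-up position-dependent inductance
W_co = sp.Rational(1, 2) * L * i**2    # coenergy W'_f(i, x) of a linear system

F = sp.diff(W_co, x)                   # force at constant current
print(sp.simplify(F))                  # -L_0*g*i**2/(2*(g + x)**2): pulls toward smaller x
```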
References
Electromagnetism | Coenergy | [
"Physics"
] | 582 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |
29,388,477 | https://en.wikipedia.org/wiki/Multiple-prism%20grating%20laser%20oscillator | Multiple-prism grating laser oscillators, or MPG laser oscillators, use multiple-prism beam expansion to illuminate a diffraction grating mounted either in Littrow configuration or grazing-incidence configuration. Originally, these narrow-linewidth tunable dispersive oscillators were introduced as multiple-prism Littrow (MPL) grating oscillators, or hybrid multiple-prism near-grazing-incidence (HMPGI) grating cavities, in organic dye lasers. However, these designs were quickly adopted for other types of lasers such as gas lasers, diode lasers, and more recently fiber lasers.
Excitation
Multiple-prism grating laser oscillators can be excited either electrically, as in the case of gas lasers and semiconductor lasers, or optically, as in the case of crystalline lasers and organic dye lasers. In the case of optical excitation it is often necessary to match the polarization of the excitation laser to the polarization preference of the multiple-prism grating oscillator. This can be done using a polarization rotator thus improving the laser conversion efficiency.
Linewidth performance
The multiple-prism dispersion theory is applied to design these beam expanders either in additive configuration, thus adding their dispersion to (or subtracting it from) the dispersion of the grating, or in compensating configuration (yielding zero dispersion at a design wavelength), thus allowing the diffraction grating to control the tuning characteristics of the laser cavity. Under those conditions, that is, zero dispersion from the multiple-prism beam expander, the single-pass laser linewidth is given by
Δλ ≈ Δθ (M ∂Θ/∂λ)⁻¹
where Δθ is the beam divergence and M is the beam magnification provided by the beam expander, which multiplies the angular dispersion ∂Θ/∂λ provided by the diffraction grating. In the case of multiple-prism beam expanders this factor can be as high as 100–200.
When the dispersion of the multiple-prism expander is not equal to zero, the single-pass linewidth is given by
Δλ ≈ Δθ (M ∂Θ_G/∂λ + ∂Φ_P/∂λ)⁻¹
where the first term in parentheses is the angular dispersion of the grating (multiplied by the beam magnification) and the second is the overall dispersion of the multiple-prism beam expander.
Optimized solid-state multiple-prism grating laser oscillators have been shown, by Duarte, to generate pulsed single-longitudinal-mode emission limited only by Heisenberg's uncertainty principle. The laser linewidth in these experiments is reported as ≈ 350 MHz (or ≈ 0.0004 nm at 590 nm) in pulses ~ 3 ns wide, at power levels in the kW regime.
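As a quick plausibility check of the "limited only by the uncertainty principle" claim: the time–bandwidth relation Δν·Δt ≈ 1 (the exact constant depends on the assumed pulse shape) gives, for a ~3 ns pulse,

```python
dt = 3e-9                       # pulse duration, ~3 ns
dnu = 1.0 / dt                  # Delta-nu * Delta-t ~ 1 (shape-dependent constant)
print(f"~{dnu / 1e6:.0f} MHz")  # ~333 MHz, close to the reported ~350 MHz
```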
Applications
Applications of these tunable narrow-linewidth lasers include:
Coherent anti-Stokes Raman spectroscopy and combustion diagnostics
LIDAR
Laser spectroscopy
Atomic vapor laser isotope separation
See also
Dye lasers
Solid state dye lasers
Laser cavity
Laser linewidth
Multiple-prism dispersion theory
Polarization rotator
Tunable lasers
References
External links
Diagrams of MPG laser oscillators
Quantum optics
Prisms (optics)
Laser types | Multiple-prism grating laser oscillator | [
"Physics"
] | 654 | [
"Quantum optics",
"Quantum mechanics"
] |
29,389,515 | https://en.wikipedia.org/wiki/Helical%20orbit%20spectrometer | The helical orbit spectrometer (HELIOS) is a measurement device for studying nuclear reactions in inverse kinematics. It is installed at the ATLAS facility at Argonne National Laboratory.
History
The HELIOS concept was first proposed at the Workshop on Experimental Equipment for an Advanced ISOL Facility at Lawrence Berkeley National Laboratory in 1998. The concept was introduced as a next-generation large-acceptance spectrometer for measuring heavy ion reactions.
Concept
Schematically, HELIOS is based around a large-bore superconducting solenoid. Accelerated heavy-ion beams enter the solenoid along the magnetic axis, passing through a hollow detector array. The beam then intercepts a "light-ion" target, also on the magnetic axis. In a typical configuration, charged reaction products ejected rearward in the laboratory frame move in helical orbits to the detector array. Heavy beam-like recoils are kinematically focused forward in a narrow cone and intercepted by the so-called recoil detector array.
Development
The HELIOS Collaboration was formed with members from Argonne National Laboratory, Western Michigan University, and Manchester University to construct, characterize, and commission the HELIOS spectrometer. The construction of the spectrometer began with the delivery of the superconducting solenoid upon which HELIOS is based. The solenoid was delivered to Argonne on December 8, 2006. Over the next 20 months, the solenoid was transformed into a nuclear spectrometer and connected to the ATLAS beam line. The first stable beam was tuned to the HELIOS target area on Tuesday, August 12, 2008 at 13:29. This first commissioning measurement studied the well-known nuclear reaction 28Si(d,p) in inverse kinematics in order to characterize the performance of the spectrometer.
The radioactive ion beam commissioning of HELIOS took place in early March, 2009. This was the second measurement made with HELIOS and is considered the first actual "experiment" conducted using HELIOS.
See also
Canadian Penning Trap Mass Spectrometer
Gammasphere
References
External links
HELIOS page on the Physics Division Website
Physics World article
APS Physics Synopsis
Argonne National Laboratory
Spectrometers | Helical orbit spectrometer | [
"Physics",
"Chemistry"
] | 466 | [
"Spectrometers",
"Spectroscopy",
"Spectrum (physical sciences)"
] |