| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
33,417,972 | https://en.wikipedia.org/wiki/Cantor%20tree%20surface | In dynamical systems, the Cantor tree is a surface homeomorphic to a sphere with a Cantor set removed. The blooming Cantor tree is a Cantor tree with an infinite number of handles added in such a way that every end is a limit of handles.
See also
Jacob's ladder surface
Loch Ness monster surface
References
Fractals
Surfaces
Eponyms in geometry | Cantor tree surface | [
"Mathematics"
] | 77 | [
"Eponyms in geometry",
"Mathematical analysis",
"Functions and mappings",
"Mathematical objects",
"Fractals",
"Mathematical relations",
"Geometry"
] |
33,418,142 | https://en.wikipedia.org/wiki/Achieser%E2%80%93Zolotarev%20filter | The Achieser–Zolotarev filter, or just Zolotarev filter, is a class of signal processing filter based on Zolotarev polynomials. Achieser is spelled as "Akhiezer" in some sources. The filter response is similar to that of the Chebyshev filter except that the first ripple is larger than the rest. The filter is especially useful in some waveguide applications.
Naming
The filter is named after Yegor Ivanovich Zolotarev who, in 1868, introduced the Zolotarev polynomials which are used as the basis of this filter. Zolotarev's work on approximation theory was further developed by Naum Akhiezer in 1956. Zolotarev polynomials were first applied to the design of filters by Ralph Levy in 1970.
Properties
Achieser–Zolotarev filters have similar properties to Chebyshev filters of the first kind. In fact, Chebyshev polynomials are a special case of Zolotarev polynomials, so Chebyshev filters can be considered a special case of Achieser–Zolotarev filter.
Like the Chebyshev filter, the Achieser–Zolotarev filter has equal ripple attenuation in the passband. The essential difference is that the first peak in attenuation of the Achieser–Zolotarev filter is greater than the design preset ripple for the other peaks.
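The shape of the response is easy to visualize numerically. SciPy has no Zolotarev designer, so the hedged sketch below only plots the Chebyshev type I special case discussed above (the equiripple passband that a Zolotarev design generalises by enlarging the first ripple peak); the order and ripple values are arbitrary illustrative choices.

```python
# Sketch: the Chebyshev type I response, the special case of the
# Achieser–Zolotarev family. A true Zolotarev design would raise the
# first ripple peak above ripple_db; SciPy offers no routine for that.
import numpy as np
from scipy import signal

order, ripple_db = 5, 1.0   # illustrative values, not from the article
b, a = signal.cheby1(order, ripple_db, 1.0, btype='low', analog=True)
w, h = signal.freqs(b, a, worN=np.logspace(-1, 1, 500))
atten_db = -20 * np.log10(np.abs(h))

# In the passband (w < 1) the attenuation oscillates between 0 and ripple_db.
print(f"max passband attenuation: {atten_db[w < 1].max():.2f} dB")
```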
An inverse Zolotarev filter (type II Zolotarev filter) is possible using the reciprocal of the Zolotarev polynomial instead. This procedure is the same as that for the inverse Chebyshev filter, and like that filter, this filter will have all the ripple in the stopband and a monotonic passband. The inverse Zolotarev filter has equiripple in the stopband except for the last peak with increasing frequency. This is a peak of minimum attenuation (maximum gain) rather than a peak of maximum attenuation.
Uses
Waveguide filter designs sometimes use the Achieser–Zolotarev response for low-pass filters. It is used in this role because it provides a better impedance match than the more common Chebyshev filter. The higher attenuation at the very lowest frequencies is acceptable in waveguide filters because in this medium there is always a guide cutoff frequency below which waves cannot propagate anyway. The region of high attenuation of the Achieser–Zolotarev filter can be made to occur below the guide cutoff frequency, in which case the response is indistinguishable from a low-pass response because the low-frequency attenuation is masked by the guide cutoff effect. As with the Chebyshev filter, the designer of an Achieser–Zolotarev filter can exchange increased steepness of the transition band for more passband ripple.
The advantage of the Zolotarev response is that it results in a filter with a better impedance match to the connecting waveguides compared to the Chebyshev filter or image-parameter filters. Waveguide filters will usually require stepped impedance matching at their input and output. This is especially true of corrugated waveguide designs such as the waffle-iron filter, which has a high input impedance compared to the waveguide to which it is connected. A better match results in fewer impedance steps being required and a significant reduction in bulk and weight. Waveguide designs are very bulky compared to other technologies but are preferred for microwave high-power applications and where low loss is needed. In applications such as airborne radar, weight and bulk are important considerations.
There is a further advantage of the Achieser–Zolotarev filter over the Chebyshev in distributed-element filter designs. The dimensions of the elements of the Achieser–Zolotarev tend to be more convenient to manufacture. Internal gaps tend to be larger and the impedance changes tend to be smaller (making for a smaller change in mechanical dimensions). These same features increase the power-handling capability of the assembly.
An adaptation of the Achieser–Zolotarev filter has applications for enhancement and restoration of images and video. In this role 2-D FIR filters are required of the bandstop filter form with extremely narrow stopbands. Such filters can be adapted from a 1-D Achieser–Zolotarev filter.
See also
Elliptic filter (Cauer filter), occasionally called a Zolotarev filter.
References
Bibliography
Linear filters
Network synthesis filters
Electronic design | Achieser–Zolotarev filter | [
"Engineering"
] | 950 | [
"Electronic design",
"Electronic engineering",
"Design"
] |
33,418,730 | https://en.wikipedia.org/wiki/Translation%20surface | In mathematics, a translation surface is a surface obtained from identifying the sides of a polygon in the Euclidean plane by translations. An equivalent definition is a Riemann surface together with a holomorphic 1-form.
These surfaces arise in dynamical systems where they can be used to model billiards, and in Teichmüller theory. A particularly interesting subclass is that of Veech surfaces (named after William A. Veech) which are the most symmetric ones.
Definitions
Geometric definition
A translation surface is the space obtained by identifying pairwise by translations the sides of a collection of plane polygons.
Here is a more formal definition. Let $P_1, \ldots, P_m$ be a collection of (not necessarily convex) polygons in the Euclidean plane and suppose that for every side $s_i$ of any $P_k$ there is a side $s_j$ of some $P_l$ with $s_j \ne s_i$ and $s_j = s_i + \vec{v}_i$ for some nonzero vector $\vec{v}_i$ (and so that $\vec{v}_j = -\vec{v}_i$). Consider the space obtained by identifying all $s_i$ with their corresponding $s_j$ through the map $x \mapsto x + \vec{v}_i$.
The canonical way to construct such a surface is as follows: start with vectors $\vec{v}_1, \ldots, \vec{v}_n$ and a permutation $\sigma$ on $\{1, \ldots, n\}$, and form the broken lines $\vec{v}_1, \ldots, \vec{v}_n$ and $\vec{v}_{\sigma(1)}, \ldots, \vec{v}_{\sigma(n)}$ starting at an arbitrarily chosen point. In the case where these two lines form a polygon (i.e. they do not intersect outside of their endpoints) there is a natural side-pairing.
The quotient space is a closed surface. It has a flat metric outside the set $\Sigma$ of images of the vertices. At a point of $\Sigma$ the sum of the angles of the polygons around the vertices which map to it is a positive multiple of $2\pi$, and the metric is singular unless the angle is exactly $2\pi$.
Analytic definition
Let $X$ be a translation surface as defined above and $\Sigma$ the set of singular points. Identifying the Euclidean plane with the complex plane $\mathbb{C}$ one gets coordinate charts on $X \setminus \Sigma$ with values in $\mathbb{C}$. Moreover, the changes of charts are holomorphic maps, more precisely maps of the form $z \mapsto z + w$ for some $w \in \mathbb{C}$. This gives $X \setminus \Sigma$ the structure of a Riemann surface, which extends to the entire surface $X$ by Riemann's theorem on removable singularities. In addition, the differential $\mathrm{d}z$, where $z$ is any chart defined above, does not depend on the chart. Thus these differentials defined on chart domains glue together to give a well-defined holomorphic 1-form $\omega$ on $X$. The vertices of the polygon where the cone angles are not equal to $2\pi$ are zeroes of $\omega$ (a cone angle of $2k\pi$ corresponds to a zero of order $k-1$).
In the other direction, given a pair $(X, \omega)$ where $X$ is a compact Riemann surface and $\omega$ a holomorphic 1-form one can construct a polygon by using the complex numbers $\int_{\gamma_j} \omega$ where $\gamma_j$ are disjoint paths between the zeroes of $\omega$ which form an integral basis for the relative homology.
Examples
The simplest example of a translation surface is obtained by gluing the opposite sides of a parallelogram. It is a flat torus with no singularities.
If $P$ is a regular $4g$-gon then the translation surface obtained by gluing opposite sides is of genus $g$ with a single singular point, with angle $(4g-2)\pi$.
If $P$ is obtained by putting side to side a collection of copies of the unit square then any translation surface obtained from $P$ is called a square-tiled surface. The map from the surface to the flat torus obtained by identifying all squares is a branched covering with branch points the singularities (the cone angle at a singularity is proportional to the degree of branching).
Riemann–Roch and Gauss–Bonnet
Suppose that the surface $X$ is a closed Riemann surface of genus $g$ and that $\omega$ is a nonzero holomorphic 1-form on $X$, with zeroes of order $k_1, \ldots, k_m$. Then the Riemann–Roch theorem implies that

$$\sum_{j=1}^{m} k_j = 2g - 2.$$
If the translation surface is represented by a polygon then triangulating it and summing angles over all vertices allows one to recover the formula above (using the relation between cone angles and orders of zeroes), in the same manner as in the proof of the Gauss–Bonnet formula for hyperbolic surfaces or the proof of Euler's formula from Girard's theorem.
Translation surfaces as foliated surfaces
If $X$ is a translation surface there is a natural measured foliation on $X$. If it is obtained from a polygon it is just the image of the vertical lines, and the measure of an arc is just the Euclidean length of the horizontal segment homotopic to the arc. The foliation is also obtained by the level lines of the imaginary part of a (local) primitive for $\omega$ and the measure is obtained by integrating the real part.
Moduli spaces
Strata
Let $\mathcal{H}_g$ be the set of translation surfaces of genus $g$ (where two such are considered the same if there exists a holomorphic diffeomorphism $h \colon X \to X'$ such that $h^* \omega' = \omega$). Let $\mathcal{M}_g$ be the moduli space of Riemann surfaces of genus $g$; there is a natural map $\mathcal{H}_g \to \mathcal{M}_g$ mapping a translation surface to the underlying Riemann surface. This turns $\mathcal{H}_g$ into a locally trivial fiber bundle over the moduli space.
To a compact translation surface there is associated the data $(k_1, \ldots, k_m)$ where $k_i$ are the orders of the zeroes of $\omega$. If $\alpha = (k_1, \ldots, k_m)$ is any integer partition of $2g-2$ then the stratum $\mathcal{H}(\alpha)$ is the subset of $\mathcal{H}_g$ of translation surfaces which have a holomorphic form whose zeroes match the partition.
The stratum $\mathcal{H}(\alpha)$ is naturally a complex orbifold of complex dimension $2g + m - 1$ (note that the stratum in genus 1 is the moduli space of tori, which is well-known to be an orbifold; in higher genus, the failure to be a manifold is even more dramatic). Local coordinates are given by the period map

$$(X, \omega) \mapsto \left( \int_{\gamma_1} \omega, \ldots, \int_{\gamma_{2g+m-1}} \omega \right),$$

where $\Sigma$ is the set of zeroes of $\omega$ and the $\gamma_i$ form a basis of the relative homology $H_1(X, \Sigma; \mathbb{Z})$.
Masur-Veech volumes
The stratum $\mathcal{H}(\alpha)$ admits a $\mathbb{C}^*$-action and thus a real and a complex projectivization. The real projectivization admits a natural section if we define it as the space $\mathcal{H}^{(1)}(\alpha)$ of translation surfaces of area 1.
The existence of the above period coordinates allows one to endow the stratum with an integral affine structure and thus a natural volume form $\nu$. We also get a volume form $\nu_1$ on $\mathcal{H}^{(1)}(\alpha)$ by disintegration of $\nu$. The Masur-Veech volume is the total volume of $\mathcal{H}^{(1)}(\alpha)$ for $\nu_1$. This volume was proved to be finite independently by William A. Veech and Howard Masur.
In the 1990s Maxim Kontsevich and Anton Zorich evaluated these volumes numerically by counting the lattice points of $\mathcal{H}(\alpha)$ (the square-tiled surfaces). They observed that the volume should be of the form $\pi^{2g}$ times a rational number. From this observation they expected the existence of a formula expressing the volumes in terms of intersection numbers on moduli spaces of curves.
Alex Eskin and Andrei Okounkov gave the first algorithm to compute these volumes. They showed that the generating series of these numbers are q-expansions of computable quasi-modular forms. Using this algorithm they could confirm the numerical observation of Kontsevich and Zorich.
More recently Chen, Möller, Sauvaget, and Don Zagier showed that the volumes can be computed as intersection numbers on an algebraic compactification of the stratum. It remains an open problem to extend this formula to strata of half-translation surfaces.
The SL2(R)-action
If $X$ is a translation surface obtained by identifying the faces of a polygon $P$ and $g \in SL_2(\mathbb{R})$, then the translation surface $g \cdot X$ is the one associated to the polygon $g(P)$. This defines a continuous action of $SL_2(\mathbb{R})$ on the moduli space $\mathcal{H}_g$ which preserves the strata $\mathcal{H}(\alpha)$. This action descends to an action on $\mathcal{H}^{(1)}(\alpha)$ that is ergodic with respect to $\nu_1$.
Half-translation surfaces
Definitions
A half-translation surface is defined similarly to a translation surface but allowing the gluing maps to have a nontrivial linear part which is a half turn. Formally, a half-translation surface is defined geometrically by taking a collection of polygons in the Euclidean plane and identifying faces by maps of the form $z \mapsto \pm z + w$ (a "half-translation"). Note that a face can be identified with itself. The geometric structure obtained in this way is a flat metric outside of a finite number of singular points with cone angles positive multiples of $\pi$.
As in the case of translation surfaces there is an analytic interpretation: a half-translation surface can be interpreted as a pair $(X, q)$ where $X$ is a Riemann surface and $q$ a quadratic differential on $X$. To pass from the geometric picture to the analytic picture one simply takes the quadratic differential defined locally by $(\mathrm{d}z)^2$ (which is invariant under half-translations), and for the other direction one takes the Riemannian metric induced by $q$, which is smooth and flat outside of the zeros of $q$.
Relation with Teichmüller geometry
If $X$ is a Riemann surface then the vector space of quadratic differentials on $X$ is naturally identified with the tangent space to Teichmüller space at any point above $X$. This can be proven by analytic means using the Bers embedding. Half-translation surfaces can be used to give a more geometric interpretation of this: if $x, y$ are two points in Teichmüller space then by Teichmüller's mapping theorem there exist two polygons whose faces can be identified by half-translations to give flat surfaces with underlying Riemann surfaces isomorphic to $x$ and $y$ respectively, and an affine map of the plane sending one to the other which has the smallest distortion among the quasiconformal mappings in its isotopy class.
Everything is determined uniquely up to scaling if we ask that the affine map be of the form $\begin{pmatrix} e^t & 0 \\ 0 & e^{-t} \end{pmatrix}$ for some $t > 0$; we denote by $X_t$ the Riemann surface obtained from the correspondingly deformed polygon. Now the path $t \mapsto X_t$ in Teichmüller space joins $x$ to $y$, and differentiating it at $t = 0$ gives a vector in the tangent space; since $y$ was arbitrary we obtain a bijection.
In fact the paths used in this construction are Teichmüller geodesics. An interesting fact is that while the geodesic ray associated to a flat surface corresponds to a measured foliation, and thus the directions in tangent space are identified with the Thurston boundary, the Teichmüller geodesic ray associated to a flat surface does not always converge to the corresponding point on the boundary, though almost all such rays do so.
Veech surfaces
The Veech group
If $X$ is a translation surface its Veech group is the Fuchsian group which is the image in $PSL_2(\mathbb{R})$ of the subgroup $SL(X) \subset SL_2(\mathbb{R})$ of transformations $g$ such that $g \cdot X$ is isomorphic (as a translation surface) to $X$. Equivalently, $SL(X)$ is the group of derivatives of affine diffeomorphisms $X \to X$ (where affine is defined locally outside the singularities, with respect to the affine structure induced by the translation structure). Veech groups have the following properties:
They are discrete subgroups of $PSL_2(\mathbb{R})$;
They are never cocompact.
Veech groups can be either finitely generated or not.
Veech surfaces
A Veech surface is by definition a translation surface whose Veech group is a lattice in $PSL_2(\mathbb{R})$, equivalently its action on the hyperbolic plane admits a fundamental domain of finite volume. Since it is not cocompact it must then contain parabolic elements.
Examples of Veech surfaces are the square-tiled surfaces, whose Veech groups are commensurable to the modular group $PSL_2(\mathbb{Z})$. The square can be replaced by any parallelogram (the translation surfaces obtained are exactly those obtained as ramified covers of a flat torus). In fact the Veech group is arithmetic (which amounts to it being commensurable to the modular group) if and only if the surface is tiled by parallelograms.
There exist Veech surfaces whose Veech group is not arithmetic, for example the surface obtained from two regular pentagons glued along an edge: in this case the Veech group is a non-arithmetic Hecke triangle group. On the other hand, there are still some arithmetic constraints on the Veech group of a Veech surface: for example its trace field is a number field that is totally real.
Geodesic flow on translation surfaces
Geodesics
A geodesic in a translation surface (or a half-translation surface) is a parametrised curve which is, outside of the singular points, locally the image of a straight line in Euclidean space parametrised by arclength. If a geodesic arrives at a singularity it is required to stop there. Thus a maximal geodesic is a curve defined on a closed interval, which is the whole real line if it does not meet any singular point. A geodesic is closed or periodic if its image is compact, in which case it is either a circle if it does not meet any singularity, or an arc between two (possibly equal) singularities. In the latter case the geodesic is called a saddle connection.
If $\theta \in [0, 2\pi)$ (or $\theta \in [0, \pi)$ in the case of a half-translation surface) then the geodesics with direction $\theta$ are well-defined on $X$: they are those curves $c$ which satisfy $\omega(c') = e^{i\theta}$ (or $q(c') = e^{2i\theta}$ in the case of a half-translation surface). The geodesic flow on $X$ with direction $\theta$ is the flow $\phi_t$ on $X$ where $t \mapsto \phi_t(p)$ is the geodesic starting at $p$ with direction $\theta$ if $p$ is not singular.
Dynamical properties
On a flat torus the geodesic flow in a given direction has the property that it is either periodic or ergodic. In general this is not true: there may be directions in which the flow is minimal (meaning every orbit is dense in the surface) but not ergodic. On the other hand, a compact translation surface retains from the simplest case of the flat torus the property that the flow is ergodic in almost every direction.
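Since the flat torus anchors this dichotomy, a minimal numerical sketch can make it concrete; all names below are our own, and the grid count is only a crude proxy for density of the orbit.

```python
# Directional flow on the flat torus R^2/Z^2: straight-line motion mod 1.
# A rational slope gives a closed geodesic (few grid cells visited);
# an irrational slope gives a dense orbit (nearly all cells visited).
import math

def visited_cells(slope, n=20, steps=200000, dt=0.013):
    x = y = 0.0
    cells = set()
    for _ in range(steps):
        x = (x + dt) % 1.0
        y = (y + slope * dt) % 1.0
        cells.add((int(x * n), int(y * n)))
    return len(cells)

print(visited_cells(0.5))           # ~60 of 400 cells: closed orbit
print(visited_cells(math.sqrt(2)))  # ~400 of 400 cells: dense orbit
```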
Another natural question is to establish asymptotic estimates for the number of closed geodesics or saddle connections of a given length. On a flat torus there are no saddle connections and the number of closed geodesics of length at most $L$ grows like a constant times $L^2$. In general one can only obtain bounds: if $S$ is a compact translation surface of genus $g$ then there exist constants $c_1, c_2$ (depending only on the genus) such that both the number $N_{cg}(L)$ of closed geodesics and the number $N_{sc}(L)$ of saddle connections of length at most $L$ satisfy

$$c_1 L^2 \le N(L) \le c_2 L^2.$$
Restricting to almost every surface in a given stratum, it is possible to get better estimates: given a genus $g$, a partition $\alpha$ of $2g-2$ and a connected component $\mathcal{C}$ of the stratum $\mathcal{H}(\alpha)$ there exist constants $c_{sc}, c_{cg}$ such that for almost every $S \in \mathcal{C}$ the asymptotic equivalents hold:

$$N_{sc}(L) \sim c_{sc} \frac{L^2}{\mathrm{area}(S)}, \qquad N_{cg}(L) \sim c_{cg} \frac{L^2}{\mathrm{area}(S)}.$$
The constants $c_{sc}, c_{cg}$ are called Siegel–Veech constants. Using the ergodicity of the $SL_2(\mathbb{R})$-action on $\mathcal{H}^{(1)}(\alpha)$, it was shown that these constants can be explicitly computed as ratios of certain Masur-Veech volumes.
Veech dichotomy
The geodesic flow on a Veech surface is much better behaved than in general. This is expressed via the following result, called the Veech dichotomy:
Let $S$ be a Veech surface and $\theta$ a direction. Then either all trajectories in direction $\theta$ are periodic or the flow in direction $\theta$ is ergodic.
Relation with billiards
If $P$ is a polygon in the Euclidean plane and $\theta$ a direction there is a continuous dynamical system called a billiard. The trajectory of a point inside the polygon is defined as follows: as long as it does not touch the boundary it proceeds in a straight line at unit speed; when it touches the interior of an edge it bounces back (i.e. its direction is reflected in the edge), and when it touches a vertex it stops.
This dynamical system is equivalent to the geodesic flow on a flat surface: just double the polygon along the edges and put a flat metric everywhere but at the vertices, which become singular points with cone angle twice the angle of the polygon at the corresponding vertex. This surface is not a translation surface or a half-translation surface, but in some cases it is related to one. Namely, if all angles of the polygon $P$ are rational multiples of $\pi$ there is a ramified cover of this surface which is a translation surface, which can be constructed from a union of copies of $P$. The dynamics of the billiard flow can then be studied through the geodesic flow on the translation surface.
For example, the billiard in a square is related in this way to the billiard on the flat torus constructed from four copies of the square; the billiard in an equilateral triangle gives rise to the flat torus constructed from a hexagon. The billiard in an "L" shape constructed from squares is related to the geodesic flow on a square-tiled surface; the billiard in the triangle with angles $\pi/5, \pi/5, 3\pi/5$ is related to the Veech surface constructed from the two regular pentagons described above.
Relation with interval exchange transformations
Let $S$ be a translation surface and $\theta$ a direction, and let $\phi_t$ be the geodesic flow on $S$ with direction $\theta$. Let $I$ be a geodesic segment in the direction orthogonal to $\theta$, and define the first recurrence, or Poincaré, map $T \colon I \to I$ as follows: $T(p)$ is equal to $\phi_t(p)$ where $t = \min\{ s > 0 : \phi_s(p) \in I \}$. Then this map is an interval exchange transformation and it can be used to study the dynamics of the geodesic flow.
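As an illustration, here is a hedged sketch of an interval exchange transformation; the function and the two-interval example (which is just a circle rotation) are our own choices, not notation from the text.

```python
# An interval exchange transformation (IET): cut [0,1) into pieces of the
# given lengths, then reorder the pieces by the permutation. First-return
# maps of translation-surface geodesic flows have exactly this form.
def make_iet(lengths, perm):
    """perm[i] = position of piece i after the exchange."""
    starts = [sum(lengths[:i]) for i in range(len(lengths))]
    order = sorted(range(len(lengths)), key=lambda i: perm[i])
    new_starts, pos = {}, 0.0
    for i in order:
        new_starts[i] = pos
        pos += lengths[i]
    def T(x):
        for i in reversed(range(len(lengths))):
            if x >= starts[i]:
                return new_starts[i] + (x - starts[i])
        raise ValueError("x outside [0,1)")
    return T

# Two intervals swapped = rotation of the circle by 1 - a.
a = 0.381966            # irrational-ish rotation number, for illustration
T = make_iet([a, 1 - a], perm=[1, 0])
x = 0.0
for _ in range(5):
    x = T(x)
    print(round(x, 6))
```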
Notes
References
Surfaces
Dynamical systems | Translation surface | [
"Physics",
"Mathematics"
] | 3,389 | [
"Mechanics",
"Dynamical systems"
] |
33,424,683 | https://en.wikipedia.org/wiki/Lipotoxicity | Lipotoxicity is a metabolic syndrome that results from the accumulation of lipid intermediates in non-adipose tissue, leading to cellular dysfunction and death. The tissues normally affected include the kidneys, liver, heart and skeletal muscle. Lipotoxicity is believed to have a role in heart failure, obesity, and diabetes, and is estimated to affect approximately 25% of the adult American population.
Cause
In normal cellular operations, there is a balance between the production of lipids and their oxidation or transport. In lipotoxic cells, there is an imbalance between the amount of lipids produced and the amount used. Upon entering the cell, fatty acids can be converted to different types of lipids for storage. Triacylglycerol consists of three fatty acids bound to a glycerol molecule and is considered the most neutral and harmless type of intracellular lipid storage. Alternatively, fatty acids can be converted to lipid intermediates like diacylglycerol, ceramides and fatty acyl-CoAs. These lipid intermediates can impair cellular function, which is referred to as lipotoxicity.
Adipocytes, the cells that normally function as the lipid store of the body, are well equipped to handle excess lipids. Yet too great an excess will overburden these cells and cause a spillover into non-adipose cells, which do not have the necessary storage space. When the storage capacity of non-adipose cells is exceeded, cellular dysfunction and/or death result. The mechanism by which lipotoxicity causes death and dysfunction is not well understood. The cause of apoptosis and the extent of cellular dysfunction are related to the type of cell affected, as well as the type and quantity of excess lipids. A theory has been put forward by Cambridge researchers relating the development of lipotoxicity to the perturbation of membrane glycerophospholipid/sphingolipid homeostasis and their associated signalling events.
Currently, there is no universally accepted theory for why certain individuals are afflicted with lipotoxicity. Research is ongoing into a genetic cause, but no individual gene has been named as the causative agent. The causative role of obesity in lipotoxicity is controversial. Some researchers claim that obesity has protective effects against lipotoxicity as it results in extra adipose tissue in which excess lipids can be stored. Others claim obesity is a risk factor for lipotoxicity. Both sides accept that high fat diets put patients at increased risk for lipotoxic cells. Individuals with high numbers of lipotoxic cells usually experience both leptin and insulin resistance. However, no causative mechanism has been found for this correlation.
Effects in different organs
Kidneys
Renal lipotoxicity occurs when excess long-chain nonesterified fatty acids are stored in the kidney and proximal tubule cells. It is believed that these fatty acids are delivered to the kidneys via serum albumin. This condition leads to tubulointerstitial inflammation and fibrosis in mild cases, and to kidney failure and death in severe cases. The current accepted treatments for lipotoxicity in renal cells are fibrate therapy and intensive insulin therapy.
Liver
An excess of free fatty acids in liver cells plays a role in Nonalcoholic Fatty Liver Disease (NAFLD). In the liver, it is the type of fatty acid, not the quantity, that determines the extent of the lipotoxic effects. In hepatocytes, the ratio of monounsaturated to saturated fatty acids determines the degree of apoptosis and liver damage. There are several potential mechanisms by which the excess fatty acids can cause cell death and damage. They may activate death receptors, stimulate apoptotic pathways, or initiate the cellular stress response in the endoplasmic reticulum. These lipotoxic effects have been shown to be prevented by the presence of excess triglycerides within the hepatocytes.
Heart
Lipotoxicity in cardiac tissue is attributed to excess saturated fatty acids. The apoptosis that follows is believed to be caused by the unfolded protein response in the endoplasmic reticulum. Researchers are working on treatments that will increase the oxidation of these fatty acids within the heart in order to prevent the lipotoxic effects.
Pancreas
Lipotoxicity affects the pancreas when excess free fatty acids are found in beta cells, causing their dysfunction and death. The effects of lipotoxicity are treated with leptin therapy and insulin sensitizers.
Skeletal muscle
The skeletal muscle accounts for more than 80 percent of the postprandial whole body glucose uptake and therefore plays an important role in glucose homeostasis. Skeletal muscle lipid levels – intramyocellular lipids (IMCL) – correlate negatively with insulin sensitivity in a sedentary population and hence were considered predictive for insulin resistance and causative in obesity-associated insulin resistance. However, endurance athletes also have high IMCL levels despite being highly insulin sensitive, which indicates that not the level of IMCL accumulation per se, but rather the characteristics of this intramyocellular fat determine whether it negatively affects insulin signaling. Intramyocellular lipids are mainly stored in lipid droplets, the organelles for fat storage. Recent research indicates that creating intramyocellular neutral lipid storage capacity for example by increasing the abundance of lipid droplet coat proteins protects against obesity-associated insulin resistance in skeletal muscle.
Prevention and treatment
The methods to prevent and treat lipotoxicity are divided into three main groups.
The first strategy focuses on decreasing the lipid content of non-adipose tissues. This can be accomplished by either increasing the oxidation of the lipids, or increasing their secretion and transport. Current treatments involve extreme weight loss and leptin treatment.
Another strategy is focusing on diverting excess lipids away from non-adipose tissues, and towards adipose tissues. This is accomplished with thiazolidinediones, a group of medications that activate nuclear receptor proteins responsible for lipid metabolism.
The final strategy focuses on inhibiting the apoptotic pathways and signaling cascades. This is accomplished by using drugs that inhibit production of specific chemicals required for the pathways to be functional. While this may prove to be the most effective protection against cell death, it will also require the most research and development due to the specificity required of the medications.
Lipoexpediency
Lipoexpediency refers to the beneficial effects of lipids in a cell or a tissue, primarily lipid-mediated signal transmission events, that may occur even in the setting of excess fatty acids. The term was coined as an antonym to lipotoxicity.
References
Diabetes
Metabolism
Lipids | Lipotoxicity | [
"Chemistry",
"Biology"
] | 1,406 | [
"Biomolecules by chemical classification",
"Lipids",
"Organic compounds",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
31,847,498 | https://en.wikipedia.org/wiki/Lindstr%C3%B6m%E2%80%93Gessel%E2%80%93Viennot%20lemma | In mathematics, the Lindström–Gessel–Viennot lemma provides a way to count the number of tuples of non-intersecting lattice paths, or, more generally, paths on a directed graph. It was proved by Gessel–Viennot in 1985, based on previous work of Lindström published in 1973. The lemma is named after Bernt Lindström, Ira Gessel and Gérard Viennot.
Statement
Let G be a locally finite directed acyclic graph. This means that each vertex has finite degree, and that G contains no directed cycles. Consider base vertices $a_1, \ldots, a_n$ and destination vertices $b_1, \ldots, b_n$, and also assign a weight $\omega_e$ to each directed edge e. These edge weights are assumed to belong to some commutative ring. For each directed path P between two vertices, let $\omega(P)$ be the product of the weights of the edges of the path. For any two vertices a and b, write e(a,b) for the sum $e(a,b) = \sum_{P \colon a \to b} \omega(P)$ over all paths from a to b. This is well-defined if between any two points there are only finitely many paths; but even in the general case, this can be well-defined under some circumstances (such as all edge weights being pairwise distinct formal indeterminates, and $e(a,b)$ being regarded as a formal power series). If one assigns the weight 1 to each edge, then e(a,b) counts the number of paths from a to b.
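For intuition, on a finite acyclic graph the quantities e(a,b) can be computed by a single dynamic-programming pass in topological order; the sketch below is our own construction (the toy graph and names are illustrative), using Python's standard graphlib.

```python
# Computing e(a, b): sum over all directed paths of the product of edge
# weights. With unit weights this is a plain path count.
from collections import defaultdict
from graphlib import TopologicalSorter

edges = {('a', 'u'): 1, ('a', 'v'): 1,      # toy DAG (assumption)
         ('u', 'v'): 1, ('u', 'b'): 1, ('v', 'b'): 1}

def e(src, dst):
    succ, preds = defaultdict(list), defaultdict(set)
    for (x, y), w in edges.items():
        succ[x].append((y, w))
        preds[y].add(x)
    total = defaultdict(int)
    total[src] = 1                            # the empty path at src
    for node in TopologicalSorter(preds).static_order():
        for nxt, w in succ[node]:             # push finished totals forward
            total[nxt] += total[node] * w
    return total[dst]

print(e('a', 'b'))   # 3: the paths a-u-b, a-v-b, a-u-v-b
```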
With this setup, write

$$M = \begin{pmatrix} e(a_1, b_1) & e(a_1, b_2) & \cdots & e(a_1, b_n) \\ e(a_2, b_1) & e(a_2, b_2) & \cdots & e(a_2, b_n) \\ \vdots & \vdots & \ddots & \vdots \\ e(a_n, b_1) & e(a_n, b_2) & \cdots & e(a_n, b_n) \end{pmatrix}.$$
An n-tuple of non-intersecting paths from A to B means an n-tuple (P1, ..., Pn) of paths in G with the following properties:
There exists a permutation $\sigma$ of $\{1, \ldots, n\}$ such that, for every $i$, the path $P_i$ is a path from $a_i$ to $b_{\sigma(i)}$.
Whenever $i \ne j$, the paths $P_i$ and $P_j$ have no two vertices in common (not even endpoints).
Given such an n-tuple (P1, ..., Pn), we denote by $\sigma(P)$ the permutation of $\{1, \ldots, n\}$ from the first condition.
The Lindström–Gessel–Viennot lemma then states that the determinant of M is the signed sum over all n-tuples P = (P1, ..., Pn) of non-intersecting paths from A to B:

$$\det(M) = \sum_{(P_1, \ldots, P_n) \colon A \to B} \operatorname{sign}(\sigma(P)) \prod_{i=1}^{n} \omega(P_i).$$
That is, the determinant of M counts the weights of all n-tuples of non-intersecting paths starting at A and ending at B, each affected with the sign of the corresponding permutation $\sigma(P)$, given by taking $a_i$ to $b_{\sigma(i)}$.
In particular, if the only permutation possible is the identity (i.e., every n-tuple of non-intersecting paths from A to B takes $a_i$ to $b_i$ for each i) and we take the weights to be 1, then det(M) is exactly the number of non-intersecting n-tuples of paths starting at A and ending at B.
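The lemma is easy to sanity-check numerically in the classic setting of right/up lattice paths with unit weights, where e(a,b) is a binomial coefficient; the endpoints below are our own illustrative choice, arranged so that only the identity pairing admits non-intersecting tuples.

```python
# det M = number of non-intersecting pairs of right/up lattice paths.
from math import comb

def e(a, b):
    """Number of right/up paths from a to b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    return comb(dx + dy, dx) if dx >= 0 and dy >= 0 else 0

A = [(0, 1), (1, 0)]          # starts
B = [(2, 3), (3, 2)]          # ends; the crossed pairing forces intersections
M = [[e(a, b) for b in B] for a in A]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(M, "det =", det)        # [[6, 4], [4, 6]] det = 20
```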
Proof
To prove the Lindström–Gessel–Viennot lemma, we first introduce some notation.
An n-path from an n-tuple $(a_1, \ldots, a_n)$ of vertices of G to an n-tuple $(b_1, \ldots, b_n)$ of vertices of G will mean an n-tuple $(P_1, \ldots, P_n)$ of paths in G, with each $P_i$ leading from $a_i$ to $b_i$. This n-path will be called non-intersecting just in case the paths Pi and Pj have no two vertices in common (including endpoints) whenever $i \ne j$. Otherwise, it will be called entangled.
Given an n-path $P = (P_1, \ldots, P_n)$, the weight of this n-path is defined as the product $\omega(P) = \omega(P_1) \omega(P_2) \cdots \omega(P_n)$.
A twisted n-path from an n-tuple $(a_1, \ldots, a_n)$ of vertices of G to an n-tuple $(b_1, \ldots, b_n)$ of vertices of G will mean an n-path from $(a_1, \ldots, a_n)$ to $\left(b_{\sigma(1)}, \ldots, b_{\sigma(n)}\right)$ for some permutation $\sigma$ in the symmetric group $S_n$. This permutation $\sigma$ will be called the twist of this twisted n-path, and denoted by $\sigma(P)$ (where P is the n-path). This, of course, generalises the notation introduced before.
Recalling the definition of M, we can expand det M as a signed sum of permutations; thus we obtain

$$\det M = \sum_{\sigma \in S_n} \operatorname{sign}(\sigma) \prod_{i=1}^{n} e\left(a_i, b_{\sigma(i)}\right) = \sum_{P \text{ twisted } n\text{-path from } A \text{ to } B} \operatorname{sign}(\sigma(P))\, \omega(P).$$
It remains to show that the sum of $\operatorname{sign}(\sigma(P))\, \omega(P)$ over all entangled twisted n-paths vanishes. Let $\mathcal{E}$ denote the set of entangled twisted n-paths. To establish this, we shall construct an involution $f \colon \mathcal{E} \to \mathcal{E}$ with the properties $\omega(f(P)) = \omega(P)$ and $\operatorname{sign}(\sigma(f(P))) = -\operatorname{sign}(\sigma(P))$ for all $P \in \mathcal{E}$. Given such an involution, the rest-term

$$\sum_{P \in \mathcal{E}} \operatorname{sign}(\sigma(P))\, \omega(P)$$

in the above sum reduces to 0, since its addends cancel each other out (namely, the addend corresponding to each $P \in \mathcal{E}$ cancels the addend corresponding to $f(P)$).
Construction of the involution: The idea behind the definition of the involution $f$ is to choose two intersecting paths within an entangled n-path, and switch their tails after their point of intersection. There are in general several pairs of intersecting paths, which can also intersect several times; hence, a careful choice needs to be made. Let $P = (P_1, \ldots, P_n)$ be any entangled twisted n-path. Then $f(P)$ is defined as follows. We call a vertex crowded if it belongs to at least two of the paths $P_1, \ldots, P_n$. The fact that the graph is acyclic implies that this is equivalent to "appearing at least twice in all the paths". Since P is entangled, there is at least one crowded vertex. We pick the smallest $i$ such that $P_i$ contains a crowded vertex. Then, we pick the first crowded vertex v on $P_i$ ("first" in sense of "encountered first when travelling along $P_i$"), and we pick the largest j such that v belongs to $P_j$. The crowdedness of v implies j > i. Write the two paths $P_i$ and $P_j$ as
$$P_i = \left(a_i = u_0 \to u_1 \to \cdots \to u_r = b_{\sigma(i)}\right) \quad \text{and} \quad P_j = \left(a_j = w_0 \to w_1 \to \cdots \to w_s = b_{\sigma(j)}\right),$$

where $\sigma = \sigma(P)$, and where $p$ and $q$ are chosen such that $v$ is the $p$-th vertex along $P_i$ and the $q$-th vertex along $P_j$ (that is, $u_p = w_q = v$). We set

$$P_i' = \left(u_0 \to \cdots \to u_p \to w_{q+1} \to \cdots \to w_s\right) \quad \text{and} \quad P_j' = \left(w_0 \to \cdots \to w_q \to u_{p+1} \to \cdots \to u_r\right).$$

Now define the twisted n-path $f(P)$ to coincide with P except for the components $P_i$ and $P_j$, which are replaced by
$P_i'$ and $P_j'$. It is immediately clear that $f(P)$ is an entangled twisted n-path. Going through the steps of the construction, it is easy to see that the index i and the crowded vertex v chosen for $f(P)$ are the same as those chosen for P, so that applying $f$ again to $f(P)$ involves swapping back the tails of $P_i'$ and $P_j'$, leaving the other components intact. Hence $f(f(P)) = P$. Thus $f$ is an involution. It remains to demonstrate the desired antisymmetry properties:
From the construction one can see that $\sigma(f(P))$ coincides with $\sigma(P)$ except that it swaps the values at i and j, thus yielding $\operatorname{sign}(\sigma(f(P))) = -\operatorname{sign}(\sigma(P))$. To show that $\omega(f(P)) = \omega(P)$ we first compute, appealing to the tail-swap,

$$\omega(P_i')\, \omega(P_j') = \omega(P_i)\, \omega(P_j),$$

since both sides are products of the weights of the same set of edges. Hence $\omega(f(P)) = \omega(P)$.
Thus we have found an involution with the desired properties and completed the proof of the Lindström–Gessel–Viennot lemma.
Remark. Arguments similar to the one above appear in several sources, with variations regarding the choice of which tails to switch. A version with j smallest (unequal to i) rather than largest appears in the Gessel-Viennot 1989 reference (proof of Theorem 1).
Applications
Schur polynomials
The Lindström–Gessel–Viennot lemma can be used to prove the equivalence of the following two different definitions of Schur polynomials. Given a partition $\lambda = (\lambda_1, \ldots, \lambda_r)$ of n, the Schur polynomial $s_\lambda(x_1, \ldots, x_n)$ can be defined as:

$$s_\lambda(x_1, \ldots, x_n) = \sum_T w(T),$$
where the sum is over all semistandard Young tableaux T of shape λ, and the weight $w(T)$ of a tableau T is defined as the monomial obtained by taking the product of the $x_i$ indexed by the entries i of T.

The Schur polynomial can also be defined as the determinant

$$s_\lambda(x_1, \ldots, x_n) = \det\left( h_{\lambda_i + j - i} \right)_{1 \le i, j \le r},$$

where $h_i$ are the complete homogeneous symmetric polynomials (with $h_i$ understood to be 0 if i is negative). For instance, for the partition (3,2,2,1), the corresponding determinant is

$$s_{(3,2,2,1)} = \begin{vmatrix} h_3 & h_4 & h_5 & h_6 \\ h_1 & h_2 & h_3 & h_4 \\ h_0 & h_1 & h_2 & h_3 \\ 0 & 0 & h_0 & h_1 \end{vmatrix}.$$
To prove the equivalence, given any partition λ as above, one considers the r starting points $a_1, \ldots, a_r$ and the r ending points $b_1, \ldots, b_r$, as points in the lattice $\mathbb{Z}^2$, which acquires the structure of a directed graph by asserting that the only allowed directions are going one to the right or one up; the weight associated to any horizontal edge at height i is $x_i$, and the weight associated to a vertical edge is 1. With this definition, r-tuples of paths from A to B are exactly semistandard Young tableaux of shape λ, and the weight of such an r-tuple is the corresponding summand in the first definition of the Schur polynomials.
On the other hand, the matrix M is exactly the matrix of the determinant written above. This shows the required equivalence. (See also §4.5 in Sagan's book, or the First Proof of Theorem 7.16.1 in Stanley's EC2, or §3.3 in Fulmek's arXiv preprint, or §9.13 in Martin's lecture notes, for slight variations on this argument.)
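The equivalence can be checked by machine for small cases; this hedged SymPy sketch computes $s_{(2,1)}$ in three variables from the determinant definition, and the expanded result agrees with the sum over the eight semistandard tableaux of shape (2,1).

```python
# Jacobi–Trudi: s_lambda = det( h_{lambda_i - i + j} ), here lambda = (2,1).
from itertools import combinations_with_replacement
from sympy import symbols, Matrix, Mul, expand

xs = symbols('x1 x2 x3')

def h(k):
    """Complete homogeneous symmetric polynomial h_k in three variables."""
    if k < 0:
        return 0
    if k == 0:
        return 1
    return sum(Mul(*c) for c in combinations_with_replacement(xs, k))

lam = (2, 1)
n = len(lam)
M = Matrix(n, n, lambda i, j: h(lam[i] - i + j))   # 0-based Jacobi–Trudi matrix
print(expand(M.det()))
# x1**2*x2 + x1**2*x3 + x1*x2**2 + 2*x1*x2*x3 + x1*x3**2 + x2**2*x3 + x2*x3**2
```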
The Cauchy–Binet formula
One can also use the Lindström–Gessel–Viennot lemma to prove the Cauchy–Binet formula, and in particular the multiplicativity of the determinant.
Generalizations
Talaska's formula
The acyclicity of G is an essential assumption in the Lindström–Gessel–Viennot lemma; it guarantees (in reasonable situations) that the sums $e(a,b)$ are well-defined, and it enters the proof (if G is not acyclic, then f might transform a self-intersection of a path into an intersection of two distinct paths, which breaks the argument that f is an involution). Nevertheless, Kelli Talaska's 2012 paper establishes a formula generalizing the lemma to arbitrary digraphs. The sums $e(a,b)$ are replaced by formal power series, and the sum over nonintersecting path tuples now becomes a sum over collections of nonintersecting and non-self-intersecting paths and cycles, divided by a sum over collections of nonintersecting cycles. The reader is referred to Talaska's paper for details.
See also
Matrix tree theorem
References
Lemmas
Combinatorics
Theorems in combinatorics | Lindström–Gessel–Viennot lemma | [
"Mathematics"
] | 2,040 | [
"Mathematical theorems",
"Theorems in combinatorics",
"Discrete mathematics",
"Combinatorics",
"Theorems in discrete mathematics",
"Mathematical problems",
"Lemmas"
] |
31,849,318 | https://en.wikipedia.org/wiki/Operation%20Groundhog | Operation Groundhog was reportedly a joint US/Kazakh/Russian program to secure radioactive residues of Soviet-era nuclear bomb tests. In 2003, reports appeared in Science magazine that the program included paving some areas with thick layers of reinforced concrete to cover plutonium contaminating the ground, in order to prevent terrorists from acquiring contaminated material for making a dirty bomb.
See also
Cactus Dome
References
Radioactive waste | Operation Groundhog | [
"Physics",
"Chemistry",
"Technology"
] | 80 | [
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Hazardous waste",
"Radioactivity",
"Nuclear physics",
"Environmental impact of nuclear power",
"Radioactive waste"
] |
31,859,217 | https://en.wikipedia.org/wiki/Conolidine | Conolidine is an indole alkaloid. Preliminary reports suggest that it could provide analgesic effects with few of the detrimental side-effects associated with opioids such as morphine, though at present it has only been evaluated in mouse models.
Conolidine was first isolated in 2004 from the bark of the Tabernaemontana divaricata (crape jasmine) shrub which is used in traditional Chinese medicine.
The first asymmetric total synthesis of conolidine was developed by Micalizio and coworkers in 2011. This synthetic route allows access to either enantiomer (mirror image) of conolidine via an early enzymatic resolution. Notably, evaluation of the synthetic material resulted in the discovery that both enantiomers of the synthetic compound show analgesic effects.
Syntheses
The Micalizio route (2011) achieved the end product in 9 steps from a commercially available acetyl-pyridine. Notable reactions include a [2,3]-Still-Wittig rearrangement and a conformationally-controlled intramolecular Mannich cyclization.
The Weinreb group (2014) used a conjugate addition of an indole precursor to an oxime-substituted nitrosoalkene to generate the tetracyclic skeleton of conolidine in 4 steps.
Takayama and colleagues (2016) synthesized conolidine and apparicine through a gold(I)-catalyzed exo-dig cyclization of a racemic piperidinyl aldehyde.
Ohno and Fujii (2016) accessed the tricyclic pre-Mannich intermediate through a chiral gold(I) catalyzed cascade cyclization.
In 2019, a six-step synthesis was developed using a gold-catalyzed cyclization and a Pictet–Spengler reaction, with a 19% overall yield.
Pharmacology
In 2011, the Bohn lab noted antinociception against both chemically induced and inflammation-derived pain, and experiments indicated lack of opioid receptor modulation, but were unable to define a particular target. A 2019 study by a cross-site Australian and U.S. group discovered through cultured neuronal networks that conolidine may inhibit the Cav2.2 channel, a mechanism seen in molecules like conotoxin. The group was unable to rule out partial polypharmacology against other targets.
Conolidine has been discovered to bind to the novel opioid receptor ACKR3 (CXCR7). By occupying this receptor, conolidine prevents it from trapping endogenous opioid peptides (such as endorphins and enkephalins), thus increasing the availability of those peptides at their target sites.
Derivatives
DS54360155, a novel compound with a unique and original bicyclic skeleton, is a more potent analgesic than conolidine in mice. DS39201083 and DS34942424 are other similar derivatives. They all lack mu-opioid activity. The researchers who found the conolidine binding site ACKR3/CXCR7 also developed a synthetic analogue of it called RTI-5152-12, which displays an even greater activity on that receptor.
See also
Lochnericine
Pericine
Stemmadenine
Conofoline
LIH383
RTI-5152-12
References
Alkaloids found in Apocynaceae
Analgesics
Indole alkaloids
Opioid modulators | Conolidine | [
"Chemistry"
] | 725 | [
"Alkaloids by chemical classification",
"Indole alkaloids"
] |
44,910,360 | https://en.wikipedia.org/wiki/Dasolampanel | Dasolampanel (INN, USAN, code name NGX-426) is an orally bioavailable analog of tezampanel and thereby a competitive antagonist of the AMPA and kainate receptors which was under development by Raptor Pharmaceuticals/Torrey Pines Therapeutics for the treatment of chronic pain conditions including neuropathic pain and migraine. It was developed as a follow-on compound to tezampanel, as tezampanel is not orally bioavailable and must be administered by intravenous injection, but ultimately neither drug was ever marketed.
See also
Irampanel
Selurampanel
References
AMPA receptor antagonists
Analgesics
Antimigraine drugs
Carboxylic acids
Decahydroisoquinolines
Kainate receptor antagonists
Chlorobenzene derivatives
Prodrugs
Tetrazoles | Dasolampanel | [
"Chemistry"
] | 180 | [
"Chemicals in medicine",
"Carboxylic acids",
"Functional groups",
"Prodrugs"
] |
44,912,903 | https://en.wikipedia.org/wiki/C18H19N3O2 | The molecular formula C18H19N3O2 (molar mass: 309.36 g/mol, exact mass: 309.1477 u) may refer to:
CGS-20625
Irampanel
Nerisopam
Molecular formulas | C18H19N3O2 | [
"Physics",
"Chemistry"
] | 70 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
44,913,360 | https://en.wikipedia.org/wiki/Toyota%20Active%20Control%20Suspension | Toyota Active Control Suspension was (according to Toyota) the world's first fully active suspension.
Two versions of Toyota's Active Control Suspension system went into production - the first was a very limited production run from 1990 to 1991 of 300 units of the ST183 Celica, called the Active Sports. This was the first production car in the world to utilise an active suspension system. The suspension employed conventional coil-spring struts and 4-wheel steering. No anti-roll (stabiliser) bars were fitted as the strut damping was actively controlled by a combined power steering/suspension fluid pump and valve body that counteracted roll and pitch forces. This system of controlling damping force while utilising conventional springs was largely achieved with the much simpler Toyota Electronic Modulated Suspension system (TEMS).
The second version of the Active Suspension system came with the UZZ32 Soarer produced between 1991 and 1996. It was a complex, computer-controlled system that removed both conventional springs and anti-roll (stabiliser) bars in favour of fully hydropneumatic struts controlled by an array of sensors (such as axis accelerometers, suspension height and wheel speed) that detected cornering, acceleration and braking forces. The system worked well and gave an unusually controlled yet smooth ride with no body roll. However, the additional weight and power requirements of the system affected straight-line performance somewhat.
Due to the complexity and cost of the UZZ32 Soarer, only 873 were produced.
Mercedes-Benz introduced a very similar active suspension, called Active Body Control, on the Mercedes-Benz CL-Class in 1999.
Vehicles
Toyota Celica GT-R based "Active Sports" (ST183) 1990-1991
Toyota Soarer (UZZ32) 1991–1996
Toyota Curren (ST207) 1994-1995 XS Touring Selection
See also
Active Body Control
Toyota Electronic Modulated Suspension
References
Toyota
Automotive suspension technologies
Automotive technology tradenames
Automotive safety technologies
Auto parts
Mechanical power control | Toyota Active Control Suspension | [
"Physics"
] | 418 | [
"Mechanics",
"Mechanical power control"
] |
44,916,843 | https://en.wikipedia.org/wiki/Selepressin | Selepressin (INN) (code name FE-202158), also known as [Phe(2),Ile(3),Hgn(4),Orn(iPr)(8)]vasopressin, is a potent, highly selective, short-acting peptide full agonist of the vasopressin 1A receptor and analog of vasopressin which was under development by Ferring Pharmaceuticals for the treatment of vasodilatory hypotension in septic shock.
The Phase 2b/3 adaptive trial (SEPSIS-ACT) was terminated in February 2018 for futility. The trial was halted prior to the initiation of the arm for the highest dosing regimen of 5.0 ng/kg/min.
References
Antihypotensive agents
Peptides
Vasopressin receptor agonists | Selepressin | [
"Chemistry"
] | 178 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
44,917,387 | https://en.wikipedia.org/wiki/Cyclooctadiene%20iridium%20chloride%20dimer | Cyclooctadiene iridium chloride dimer is an organoiridium compound with the formula [Ir(μ2-Cl)(COD)]2, where COD is the diene 1,5-cyclooctadiene (C8H12). It is an orange-red solid that is soluble in organic solvents. The complex is used as a precursor to other iridium complexes, some of which are used in homogeneous catalysis. The solid is air-stable but its solutions degrade in air.
Preparation, structure, reactions
The compound is prepared by heating hydrated iridium trichloride and cyclooctadiene in alcohol solvent. In the process, Ir(III) is reduced to Ir(I).
In terms of its molecular structure, the iridium centers are square planar as is typical for a d8 complex. The Ir2Cl2 core is folded with a dihedral angle of 86°. The molecule crystallizes in yellow-orange and red-orange polymorphs; the latter one is more common.
The complex is a widely used precursor to other iridium complexes. A notable derivative is Crabtree's catalyst. The chloride ligands can also be replaced with methoxide to give cyclooctadiene iridium methoxide dimer, Ir2(OCH3)2(C8H12)2. The cyclooctadiene ligand is prone to isomerize in cationic complexes of the type [(C8H12)IrL2]+.
See also
Chlorobis(cyclooctene)iridium dimer
Cyclooctadiene rhodium chloride dimer
References
Homogeneous catalysis
Cyclooctadiene complexes
Organoiridium compounds
Chloro complexes | Cyclooctadiene iridium chloride dimer | [
"Chemistry"
] | 373 | [
"Catalysis",
"Homogeneous catalysis"
] |
30,868,906 | https://en.wikipedia.org/wiki/Surface%20plasmon%20polariton | Surface plasmon polaritons (SPPs) are electromagnetic waves that travel along a metal–dielectric or metal–air interface, typically at infrared or visible frequencies. The term "surface plasmon polariton" reflects that the wave involves both charge motion in the metal ("surface plasmon") and electromagnetic waves in the air or dielectric ("polariton").
They are a type of surface wave, guided along the interface in much the same way that light can be guided by an optical fiber. SPPs have a shorter wavelength than free-space light (photons) of the same frequency. Hence, SPPs can have a higher momentum and local field intensity. Perpendicular to the interface, they have subwavelength-scale confinement. An SPP will propagate along the interface until its energy is lost either to absorption in the metal or scattering into other directions (such as into free space).
Application of SPPs enables subwavelength optics in microscopy and photolithography beyond the diffraction limit. It also enables the first steady-state micro-mechanical measurement of a fundamental property of light itself: the momentum of a photon in a dielectric medium. Other applications are photonic data storage, light generation, and bio-photonics.
Excitation
SPPs can be excited by both electrons and photons. Excitation by electrons is created by firing electrons into the bulk of a metal. As the electrons scatter, energy is transferred into the bulk plasmon. The component of the scattering vector parallel to the surface results in the formation of a surface plasmon polariton.
For a photon to excite an SPP, both must have the same frequency and momentum. However, for a given frequency, a free-space photon has less momentum than an SPP because the two have different dispersion relations (see below). This momentum mismatch is the reason that a free-space photon from air cannot couple directly to an SPP. For the same reason, an SPP on a smooth metal surface cannot emit energy as a free-space photon into the dielectric (if the dielectric is uniform). This incompatibility is analogous to the lack of transmission that occurs during total internal reflection.
Nevertheless, coupling of photons into SPPs can be achieved using a coupling medium such as a prism or grating to match the photon and SPP wave vectors (and thus match their momenta). A prism can be positioned against a thin metal film in the Kretschmann configuration or very close to a metal surface in the Otto configuration (Figure 1). A grating coupler matches the wave vectors by increasing the parallel wave vector component by an amount related to the grating period (Figure 2). This method, while less frequently utilized, is critical to the theoretical understanding of the effect of surface roughness. Moreover, simple isolated surface defects such as a groove, a slit or a corrugation on an otherwise planar surface provide a mechanism by which free-space radiation and SPs can exchange energy and hence couple.
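As a numerical illustration of wave-vector matching, the sketch below estimates the Kretschmann resonance angle for a gold film under a glass prism; the permittivity and index values are approximate literature numbers assumed for illustration, not data from this article.

```python
# Kretschmann coupling: resonance where the in-plane photon wavevector in
# the prism, k0 * n_prism * sin(theta), equals Re(k_SPP).
import cmath, math

eps_metal = -11.7 + 1.2j    # ~ gold at 633 nm (assumed value)
eps_diel = 1.0              # air above the metal film
n_prism = 1.515             # ~ BK7 glass

n_spp = cmath.sqrt(eps_metal * eps_diel / (eps_metal + eps_diel))
theta = math.degrees(math.asin(n_spp.real / n_prism))
print(f"resonance angle ~ {theta:.1f} deg")   # roughly 43-44 degrees
```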
Fields and dispersion relation
The properties of an SPP can be derived from Maxwell's equations. We use a coordinate system where the metal–dielectric interface is the $z = 0$ plane, with the metal at $z < 0$ and the dielectric at $z > 0$. The electric and magnetic fields as a function of position $(x, y, z)$ and time $t$ are as follows:

$$\mathbf{E}_n = \begin{pmatrix} E_{x,n} \\ 0 \\ E_{z,n} \end{pmatrix} e^{i(k_x x \pm k_{z,n} z - \omega t)}, \qquad \mathbf{H}_n = \begin{pmatrix} 0 \\ H_{y,n} \\ 0 \end{pmatrix} e^{i(k_x x \pm k_{z,n} z - \omega t)},$$

where
n indicates the material (1 for the metal at $z < 0$ or 2 for the dielectric at $z > 0$);
ω is the angular frequency of the waves;
the sign $\pm$ is + for the metal, − for the dielectric.
$E_{x,n}$ and $E_{z,n}$ are the x- and z-components of the electric field vector, $H_{y,n}$ is the y-component of the magnetic field vector, and the other components ($E_{y,n}$, $H_{x,n}$, $H_{z,n}$) are zero. In other words, SPPs are always TM (transverse magnetic) waves.
$\mathbf{k}$ is the wave vector; it is a complex vector, and in the case of a lossless SPP, it turns out that the x components are real and the z components are imaginary—the wave oscillates along the x direction and exponentially decays along the z direction. $k_x$ is always the same for both materials, but $k_{z,1}$ is generally different from $k_{z,2}$. The wave vector components satisfy $k_x^2 + k_{z,n}^2 = \varepsilon_n \left(\frac{\omega}{c}\right)^2$, where $\varepsilon_n$ is the permittivity of material n ($\varepsilon_1$ for the metal), and c is the speed of light in vacuum.
A wave of this form satisfies Maxwell's equations only on condition that the following equations also hold:

$$\frac{k_{z,1}}{\varepsilon_1} + \frac{k_{z,2}}{\varepsilon_2} = 0$$

and

$$k_x^2 + k_{z,n}^2 = \varepsilon_n \left(\frac{\omega}{c}\right)^2, \qquad n = 1, 2.$$

Solving these two equations, the dispersion relation for a wave propagating on the surface is

$$k_x = \frac{\omega}{c} \left( \frac{\varepsilon_1 \varepsilon_2}{\varepsilon_1 + \varepsilon_2} \right)^{1/2}.$$
In the free electron model of an electron gas, which neglects attenuation, the metallic dielectric function is

$$\varepsilon_1(\omega) = 1 - \frac{\omega_p^2}{\omega^2},$$

where the bulk plasma frequency in SI units is

$$\omega_p = \sqrt{\frac{n e^2}{m^{*} \varepsilon_0}},$$
where n is the electron density, e is the charge of the electron, m∗ is the effective mass of the electron and $\varepsilon_0$ is the permittivity of free space. The dispersion relation is plotted in Figure 3. At low k, the SPP behaves like a photon, but as k increases, the dispersion relation bends over and reaches an asymptotic limit called the "surface plasma frequency". Since the dispersion curve lies to the right of the light line, ω = k⋅c, the SPP has a shorter wavelength than free-space radiation such that the out-of-plane component of the SPP wavevector is purely imaginary and exhibits evanescent decay. The surface plasma frequency is the asymptote of this curve, and is given by

$$\omega_{sp} = \frac{\omega_p}{\sqrt{1 + \varepsilon_2}}.$$
In the case of air, this result simplifies to

$$\omega_{sp} = \frac{\omega_p}{\sqrt{2}}.$$
If we assume that ε2 is real and ε2 > 0, then it must be true that ε1 < 0, a condition which is satisfied in metals. Electromagnetic waves passing through a metal experience damping due to Ohmic losses and electron-core interactions. These effects show up as an imaginary component of the dielectric function. The dielectric function of a metal is expressed ε1 = ε1′ + i⋅ε1″ where ε1′ and ε1″ are the real and imaginary parts of the dielectric function, respectively. Generally |ε1′| >> ε1″ so the wavenumber can be expressed in terms of its real and imaginary components as

$$k_x = k_x' + i k_x'' = \left[ \frac{\omega}{c} \left( \frac{\varepsilon_1' \varepsilon_2}{\varepsilon_1' + \varepsilon_2} \right)^{1/2} \right] + i \left[ \frac{\omega}{c} \left( \frac{\varepsilon_1' \varepsilon_2}{\varepsilon_1' + \varepsilon_2} \right)^{3/2} \frac{\varepsilon_1''}{2 (\varepsilon_1')^2} \right].$$
The wave vector gives us insight into physically meaningful properties of the electromagnetic wave such as its spatial extent and coupling requirements for wave vector matching.
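The dispersion relation is straightforward to explore numerically; the sketch below assumes a lossless Drude metal with an illustrative plasma frequency and shows the SPP wavevector running past the light line toward the asymptote at $\omega_p/\sqrt{2}$.

```python
# SPP dispersion for a Drude metal (eps1 = 1 - (wp/w)^2) against air.
import numpy as np

c, wp = 3e8, 1.37e16                 # wp ~ silver-like value (assumption)
w = np.linspace(0.01, 0.70, 500) * wp
eps1 = 1 - (wp / w) ** 2             # lossless Drude model
eps2 = 1.0
kx = (w / c) * np.sqrt(eps1 * eps2 / (eps1 + eps2))

print(f"w_sp / wp = {1 / np.sqrt(2):.3f}  (asymptote)")
print(f"kx / k_light at w = 0.69 wp: {kx[-1] / (w[-1] / c):.2f}")  # > 1
```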
Propagation length and skin depth
As an SPP propagates along the surface, it loses energy to the metal due to absorption. The intensity of the surface plasmon decays with the square of the electric field, so at a distance x, the intensity has decreased by a factor of $e^{-2 k_x'' x}$. The propagation length is defined as the distance for the SPP intensity to decay by a factor of 1/e. This condition is satisfied at a length

$$L = \frac{1}{2 k_x''}.$$
Likewise, the electric field falls off evanescently perpendicular to the metal surface. At low frequencies, the SPP penetration depth into the metal is commonly approximated using the skin depth formula. In the dielectric, the field will fall off far more slowly. The decay lengths in the metal and dielectric medium can be expressed as

$$z_i = \frac{\lambda}{2\pi} \left| \frac{\varepsilon_1' + \varepsilon_2}{\varepsilon_i^2} \right|^{1/2},$$
where i indicates the medium of propagation. SPPs are very sensitive to slight perturbations within the skin depth and because of this, SPPs are often used to probe inhomogeneities of a surface.
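Plugging representative numbers into the formulas above gives a feel for the scales; the silver permittivity below is an approximate literature value assumed for illustration, so the outputs are order-of-magnitude only.

```python
# Propagation length and decay depths for an SPP on silver-air at 633 nm.
import cmath, math

lam0 = 633e-9
k0 = 2 * math.pi / lam0
eps1 = -18.2 + 0.5j            # ~ silver at 633 nm (assumed value)
eps2 = 1.0                     # air

kx = k0 * cmath.sqrt(eps1 * eps2 / (eps1 + eps2))
L = 1 / (2 * kx.imag)          # intensity ~ exp(-2 Im(kx) x)

z1 = 1 / abs((k0 * cmath.sqrt(eps1**2 / (eps1 + eps2))).imag)  # metal
z2 = 1 / abs((k0 * cmath.sqrt(eps2**2 / (eps1 + eps2))).imag)  # dielectric

print(f"propagation length ~ {L * 1e6:.0f} um")        # tens of microns
print(f"decay depth: metal ~ {z1 * 1e9:.0f} nm, dielectric ~ {z2 * 1e9:.0f} nm")
```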
Experimental applications
Nanofabricated systems that exploit SPPs demonstrate potential for designing and controlling the propagation of light in matter. In particular, SPPs can be used to channel light efficiently into nanometer scale volumes, leading to direct modification of resonant frequency dispersion properties (substantially shrinking the wavelength of light and reducing the speed of light pulses, for example), as well as field enhancements suitable for enabling strong interactions with nonlinear materials. The resulting enhanced sensitivity of light to external parameters (for example, an applied electric field or the dielectric constant of an adsorbed molecular layer) shows great promise for applications in sensing and switching.
Current research is focused on the design, fabrication, and experimental characterization of novel components for measurement and communications based on nanoscale plasmonic effects. These devices include ultra-compact plasmonic interferometers for applications such as biosensing, optical positioning and optical switching, as well as the individual building blocks (plasmon source, waveguide and detector) needed to integrate a high-bandwidth, infrared-frequency plasmonic communications link on a silicon chip.
In addition to building functional devices based on SPPs, it appears feasible to exploit the dispersion characteristics of SPPs traveling in confined metallo-dielectric spaces to create photonic materials with artificially tailored bulk optical characteristics, otherwise known as metamaterials. Artificial SPP modes can be realized in microwave and terahertz frequencies by metamaterials; these are known as spoof surface plasmons.
The excitation of SPPs is frequently used in an experimental technique known as surface plasmon resonance (SPR). In SPR, the maximum excitation of surface plasmons are detected by monitoring the reflected power from a prism coupler as a function of incident angle, wavelength or phase.
Surface plasmon-based circuits, including both SPPs and localized plasmon resonances, have been proposed as a means of overcoming the size limitations of photonic circuits for use in high performance data processing nano devices.
The ability to dynamically control the plasmonic properties of materials in these nano-devices is key to their development. A new approach that uses plasmon-plasmon interactions has been demonstrated recently. Here the bulk plasmon resonance is induced or suppressed to manipulate the propagation of light. This approach has been shown to have a high potential for nanoscale light manipulation and the development of a fully CMOS-compatible electro-optical plasmonic modulator.
CMOS compatible electro-optic plasmonic modulators will be key components in chip-scale photonic circuits.
In surface second harmonic generation, the second harmonic signal is proportional to the square of the electric field. The electric field is stronger at the interface because of the surface plasmon resulting in a non-linear optical effect. This larger signal is often exploited to produce a stronger second harmonic signal.
The wavelength and intensity of the plasmon-related absorption and emission peaks are affected by molecular adsorption, an effect that can be used in molecular sensors. For example, a fully operational prototype device detecting casein in milk has been fabricated. The device is based on monitoring changes in plasmon-related absorption of light by a gold layer.
Materials used
Surface plasmon polaritons can only exist at the interface between a positive-permittivity material and a negative-permittivity material. The positive-permittivity material, often called the dielectric material, can be any transparent material such as air or (for visible light) glass. The negative-permittivity material, often called the plasmonic material, may be a metal or other material. It is more critical, as it tends to have a large effect on the wavelength, absorption length, and other properties of the SPP. Some plasmonic materials are discussed next.
Metals
For visible and near-infrared light, the only plasmonic materials are metals, due to their abundance of free electrons, which leads to a high plasma frequency. (Materials have negative real permittivity only below their plasma frequency.)
Unfortunately, metals suffer from ohmic losses that can degrade the performance of plasmonic devices. The need for lower loss has fueled research aimed at developing new materials for plasmonics and optimizing the deposition conditions of existing materials. Both the loss and polarizability of a material affect its optical performance. The quality factor for a SPP is defined as $Q_{\mathrm{SPP}} = (\varepsilon_m')^2/\varepsilon_m''$, where $\varepsilon_m'$ and $\varepsilon_m''$ are the real and imaginary parts of the metal's dielectric function. Quality factors and SPP propagation lengths have been compared for four common plasmonic metals (Al, Ag, Au and Cu) deposited by thermal evaporation under optimized conditions, calculated using the optical data from the respective films.
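A short sketch of the quality-factor calculation; the permittivity pairs are placeholders, since the measured values behind the published comparisons are not reproduced here:

```python
# Q_SPP = (eps')**2 / eps'' for a few hypothetical permittivity values
# (real part eps', imaginary part eps''); these numbers are assumed
# for illustration only, not measured data.
samples = {
    "hypothetical metal A": (-18.0, 0.5),
    "hypothetical metal B": (-11.0, 1.2),
}

for name, (eps_re, eps_im) in samples.items():
    q_spp = eps_re**2 / eps_im
    print(f"{name}: Q_SPP = {q_spp:.0f}")
```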
Silver exhibits the lowest losses of current materials in the visible, near-infrared (NIR) and telecom wavelength ranges. Gold and copper perform equally well in the visible and NIR, with copper having a slight advantage at telecom wavelengths. Gold has the advantage over both silver and copper of being chemically stable in natural environments, making it well suited for plasmonic biosensors. However, an interband transition at ~470 nm greatly increases the losses in gold at wavelengths below 600 nm. Aluminum is the best plasmonic material in the ultraviolet regime (< 330 nm) and is also CMOS compatible, along with copper.
Other materials
The fewer electrons a material has, the lower (i.e. longer-wavelength) its plasma frequency becomes. Therefore, at infrared and longer wavelengths, various other plasmonic materials also exist besides metals. These include transparent conducting oxides, which have typical plasma frequencies in the NIR-SWIR infrared range. At longer wavelengths, semiconductors may also be plasmonic.
Some materials have negative permittivity at certain infrared wavelengths related to phonons rather than plasmons (so-called reststrahlen bands). The resulting waves have the same optical properties as surface plasmon polaritons, but are called by a different term, surface phonon polaritons.
Effects of roughness
In order to understand the effect of roughness on SPPs, it is beneficial to first understand how a SPP is coupled by a grating (Figure 2). When a photon is incident on a surface, the wave vector of the photon in the dielectric material is smaller than that of the SPP. In order for the photon to couple into a SPP, its in-plane wave vector must increase by $\Delta k = k_{\mathrm{SPP}} - k_{x,\,\mathrm{photon}}$. The grating harmonics of a periodic grating provide additional momentum parallel to the supporting interface to match the terms:

$$k_{\mathrm{SPP}} = \frac{2\pi}{\lambda}\sin\theta_0 \pm n\,k_{\mathrm{grating}},$$

where $k_{\mathrm{grating}} = 2\pi/a$ is the wave vector of the grating, $\theta_0$ is the angle of incidence of the incoming photon, $a$ is the grating period, and $n$ is an integer.
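A small sketch of solving this coupling condition for the incidence angle; the SPP effective index, wavelength and grating period are assumed values chosen only to make the arithmetic concrete:

```python
import math

# Coupling condition (minus branch): sin(theta_0) = n_spp - n * lam / a,
# where n_spp = k_SPP / k_0 is the SPP effective index. All numbers
# below are assumptions for illustration.
lam = 633e-9     # free-space wavelength, m (assumed)
n_spp = 1.03     # assumed SPP effective index, slightly above 1
a = 1.5e-6       # assumed grating period, m

for n in range(1, 4):
    s = n_spp - n * lam / a
    if -1.0 <= s <= 1.0:
        print(f"order n={n}: coupling angle = {math.degrees(math.asin(s)):5.1f} deg")
    else:
        print(f"order n={n}: no real coupling angle")
```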
Rough surfaces can be thought of as the superposition of many gratings of different periodicities. Kretschmann proposed that a statistical correlation function be defined for a rough surface:

$$G(x,y) = \frac{1}{A}\iint_A z(x',y')\, z(x'-x,\, y'-y)\, dx'\, dy',$$

where $z(x,y)$ is the height above the mean surface height at the position $(x,y)$, and $A$ is the area of integration. Assuming that the statistical correlation function is Gaussian of the form

$$G(r) = \delta^2 \exp\!\left(-\frac{r^2}{\sigma^2}\right),$$

where $\delta$ is the root mean square height, $r$ is the distance from the point $(x,y)$, and $\sigma$ is the correlation length, then the Fourier transform of the correlation function is

$$|s(k_{\mathrm{surf}})|^2 = \frac{\sigma^2\delta^2}{4\pi} \exp\!\left(-\frac{\sigma^2 k_{\mathrm{surf}}^2}{4}\right),$$

where $s$ is a measure of the amount of each spatial frequency $k_{\mathrm{surf}}$ which helps couple photons into a surface plasmon.
If the surface has only one Fourier component of roughness (i.e. the surface profile is sinusoidal), then $s$ is discrete and exists only at $k_{\mathrm{surf}} = 2\pi/a$, resulting in a single narrow set of angles for coupling. If the surface contains many Fourier components, then coupling becomes possible at multiple angles. For a random surface, $s$ becomes continuous and the range of coupling angles broadens.
As stated earlier, SPPs are non-radiative. When a SPP travels along a rough surface, it usually becomes radiative due to scattering. The Surface Scattering Theory of light suggests that the scattered intensity per solid angle per incident intensity is proportional to $|W|^2\,|s(k_{\mathrm{surf}})|^2$, where $|W|^2$ is the radiation pattern from a single dipole at the metal/dielectric interface. If surface plasmons are excited in the Kretschmann geometry and the scattered light is observed in the plane of incidence (Fig. 4), then the dipole function depends on $\psi$, the polarization angle, and on $\theta$, the angle from the z-axis in the xz-plane. Two important consequences come out of this. The first is that if $\psi = 0$ (s-polarization), then $|W|^2 = 0$ and no light is scattered. Secondly, the scattered light has a measurable profile which is readily correlated to the roughness. This topic is treated in greater detail in the references.
See also
Dyakonov surface waves
Graphene plasmonics
Localized surface plasmon
Plasmonic lens
Superlens
Surface plasmon
Surface plasmon resonance
Surface wave
Notes
References
Further reading
External links
"Submitted as coursework for AP272. Winter 2007".
Quasiparticles
Metamaterials
Plasmonics
Surface waves | Surface plasmon polariton | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,378 | [
"Plasmonics",
"Matter",
"Physical phenomena",
"Metamaterials",
"Surface waves",
"Materials science",
"Surface science",
"Waves",
"Nanotechnology",
"Condensed matter physics",
"Quasiparticles",
"Solid state engineering",
"Subatomic particles"
] |
30,869,356 | https://en.wikipedia.org/wiki/Plasmonic%20metamaterial | A plasmonic metamaterial is a metamaterial that uses surface plasmons to achieve optical properties not seen in nature. Plasmons are produced from the interaction of light with metal-dielectric materials. Under specific conditions, the incident light couples with the surface plasmons to create self-sustaining, propagating electromagnetic waves known as surface plasmon polaritons (SPPs). Once launched, the SPPs ripple along the metal-dielectric interface. Compared with the incident light, the SPPs can be much shorter in wavelength.
The properties stem from the unique structure of the metal-dielectric composites, with features smaller than the wavelength of light separated by subwavelength distances. Light hitting such a metamaterial is transformed into surface plasmon polaritons, which are shorter in wavelength than the incident light.
Plasmonic materials
Plasmonic materials are metals or metal-like materials that exhibit negative real permittivity. The most common plasmonic materials are gold and silver. However, many other materials show metal-like optical properties in specific wavelength ranges. Various research groups are experimenting with different approaches to make plasmonic materials that exhibit lower losses and tunable optical properties.
Negative index
Plasmonic metamaterials are realizations of materials first proposed by Victor Veselago, a Russian theoretical physicist, in 1967. Also known as left-handed or negative index materials, Veselago theorized that they would exhibit optical properties opposite to those of glass or air. In negative index materials energy is transported in a direction opposite to that of propagating wavefronts, rather than paralleling them, as is the case in positive index materials.
Normally, light traveling from, say, air into water bends as it enters the water, crossing the normal (a plane perpendicular to the surface). In contrast, light reaching a negative index material through air would not cross the normal. Rather, it would bend the opposite way, emerging on the same side of the normal as the incident beam.
Negative refraction was first reported for microwave and infrared frequencies. A negative refractive index in the optical range was first demonstrated in 2005 by Shalaev et al. (at the telecom wavelength λ = 1.5 μm) and by Brueck et al. (at λ = 2 μm) at nearly the same time. In 2007, a collaboration between the California Institute of Technology and NIST reported narrow band, negative refraction of visible light in two dimensions.
To create this response, incident light couples with the undulating, gas-like charges (plasmons) normally on the surface of metals. This photon-plasmon interaction results in SPPs that generate intense, localized optical fields. The waves are confined to the interface between metal and insulator. This narrow channel serves as a transformative guide that, in effect, traps and compresses the wavelength of incoming light to a fraction of its original value.
Nanomechanical systems incorporating metamaterials exhibit negative radiation pressure.
Light falling on conventional materials, with a positive index of refraction, exerts a positive pressure, meaning that it can push an object away from the light source. In contrast, illuminating negative index metamaterials should generate a negative pressure that pulls an object toward light.
Three-dimensional negative index
Computer simulations predict plasmonic metamaterials with a negative index in three dimensions. Potential fabrication methods include multilayer thin film deposition, focused ion beam milling and self-assembly.
Gradient index
PMMs can be made with a gradient index (a material whose refractive index varies progressively across the length or area of the material). One such material involved depositing a thermoplastic, known as PMMA, on a gold surface via electron beam lithography.
Hyperbolic
Hyperbolic metamaterials behave as a metal when light passes through them in one direction and as a dielectric when light passes in the perpendicular direction, a property called extreme anisotropy. The material's dispersion relation forms a hyperboloid. The associated wavelength can in principle be infinitely small. Recently, hyperbolic metasurfaces in the visible region have been demonstrated with silver or gold nanostructures made by lithographic techniques. The reported hyperbolic devices showed multiple functions for sensing and imaging, e.g., diffraction-free propagation, negative refraction and enhanced plasmon resonance effects, enabled by their unique optical properties.
These properties are also needed to fabricate integrated optical meta-circuits for quantum information applications.
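The "hyperbolic" label comes from the standard dispersion relation for TM (extraordinary) waves in a uniaxial medium; a sketch of that textbook relation, with the optic axis taken along z:

```latex
% TM (extraordinary) wave dispersion in a uniaxial medium with optic
% axis along z:
\[
  \frac{k_x^2 + k_y^2}{\varepsilon_{zz}} + \frac{k_z^2}{\varepsilon_{xx}}
  = \frac{\omega^2}{c^2}
\]
% When eps_xx and eps_zz have the same sign the isofrequency surface
% is an ellipsoid; when they have opposite signs it opens into a
% hyperboloid, which in principle supports arbitrarily large wave
% vectors (arbitrarily small wavelengths).
```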
Isotropy
The first metamaterials created exhibited anisotropy in their effects on plasmons; that is, they acted only in one direction.
More recently, researchers used a novel self-folding technique to create a three-dimensional array of split-ring resonators that exhibits isotropy when rotated in any direction up to an incident angle of 40 degrees. Exposing strips of nickel and gold deposited on a polymer/silicon substrate to air allowed mechanical stresses to curl the strips into rings, forming the resonators. Arranging the strips at different angles to each other achieved 4-fold symmetry, which allowed the resonators to produce effects in multiple directions.
Materials
Silicon sandwich
Negative refraction for visible light was first produced in a sandwich-like construction with thin layers. An insulating sheet of silicon nitride was covered by a film of silver and underlain by another of gold. The critical dimension is the thickness of the layers, which summed to a fraction of the wavelength of blue and green light. By incorporating this metamaterial into integrated optics on an IC chip, negative refraction was demonstrated over blue and green frequencies. The collective result is a relatively significant response to light.
Graphene
Graphene also accommodates surface plasmons, observed via near field infrared optical microscopy techniques and infrared spectroscopy. Potential applications of graphene plasmonics involve terahertz to midinfrared frequencies, in devices such as optical modulators, photodetectors and biosensors.
Superlattice
A hyperbolic metamaterial can be made from titanium nitride (metal) and aluminum scandium nitride (dielectric), which have compatible crystal structures and can form a superlattice, a crystal that combines two (or more) materials. The material is compatible with existing CMOS technology (unlike traditional gold and silver), mechanically strong and thermally stable at higher temperatures. The material exhibits higher photonic densities of states than Au or Ag. The material is an efficient light absorber.
The material was created using epitaxy inside a vacuum chamber with a technique known as magnetron sputtering. The material featured ultra-thin and ultra-smooth layers with sharp interfaces.
Possible applications include a "planar hyperlens" that could make optical microscopes able to see objects as small as DNA, advanced sensors, more efficient solar collectors, nano-resonators, quantum computing and diffraction-free focusing and imaging.
The material works across a broad spectrum from near-infrared to visible light. Near-infrared is essential for telecommunications and optical communications, and visible light is important for sensors, microscopes and efficient solid-state light sources.
Applications
Microscopy
One potential application is microscopy beyond the diffraction limit. Gradient index plasmonics were used to produce Luneburg and Eaton lenses that interact with surface plasmon polaritons rather than photons.
A theorized superlens could exceed the diffraction limit that prevents standard (positive-index) lenses from resolving objects smaller than one-half of the wavelength of visible light. Such a superlens would capture spatial information that is beyond the view of conventional optical microscopes. Several approaches to building such a microscope have been proposed. Applications in the subwavelength domain could include optical switches, modulators, photodetectors and directional light emitters.
Biological and chemical sensing
Other proof-of-concept applications under review involve high sensitivity biological and chemical sensing. They may enable the development of optical sensors that exploit the confinement of surface plasmons within a certain type of Fabry-Perot nano-resonator. This tailored confinement allows efficient detection of specific bindings of target chemical or biological analytes using the spatial overlap between the optical resonator mode and the analyte ligands bound to the resonator cavity sidewalls. Structures are optimized using finite difference time domain electromagnetic simulations, fabricated using a combination of electron beam lithography and electroplating, and tested using both near-field and far-field optical microscopy and spectroscopy.
Optical computing
Optical computing replaces electronic signals with light processing devices.
In 2014 researchers announced a 200 nanometer, terahertz speed optical switch. The switch is made of a metamaterial consisting of nanoscale particles of vanadium dioxide (VO2), a crystal that switches between an opaque, metallic phase and a transparent, semiconducting phase. The nanoparticles are deposited on a glass substrate and overlain by even smaller gold nanoparticles that act as a plasmonic photocathode.
Femtosecond laser pulses free electrons in the gold particles that jump into the VO2 and cause a subpicosecond phase change.
The device is compatible with current integrated circuit technology, silicon-based chips and high-K dielectric materials. It operates in the visible and near-infrared region of the spectrum. It requires only 100 femtojoules per bit per operation, allowing the switches to be packed tightly.
Photovoltaics
Gold group metals (Au, Ag and Cu) have been used as direct active materials in photovoltaics and solar cells. The materials act simultaneously as electron and hole donor, and thus can be sandwiched between electron and hole transport layers to make a photovoltaic cell. At present these photovoltaic cells can power smart sensors for the Internet of Things (IoT) platform.
See also
History of metamaterials
Metamaterial absorber
Metamaterial antennas
Metamaterial cloaking
Nonlinear metamaterials
Photonic metamaterials
Photonic crystal
Spoof surface plasmon
Terahertz metamaterials
Tunable metamaterials
Transformation optics
Theories of cloaking
References
Further reading
Theo Murphy Meeting Issue organized and edited by William L. Barnes.
External links
Plasmonic metamaterials - From microscopes to invisibility cloaks. Jan 21, 2011. PhysOrg.com.
Metamaterials
Nanotechnology
Plasmonics | Plasmonic metamaterial | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,170 | [
"Plasmonics",
"Metamaterials",
"Materials science",
"Surface science",
"Condensed matter physics",
"Nanotechnology",
"Solid state engineering"
] |
30,870,002 | https://en.wikipedia.org/wiki/New%20manufacturing%20economy | The new manufacturing economy (NME) describes the role of advanced manufacturing in the rise of the New Economy. The term describes manufacturing enabled by digital technologies, advanced systems and processes and a highly trained and knowledgeable workforce. The new manufacturing economy integrates networks, 3D printers and other proficiencies into business strategies to further develop manufacturing practices.
Thomas Friedman, citing Lawrence F. Katz, argues that hubs of "universities, high-tech manufacturers, software/service providers and highly nimble start-ups" are a needed economic development strategy. This is very similar to NME thinking, even though that exact term is not used.
The Pillars of the new manufacturing economy
Technology
According to the PricewaterhouseCoopers Q4 2010 Manufacturing Barometer, focus on geographic expansion, information technology and internet commerce is on the rise among industrial manufacturing companies. Such conditions compel companies to incorporate new technologies into business plans and to concentrate on the application of open-source product development in the creation of physical goods as a form of competitive advantage.
New technologies influence various industries to emphasize innovation as a business tool. Advanced manufacturing is made feasible by continuous investment in improvement and by modernization of the workforce, technologies and supply chains, in order to increase global competitiveness, environmental sustainability and product customization to meet consumer expectations.
Workforce
Incorporating modern CNC equipment in new manufacturing processes requires better-trained employees with more exacting skills than were previously required in heavy industry. Past manufacturing jobs consisted largely of physical labor and assembly-line work, but in response to technological evolution they are becoming tech-savvy and information-intensive, with a focus on creativity and resourcefulness.
Strategy
The new manufacturing economy is centered on "niche" businesses that satisfy the needs of small consumer markets by offering what customers want, when they want it. The primary foundation of this strategy is selling less of more. Adopting the efficiencies of digital and Web-based technologies into current business strategies is an emerging trend in manufacturing practices.
Industries
Advanced technology in the manufacturing marketplace has led to growth in areas such as software development and biotechnology and to emphasis on numerous industries such as:
Liquid and biofuels
Solar energy
Renewable resources
Environmental sustainability
Pharmaceutical manufacturing
Logistics
Related terms
Smart manufacturing
New Economy
Manufacturing intelligence
The Long Tail
References
External links
Dexigner.com – Chris Anderson to Discuss New Manufacturing Economy, February 2010 talk.
In the Next Industrial Revolution, Atoms Are the New Bits - Wired, Chris Anderson, January 25, 2010.
Manufacturing Solutions Center
Manufacturing
Industry (economics)
Economies
Knowledge economy | New manufacturing economy | [
"Engineering"
] | 491 | [
"Manufacturing",
"Mechanical engineering"
] |
30,870,527 | https://en.wikipedia.org/wiki/Combined%20DNA%20Index%20System | The Combined DNA Index System (CODIS) is the United States national DNA database created and maintained by the Federal Bureau of Investigation. CODIS consists of three levels of information; Local DNA Index Systems (LDIS) where DNA profiles originate, State DNA Index Systems (SDIS) which allows for laboratories within states to share information, and the National DNA Index System (NDIS) which allows states to compare DNA information with one another.
The CODIS software contains multiple different databases depending on the type of information being searched against. Examples of these databases include missing persons, convicted offenders, and forensic samples collected from crime scenes. Each state, and the federal system, has different laws for collection, upload, and analysis of information contained within their database. However, for privacy reasons, the CODIS database does not contain any personal identifying information, such as the name associated with the DNA profile. The uploading agency is notified of any hits to their samples and is tasked with the dissemination of personal information pursuant to its laws.
Establishment
The creation of a national DNA database within the U.S. was first mentioned by the Technical Working Group on DNA Analysis Methods (TWGDAM) in 1989. The FBI's strategic goal was to maximize the voluntary participation of states and avoid what had happened several years earlier, when eight western states, frustrated with the progress in creating a national Automated Fingerprint Identification System (AFIS) network, formed the Western Identification Network (WIN). The FBI's strategy to discourage states from creating systems that competed with CODIS was to develop DNA databasing software and provide it free of charge to state and local crime laboratories. This strategic decision, to provide software free of charge for the purpose of gaining market share, was innovative at that time and predated the browser wars. In 1990, the FBI began a pilot DNA databasing program with 14 state and local laboratories.
In 1994, Congress passed the DNA Identification Act which authorized the FBI to create a national DNA database of convicted offenders as well as separate databases for missing persons and forensic samples collected from crime scenes. (Some in the Bureau believed the Act was not required to establish a national DNA database because the FBI's Criminal Justice Information Services Division was already using similar authorities to provide data-sharing solutions to federal, state, local, and tribal law enforcement agencies.) The DNA Identification Act also required that laboratories participating in the CODIS program maintain accreditation from an independent nonprofit organization that is actively involved in the forensic fields and that scientists processing DNA samples for submission into CODIS maintain proficiency and are routinely tested to ensure the quality of the profiles being uploaded into the database. The national level of CODIS (NDIS) was implemented in October 1998. Today, all 50 states, the District of Columbia, federal law enforcement, the Army Laboratory, and Puerto Rico participate in the national sharing of DNA profiles.
Database structure
The CODIS database contains several different indexes for the storage of DNA profile information. For assistance in criminal investigations three indexes exist: the offender index, which contains DNA profiles of those convicted of crimes; the arrestee index, which contains profiles of those arrested of crimes pursuant to the laws of the particular state; and the forensic index, which contains profiles collected from a crime scene. Additional indexes, such as the unidentified human remain index, the missing persons index, and the biological relatives of missing persons index, are used to assist in identifying missing persons. Specialty indexes also exist for other specimens that do not fall into the other categories. These indexes include the staff index, for profiles of employees who work with the samples, and the multi-allelic offender index, for single-source samples that have three or more alleles at two or more loci.
Non-criminal indexes
While CODIS is generally used for linking crimes to other crimes and potentially to suspects there are non-criminal portions of the database such as the missing person indexes. The National Missing Person DNA Database, also known as CODIS(mp), is maintained by the FBI at the NDIS level of CODIS allowing all states to share information with one another. Created in 2000 using the existing CODIS infrastructure, this section of the database is designed to help identify human remains by collecting and storing DNA information on the missing or the relatives of missing individuals. Unidentified remains are processed for DNA by the University of North Texas Center for Human Identification which is funded by the National Institute of Justice. Nuclear, Y-STR (for males only), and mitochondrial analysis can be performed on both unknown remains and on known relatives in order to maximize the chance of identifying remains.
Statistics
NDIS contained more than 14 million offender profiles, more than four million arrestee profiles and more than one million forensic profiles. The effectiveness of CODIS is measured by the number of investigations aided through database hits; CODIS had aided in over 520 thousand investigations and produced more than 530 thousand hits. Each state has its own SDIS database and each state can set its own inclusionary standards, which can be less strict than the national level. For this reason, a number of profiles that are present in state level databases are not in the national database and are not routinely searched across state lines.
Scientific basis
The bulk of identifications using CODIS rely on short tandem repeats (STRs) that are scattered throughout the human genome and on statistics that are used to calculate the rarity of that specific profile in the population. STRs are a type of copy-number variation and comprise a sequence of nucleotide base pairs that is repeated over and over again. At each location tested during DNA analysis, also known as a locus (plural loci), a person has two sets of repeats, one from the father and one from the mother. Each set is measured and the number of repeat copies is recorded. If both strands, inherited from the parents, contain the same number of repeats at that locus the person is said to be homozygous at that locus. If the repeat numbers differ they are said to be heterozygous. Every possible difference at a locus is an allele. This repeat determination is performed across a number of loci, and the resulting set of repeat values is the DNA profile that is uploaded to CODIS. As of January 1, 2017, the requirement for upload of known offender profiles to the national level is 20 loci.
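As a minimal sketch of what such a profile looks like as data and how an exact comparison could work (the loci and repeat numbers are invented for illustration; this is not the actual CODIS data model):

```python
# A profile maps each locus to the pair of repeat counts (one allele
# inherited from each parent). All values here are invented.
Profile = dict[str, tuple[int, int]]

crime_scene: Profile = {"D3S1358": (15, 17), "TH01": (6, 9), "FGA": (21, 24)}
candidate:   Profile = {"D3S1358": (15, 17), "TH01": (6, 9), "FGA": (21, 24)}

def exact_match(a: Profile, b: Profile) -> bool:
    """True if every locus typed in both profiles carries the same
    allele pair (allele order does not matter)."""
    shared = a.keys() & b.keys()
    return bool(shared) and all(
        sorted(a[locus]) == sorted(b[locus]) for locus in shared)

print(exact_match(crime_scene, candidate))  # True
```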
Alternatively, CODIS allows for the upload of mitochondrial DNA (mtDNA) information into the missing persons indexes. Since mtDNA is passed down from mother to offspring it can be used to link remains to still living relatives who have the same mtDNA.
Loci
Prior to January 1, 2017, the national level of CODIS required that known offender profiles have a set of 13 loci called the "CODIS core". Since then, the requirement has expanded to include seven additional loci. Partial profiles are also allowed in CODIS in separate indexes and are common in crime scene samples that are degraded or are mixtures of multiple individuals. Upload of these profiles to the national level of CODIS requires at least eight of the core loci to be present as well as a profile rarity of 1 in 10 million (calculated using population statistics).
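The rarity figure can be estimated with the classic product rule, multiplying expected genotype frequencies across loci; a sketch with made-up allele frequencies (real casework uses published population databases and corrections, e.g. for population substructure):

```python
# Product rule: 2*p*q for a heterozygote, p**2 for a homozygote,
# multiplied across independent loci. All frequencies are invented.
allele_freqs = {
    "D3S1358": {15: 0.25, 17: 0.20},
    "TH01":    {6: 0.23, 9: 0.15},
    "FGA":     {21: 0.18, 24: 0.14},
}
profile = {"D3S1358": (15, 17), "TH01": (6, 9), "FGA": (21, 24)}

rmp = 1.0  # random match probability
for locus, (a1, a2) in profile.items():
    p, q = allele_freqs[locus][a1], allele_freqs[locus][a2]
    rmp *= p * p if a1 == a2 else 2 * p * q

print(f"estimated rarity: 1 in {1 / rmp:,.0f}")
```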
Loci that fall within a gene are named after the gene. For example, TPOX is named after the human thyroid peroxidase gene. Loci that do not fall within genes are given a standard naming scheme for uniformity. These loci are named D + the chromosome the locus is on + S + the order in which the location on that chromosome was described. For example, D3S1358 is on the third chromosome and is the 1358th location described. The CODIS core loci are listed below; loci with asterisks are the new core and were added to the list in January 2017: CSF1PO, D3S1358, D5S818, D7S820, D8S1179, D13S317, D16S539, D18S51, D21S11, FGA, TH01, TPOX, vWA, D1S1656*, D2S441*, D2S1338*, D10S1248*, D12S391*, D19S433*, D22S1045*.
The loci used in CODIS were chosen because they are in regions of noncoding DNA, sections that do not code for proteins. These sections should not be able to tell investigators any additional information about the person such as their hair or eye color, or their race. However, new advancements in the understanding of genetic markers and ancestry have indicated that the CODIS loci may contain phenotypic information.
International use
While the U.S. database is not directly connected to any other country, the underlying CODIS software is used by other agencies around the world: 90 international laboratories in 50 countries use it. International police agencies that want to search the U.S. database can submit a request to the FBI for review. If the request is reasonable and the profile being searched would meet inclusionary standards for a U.S. profile, such as number of loci, the request can be searched at the national level or forwarded to any states where reasonable suspicion exists that they may be present in that level of the database.
Controversies
Arrestee collection
The original purpose of the CODIS database was to build upon the sex offender registry through the DNA collection of convicted sex offenders. Over time, that has expanded. Currently, all 50 states collect DNA from those convicted of felonies. A number of states also collect samples from juveniles as well as those who are arrested, but not yet convicted, of a crime. Note that even in states which limit collection of DNA retained in the state database only to those convicted of a crime, local databases, such as the forensic laboratory operated by New York City's Office of Chief Medical Examiner, may collect DNA samples of arrestees who have not been convicted. The collection of arrestee samples raised constitutional issues, specifically the Fourth Amendment prohibiting unreasonable search and seizure. It was argued that the collection of DNA from those that were not convicted of a crime, without an explicit order to collect, was considered a warrantless search and therefore unlawful. In 2013, the United States Supreme Court ruled in Maryland v. King that the collection of DNA from those arrested for a crime, but not yet convicted, is part of the police booking procedure and is reasonable when that collection is used for identification purposes.
Familial searching
The inheritance pattern of some DNA means that close relatives share a higher percentage of alleles with each other than with other, random, members of society. This allows for the searching of close matches within CODIS when an exact match is not found. By focusing on close matches, investigators can potentially find a close relative whose profile is in CODIS, narrowing their search to one specific family. Familial searching has led to several convictions after the exhaustion of all other leads, including the Grim Sleeper serial killer. This practice also raised Fourth Amendment challenges, as the individual who ends up being charged with a crime was only implicated because someone else's DNA was in the CODIS database. Twelve states have approved the use of familial searching in CODIS.
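A rough sketch of the allele-sharing idea behind familial screening (real systems rank candidates with likelihood ratios rather than raw counts; all data here are invented):

```python
# Count alleles shared between two profiles, allele by allele; close
# relatives tend to share more alleles than unrelated individuals
# (a parent and child share at least one allele at every locus).
def shared_alleles(a: dict, b: dict) -> int:
    count = 0
    for locus in a.keys() & b.keys():
        remaining = list(b[locus])
        for allele in a[locus]:
            if allele in remaining:
                remaining.remove(allele)  # each allele matches only once
                count += 1
    return count

parent = {"D3S1358": (15, 17), "TH01": (6, 9), "FGA": (21, 24)}
child  = {"D3S1358": (15, 16), "TH01": (7, 9), "FGA": (22, 24)}
print(shared_alleles(parent, child))  # 3: one shared allele per locus
```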
See also
Debbie Smith Act
GEDmatch
Integrated Automated Fingerprint Identification System (IAFIS)
References
External links
CODIS page on FBI.gov
"ACLU Warns of Privacy Abuses in Government Plan to Expand DNA Databases". ACLU. March 1, 1999.
A Not So Perfect Match, CBS, 2007
"DNA didn't prove anything, as it only had five points out of 13. Juror Explains Verdict In Double Murder". November 13, 2008.
Biometrics
DNA
Federal Bureau of Investigation
Law enforcement databases in the United States
Identity documents of the United States
National DNA databases
Sex offender registration
Forensic databases
Biological databases
1998 introductions | Combined DNA Index System | [
"Biology"
] | 2,324 | [
"Bioinformatics",
"Biological databases"
] |
30,871,229 | https://en.wikipedia.org/wiki/Glazing%20%28window%29 | Glazing, which derives from the Middle English for 'glass', is a part of a wall or window, made of glass. Glazing also describes the work done by a professional "glazier". Glazing is also less commonly used to describe the insertion of ophthalmic lenses into an eyeglass frame.
Common types of glazing that are used in architectural applications include clear and tinted float glass, tempered glass, and laminated glass as well as a variety of coated glasses, all of which can be glazed singly or as double, or even triple, glazing units. Ordinary clear glass has a slight green tinge, but special colorless glasses are offered by several manufacturers.
Glazing can be mounted on the surface of a window sash or door stile, usually made of wood, aluminium or PVC. The glass is fixed into a rabbet (rebate) in the frame in a number of ways including triangular glazing points, putty, etc. Toughened and laminated glass can be glazed by bolting panes directly to a metal framework by bolts passing through drilled holes.
Glazing is commonly used in low temperature solar thermal collectors because it helps retain the collected heat.
History
The first recorded use of glazing in windows was by the Romans in the first century AD. This glass was rudimentary, essentially a blown cylinder that had been flattened out, and was not very transparent. In the eleventh century, techniques were developed where the glass was spun into a disc, creating a thinner circular window, or a cylinder was again formed, but this time it was cut from edge to edge and unrolled to make a rectangle-shaped window. The newer cylinder method remained the dominant method until the 19th century, and individual panes of glass were therefore limited in size to the dimensions of those cylinders.
Continuous plate production was invented in 1848 by Henry Bessemer, who drew a ribbon of glass through rollers. This standardized the thickness of the glass, but its use in mass-production was limited by the need to polish both sides of the glass after manufacture, which was time-consuming and expensive. The process was slowly refined throughout the next century, with automated grinders and polishers being added to bring the cost down.
The breakthrough in large, mass-produced, continuous glass production happened in the 1950s with the development of the Float glass manufacturing process. Molten glass is poured over a surface of molten tin, where it flattens out and can be drawn off in a ribbon. The advantage of this process is that it is scalable to any size and produces high quality panes without any further polishing or grinding. Float glass has continued to be the most used type of glazing to the present day.
Composition
The most common glass used for glazing is Soda–lime glass, which has many advantages over other glass types. Silica (SiO2) makes up the bulk of the composition of this material at 70–75% by weight. Pure silica has a melting point that would be prohibitively expensive to reach with large-scale manufacturing, so sodium oxide (soda, Na2O) is added, which reduces the melting point. However, the sodium ions are water-soluble, which is not a desired property, so calcium oxide (lime, CaO) is added to reduce the solubility. The end result is a product which is high quality, clear, relatively cheap to produce, and recycles easily.
Role in energy conservation
Approximately 25% to 30% of HVAC energy costs stem from heat gain and loss through the glazing in windows. Multiple methods have therefore been developed to minimize heat transfer through the glass. The glazing itself is a barrier to transfer via convection, so the two strategies for reducing heat transfer focus on minimizing conduction and radiation.
Double-paned windows
The strategy to reduce conduction is the use of Insulated glazing, where two or more panes of glass are used in series, separated from each other by a space. Double-paned windows are the norm in new residential installations, as they offer substantial energy savings in comparison to single-paned glass. Each individual glass pane has poor insulation properties, with an R-value (a measure of an object's resistance to heat conduction) of 0.9. However, when two panes are placed in series with a gap between them, held in place and sealed by a spacer, the still gas in the gap acts as an insulator. The ideal gap size varies by location, but on average it ranges from 15 to 18 mm, giving a final assembly size of 23-26 mm assuming a typical glazing thickness of 4 mm. A double-paned window with air in the gap has an R-value of 2.1, which is much better than the 0.9 that a single pane of glass yields. A triple-paned window, which is not as popular but is used occasionally in environments with extreme temperatures, has an R-value of 3.2. While these values are much lower than those of walls, which have R-values starting at 12-15, the reduction in heat transfer is nevertheless substantial. Higher R-values still can be obtained by filling the gap with a less conductive gas such as argon (or less commonly, krypton or xenon). One final alternative method of reducing conduction is creating and maintaining a vacuum in between the panes of glass, achieving a very high R-value of 10 while also greatly minimizing the required gap between the panes to 2 mm, yielding an assembly size as small as 10 mm. This technology was first launched commercially in 1996, and while several million units have been produced in the ensuing decades, it remains prohibitively expensive for most use cases and has yet to see widespread adoption.
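A quick arithmetic sketch of what these R-values mean for steady-state heat flow, Q = A·ΔT/R (US customary units; the window area and temperature difference are assumed, while the R-values are the ones quoted above):

```python
# Heat flow through glazing: Q = area * delta_T / R, with R in
# ft^2*degF*h/BTU. Area and temperature difference are assumptions.
area = 15.0      # window area, ft^2 (assumed)
delta_t = 40.0   # indoor/outdoor temperature difference, degF (assumed)

for name, r in [("single pane", 0.9), ("double pane, air gap", 2.1),
                ("triple pane", 3.2), ("vacuum glazing", 10.0)]:
    q = area * delta_t / r
    print(f"{name:20s} R={r:4.1f}  heat loss = {q:5.0f} BTU/h")
```

With these assumptions a double-paned window cuts the conductive loss of a single pane by more than half, and vacuum glazing by roughly a factor of ten.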
Low-emissivity coating
The strategy to reduce radiation involves coating the glass with a low-emissivity (Low-E) coating, which reflects away much of the infrared light that hits it. There are two types of low-e coating. The first is Solar Control Low-E, where the intent is to block incoming solar radiation, which reduces heat gain inside the building and therefore the cooling costs associated with removing that heat. When installed on a double-paned window, the coating is placed on the inner face of the outside pane, and optionally on the inner face of the inner pane to improve insulating performance as well. This type of coating is most appropriate for cooling-dominated climates and buildings with large internal loads, where the goal is primarily to stop the buildings from overheating.
In a heating-dominated climate, the second type of low-e coating is more appropriate. This is Passive Low-E, where the goal is to retain heat inside the building. These coatings do not block as much of the short-wave infrared light from the sun, but do block any long-wave infrared light coming from the inside, functioning as somewhat of a greenhouse. These coatings are placed on the inner pane of glass, on the outer face if less solar heat gain is desired, and on the inner face if more solar heat gain is desired. Especially when combined with double-or-triple-paned windows, the R-values achieved with low-e coatings can be quite high, with a 3-paned window filled with argon with one low-e coating having an R-value of 5.4. One trade-off of low-e coatings is that while they are primarily aimed at reducing the amount of infrared light passing through the window, they do also somewhat reduce the amount of visible light passing through, and the building may incur higher lighting demand as a result.
There are two methods of applying the Low-E coating to the glazing: Hard Coat and Soft Coat. Hard Coat is applied either in or directly after the tin bath in the float glass manufacturing process. This produces a coating which is very durable and inexpensive, as it is added during the existing production process. However, it is not as energy efficient and allows more infrared light to pass through than the Soft Coat method. The Soft Coat, on the other hand, is applied after the glass has already been manufactured and cut and tends to be clearer and better at insulating. However, the additional manufacturing step adds to the cost of production, and the coating will degrade when exposed to the elements, and so can only be placed on the inside faces of a double-paned window. Generally, solar control Low-E windows are soft coat and passive Low-E windows are hard coat due to the lower emissivity of the soft coat.
See also
Architectural glass
Fanlight
Insulated glazing
Quadruple glazing
Noise mitigation
Roof lantern
Solar thermal collector
References
Construction
Glass engineering and science
Glass architecture | Glazing (window) | [
"Materials_science",
"Engineering"
] | 1,829 | [
"Glass architecture",
"Glass engineering and science",
"Materials science",
"Construction"
] |
30,871,810 | https://en.wikipedia.org/wiki/North%20magnetic%20pole | The north magnetic pole, also known as the magnetic north pole, is a point on the surface of Earth's Northern Hemisphere at which the planet's magnetic field points vertically downward (in other words, if a magnetic compass needle is allowed to rotate in three dimensions, it will point straight down). There is only one location where this occurs, near (but distinct from) the geographic north pole. The geomagnetic north pole is the northern antipodal pole of an ideal dipole model of the Earth's magnetic field, which is the most closely fitting model of Earth's actual magnetic field.
The north magnetic pole moves over time according to magnetic changes and flux lobe elongation in the Earth's outer core. In 2001, it was determined by the Geological Survey of Canada to lie west of Ellesmere Island in northern Canada at . It was situated at in 2005. In 2009, while still situated within the Canadian Arctic at , it was moving toward Russia at between per year. In 2013, the distance between the north magnetic pole and the geographic north pole was approximately . As of 2021, the pole is projected to have moved beyond the Canadian Arctic to .
Its southern hemisphere counterpart is the south magnetic pole. Since Earth's magnetic field is not exactly symmetric, the north and south magnetic poles are not antipodal, meaning that a straight line drawn from one to the other does not pass through the geometric center of Earth.
Earth's north and south magnetic poles are also known as magnetic dip poles, with reference to the vertical "dip" of the magnetic field lines at those points.
Polarity
All magnets have two poles, where lines of magnetic flux enter one pole and emerge from the other pole. By analogy with Earth's magnetic field, these are called the magnet's "north" and "south" poles. Before magnetism was well understood, the north-seeking pole of a magnet was given the north designation, reflecting its use in early compasses. However, opposite poles attract, which means that as a physical magnet, the magnetic north pole of the Earth is actually in the southern hemisphere. In other words, if we establish that true geographic north is north, then what we call the Earth's north magnetic pole is actually its south magnetic pole, since it attracts the north magnetic pole of other magnets, such as compass needles.
The direction of magnetic field lines is defined such that the lines emerge from the magnet's north pole and enter into the magnet's south pole.
History
Early European navigators, cartographers and scientists believed that compass needles were attracted to a hypothetical "magnetic island" somewhere in the far north (see Rupes Nigra), or to Polaris, the pole star. The idea that Earth itself acts as essentially a giant magnet was first proposed in 1600, by the English physician and natural philosopher William Gilbert. He was also the first to define the north magnetic pole as the point where Earth's magnetic field points vertically downwards. This is the current definition, though it would be a few hundred years before the nature of Earth's magnetic field was understood with modern accuracy and precision.
Expeditions and measurements
First observations
The first group to reach the north magnetic pole was led by James Clark Ross, who found it at Cape Adelaide on the Boothia Peninsula on 1 June 1831, while serving on the second arctic expedition of his uncle, Sir John Ross. Roald Amundsen found the north magnetic pole in a slightly different location in 1903. The third observation was by Canadian government scientists Paul Serson and Jack Clark, of the Dominion Astrophysical Observatory, who found the pole at Allen Lake on Prince of Wales Island in 1947.
Project Polaris
At the start of the Cold War, the United States Department of War recognized a need for a comprehensive survey of the North American Arctic and asked the United States Army to undertake the task. An assignment was made in 1946 for the Army Air Forces' recently formed Strategic Air Command to explore the entire Arctic Ocean area. The exploration was conducted by the 46th (later re-designated the 72nd) Photo Reconnaissance Squadron and reported on as a classified Top Secret mission named Project Nanook. This project in turn was divided into many separate, but identically classified, projects, one of which was Project Polaris, which was a radar, photographic (trimetrogon, or three-angle, cameras) and visual study of the entire Canadian Archipelago. A Canadian officer observer was assigned to accompany each flight.
Frank O. Klein, the director of the project, noticed that the fluxgate compass did not behave as erratically as expected—it oscillated no more than 1 to 2 degrees over much of the region—and began to study northern terrestrial magnetism. With the cooperation of many of his squadron teammates in obtaining many hundreds of statistical readings, startling results were revealed: The center of the north magnetic dip pole was on Prince of Wales Island some NNW of the positions determined by Amundsen and Ross, and the dip pole was not a point but occupied an elliptical region with foci about apart on Boothia Peninsula and Bathurst Island. Klein called the two foci local poles, for their importance to navigation in emergencies when using a "homing" procedure. About three months after Klein's findings were officially reported, a Canadian ground expedition was sent into the Archipelago to locate the position of the magnetic pole. R. Glenn Madill, Chief of Terrestrial Magnetism, Department of Mines and Resources, Canada, wrote to Lt. Klein on 21 July 1948:
(The positions were less than apart.)
Modern (post-1996)
The Canadian government has made several measurements since, which show that the north magnetic pole is moving continually northwestward. In 2001, an expedition located the pole at .
In 2007, the latest survey found the pole at .
During the 20th century it moved , and since 1970 its rate of motion has accelerated from per year (2001–2007 average; see also polar drift). Members of the 2007 expedition to locate the magnetic north pole wrote that such expeditions have become logistically difficult, as the pole moves farther away from inhabited locations. They expect that in the future, the magnetic pole position will be obtained from satellite data instead of ground surveys.
This general movement is in addition to a daily or diurnal variation in which the north magnetic pole describes a rough ellipse, with a maximum deviation of from its mean position. This effect is due to disturbances of the geomagnetic field by charged particles from the Sun.
As of early 2019, the magnetic north pole is moving from Canada towards Siberia at a rate of approximately per year.
NOAA gives the 2024 location of the magnetic north pole as 86 degrees North, 142 degrees East. By 2025, it will have drifted to 138 degrees East (same latitude).
Exploration
The first team of novices to reach the magnetic north pole did so in 1996, led by David Hempleman-Adams. It included Sue Stockdale, the first British woman to reach the Pole, and the first Swedish woman to do so. The team also successfully tracked the location of the Magnetic North Pole on behalf of the University of Ottawa, and certified its location by magnetometer and theodolite at .
The Polar Race was a biannual competition that ran from 2003 until 2011. It took place between the community of Resolute, on the shores of Resolute Bay, Nunavut, in northern Canada and the 1996 location of the north magnetic pole at , also in northern Canada.
On 25 July 2007, the Top Gear: Polar Special was broadcast on BBC Two in the United Kingdom, in which Jeremy Clarkson, James May, and their support and camera team claimed to be the first people in history to reach the 1996 location of the north magnetic pole in northern Canada by car. Note that they did not reach the actual north magnetic pole, which at the time (2007) had moved several hundred kilometers further north from the 1996 position.
Magnetic north and magnetic declination
Historically, the magnetic compass was an important tool for navigation. While it has been widely replaced by Global Positioning Systems, many airplanes and ships still carry them, as do casual boaters and hikers.
The direction in which a compass needle points is known as magnetic north. In general, this is not exactly the direction of the north magnetic pole (or of any other consistent location). Instead, the compass aligns itself to the local geomagnetic field, which varies in a complex manner over Earth's surface, as well as over time. The local angular difference between magnetic north and true north is called the magnetic declination. Most map coordinate systems are based on true north, and magnetic declination is often shown on map legends so that the direction of true north can be determined from north as indicated by a compass.
In North America the line of zero declination (the agonic line) runs from the north magnetic pole down through Lake Superior and southward into the Gulf of Mexico (see figure). Along this line, true north is the same as magnetic north. West of the agonic line a compass will give a reading that is east of true north and by convention the magnetic declination is positive. Conversely, east of the agonic line a compass will point west of true north and the declination is negative.
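A small sketch of applying a declination correction to a compass reading, taking east declination as positive to match the convention above (the declination values are assumed examples, not real lookups):

```python
# true bearing = magnetic bearing + declination (east positive), mod 360
def true_bearing(magnetic_deg: float, declination_deg: float) -> float:
    return (magnetic_deg + declination_deg) % 360.0

print(true_bearing(90.0, 10.0))  # 100.0: 10 deg east declination
print(true_bearing(90.0, -5.0))  # 85.0:  5 deg west declination
```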
North geomagnetic pole
As a first-order approximation, Earth's magnetic field can be modeled as a simple dipole (like a bar magnet), tilted about 10° with respect to Earth's rotation axis (which defines the geographic north and geographic south poles) and centered at Earth's center. The north and south geomagnetic poles are the antipodal points where the axis of this theoretical dipole intersects Earth's surface. If Earth's magnetic field were a perfect dipole then the field lines would be vertical at the geomagnetic poles, and they would coincide with the magnetic poles. However, the approximation is imperfect, and so the magnetic and geomagnetic poles lie some distance apart.
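For reference, the field magnitude of such a centered dipole is a textbook result, sketched here to show why the dip poles of a perfect dipole would sit exactly at the geomagnetic poles:

```latex
% Field magnitude of an ideal centered dipole of moment m, at radial
% distance r and geomagnetic colatitude theta:
\[
  |\mathbf{B}(r,\theta)| = \frac{\mu_0 m}{4\pi r^3}\,\sqrt{1 + 3\cos^2\theta}
\]
% At the poles (theta = 0 or pi) the field is purely radial, i.e.
% vertical, and twice as strong as at the geomagnetic equator
% (theta = pi/2).
```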
Like the north magnetic pole, the north geomagnetic pole attracts the north pole of a bar magnet and so is in a physical sense actually a magnetic south pole. It is the center of the region of the magnetosphere in which the Aurora Borealis can be seen. As of 2015 it was located at approximately , over Ellesmere Island, Canada but it is now drifting away from North America and toward Siberia.
Geomagnetic reversal
Over the life of Earth, the orientation of Earth's magnetic field has reversed many times, with magnetic north becoming magnetic south and vice versa – an event known as a geomagnetic reversal. Evidence of geomagnetic reversals can be seen at mid-ocean ridges where tectonic plates move apart and the seabed is filled in with magma. As the magma seeps out of the mantle, cools, and solidifies into igneous rock, it is imprinted with a record of the direction of the magnetic field at the time that the magma cooled.
See also
South magnetic pole
Polar alignment
References
External links
Map of pole's wandering
Polar regions of the Earth
Geography of the Arctic
Geomagnetism
Geology of the Arctic
Geography of the Northwest Territories
Orientation (geometry) | North magnetic pole | [
"Physics",
"Mathematics"
] | 2,291 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
30,871,821 | https://en.wikipedia.org/wiki/South%20magnetic%20pole | The south magnetic pole, also known as the magnetic south pole, is the point on Earth's Southern Hemisphere where the geomagnetic field lines are directed perpendicular to the nominal surface. The Geomagnetic South Pole, a related point, is the south pole of an ideal dipole model of the Earth's magnetic field that most closely fits the Earth's actual magnetic field.
For historical reasons, the "end" of a freely hanging magnet that points (roughly) north is itself called the "north pole" of the magnet, and the other end, pointing south, is called the magnet's "south pole". Because opposite poles attract, Earth's south magnetic pole is physically actually a magnetic north pole (see also ).
The south magnetic pole is constantly shifting due to changes in Earth's magnetic field.
As of 2005 it was calculated to lie at , placing it off the coast of Antarctica, between Adélie Land and Wilkes Land. In 2015 it lay at (est). That point lies outside the Antarctic Circle. Due to polar drift, the pole is moving northwest by about per year. Its current distance from the actual Geographic South Pole is approximately . The nearest permanent science station is Dumont d'Urville Station. While the north magnetic pole began wandering very quickly in the mid 1990s, the movement of the south magnetic pole did not show a matching change of speed.
Expeditions
Early unsuccessful attempts to reach the magnetic south pole included those of French explorer Jules Dumont d'Urville (1837–1840), American Charles Wilkes (expedition of 1838–1842) and Briton James Clark Ross (expedition of 1839–1843).
The first calculation of the magnetic inclination to locate the magnetic South Pole was made on 23 January 1838 by a hydrographer on the Dumont d'Urville expedition to Antarctica and Oceania in 1837–1840, which discovered Adélie Land.
On 16 January 1909 three men (Douglas Mawson, Edgeworth David, and Alistair Mackay) from Sir Ernest Shackleton's Nimrod Expedition claimed to have found the south magnetic pole, which was at that time located on land. They planted a flagpole at the spot and claimed it for the British Empire. However, there is now some doubt as to whether their location was correct. The approximate position of the pole on 16 January 1909 was .
Fits to global data sets
The south magnetic pole has also been estimated by fits to global sets of data such as the World Magnetic Model (WMM) and the International Geomagnetic Reference Field (IGRF). For earlier years back to about 1600, the model GUFM1 is used, based on a compilation of data from ship logs.
South geomagnetic pole
Earth's geomagnetic field can be approximated by a tilted dipole (like a bar magnet) placed at the center of Earth. The south geomagnetic pole is the point where the axis of this best-fitting tilted dipole intersects Earth's surface in the southern hemisphere. As of 2005 it was calculated to be located at , near the Vostok Station. Because the field is not an exact dipole, the south geomagnetic pole does not coincide with the south magnetic pole. Furthermore, the south geomagnetic pole is wandering for the same reason its northern geomagnetic counterpart wanders.
See also
North magnetic pole
Polar alignment
References
External links
Australian Antarctic Division
Polar regions of the Earth
East Antarctica
Geography of Antarctica
Geomagnetism
Orientation (geometry) | South magnetic pole | [
"Physics",
"Mathematics"
] | 721 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
30,872,150 | https://en.wikipedia.org/wiki/Object-Z | Object-Z is an object-oriented extension to the Z notation developed at the University of Queensland, Australia.
Object-Z extends Z by the addition of language constructs resembling the object-oriented paradigm, most notably, classes. Other object-oriented notions such as polymorphism and inheritance are also supported.
While not as popular as its base language Z, Object-Z has still received significant attention in the formal methods community, and research on aspects of the language is ongoing, including hybrid languages using Object-Z, tool support (e.g., through the Community Z Tools project) and refinement calculi.
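To give a flavour of the notation, the sketch below is an informal, plain-text approximation of an Object-Z class: state, initialization and operations are grouped in one named box. The Counter class and its operation are invented for illustration; actual Object-Z is typeset as boxed schemas (commonly via a LaTeX style file) rather than written as ASCII.

```text
── Counter ─────────────────────────────────────
│  count : N                 (state schema: one natural-number attribute)
│ ──────
│  INIT
│    count = 0               (initial-state schema)
│ ──────
│  Inc                       (operation schema)
│    Δ(count)                (delta-list: Inc may change count)
│    count′ = count + 1      (relates pre-state count to post-state count′)
└────────────────────────────────────────────────
```

A subclass could then inherit Counter, adding or redefining operations, which is how Object-Z expresses inheritance; polymorphism allows a variable declared as a Counter to hold objects of any subclass.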
See also
Z++
References
External links
The Object-Z Home Page
Community Z Tools (CZT) project
Z notation
Specification languages
Formal specification languages
Object-oriented programming
University of Queensland | Object-Z | [
"Mathematics",
"Engineering"
] | 165 | [
"Software engineering",
"Specification languages",
"Z notation"
] |
30,872,162 | https://en.wikipedia.org/wiki/DNA%20barcoding | DNA barcoding is a method of species identification using a short section of DNA from a specific gene or genes. The premise of DNA barcoding is that by comparison with a reference library of such DNA sections (also called "sequences"), an individual sequence can be used to uniquely identify an organism to species, just as a supermarket scanner uses the familiar black stripes of the UPC barcode to identify an item in its stock against its reference database. These "barcodes" are sometimes used in an effort to identify unknown species or parts of an organism, simply to catalog as many taxa as possible, or to compare with traditional taxonomy in an effort to determine species boundaries.
Different gene regions are used to identify the different organismal groups using barcoding. The most commonly used barcode region for animals and some protists is a portion of the cytochrome c oxidase I (COI or COX1) gene, found in mitochondrial DNA. Other genes suitable for DNA barcoding are the internal transcribed spacer (ITS) rRNA often used for fungi and RuBisCO used for plants. Microorganisms are detected using different gene regions. The 16S rRNA gene for example is widely used in identification of prokaryotes, whereas the 18S rRNA gene is mostly used for detecting microbial eukaryotes. These gene regions are chosen because they have less intraspecific (within species) variation than interspecific (between species) variation, which is known as the "Barcoding Gap".
Some applications of DNA barcoding include: identifying plant leaves even when flowers or fruits are not available; identifying pollen collected on the bodies of pollinating animals; identifying insect larvae which may have fewer diagnostic characters than adults; or investigating the diet of an animal based on its stomach content, saliva or feces. When barcoding is used to identify organisms from a sample containing DNA from more than one organism, the term DNA metabarcoding is used, e.g. DNA metabarcoding of diatom communities in rivers and streams, which is used to assess water quality.
Background
DNA barcoding techniques were developed from early DNA sequencing work on microbial communities using the 5S rRNA gene. In 2003, specific methods and terminology of modern DNA barcoding were proposed as a standardized method for identifying species, as well as potentially allocating unknown sequences to higher taxa such as orders and phyla, in a paper by Paul D. N. Hebert et al. from the University of Guelph, Ontario, Canada. Hebert and his colleagues demonstrated the utility of the cytochrome c oxidase I (COI) gene, first utilized by Folmer et al. in 1994, whose published DNA primers serve as a tool for phylogenetic analyses at the species level and as a suitable discriminatory tool between metazoan invertebrates. The "Folmer region" of the COI gene is commonly used for distinction between taxa based on its patterns of variation at the DNA level. The relative ease of retrieving the sequence, and variability mixed with conservation between species, are some of the benefits of COI. Calling the profiles "barcodes", Hebert et al. envisaged the development of a COI database that could serve as the basis for a "global bioidentification system".
Methods
Sampling and preservation
Barcoding can be done from tissue from a target specimen, from a mixture of organisms (bulk sample), or from DNA present in environmental samples (e.g. water or soil). The methods for sampling, preservation or analysis differ between those different types of sample.
Tissue samples
To barcode a tissue sample from the target specimen, a small piece of skin, a scale, a leg or antenna is likely to be sufficient (depending on the size of the specimen). To avoid contamination, it is necessary to sterilize used tools between samples. It is recommended to collect two samples from one specimen, one to archive, and one for the barcoding process. Sample preservation is crucial to overcome the issue of DNA degradation.
Bulk samples
A bulk sample is a type of environmental sample containing several organisms from the taxonomic group under study. The difference between bulk samples (in the sense used here) and other environmental samples is that the bulk sample usually provides a large quantity of good-quality DNA. Examples of bulk samples include aquatic macroinvertebrate samples collected by kick-net, or insect samples collected with a Malaise trap. Filtered or size-fractionated water samples containing whole organisms like unicellular eukaryotes are also sometimes defined as bulk samples. Such samples can be collected by the same techniques used to obtain traditional samples for morphology-based identification.
eDNA samples
The environmental DNA (eDNA) method is a non-invasive approach to detect and identify species from cellular debris or extracellular DNA present in environmental samples (e.g. water or soil) through barcoding or metabarcoding. The approach is based on the fact that every living organism leaves DNA in the environment, and this environmental DNA can be detected even for organisms that are at very low abundance. Thus, for field sampling, the most crucial part is to use DNA-free material and tools on each sampling site or sample to avoid contamination, if the DNA of the target organism(s) is likely to be present in low quantities. On the other hand, an eDNA sample always includes the DNA of whole-cell, living microorganisms, which are often present in large quantities. Therefore, microorganism samples taken in the natural environment also are called eDNA samples, but contamination is less problematic in this context due to the large quantity of target organisms. The eDNA method is applied on most sample types, like water, sediment, soil, animal feces, stomach content or blood from e.g. leeches.
DNA extraction, amplification and sequencing
DNA barcoding requires that DNA in the sample is extracted. Several different DNA extraction methods exist, and factors like cost, time, sample type and yield affect the selection of the optimal method.
When DNA from organismal or eDNA samples is amplified using polymerase chain reaction (PCR), the reaction can be affected negatively by inhibitor molecules contained in the sample. Removal of these inhibitors is crucial to ensure that high quality DNA is available for subsequent analyzing.
Amplification of the extracted DNA is a required step in DNA barcoding. Typically, only a small fragment of the total DNA material is sequenced (typically 400–800 base pairs) to obtain the DNA barcode. Amplification of eDNA material is usually focused on smaller fragment sizes (<200 base pairs), as eDNA is more likely to be fragmented than DNA material from other sources. However, some studies argue that there is no relationship between amplicon size and detection rate of eDNA.
When the DNA barcode marker region has been amplified, the next step is to sequence the marker region using DNA sequencing methods. Many different sequencing platforms are available, and technical development is proceeding rapidly.
Marker selection
Markers used for DNA barcoding are called barcodes. In order to successfully characterize species based on DNA barcodes, the selection of informative DNA regions is crucial. A good DNA barcode should have low intra-specific and high inter-specific variability and possess conserved flanking sites for developing universal PCR primers for wide taxonomic application. The goal is to design primers that will detect and distinguish most or all of the species in the studied group of organisms (high taxonomic resolution). The length of the barcode sequence should be short enough to be usable with current sampling sources, DNA extraction, amplification and sequencing methods.
Ideally, one gene sequence would be used for all taxonomic groups, from viruses to plants and animals. However, no such gene region has been found yet, so different barcodes are used for different groups of organisms, or depending on the study question.
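As a sketch of how the "low intra-specific, high inter-specific variability" criterion can be checked in practice, the Python fragment below computes pairwise p-distances within and between species for a candidate marker; the aligned sequences and species labels are invented for illustration.

```python
from itertools import combinations

def p_distance(a: str, b: str) -> float:
    """Proportion of differing sites between two aligned sequences."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Hypothetical aligned barcode sequences, keyed by (species, specimen number).
sequences = {
    ("sp_A", 1): "ACGTACGTACGT",
    ("sp_A", 2): "ACGTACGTACGA",
    ("sp_B", 1): "ACGTTCGAACGT",
    ("sp_B", 2): "ACGTTCGAACGA",
}

intra, inter = [], []
for (key1, seq1), (key2, seq2) in combinations(sequences.items(), 2):
    d = p_distance(seq1, seq2)
    (intra if key1[0] == key2[0] else inter).append(d)

# A "barcoding gap" exists when the largest within-species distance
# stays clearly below the smallest between-species distance.
print("max intra:", max(intra), "min inter:", min(inter))
```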
For animals, the most widely used barcode is the mitochondrial cytochrome c oxidase I (COI) locus. Other mitochondrial genes, such as Cytb, 12S or 16S, are also used. Mitochondrial genes are preferred over nuclear genes because of their lack of introns, their haploid mode of inheritance and their limited recombination. Moreover, each cell has numerous mitochondria (up to several thousand) and each of them contains several circular DNA molecules. Mitochondria can therefore offer an abundant source of DNA even when sample tissue is limited.
In plants, however, mitochondrial genes are not appropriate for DNA barcoding because they exhibit low mutation rates. A few candidate genes have been found in the chloroplast genome, the most promising being maturase K gene (matK) by itself or in association with other genes. Multi-locus markers such as ribosomal internal transcribed spacers (ITS DNA) along with matK, rbcL, trnH or other genes have also been used for species identification. The best discrimination between plant species has been achieved when using two or more chloroplast barcodes.
For bacteria, the small subunit of ribosomal RNA (16S) gene can be used for different taxa, as it is highly conserved. Some studies suggest COI, type II chaperonin (cpn60) or β subunit of RNA polymerase (rpoB) also could serve as bacterial DNA barcodes.
Barcoding fungi is more challenging, and more than one primer combination might be required. The COI marker performs well in certain fungi groups, but not equally well in others. Therefore, additional markers are being used, such as ITS rDNA and the large subunit of nuclear ribosomal RNA (28S LSU rRNA).
Within the group of protists, various barcodes have been proposed, such as the D1–D2 or D2–D3 regions of 28S rDNA, V4 subregion of 18S rRNA gene, ITS rDNA and COI. Additionally, some specific barcodes can be used for photosynthetic protists, for example the large subunit of ribulose-1,5-bisphosphate carboxylase-oxygenase gene (rbcL) and the chloroplastic 23S rRNA gene.
Reference libraries and bioinformatics
Reference libraries are used for the taxonomic identification, also called annotation, of sequences obtained from barcoding or metabarcoding. These databases contain the DNA barcodes assigned to previously identified taxa. Most reference libraries do not cover all species within an organism group, and new entries are continually created. In the case of macro- and many microorganisms (such as algae), these reference libraries require detailed documentation (sampling location and date, person who collected it, image, etc.) and authoritative taxonomic identification of the voucher specimen, as well as submission of sequences in a particular format. However, such standards are fulfilled for only a small number of species. The process also requires the storage of voucher specimens in museum collections, herbaria and other collaborating institutions. Both taxonomically comprehensive coverage and content quality are important for identification accuracy. In the microbial world, there is no DNA information for most species names, and many DNA sequences cannot be assigned to any Linnaean binomial. Several reference databases exist depending on the organism group and the genetic marker used. There are smaller, national databases (e.g. FinBOL), and large consortia like the International Barcode of Life Project (iBOL).
BOLD
Launched in 2007, the Barcode of Life Data System (BOLD) is one of the biggest databases, containing about 780 000 BINs (Barcode Index Numbers) in 2022. It is a freely accessible repository for the specimen and sequence records for barcode studies, and it is also a workbench aiding the management, quality assurance and analysis of barcode data. The database mainly contains BIN records for animals based on the COI genetic marker. For plant identification, BOLD accepts sequences from matK and rbcL.
UNITE
The UNITE database was launched in 2003 and is a reference database for the molecular identification of fungal (and since 2018 all eukaryotic) species with the nuclear ribosomal internal transcribed spacer (ITS) genetic marker region. The database is built around the concept of species hypotheses: the user chooses the sequence-similarity threshold at which to work, and the sequences are sorted in comparison to sequences obtained from voucher specimens identified by experts.
Diat.barcode
Diat.barcode database was first published under the name R-syst::diatom in 2016, starting with data from two sources: the Thonon culture collection (TCC) in the hydrobiological station of the French National Institute for Agricultural Research (INRA), and the NCBI (National Center for Biotechnology Information) nucleotide database. Diat.barcode provides data for two genetic markers, rbcL (ribulose-1,5-bisphosphate carboxylase/oxygenase) and 18S (18S ribosomal RNA). The database also includes additional trait information for species, such as morphological characteristics (biovolume, size dimensions, etc.), life-forms (mobility, colony type, etc.) or ecological features (pollution sensitivity, etc.).
Bioinformatic analysis
In order to obtain well structured, clean and interpretable data, raw sequencing data must be processed using bioinformatic analysis. The FASTQ file with the sequencing data contains two types of information: the sequences detected in the sample (FASTA file) and a quality file with quality scores (PHRED scores) associated with each nucleotide of each DNA sequence. The PHRED scores indicate the probability with which the associated nucleotide has been correctly scored.
In general, the PHRED score decreases towards the end of each DNA sequence. Thus some bioinformatics pipelines simply cut the end of the sequences at a defined threshold.
Some sequencing technologies, like MiSeq, use paired-end sequencing, during which sequencing is performed from both directions, producing better quality. The overlapping sequences are then aligned into contigs and merged. Usually, several samples are pooled in one run, and each sample is characterized by a short DNA fragment, the tag. In a demultiplexing step, sequences are sorted using these tags to reassemble the separate samples. Before further analysis, tags and other adapters are removed from the barcoding sequence DNA fragment. During trimming, the bad-quality sequences (low PHRED scores), or sequences that are much shorter or longer than the targeted DNA barcode, are removed. The following dereplication step is the process where all of the quality-filtered sequences are collapsed into a set of unique reads (individual sequence units, ISUs) with information on their abundance in the samples. After that, chimeras (i.e. compound sequences formed from pieces of mixed origin) are detected and removed. Finally, the sequences are clustered into OTUs (Operational Taxonomic Units), using one of many clustering strategies. The most frequently used bioinformatic software includes Mothur, Uparse, Qiime, Galaxy, Obitools, JAMP, Barque, and DADA2.
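A minimal sketch of the early steps just described (quality trimming, length filtering, dereplication) in plain Python; the four-line FASTQ layout is standard, but the thresholds and the file name are arbitrary assumptions, and real studies would use the dedicated tools listed above.

```python
from collections import Counter

PHRED_OFFSET = 33                    # standard Sanger/Illumina 1.8+ encoding
MIN_QUALITY = 20                     # trim the read at the first base below Q20
MIN_LENGTH, MAX_LENGTH = 100, 600    # plausible barcode-length window

def read_fastq(path):
    """Yield (sequence, quality-string) pairs from a 4-line FASTQ file."""
    with open(path) as fh:
        while True:
            header = fh.readline()
            if not header:
                break
            seq = fh.readline().strip()
            fh.readline()             # '+' separator line
            qual = fh.readline().strip()
            yield seq, qual

def quality_trim(seq, qual):
    """Cut the read at the first position whose PHRED score drops below threshold."""
    for i, ch in enumerate(qual):
        if ord(ch) - PHRED_OFFSET < MIN_QUALITY:
            return seq[:i]
    return seq

def dereplicate(reads):
    """Collapse identical reads into unique sequences with abundances (ISUs)."""
    return Counter(reads)

def pipeline(path):
    kept = []
    for seq, qual in read_fastq(path):
        trimmed = quality_trim(seq, qual)
        if MIN_LENGTH <= len(trimmed) <= MAX_LENGTH:
            kept.append(trimmed)
    return dereplicate(kept)

# isus = pipeline("sample.fastq")    # {sequence: read count, ...}
```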
Comparing the abundance of reads, i.e. sequences, between different samples is still a challenge because both the total number of reads in a sample as well as the relative amount of reads for a species can vary between samples, methods, or other variables. For comparison, one may then reduce the number of reads of each sample to the minimal number of reads of the samples to be compared – a process called rarefaction. Another way is to use the relative abundance of reads.
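A sketch of rarefaction as described here: every sample is randomly subsampled down to the read depth of the smallest sample so that counts become comparable; the OTU tables below are invented.

```python
import random

def rarefy(counts, depth, seed=0):
    """Subsample a {taxon: reads} table down to a fixed total read depth."""
    pool = [taxon for taxon, n in counts.items() for _ in range(n)]
    rng = random.Random(seed)
    subsample = rng.sample(pool, depth)
    rarefied = {}
    for taxon in subsample:
        rarefied[taxon] = rarefied.get(taxon, 0) + 1
    return rarefied

samples = {
    "site_1": {"otu_a": 500, "otu_b": 120, "otu_c": 3},
    "site_2": {"otu_a": 60, "otu_b": 30},
}
# Reduce every sample to the depth of the smallest one (here 90 reads).
depth = min(sum(c.values()) for c in samples.values())
rarefied = {name: rarefy(c, depth) for name, c in samples.items()}
```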
Species identification and taxonomic assignment
The taxonomic assignment of the OTUs to species is achieved by matching of sequences to reference libraries. The Basic Local Alignment Search Tool (BLAST) is commonly used to identify regions of similarity between sequences by comparing sequence reads from the sample to sequences in reference databases. If the reference database contains sequences of the relevant species, then the sample sequences can be identified to species level. If a sequence cannot be matched to an existing reference library entry, DNA barcoding can be used to create a new entry.
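The fragment below is a toy stand-in for this matching step, not BLAST itself: each query sequence is assigned the taxon of its most similar reference entry, and left unassigned below an identity threshold. The reference entries and the 97% cutoff are illustrative assumptions.

```python
from difflib import SequenceMatcher

# Hypothetical reference library: barcode sequence -> species name.
reference = {
    "ACGTACGTACGTAA": "Gammarus fossarum",
    "ACGTTCGAACGTAA": "Gammarus pulex",
}

def assign(query, min_identity=0.97):
    """Return the best-matching reference taxon, or None if below threshold."""
    best_taxon, best_score = None, 0.0
    for seq, taxon in reference.items():
        score = SequenceMatcher(None, query, seq).ratio()
        if score > best_score:
            best_taxon, best_score = taxon, score
    return best_taxon if best_score >= min_identity else None

print(assign("ACGTACGTACGTAA"))   # exact hit -> 'Gammarus fossarum'
print(assign("GGGGGGGGGGGGGG"))   # no credible match -> None
```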
In some cases, due to the incompleteness of reference databases, identification can only be achieved at higher taxonomic levels, such as assignment to a family or class. In some organism groups such as bacteria, taxonomic assignment to species level is often not possible. In such cases, a sample may be assigned to a particular operational taxonomic unit (OTU).
In some cases, specimens with identical (COI) DNA barcodes clearly belong to different species, e.g. species of the fish genus Chromis.
Applications
Applications of DNA barcoding include the identification of new species, safety assessment of food, identification and assessment of cryptic species, detection of alien species, identification of endangered and threatened species, linking of egg and larval stages to adult species, securing of intellectual property rights for bioresources, framing of global management plans for conservation strategies, elucidation of feeding niches, and forensic science. DNA barcode markers can be applied to address basic questions in systematics, ecology, evolutionary biology and conservation, including community assembly, species interaction networks, taxonomic discovery, and assessing priority areas for environmental protection.
Identification of species
Specific short DNA sequences or markers from a standardized region of the genome can provide a DNA barcode for identifying species. Molecular methods are especially useful when traditional methods are not applicable. DNA barcoding has great applicability in identification of larvae for which there are generally few diagnostic characters available, and in association of different life stages (e.g. larval and adult) in many animals. Identification of species listed in the Convention of the International Trade of Endangered Species (CITES) appendixes using barcoding techniques is used in monitoring of illegal trade.
Detection of invasive species
Alien species can be detected via barcoding. Barcoding can be suitable for the detection of species in, e.g., border control, where rapid and accurate morphological identification is often not possible due to similarities between different species, lack of sufficient diagnostic characteristics and/or lack of taxonomic expertise. Barcoding and metabarcoding can also be used to screen ecosystems for invasive species, and to distinguish between an invasive species and native, morphologically similar species. DNA-based identification has proven highly efficient relative to traditional monitoring of biological invasions.
Delimiting cryptic species
DNA barcoding enables the identification and recognition of cryptic species. The results of DNA barcoding analyses depend however upon the choice of analytical methods, so the process of delimiting cryptic species using DNA barcodes can be as subjective as any other form of taxonomy. Hebert et al. (2004) concluded that the butterfly Astraptes fulgerator in north-western Costa Rica actually consists of 10 different species. These results, however, were subsequently challenged by Brower (2006), who pointed out numerous serious flaws in the analysis, and concluded that the original data could support no more than the possibility of three to seven cryptic taxa rather than ten cryptic species. Smith et al. (2007) used cytochrome c oxidase I DNA barcodes for species identification of the 20 morphospecies of Belvosia parasitoid flies (Diptera: Tachinidae) reared from caterpillars (Lepidoptera) in Area de Conservación Guanacaste (ACG), northwestern Costa Rica. These authors discovered that barcoding raises the species count to 32, by revealing that each of the three parasitoid species, previously considered as generalists, actually are arrays of highly host-specific cryptic species. For 15 morphospecies of polychaetes within the deep Antarctic benthos studied through DNA barcoding, cryptic diversity was found in 50% of the cases. Furthermore, 10 previously overlooked morphospecies were detected, increasing the total species richness in the sample by 233%.
Diet analysis and food web application
DNA barcoding and metabarcoding can be useful in diet analysis studies, and are typically used if prey specimens cannot be identified based on morphological characters. There is a range of sampling approaches in diet analysis: DNA metabarcoding can be conducted on stomach contents, feces, saliva or whole-body analysis. In fecal samples or highly digested stomach contents, it is often not possible to distinguish tissue from a single species, and therefore metabarcoding can be applied instead. Feces or saliva represent non-invasive sampling approaches, while whole-body analysis often means that the individual needs to be killed first. For smaller organisms, stomach content is then often analyzed by sequencing the entire animal.
Barcoding for food safety
DNA barcoding represents an essential tool to evaluate the quality of food products. The purpose is to guarantee food traceability, to minimize food piracy, and to valorize local and typical agro-food production. Another purpose is to safeguard public health; for example, metabarcoding offers the possibility to identify groupers causing ciguatera fish poisoning from meal remnants, or to separate poisonous mushrooms from edible ones.
Biomonitoring and ecological assessment
DNA barcoding can be used to assess the presence of endangered species for conservation efforts, or the presence of indicator species reflecting specific ecological conditions, for example excess nutrients or low oxygen levels.
Forensic Science
DNA barcoding is often used for species identification in forensic science cases. Unknown animal or plant samples found at crime scenes can be collected and identified, in the hope of linking them to a suspect and securing a conviction. Poaching, killing of endangered species, and animal abuse are examples of crimes in which DNA barcoding is used, since animal DNA is often found. Plant DNA, on the other hand, is usually used as trace evidence to link a suspect to a crime scene.
Potentials and shortcomings
Potentials
Traditional bioassessment methods are well established internationally, and serve biomonitoring well, for example for aquatic bioassessment within the EU Directives WFD and MSFD. However, DNA barcoding could improve traditional methods for the following reasons: DNA barcoding (i) can increase taxonomic resolution and harmonize the identification of taxa which are difficult to identify or lack experts, (ii) can more accurately/precisely relate environmental factors to specific taxa, (iii) can increase comparability among regions, (iv) allows for the inclusion of early life stages and fragmented specimens, (v) allows delimitation of cryptic/rare species, (vi) allows for the development of new indices, e.g. based on rare/cryptic species which may be sensitive or tolerant to stressors, (vii) increases the number of samples which can be processed and reduces processing time, resulting in increased knowledge of species ecology, and (viii) is a non-invasive way of monitoring when using eDNA methods.
Time and cost
DNA barcoding is faster than traditional morphological methods all the way from training through to taxonomic assignment. It takes less time to gain expertise in DNA methods than to become an expert in taxonomy. In addition, the DNA barcoding workflow (i.e. from sample to result) is generally quicker than the traditional morphological workflow and allows the processing of more samples.
Taxonomic resolution
DNA barcoding allows the resolution of taxa from higher (e.g. family) to lower (e.g. species) taxonomic levels that are otherwise too difficult to identify using traditional morphological methods, such as identification via microscopy. For example, Chironomidae (non-biting midges) are widely distributed in both terrestrial and freshwater ecosystems. Their richness and abundance make them important for ecological processes and networks, and they are one of many invertebrate groups used in biomonitoring. Invertebrate samples can contain as many as 100 species of chironomids, which often make up as much as 50% of a sample. Despite this, they are usually not identified below the family level because of the taxonomic expertise and time required. This may result in different chironomid species with different ecological preferences being grouped together, resulting in inaccurate assessment of water quality.
DNA barcoding provides the opportunity to resolve taxa, and to directly relate stressor effects to specific taxa such as individual chironomid species. For example, Beermann et al. (2018) DNA-barcoded Chironomidae to investigate their response to multiple stressors: reduced flow, increased fine sediment and increased salinity. After barcoding, it was found that the chironomid sample consisted of 183 Operational Taxonomic Units (OTUs), i.e. barcodes (sequences) that are often equivalent to morphological species. These 183 OTUs displayed 15 response types, rather than the previously reported two response types recorded when all chironomids were grouped together in the same multiple-stressor study. A similar trend was discovered in a study by Macher et al. (2016), which discovered cryptic diversity within the New Zealand mayfly species Deleatidium sp. This study found different response patterns of 12 molecularly distinct OTUs to stressors, which may change the consensus that this mayfly is sensitive to pollution.
Shortcomings
Despite the advantages offered by DNA barcoding, it has also been suggested that DNA barcoding is best used as a complement to traditional morphological methods. This recommendation is based on multiple perceived challenges.
Physical parameters
It is not completely straightforward to connect DNA barcodes with the ecological preferences of the barcoded taxon in question, as is needed if barcoding is to be used for biomonitoring. For example, detecting target DNA in aquatic systems depends on the concentration of DNA molecules at a site, which in turn can be affected by many factors. The presence of DNA molecules also depends on dispersion at a site, e.g. the direction or strength of currents. Little is known about how DNA moves around in streams and lakes, which makes sampling difficult. Another factor is the behavior of the target species: fish can show seasonal changes in movements, and crayfish or mussels release DNA in larger amounts only at certain times of their life (moulting, spawning). For DNA in soil, even less is known about distribution, quantity or quality.
The major limitation of the barcoding method is that it relies on barcode reference libraries for the taxonomic identification of the sequences. The taxonomic identification is accurate only if a reliable reference is available. However, most databases are still incomplete, especially for smaller organisms, e.g. fungi, phytoplankton, Nematoda, etc. In addition, current databases contain misidentifications, spelling mistakes and other errors. Massive curation and completion efforts around the databases are necessary for all organisms, involving large barcoding projects (for example the iBOL project for the Barcode of Life Data Systems (BOLD) reference database). However, completion and curation are difficult and time-consuming. Without vouchered specimens, there can be no certainty about whether the sequence used as a reference is correct.
DNA sequence databases like GenBank contain many sequences that are not tied to vouchered specimens (for example, herbarium specimens, cultured cell lines, or sometimes images). This is problematic in the face of taxonomic issues such as whether several species should be split or combined, or whether past identifications were sound. Reusing sequences of initially misidentified organisms that are not tied to vouchered specimens may support incorrect conclusions and must be avoided. Therefore, best practice for DNA barcoding is to sequence vouchered specimens. For many taxa it can, however, be difficult to obtain reference specimens, for example when specimens are difficult to catch, available specimens are poorly conserved, or adequate taxonomic expertise is lacking.
Importantly, DNA barcodes can also be used to create interim taxonomy, in which case OTUs can be used as substitutes for traditional Latin binomials – thus significantly reducing dependency on fully populated reference databases.
Technological bias
DNA barcoding also carries methodological bias, from sampling to bioinformatic data analysis. Besides the risk of contamination of the DNA sample by PCR inhibitors, primer bias is one of the major sources of error in DNA barcoding. The isolation of an efficient DNA marker and the design of primers is a complex process, and considerable effort has been made to develop primers for DNA barcoding in different taxonomic groups. However, primers often bind preferentially to some sequences, leading to differential primer efficiency and specificity, unrepresentative assessment of communities, and richness inflation. Thus, the composition of the communities of sequences in a sample is mainly altered at the PCR step. In addition, PCR replication is often required, but leads to an exponential increase in the risk of contamination. Several studies have highlighted the possibility of using mitochondria-enriched samples or PCR-free approaches to avoid these biases, but at the time of writing, the DNA metabarcoding technique is still based on the sequencing of amplicons. Other biases enter the picture during sequencing and during the bioinformatic processing of the sequences, like the creation of chimeras.
Lack of standardization
Even as DNA barcoding is more widely used and applied, there is no agreement concerning the methods for DNA preservation or extraction, the choices of DNA markers and primers set, or PCR protocols. The parameters of bioinformatics pipelines (for example OTU clustering, taxonomic assignment algorithms or thresholds etc.) are at the origin of much debate among DNA barcoding users. Sequencing technologies are also rapidly evolving, together with the tools for the analysis of the massive amounts of DNA data generated, and standardization of the methods is urgently needed to enable collaboration and data sharing at greater spatial and time-scale. This standardisation of barcoding methods at the European scale is part of the objectives of the European COST Action DNAqua-net and is also addressed by CEN (the European Committee for Standardization).
Another criticism of DNA barcoding is its limited efficiency for accurate discrimination below species level (for example, to distinguish between varieties), for hybrid detection, and that it can be affected by evolutionary rates.
Mismatches between conventional (morphological) and barcode based identification
It is important to know that taxa lists derived from conventional (morphological) identification are not, and may never be, directly comparable to taxa lists derived from barcode-based identification, for several reasons. The most important cause is probably the incompleteness and lack of accuracy of the molecular reference databases, preventing a correct taxonomic assignment of eDNA sequences. Taxa not present in reference databases will not be found by eDNA, and sequences linked to a wrong name will lead to incorrect identification. Other known causes are a different sampling scale and size between a traditional and a molecular sample, the possible analysis of dead organisms (which can happen in different ways for both methods depending on the organism group), the specific selectivity of identification in either method (i.e. varying taxonomic expertise or ability to identify certain organism groups), and primer bias, which can also lead to a biased analysis of taxa.
Estimates of richness/diversity
DNA barcoding can result in an over- or underestimate of species richness and diversity. Some studies suggest that artifacts (identification of species not present in a community) are a major cause of inflated biodiversity. The most problematic issue is taxa represented by low numbers of sequencing reads. These reads are usually removed during the data filtering process, since different studies suggest that most such low-frequency reads may be artifacts. However, real rare taxa may exist among these low-abundance reads. Rare sequences can reflect unique lineages in communities, which makes them informative and valuable sequences. Thus, there is a strong need for more robust bioinformatics algorithms that allow the differentiation between informative reads and artifacts. Complete reference libraries would also allow better testing of bioinformatics algorithms, by permitting better filtering of artifacts (i.e. the removal of sequences lacking a counterpart among extant species); it would therefore be possible to obtain a more accurate species assignment. Cryptic diversity can also result in inflated biodiversity, as one morphological species may actually split into many distinct molecular sequences; resolving such cases in turn generates DNA reference data that are crucial for environmental DNA-based biodiversity monitoring.
Megabarcoding
Megabarcoding is a term used to describe high-throughput specimen-based DNA barcoding, where thousands of specimens can be barcoded simultaneously for species identification and discovery. This is enabled by the use of third-generation sequencing platforms, including PacBio (Sequel I/II) by Pacific Biosciences and MinION and PromethION by Oxford Nanopore Technologies. Compared to Sanger sequencing, megabarcoding is faster and cheaper, allowing for the large-scale generation of DNA barcodes for thousands of species.
Applications
Megabarcoding can help fill the dark-taxa DNA barcode reference data gap for insects and accelerate species discovery, improve understanding of species diversity patterns, evaluate species richness, generate rapid biodiversity species inventories, track baseline shifts, and match life-history stages.
Metabarcoding
Metabarcoding is defined as the barcoding of DNA or eDNA (environmental DNA) that allows for the simultaneous identification of many taxa within the same (environmental) sample, though often within the same organism group. The main difference between the approaches is that metabarcoding, in contrast to barcoding, does not focus on one specific organism, but instead aims to determine species composition within a sample.
Methodology
The metabarcoding procedure, like general barcoding, covers the steps of DNA extraction, PCR amplification, sequencing and data analysis. A barcode consists of a short variable gene region (see the marker selection section above) which is useful for taxonomic assignment, flanked by highly conserved gene regions which can be used for primer design. Different genes are used depending on whether the aim is to barcode a single species or to metabarcode several species; in the latter case, a more universal gene is used. Metabarcoding does not use single-species DNA/RNA as a starting point, but DNA/RNA from several different organisms derived from one environmental or bulk sample.
Applications
Metabarcoding has the potential to complement biodiversity measures, and even replace them in some instances, especially as the technology advances and procedures gradually become cheaper, more optimized and widespread.
DNA metabarcoding applications include biodiversity monitoring in terrestrial and aquatic environments, paleontology and ancient ecosystems, plant-pollinator interactions, diet analysis, and food safety.
Advantages and challenges
The general advantages and shortcomings of barcoding reviewed above are valid also for metabarcoding. One particular drawback for metabarcoding studies is that there is no consensus yet regarding the optimal experimental design and bioinformatics criteria to be applied in eDNA metabarcoding. However, there are current joint attempts, such as the EU COST network DNAqua-Net, to move forward by exchanging experience and knowledge to establish best-practice standards for biomonitoring.
Artificial DNA barcoding
In 2014, researchers from ETH Zurich suggested using artificial, sub-micrometer-sized DNA barcodes as an "invisible oil tag". The barcodes consist of synthetic DNA sequences inside magnetically recoverable silica particles. They can be added to food oil in a very small amount (down to 1 ppb) as a label, and can be retrieved at any time for authenticity test by PCR/sequencing. This method can be used to test olive oil for adulteration.
References
External links
SweBOL
FinBOL
International Barcode of Life Project (iBOL)
BOLD
UNITE
Diat.barcode
DNA barcoding | DNA barcoding | [
"Biology"
] | 7,371 | [
"Genetics techniques",
"Phylogenetics",
"Molecular genetics",
"DNA barcoding"
] |
30,873,253 | https://en.wikipedia.org/wiki/Somatic%20hypermutation | Somatic hypermutation (or SHM) is a cellular mechanism by which the immune system adapts to the new foreign elements that confront it (e.g. microbes). A major component of the process of affinity maturation, SHM diversifies B cell receptors used to recognize foreign elements (antigens) and allows the immune system to adapt its response to new threats during the lifetime of an organism. Somatic hypermutation involves a programmed process of mutation affecting the variable regions of immunoglobulin genes. Unlike germline mutation, SHM affects only an organism's individual immune cells, and the mutations are not transmitted to the organism's offspring. Because this mechanism is merely selective and not precisely targeted, somatic hypermutation has been strongly implicated in the development of B-cell lymphomas and many other cancers.
Targeting
When a B cell recognizes an antigen, it is stimulated to divide (or proliferate). During proliferation, the B-cell receptor locus undergoes an extremely high rate of somatic mutation that is at least 10^5–10^6-fold greater than the normal rate of mutation across the genome. Variation is mainly in the form of single-base substitutions, with insertions and deletions being less common. These mutations occur mostly at "hotspots" in the DNA, which are concentrated in hypervariable regions. These regions correspond to the complementarity-determining regions, the sites involved in antigen recognition on the immunoglobulin. The "hotspots" of somatic hypermutation vary depending on the base being mutated: RGYW (i.e. A/G, G, C/T, A/T) for a G, WRCY for a C, WA for an A, and TW for a T. The overall result of the hypermutation process is achieved by a balance between error-prone and high-fidelity repair. This directed hypermutation allows for the selection of B cells that express immunoglobulin receptors possessing an enhanced ability to recognize and bind a specific foreign antigen.
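These motifs are written in IUPAC ambiguity codes (R = A/G, Y = C/T, W = A/T), so locating candidate hotspots in a sequence reduces to a regular-expression scan; a minimal Python sketch over an invented sequence:

```python
import re

# IUPAC ambiguity codes used in the SHM hotspot motifs.
IUPAC = {"R": "[AG]", "Y": "[CT]", "W": "[AT]"}

def motif_to_regex(motif):
    """Expand an IUPAC motif such as 'RGYW' into a regular expression."""
    return "".join(IUPAC.get(base, base) for base in motif)

def find_hotspots(seq, motif="RGYW"):
    """Return the 0-based start positions of every (overlapping) motif hit."""
    pattern = re.compile(f"(?=({motif_to_regex(motif)}))")
    return [m.start() for m in pattern.finditer(seq)]

print(find_hotspots("TAGGCTAGTTAGCTA"))  # RGYW hits in a toy sequence -> [2, 6, 10]
```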
Mechanisms
The mechanism of SHM involves deamination of cytosine to uracil in DNA by the enzyme activation-induced cytidine deaminase, or AID. A cytosine:guanine pair is thus directly mutated to a uracil:guanine mismatch. Uracil residues are not normally found in DNA, therefore, to maintain the integrity of the genome, most of these mutations must be repaired by high-fidelity base excision repair enzymes. The uracil bases are removed by the repair enzyme, uracil-DNA glycosylase, followed by cleavage of the DNA backbone by apurinic endonuclease. Error-prone DNA polymerases are then recruited to fill in the gap and create mutations.
The synthesis of this new DNA involves error-prone DNA polymerases, which often introduce mutations at the position of the deaminated cytosine itself or neighboring base pairs. The introduction of mutations in the rapidly proliferating population of B cells ultimately culminates in the production of thousands of B cells, possessing slightly different receptors and varying specificity for the antigen, from which the B cell with highest affinities for the antigen can be selected. The B cells with the greatest affinity will then be selected to differentiate into plasma cells producing antibody and long-lived memory B cells contributing to enhanced immune responses upon reinfection.
The hypermutation process also utilizes cells that auto-select against the 'signature' of an organism's own cells. It is hypothesized that failures of this auto-selection process may also lead to the development of an auto-immune response.
Somatic gene conversion
In birds, which have a very limited number of genes available for V(D)J recombination, gene conversion between pseudogenic V segments and the currently active V segment occurs alongside SHM, thereby introducing extra diversity. Mammals such as cattle, sheep, and horses have a sufficiently large selection of segments for V(D)J recombination, but they also perform somatic gene conversion. This kind of gene conversion is also started by the AID enzyme, leading to a double-strand break, which is then repaired by using other V or pseudogenic V segments as templates. Humans are not known to perform such gene conversion, except for one report of indirect evidence.
See also
Affinity maturation
Anergy
Immune system
V(D)J recombination
Immunoglobulin class switching
References
External links
Immune system
Antibodies | Somatic hypermutation | [
"Biology"
] | 938 | [
"Immune system",
"Organ systems"
] |
40,240,263 | https://en.wikipedia.org/wiki/Discrete%20time%20and%20continuous%20time | In mathematical dynamics, discrete time and continuous time are two alternative frameworks within which variables that evolve over time are modeled.
Discrete time
Discrete time views values of variables as occurring at distinct, separate "points in time", or equivalently as being unchanged throughout each non-zero region of time ("time period")—that is, time is viewed as a discrete variable. Thus a non-time variable jumps from one value to another as time moves from one time period to the next. This view of time corresponds to a digital clock that gives a fixed reading of 10:37 for a while, and then jumps to a new fixed reading of 10:38, etc. In this framework, each variable of interest is measured once at each time period. The number of measurements between any two time periods is finite. Measurements are typically made at sequential integer values of the variable "time".
A discrete signal or discrete-time signal is a time series consisting of a sequence of quantities.
Unlike a continuous-time signal, a discrete-time signal is not a function of a continuous argument; however, it may have been obtained by sampling from a continuous-time signal. When a discrete-time signal is obtained by sampling a sequence at uniformly spaced times, it has an associated sampling rate.
Discrete-time signals may have several origins, but can usually be classified into one of two groups:
By acquiring values of an analog signal at constant or variable rate. This process is called sampling.
By observing an inherently discrete-time process, such as the weekly peak value of a particular economic indicator.
Continuous time
In contrast, continuous time views variables as having a particular value only for an infinitesimally short amount of time. Between any two points in time there are an infinite number of other points in time. The variable "time" ranges over the entire real number line, or depending on the context, over some subset of it such as the non-negative reals. Thus time is viewed as a continuous variable.
A continuous signal or a continuous-time signal is a varying quantity (a signal)
whose domain, which is often time, is a continuum (e.g., a connected interval of the reals). That is, the function's domain is an uncountable set. The function itself need not be continuous. By contrast, a discrete-time signal has a countable domain, like the natural numbers.
A signal of continuous amplitude and time is known as a continuous-time signal or an analog signal. Such a signal has some value at every instant of time. Electrical signals derived in proportion to physical quantities such as temperature, pressure, or sound are generally continuous signals. Other examples of continuous signals are the sine wave, cosine wave, triangular wave, etc.
The signal is defined over a domain, which may or may not be finite, and there is a functional mapping from the domain to the value of the signal. The continuity of the time variable, in connection with the law of density of real numbers, means that the signal value can be found at any arbitrary point in time.
A typical example of an infinite duration signal is:

$f(t) = \sin(t), \quad t \in \mathbb{R}$

A finite duration counterpart of the above signal could be:

$f(t) = \sin(t), \quad t \in [-\pi, \pi]$ and $f(t) = 0$ otherwise.

The value of a finite (or infinite) duration signal may or may not be finite. For example,

$f(t) = \frac{1}{t}, \quad t \in [0, 1]$ and $f(t) = 0$ otherwise,

is a finite duration signal but it takes an infinite value for $t = 0$.

In many disciplines, the convention is that a continuous signal must always have a finite value, which makes more sense in the case of physical signals.

For some purposes, infinite singularities are acceptable as long as the signal is integrable over any finite interval (for example, the $1/t$ signal is not integrable at infinity, but $1/t^2$ is).
Any analog signal is continuous by nature. Discrete-time signals, used in digital signal processing, can be obtained by sampling and quantization of continuous signals.
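A minimal sketch of that sampling-and-quantization step: a continuous-time sine wave is evaluated at uniformly spaced instants and each sample is rounded to one of a small set of amplitude levels. The signal frequency, sampling rate and level count are arbitrary choices.

```python
import math

F_SIGNAL = 5.0      # sine frequency in Hz
F_SAMPLE = 100.0    # sampling rate in Hz (well above the Nyquist rate, 2 * F_SIGNAL)
N_LEVELS = 16       # quantizer resolution (4 bits)

def sample(n):
    """n-th sample of sin(2*pi*f*t), taken at t = n / F_SAMPLE."""
    return math.sin(2 * math.pi * F_SIGNAL * n / F_SAMPLE)

def quantize(x):
    """Map x in [-1, 1] to the nearest of N_LEVELS uniformly spaced levels."""
    step = 2.0 / (N_LEVELS - 1)
    return round((x + 1.0) / step) * step - 1.0

# A discrete-time, quantized version of the continuous signal.
discrete_signal = [quantize(sample(n)) for n in range(20)]
```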
Continuous signal may also be defined over an independent variable other than time. Another very common independent variable is space and is particularly useful in image processing, where two space dimensions are used.
Relevant contexts
Discrete time is often employed when empirical measurements are involved, because normally it is only possible to measure variables sequentially. For example, while economic activity actually occurs continuously, there being no moment when the economy is totally in a pause, it is only possible to measure economic activity discretely. For this reason, published data on, for example, gross domestic product will show a sequence of quarterly values.
When one attempts to empirically explain such variables in terms of other variables and/or their own prior values, one uses time series or regression methods in which variables are indexed with a subscript indicating the time period in which the observation occurred. For example, $y_t$ might refer to the value of income observed in unspecified time period $t$, $y_3$ to the value of income observed in the third time period, etc.
Moreover, when a researcher attempts to develop a theory to explain what is observed in discrete time, often the theory itself is expressed in discrete time in order to facilitate the development of a time series or regression model.
On the other hand, it is often more mathematically tractable to construct theoretical models in continuous time, and often in areas such as physics an exact description requires the use of continuous time. In a continuous time context, the value of a variable y at an unspecified point in time is denoted as y(t) or, when the meaning is clear, simply as y.
Types of equations
Discrete time
Discrete time makes use of difference equations, also known as recurrence relations. An example, known as the logistic map or logistic equation, is

$x_{t+1} = r x_t (1 - x_t),$

in which r is a parameter in the range from 2 to 4 inclusive, and x is a variable in the range from 0 to 1 inclusive whose value in period t nonlinearly affects its value in the next period, t+1. For example, if $r = 4$ and $x_1 = 1/3$, then for t=1 we have $x_2 = 4 \cdot \tfrac{1}{3} \cdot \tfrac{2}{3} = \tfrac{8}{9}$, and for t=2 we have $x_3 = 4 \cdot \tfrac{8}{9} \cdot \tfrac{1}{9} = \tfrac{32}{81}$.

Another example models the adjustment of a price P in response to non-zero excess demand for a product as

$P_{t+1} = P_t + \delta\, f(P_t),$

where $\delta$ is the positive speed-of-adjustment parameter which is less than or equal to 1, and where $f$ is the excess demand function.
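A short sketch iterating the logistic map reproduces the worked values above with exact rational arithmetic:

```python
from fractions import Fraction

def logistic(r, x):
    """One step of the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    return r * x * (1 - x)

r = Fraction(4)
x = Fraction(1, 3)
for t in range(1, 4):
    print(f"x_{t} = {x}")   # prints x_1 = 1/3, x_2 = 8/9, x_3 = 32/81
    x = logistic(r, x)
```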
Continuous time
Continuous time makes use of differential equations. For example, the adjustment of a price P in response to non-zero excess demand for a product can be modeled in continuous time as

$\frac{dP}{dt} = \lambda\, f(P),$

where the left side is the first derivative of the price with respect to time (that is, the rate of change of the price), $\lambda$ is the speed-of-adjustment parameter which can be any positive finite number, and $f$ is again the excess demand function.
Graphical depiction
A variable measured in discrete time can be plotted as a step function, in which each time period is given a region on the horizontal axis of the same length as every other time period, and the measured variable is plotted as a height that stays constant throughout the region of the time period. In this graphical technique, the graph appears as a sequence of horizontal steps. Alternatively, each time period can be viewed as a detached point in time, usually at an integer value on the horizontal axis, and the measured variable is plotted as a height above that time-axis point. In this technique, the graph appears as a set of dots.
The values of a variable measured in continuous time are plotted as a continuous function, since the domain of time is considered to be the entire real axis or at least some connected portion of it.
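Both depictions are easy to produce programmatically; the sketch below assumes the matplotlib plotting library and an invented series. `where="post"` holds each value constant through its time period, matching the step-function convention, while the second panel plots each period as a detached point.

```python
import matplotlib.pyplot as plt

periods = list(range(8))
values = [3, 5, 4, 6, 7, 6, 8, 9]           # a made-up discrete-time series

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.step(periods, values, where="post")      # height held constant per period
ax1.set_title("Step function")
ax2.plot(periods, values, "o")               # one detached point per period
ax2.set_title("Detached points")
plt.show()
```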
See also
Aliasing
Bernoulli process
Digital data
Discrete calculus
Discrete system
Discretization
Normalized frequency
Nyquist–Shannon sampling theorem
Time-scale calculus
References
Time in science
Dynamical systems | Discrete time and continuous time | [
"Physics",
"Mathematics"
] | 1,560 | [
"Physical quantities",
"Time",
"Mechanics",
"Time in science",
"Spacetime",
"Dynamical systems"
] |
38,943,054 | https://en.wikipedia.org/wiki/Restricted%20root%20system | In mathematics, restricted root systems, sometimes called relative root systems, are the root systems associated with a symmetric space. The associated finite reflection group is called the restricted Weyl group. The restricted root system of a symmetric space and its dual can be identified. For symmetric spaces of noncompact type arising as homogeneous spaces of a semisimple Lie group, the restricted root system and its Weyl group are related to the Iwasawa decomposition of the Lie group.
See also
Satake diagram
References
Lie groups
Lie algebras | Restricted root system | [
"Mathematics"
] | 107 | [
"Lie groups",
"Mathematical structures",
"Algebraic structures"
] |
38,944,750 | https://en.wikipedia.org/wiki/Federation%20of%20European%20Pharmacological%20Societies | The Federation of European Pharmacological Societies (EPHAR) is a non-profit voluntary association established to advance research and education in the science of pharmacology and to promote co-operation between national/regional pharmacological societies in Europe and surrounding countries. It is an umbrella organization of currently 29 national societies for pharmacology and represents over 12,000 individual pharmacologists in Europe. Moreover it seeks to co-operate with other international organizations, especially the International Union of Basic and Clinical Pharmacology (IUPHAR) of which EPHAR is an associate member.
History
The efforts in the 1990s to unite European economic and political forces found their counterpart in the formation of European scientific federations. One of those scientific federations is EPHAR, which was founded at the XIth International Congress of Pharmacology in Amsterdam (The Netherlands) in 1990. The establishment of the federation was prepared by a steering committee formed in 1988 on the initiative of Rodolfo Paoletti. The steering committee, headed by Börje Uvnäs, consisted of the following members: Alasdair Breckenridge (UK), Flaminio Cattabeni (Italy), Gilles Fillion (France), Ove A. Nedergaard (Denmark), Rodolfo Paoletti (Italy), Hasso Scholz (Germany), Börje Uvnäs (Sweden). EPHAR was recognized as an affiliate member of IUPHAR in 1994. Since 1990, the Federation has sponsored important scientific events. In particular, EPHAR has contributed to the organization of residential courses with a restricted number of participants ("Molecular Biology in Pharmacology", Milan, 1990; "Neuroimmunomodulation in Pharmacology", Paris, 1992; "Electrophysiology in Pharmacology", London, 1993).
Activities
Pharmacology has developed into a multifaceted discipline extending from molecular biology at one end to clinical pharmacology at the other. The necessity of collaborative efforts within all branches of the biological sciences, in the development and analysis of new drugs, is the driving force of the activities of EPHAR. One challenge for the federation is the creation of new generations of pharmacologists with an analytical and broad-minded attitude towards important aspects of the modern medical and biological sciences.
The Federation seeks to achieve its objects by arranging instructional courses and training programs in matters connected with pharmacology, facilitating the exchange of scientific information between European pharmacologists (by encouraging the holding of joint meetings between European member societies), and disseminating information and encouraging participation in important activities organized by member societies. This includes the production of a calendar of the national and joint meetings of each European society.
Another important goal of the Federation is establishing common standards for basic courses in pharmacology, and fixing minimal standards for a European Pharmacologist Certificate.
Scientific dissemination is a fundamental step in progress and advances in research and education. For this reason, the most important scientific events established by the Federation are the EPHAR Congresses. They include plenary lectures, symposia, round tables, and oral and poster communications devoted to the most recent advances in pharmacology and related sciences, and have therefore represented an appropriate forum for discussing preclinical, clinical and therapeutic data. Particular emphasis has been given to the impact of biotechnologies on drug development and to the identification of novel pharmacological approaches to incurable diseases. Sessions have also been held on drug development, strategies for research funding and the training of pharmacologists.
The EPHAR Congresses that have taken place since EPHAR’s foundation are:
1st EPHAR Congress: Milan (Italy), 16–19 June 1995
2nd EPHAR Congress: Budapest (Hungary), 3–7 July 1999
3rd EPHAR Congress: Lyon (France), 6–9 July 2001
4th EPHAR Congress: Porto (Portugal), 14–17 July 2004
5th EPHAR Congress: Manchester (UK), 13–17 July 2008
6th EPHAR Congress: Granada (Spain), 17–20 July 2012
7th EPHAR Congress: Istanbul (Turkey), 26–30 June 2016
8th EPHAR Congress: Prague (Czech Republic), 5–8 December 2021 (virtual meeting)
9th EPHAR Congress: Athens (Greece), 23–26 June 2024
EPHAR supports activities, organized by its member societies that are intended to improve the cooperation among European pharmacologists. They are traditionally
EPHAR Lectures
EPHAR Symposia
EPHAR Instructional Courses
One of these activities per year, for each country, is promoted by EPHAR. Calls for applications for these EPHAR-supported activities and the guidelines for these events are announced annually.
Moreover EPHAR gives prizes to European young investigators in the field of pharmacology, the EPHAR Young Investigators Awards. So far, the following awards were given:
EPHAR Young Investigator Award 2010
EPHAR Young Investigator Award 2012
EPHAR Young Investigator Award 2014
EPHAR Young Investigator Award 2017
EPHAR Young Investigator Award 2019
EPHAR Young Investigator Award 2021
EPHAR Young Investigator Award 2023
Executive committees
The term of the EPHAR Executive Committees is four years.
The present (2022–2026) Executive Committee is composed by the following members:
President: María Jesús Sanz Ferrando (University of Valencia, Spain)
Past President (2022–2024): Andreas Papapetropoulos (National and Kapodistrian University of Athens, Greece)
Treasurer: Giuseppe Cirino (University of Naples Federico II, Italy)
Secretary General: Nezahat Tugba Durlu-Kandilci (Hacettepe University, Turkey)
Maria Jose Diogenes (University of Lisbon, Portugal)
Peter Ferdinandy (Semmelweis University, Hungary)
Stephan von Gunten (University of Bern, Switzerland)
Clive Page (King's College London, UK)
Thomas Wieland (University of Heidelberg, Germany)
Past Presidents were:
2022–2024: Andreas Papapetropoulos (Athens, Greece)
2020–2022: Mojca Kržan (Ljubljana, Slovenia), interim president
2018–2020: Mojca Kržan (Ljubljana, Slovenia)
2016–2018: Robin Hiley (Cambridge, UK)
2014–2016: Thomas Griesbacher (Graz, Austria)
2012–2014: Filippo Drago (Catania, Italy)
2010–2012: Ulrich Förstermann (Mainz, Germany)
2008–2010: Eeva Moilanen (Tampere, Finland)
2006–2008: Arthur Weston (Manchester, UK)
2004–2006: Manfred Göthert (Bonn, Germany)
2002–2004: Alan W. Cuthbert (Cambridge, UK)
1997–2002: Flaminio Cattabeni (Milan, Italy)
1990–1997: Rodolfo Paoletti (Milan, Italy)
Composition
The National Societies members of EPHAR are (in alphabetical order):
• Austrian Pharmacological Society (Österreichische Pharmakologische Gesellschaft) APHAR.
• Belgian Society of Fundamental and Clinical Physiology and Pharmacology. Société Belge de Physiologie et de Pharmacologie Fondamentales et Cliniques. Belgisch Genootschap voor Fundamentele en Klinische Fysiologie en Farmacologie.
• Association of Pharmacologists of the Federation of Bosnia and Herzegovina. Udruženje Farmakologa Federacije Bosne i Hercegovine.
• British Pharmacological Society (BPS).
• Bulgarian Pharmacological Society.
• Croatian Pharmacological Society. Hrvatsko Društvo Farmakologa. (HDF).
• Czech Society for Experimental and Clinical Pharmacology and Toxicology. Česká Společnost pro Experimentální a Klinickou Farmakologii a Toxicologii (ČSEKFT).
• Danish Society for Pharmacology, Toxicology and Medicinal Chemistry. Dansk Selskab for Farmakologi, Toksikologi og Medicinalkemi (DSFTM).
• Dutch Pharmacological Society. Nederlands Vereniging voor Farmacologie (NVF)
• Finnish Pharmacological Society. Suomen Farmakologiyhdistys. (SFY).
• French Society of Pharmacology and Therapeutics. Société Française de Pharmacologie et de Thérapeutique. (SFPT).
• German Society of Pharmacology. Deutsche Gesellschaft für Pharmakologie. (DGP).
• Hellenic (Greek) Society of Basic and Clinical Pharmacology. Ελληνική Εταιρεία Φαρμακολογίας (Ellinikí Etaireía Pharmakologías).
• Hungarian Society for Experimental and Clinical Pharmacology. Magyar Kisérletes és Klinikai Farmakólogiai Társaság (MFT).
• Israel Society for Physiology and Pharmacology (ISPP).
• Società Italiana di Farmacologia (SIF).
• Latvian Society of Pharmacology. Latvijas Farmakoloģijas biedrība. (LFB).
• Norwegian Society for Pharmacology and Toxicology. Norsk Selskap for Farmakologi og Toksikologi. (NSFT).
• Pharmacology Society of Malta.
• Polish Pharmacological Society. Polskie Towarzystwo Farmakologiczne. (PTF).
• Portuguese Pharmacological Society. Sociedade Portuguesa de Farmacologia (SPF).
• Russian Scientific Society of Pharmacology.
• Serbian Pharmacological Society. Српско Фармаколошко Друштво / Srpsko Farmakološko Društvo (СФД / SFD).
• Slovak Pharmacological Society. Slovenská Farmakologická Spoločnosť (SFaS).
• Slovenian Pharmacological Society. Slovensko Društvo Farmakologov. (SDF).
• Spanish Society of Pharmacology. Sociedad Española de Farmacología (SEF).
• Swedish Society for Pharmacology, Clinical Pharmacology and Therapeutics. Sektionen för Läkemedelslära.
• Swiss Society of Pharmacology and Toxicology. Société Suisse de Pharmacologie et de Toxicologie/ Schweizerische Gesellschaft für Pharmakologie und Toxikologie (SSPT/SGPT).
• Turkish Pharmacological Society. Türk Farmakoloji Derneği (TFD).
External links
EPHAR website: http://www.ephar.org/
The websites of the European pharmacological societies that are members of EPHAR are listed below:
• Austria: http://www.aphar.at/
• Belgium: http://users.ugent.be/~jvdvoord/physiology&pharmacology/index.htm
• Croatia: http://pharma.mef.hr/
• Czech Republic: http://farmspol.cls.cz/
• Denmark https://web.archive.org/web/20130903234249/http://dsftm.dk/index.php/en/
• Netherlands: http://www.nvfarmaco.nl/
• Finland: http://www.sfy.fi/
• France: http://www.pharmacol-fr.org/
• Germany: http://www.dg-pharmakologie.de/
• Greece: https://web.archive.org/web/20131007085419/http://gsp.med.auth.gr/
• Hungary: http://www.mapharm.hu/
• Italy: http://www.sifweb.org/
• Norway: http://www.nsft.net/
• Poland: http://www.ptf.info.pl/
• Serbia: http://www.sfarmd.org/
• Slovakia: http://www.sfarm.sk/
• Spain: http://www.socesfar.com/
• Sweden http://www.lakemedelslara.se/
• Switzerland: http://www.swisspharmtox.ch/
• Turkey: http://www.tfd.org.tr/
• United Kingdom: https://web.archive.org/web/20130410071823/http://www.bps.ac.uk/view/index.html
References
European medical and health organizations
International medical associations of Europe
Pharmacological societies | Federation of European Pharmacological Societies | [
"Chemistry"
] | 2,731 | [
"Pharmacology",
"Pharmacological societies"
] |
38,946,392 | https://en.wikipedia.org/wiki/Transport%20and%20Map%20Symbols | Transport and Map Symbols is a Unicode block containing transportation and map icons, largely for compatibility with Japanese telephone carriers' emoji implementations of Shift JIS, and to encode characters in the Wingdings and Wingdings 2 character sets.
Block
Emoji
The Transport and Map Symbols block contains 105 emoji:
U+1F680–U+1F6C5, U+1F6CB–U+1F6D2, U+1F6D5–U+1F6D7, U+1F6DC–U+1F6E5, U+1F6E9, U+1F6EB–U+1F6EC, U+1F6F0 and U+1F6F3–U+1F6FC.
The block has 46 standardized variants defined to specify emoji-style (U+FE0F VS16) or text presentation (U+FE0E VS15) for the
following 23 base characters: U+1F687, U+1F68D, U+1F691, U+1F694, U+1F698, U+1F6AD, U+1F6B2, U+1F6B9–U+1F6BA, U+1F6BC, U+1F6CB, U+1F6CD–U+1F6CF, U+1F6E0–U+1F6E5, U+1F6E9, U+1F6F0 and U+1F6F3.
All of these base characters default to a text presentation.
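As a quick illustration of how these variation selectors combine with a base character in practice, the following sketch builds the three presentation forms for one of the base characters listed above (U+1F6E9 SMALL AIRPLANE). How each string actually renders depends on the font and terminal; only the code point sequences are guaranteed.

```python
# Build the three presentation forms of U+1F6E9 SMALL AIRPLANE, one of the
# base characters listed above; VS15/VS16 are the variation selectors.
base = "\U0001F6E9"            # defaults to text presentation
text_style = base + "\uFE0E"   # U+FE0E VS15: request text presentation
emoji_style = base + "\uFE0F"  # U+FE0F VS16: request emoji presentation

for label, seq in [("default", base), ("text", text_style), ("emoji", emoji_style)]:
    print(label, seq, [hex(ord(ch)) for ch in seq])
```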
Emoji modifiers
The Transport and Map Symbols block has six emoji that represent people or body parts.
They can be modified using U+1F3FB–U+1F3FF to provide for a range of human skin colors using the Fitzpatrick scale.
Additional human emoji can be found in other Unicode blocks: Dingbats, Emoticons, Miscellaneous Symbols, Miscellaneous Symbols and Pictographs, Supplemental Symbols and Pictographs and Symbols and Pictographs Extended-A.
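A hedged sketch of the modifier mechanism, using one of the six person-type emoji in this block (U+1F6B6 PEDESTRIAN): the Fitzpatrick modifier code point is simply appended after the base character, and a capable renderer fuses the pair into a single glyph.

```python
# U+1F6B6 PEDESTRIAN combined with each Fitzpatrick modifier
# U+1F3FB..U+1F3FF, appended immediately after the base character.
pedestrian = "\U0001F6B6"
for tone in range(0x1F3FB, 0x1F400):
    print(hex(tone), pedestrian + chr(tone))
```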
History
The following Unicode-related documents record the purpose and process of defining specific characters in the Transport and Map Symbols block.
References
Unicode blocks
Emoji
Transport | Transport and Map Symbols | [
"Physics"
] | 498 | [
"Physical systems",
"Transport"
] |
38,946,837 | https://en.wikipedia.org/wiki/Autapse | An autapse is a chemical or electrical synapse from a neuron onto itself. It can also be described as a synapse formed by the axon of a neuron on its own dendrites, in vivo or in vitro.
History
The term "autapse" was first coined in 1972 by Van der Loos and Glaser, who observed them in Golgi preparations of the rabbit occipital cortex while originally conducting a quantitative analysis of neocortex circuitry. Also in the 1970s, autapses have been described in dog and rat cerebral cortex, monkey neostriatum, and cat spinal cord.
In 2000, they were first modeled as supporting persistence in recurrent neural networks. In 2004, they were modeled as demonstrating oscillatory behavior, which was absent in the same model neuron without autapse. More specifically, the neuron oscillated between high firing rates and firing suppression, reflecting the spike bursting behavior typically found in cerebral neurons. In 2009, autapses were, for the first time, associated with sustained activation. This proposed a possible function for excitatory autapses within a neural circuit. In 2014, electrical autapses were shown to generate stable target and spiral waves in a neural model network. This indicated that they played a significant role in stimulating and regulating the collective behavior of neurons in the network. In 2016, a model of resonance was offered.
Autapses have been used to simulate "same cell" conditions to help researchers make quantitative comparisons, such as studying how N-methyl-D-aspartate receptor (NMDAR) antagonists affect synaptic versus extrasynaptic NMDARs.
Formation
Recently, it has been proposed that autapses could possibly form as a result of neuronal signal transmission blockage, such as in cases of axonal injury induced by poisoning or impeding ion channels. Dendrites from the soma in addition to an auxiliary axon may develop to form an autapse to help remediate the neuron's signal transmission.
Structure and function
Autapses can be either glutamate-releasing (excitatory) or GABA-releasing (inhibitory), just like their traditional synapse counterparts. Similarly, autapses can be electrical or chemical by nature.
Broadly speaking, negative feedback in autapses tends to inhibit excitable neurons whereas positive feedback can stimulate quiescent neurons.
Although the stimulation of inhibitory autapses did not induce hyperpolarizing inhibitory post-synaptic potentials in interneurons of layer V of neocortical slices, they have been shown to impact excitability. Upon using a GABA-antagonist to block autapses, the likelihood of an immediate subsequent second depolarization step increased following a first depolarization step. This suggests that autapses act by suppressing the second of two closely timed depolarization steps and therefore, they may provide feedback inhibition onto these cells. This mechanism may also potentially explain shunting inhibition.
In cell culture, autapses have been shown to contribute to the prolonged activation of B31/B32 neurons, which contribute significantly to food-response behavior in Aplysia. This suggests that autapses may play a role in mediating positive feedback. The B31/B32 autapse was unable to play a role in initiating the neuron's activity, although it is believed to have helped sustain the neuron's depolarized state. The extent to which autapses maintain depolarization remains unclear, particularly since other components of the neural circuit (i.e. B63 neurons) are also capable of providing strong synaptic input throughout the depolarization. Additionally, it has been suggested that autapses provide B31/B32 neurons with the ability to quickly repolarize. Bekkers (2009) has proposed that specifically blocking the contribution of autapses and then assessing the differences with or without blocked autapses could better illuminate the function of autapses.
Hindmarsh–Rose (HR) model neurons have demonstrated chaotic, regular spiking, quiescent, and periodic patterns of burst firing without autapses. Upon the introduction of an electrical autapse, the periodic state switches to the chaotic state and displays an alternating behavior that increases in frequency with a greater autaptic intensity and time delay. On the other hand, excitatory chemical autapses enhanced the overall chaotic state. The chaotic state was reduced and suppressed in the neurons with inhibitory chemical autapses. In HR model neurons without autapses, the pattern of firing altered from quiescent to periodic and then to chaotic as DC current was increased. Generally, HR model neurons with autapses have the ability to swap into any firing pattern, regardless of the prior firing pattern.
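To make the modeling setup concrete, below is a minimal sketch of a Hindmarsh–Rose neuron with an electrical autapse modeled as a delayed self-feedback current, integrated with the explicit Euler method. The parameter values, the coupling strength g, and the delay tau are illustrative assumptions, not values from the studies cited above.

```python
import numpy as np

# Hindmarsh-Rose neuron with an electrical autapse modeled as the delayed
# self-feedback current I_aut = g * (x(t - tau) - x(t)). Standard HR
# parameters; g, tau and I_ext are illustrative choices.
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_rest = 0.006, 4.0, -1.6
I_ext = 3.0
g, tau = 0.5, 20.0            # autaptic strength and delay (model time units)

dt, T = 0.01, 2000.0
steps = int(T / dt)
delay_steps = int(tau / dt)

x, y, z = -1.6, 0.0, 0.0
x_hist = np.full(delay_steps, x)   # ring buffer holding past x values
trace = np.empty(steps)

for i in range(steps):
    x_delayed = x_hist[i % delay_steps]   # x(t - tau)
    I_aut = g * (x_delayed - x)
    dx = y - a * x**3 + b * x**2 - z + I_ext + I_aut
    dy = c - d * x**2 - y
    dz = r * (s * (x - x_rest) - z)
    x_hist[i % delay_steps] = x           # store current x for later reads
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    trace[i] = x

print("membrane-variable range:", trace.min(), trace.max())
```

Sweeping g and tau in this sketch can be used to explore how the self-feedback term shifts the firing pattern between the regimes described above.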
Location
Neurons from several brain regions, such as the neocortex, substantia nigra, and hippocampus have been found to contain autapses.
Autapses have been observed to be relatively more abundant in GABAergic basket and dendrite-targeting cells of the cat visual cortex compared to spiny stellate, double bouquet, and pyramidal cells, suggesting that the degree of neuron self-innervation is cell-specific. Additionally, dendrite-targeting cell autapses were, on average, further from the soma compared to basket cell autapses.
80% of layer V pyramidal neurons in developing rat neocortices contained autaptic connections, which were located more often on basal dendrites and apical oblique dendrites than on main apical dendrites. The dendritic positions of synaptic connections of the same cell type were similar to those of autapses, suggesting that autaptic and synaptic networks share a common mechanism of formation.
Disease implications
In the 1990s, paroxysmal depolarizing shift-type interictal epileptiform discharges were suggested to be primarily dependent on autaptic activity for solitary excitatory hippocampal rat neurons grown in microculture.
More recently, in human neocortical tissues of patients with intractable epilepsy, the GABAergic output autapses of fast-spiking (FS) neurons have been shown to have stronger asynchronous release (AR) compared to both non-epileptic tissue and other types of synapses involving FS neurons. The study found similar results using a rat model as well. An increase in residual Ca2+ concentration in addition to the action potential amplitude in FS neurons was suggested to cause this increase in AR of epileptic tissue. Anti-epileptic drugs could potentially target this AR of GABA that seems to rampantly occur at FS neuron autapses.
Effects of drugs
Using a glia-conditioned medium to treat glia-free purified rat retinal ganglion microcultures has been shown to significantly increase the number of autapses per neuron compared to a control. This suggests that glia-derived soluble, proteinase K-sensitive factors induce autapse formation in rat retinal ganglion cells.
References
Neurophysiology
Cellular neuroscience
Computational neuroscience
Cell signaling
Signal transduction | Autapse | [
"Chemistry",
"Biology"
] | 1,541 | [
"Biochemistry",
"Neurochemistry",
"Signal transduction"
] |
38,947,394 | https://en.wikipedia.org/wiki/Medical%20image%20sharing | Medical image sharing is the electronic exchange of medical images between hospitals, physicians and patients. Rather than using traditional media, such as a CD or DVD, and either shipping it out or having patients carry it with them, technology now allows for the sharing of these images using the cloud. The primary format for images is DICOM (Digital Imaging and Communications in Medicine). Typically, non-image data such as reports may be attached in standard formats like PDF (Portable Document Format) during the sending process. Additionally, there are standards in the industry, such as IHE Cross Enterprise Document Sharing for Imaging (XDS-I), for managing the sharing of documents between healthcare enterprises. A typical architecture involved in setup is a locally installed server, which sits behind the firewall, allowing secure transmissions with outside facilities. In 2009, the Radiological Society of North America launched the "Image Share" project, with the goal of giving patients control of their imaging histories (reports and images) by allowing them to manage these records as they would online banking or shopping.
Uses
Care Facilities: Institutions use medical image sharing to facilitate transfers between other facilities that may or may not be on the same network. They are also able to instantly send results to referring physicians in the community, as well as directly to patients.
Physicians: Doctors use the technology to have immediate access to images, as opposed to waiting for physical media to arrive. Having access to a patient's medical history improves the point of care service.
Patients: In conjunction with recent US government initiatives, patients are able to receive their imaging exams electronically, without needing to carry and store physical media. It allows for the ability to see physicians in multiple locations and have their imaging at the ready.
Benefits
Improved access to patients’ medical imaging histories
Ability to view images instantly
Real-time collaboration by specialists
Avoiding duplicate care reduces costs
Decreased radiation exposure for patients
Expertise and specialized opinion is remotely accessible to patients
Health
Medical Image Sharing contributes to many of the "Health" initiatives across the industry. Being able to instantly and electronically exchange medical information can improve communication between physicians, as well as with patients.
Meaningful Use: The goal of meaningful use is to promote the spread of electronic health records to improve health care in the United States, which is to be rolled out in 3 stages through 2015. Some benefits of the initiative include better access to medical information and patient empowerment. Medical image sharing helps achieve meaningful use by improving access to medical images to patients and physicians.
Telehealth: The practice of delivering healthcare services utilizing telecommunication technologies is known as Telehealth. A major goal is to support long-distance health care for patients who are unable to easily travel to the point of care. Patients and professionals are also able to obtain further knowledge on health topics. As a part of the U.S. Department of Health and Human Services, the Office for the Advancement of Telehealth (OAT) promotes the use of telehealth technologies. Sharing medical images over long distances can happen instantaneously with these technologies, allowing a physician to review a patient's images during the conference.
Patient Engagement: Recent changes in the healthcare industry have placed more emphasis on empowering patients to control and have access to more of their medical information. The use of tools such as Electronic health records will help patients take a more active role in their health. With medical image sharing, patients can receive their medical imaging electronically, and then be able to share that information with the next physician they are seeing.
Cloud Computing: Using software that is delivered as a service over the internet is referred to as Cloud computing. Typically, medical image sharing will be rolled out as a service for hospitals, clinicians and patients.
Mobile: The use of mobile electronic devices has been rising across many industries, with healthcare included. As a physician, having access to medical images on the go is an important development in the field.
Architecture
A typical architecture for a medical image sharing platform includes transmitting data from a system installed directly on the hospital network and behind the firewall, to and from an outside entity. Some of the standard architectural pieces involved include:
Data transmission is the physical transfer of data through a communication channel, such as wires, wireless technologies or physical media. The most common use case for image sharing is transmitting the image files using the cloud, allowing for instant access and exchange with anyone, anywhere. A virtual private network (VPN) can be set up to enable exchange, but this typically requires more maintenance effort from the facilities involved.
Data compression is used to help facilitate the exchange of large files by encoding using smaller bits than the original version. This process helps reduce the resources being used and improves the transmission capabilities.
Security: One widely utilized security tool is TLS/SSL, or Transport Layer Security/Secure Sockets Layer, which is used to secure electronic communications (see the sketch after this list). TLS/SSLv3 helps to secure transmitted data using encryption, and it authenticates both clients and servers to prove the identities of the parties engaged in secure communication. The TLS/SSLv3 security protocol can protect against data disclosure, masquerade attacks, bucket brigade (man-in-the-middle) attacks, rollback attacks, and replay attacks.
Data Centers: A Data center is used to house computer systems and associated pieces. The main use of these facilities in medical image sharing is to provide backup. The infrastructure commonly includes redundant power, redundant generators, redundant Internet connections, redundant firewalls, redundant network switches, and redundant storage. This is a vital piece to ensure that medical images are safe and secure in the cloud.
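As a concrete illustration of the TLS layer described above, the sketch below performs a client-side TLS handshake with Python's standard ssl module; the host name is a placeholder, not a real image-sharing service.

```python
import socket
import ssl

host, port = "images.example-hospital.org", 443  # placeholder endpoint

context = ssl.create_default_context()  # verifies the server certificate chain
with socket.create_connection((host, port)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print("negotiated protocol:", tls_sock.version())  # e.g. 'TLSv1.3'
        # Any DICOM or HTTP payload sent over tls_sock is now encrypted.
```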
Integrations
Image sharing platforms can integrate directly with many hospital systems, such as:
Active Directory - Link to a hospital Active Directory for seamless use by staffed physicians.
Picture archiving and communication system (PACS) - A medical imaging technology that provides economical storage of, and convenient access to, images from multiple modalities within a facility.
Electronic medical record (EMR) - A computerized medical record created in an organization that delivers care, such as a hospital or physician's office.
Vendor Neutral Archive (VNA) - A medical imaging technology in which images and documents (and potentially any file of clinical relevance) are stored (archived) in a standard format with a standard interface, such that they can be accessed in a vendor-neutral manner by other systems.
Decision support system - A computer-based information system that supports business or organizational decision-making activities.
Health information exchange (HIE) - The mobilization of healthcare information electronically across organizations within a region, community or hospital system.
Personal health record (PHR) - A health record where health data and information related to the care of a patient is maintained by the patient.
Standards
DICOM - A standard for handling, storing, printing, and transmitting information in medical imaging.
Cross Enterprise Document Sharing (XDS) - Focused on providing a standards-based specification for managing the sharing of documents between any healthcare enterprise, ranging from a private physician office to a clinic to an acute care in-patient facility and personal health record systems.
Cross-enterprise Document Sharing for Imaging (XDS-I) - Extends XDS to share images, diagnostic reports and related information across a group of care sites.
HL7
Privacy
Health Insurance Portability and Accountability Act (HIPAA) - Enacted by the United States Congress and signed by President Bill Clinton in 1996. Title II of HIPAA, known as the Administrative Simplification (AS) provisions, requires the establishment of national standards for electronic health care transactions and national identifiers for providers, health insurance plans, and employers.
Government Initiatives
HITECH Act: The Health Information Technology for Economic and Clinical Health (HITECH) Act was instituted on February 17, 2009, in hopes of raising the overall meaningful use of health IT. It was created as a part of the American Recovery and Reinvestment Act of 2009. It also addressed security and privacy issues related to electronic exchange of medical information.
Blue Button: A patient is provided with a highly visible, clickable button to download his or her medical records in digital form from a secure website offered by their doctors, insurers, pharmacies or other health-related service. People can log into this secure website to view and have the option to download their health information, so they can examine it, check it, and share it with their doctors and others as they see fit. The Blue Button download capability is a tool that can help individuals get access to their information so they can more effectively participate in and manage their health and health care. It is mainly being used by the Department of Veteran Affairs in the United States.
RSNA Image Share Project
RSNA Image Share is a network created to enable radiologists to share medical images with patients using personal health record (PHR) accounts. This pilot project, funded by the National Institute for Biomedical Imaging and Bioengineering (NIBIB) and administered by RSNA, began enrolling patients in 2011.
Currently, there are five participating medical centers in the program: Mount Sinai Hospital, New York, UCSF Medical Center, University of Maryland Medical Center, University of Chicago Medical Center, and Mayo Clinic. Patients at these sites are able to receive and access their medical images electronically. As of January 2017, there were seven software companies that had completed the RSNA Image Share Validation: Agfa Healthcare, Ambra Health (formerly DICOM Grid), GE Healthcare, Lexmark Healthcare, LifeImage, Inc., Mach7 Technologies and Novarad.
There are three main architectural pieces to the project:
A clearinghouse in the cloud
An Edge Server at each local radiology site
A PHR to receive the images and reports
See also
Medical imaging
DICOM
Medical software
Telemedicine
Electronic health record (EHR)
Radiology
Radiology Information System
Electronic Medical Record (EMR)
Vendor Neutral Archive (VNA)
Picture Archiving & Communications System (PACS)
Imaging Informatics
Cloud computing
References
Computing in medical imaging
Health informatics
Image-sharing websites | Medical image sharing | [
"Biology"
] | 2,038 | [
"Health informatics",
"Medical technology"
] |
38,948,504 | https://en.wikipedia.org/wiki/Riesz%E2%80%93Markov%E2%80%93Kakutani%20representation%20theorem | In mathematics, the Riesz–Markov–Kakutani representation theorem relates linear functionals on spaces of continuous functions on a locally compact space to measures in measure theory. The theorem is named for who introduced it for continuous functions on the unit interval, who extended the result to some non-compact spaces, and who extended the result to compact Hausdorff spaces.
There are many closely related variations of the theorem, as the linear functionals can be complex, real, or positive, the space they are defined on may be the unit interval or a compact space or a locally compact space, the continuous functions may be vanishing at infinity or have compact support, and the measures can be Baire measures or regular Borel measures or Radon measures or signed measures or complex measures.
The representation theorem for positive linear functionals on Cc(X)
The statement of the theorem for positive linear functionals on $C_c(X)$, the space of compactly supported complex-valued continuous functions, is as follows:
Theorem. Let $X$ be a locally compact Hausdorff space and $\psi$ a positive linear functional on $C_c(X)$. Then there exists a unique positive Borel measure $\mu$ on $X$ such that
$$\psi(f) = \int_X f \,\mathrm{d}\mu \qquad \text{for all } f \in C_c(X),$$
which has the following additional properties for some $\sigma$-algebra $\mathfrak{M}$ containing the Borel $\sigma$-algebra on $X$:
$\mu(K) < \infty$ for every compact $K \subseteq X$;
Outer regularity: $\mu(E) = \inf\{\mu(U) : E \subseteq U,\ U \text{ open}\}$ holds for every Borel set $E$;
Inner regularity: $\mu(E) = \sup\{\mu(K) : K \subseteq E,\ K \text{ compact}\}$ holds whenever $E$ is open, or when $E$ is Borel and $\mu(E) < \infty$;
$(X, \mathfrak{M}, \mu)$ is a complete measure space.
As such, if all open sets in $X$ are $\sigma$-compact then $\mu$ is a Radon measure.
One approach to measure theory is to start with a Radon measure, defined as a positive linear functional on $C_c(X)$. This is the way adopted by Bourbaki; it does of course assume that $X$ starts life as a topological space, rather than simply as a set. For locally compact spaces an integration theory is then recovered.
Without the condition of regularity the Borel measure need not be unique. For example, let $X$ be the set of ordinals at most equal to the first uncountable ordinal $\omega_1$, with the topology generated by "open intervals". The linear functional taking a continuous function to its value at $\omega_1$ corresponds to the regular Borel measure with a point mass at $\omega_1$. However, it also corresponds to the (non-regular) Borel measure that assigns measure $1$ to any Borel set $B$ if there is a closed and unbounded set $C \subseteq [0,\omega_1)$ with $C \subseteq B$, and assigns measure $0$ to other Borel sets. (In particular the singleton $\{\omega_1\}$ gets measure $0$, contrary to the point mass measure.)
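A standard concrete instance of the theorem: evaluation at a point is represented by the Dirac point mass, as in the following display.

```latex
% Evaluation at a point x_0 \in X is a positive linear functional on
% C_c(X); its representing measure is the Dirac point mass at x_0
% (amsmath assumed for the cases environment):
\psi(f) = f(x_0) = \int_X f \,\mathrm{d}\delta_{x_0},
\qquad
\delta_{x_0}(E) =
\begin{cases}
  1, & x_0 \in E, \\
  0, & x_0 \notin E.
\end{cases}
```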
The representation theorem for the continuous dual of C0(X)
The following representation, also referred to as the Riesz–Markov theorem, gives a concrete realisation of the topological dual space of $C_0(X)$, the set of continuous functions on $X$ which vanish at infinity.
Theorem. Let $X$ be a locally compact Hausdorff space. For any continuous linear functional $\psi$ on $C_0(X)$, there is a unique complex-valued regular Borel measure $\mu$ on $X$ such that
$$\psi(f) = \int_X f \,\mathrm{d}\mu \qquad \text{for all } f \in C_0(X).$$
A complex-valued Borel measure $\mu$ is called regular if the positive measure $|\mu|$ satisfies the regularity conditions defined above. The norm of $\psi$ as a linear functional is the total variation of $\mu$, that is
$$\|\psi\| = |\mu|(X).$$
Finally, $\psi$ is positive if and only if the measure $\mu$ is positive.
One can deduce this statement about linear functionals from the statement about positive linear functionals by first showing that a bounded linear functional can be written as a finite linear combination of positive ones.
Historical remark
In its original form, proved by Frigyes Riesz in 1909, the theorem states that every continuous linear functional $A$ over the space $C([0,1])$ of continuous functions on the interval $[0,1]$ can be represented as
$$A(f) = \int_0^1 f(x)\,\mathrm{d}\alpha(x),$$
where $\alpha(x)$ is a function of bounded variation on the interval $[0,1]$, and the integral is a Riemann–Stieltjes integral. Since there is a one-to-one correspondence between Borel regular measures on the interval and functions of bounded variation (assigning to each function of bounded variation the corresponding Lebesgue–Stieltjes measure, whose integral agrees with the Riemann–Stieltjes integral for continuous functions), the theorem stated above generalizes the original statement of F. Riesz.
Notes
References
Theorems in functional analysis
Duality theories
Integral representations
Linear functionals | Riesz–Markov–Kakutani representation theorem | [
"Mathematics"
] | 847 | [
"Theorems in mathematical analysis",
"Mathematical structures",
"Theorems in functional analysis",
"Category theory",
"Duality theories",
"Geometry"
] |
36,065,235 | https://en.wikipedia.org/wiki/Thiophosphoric%20acid | Thiophosphoric acid is an inorganic compound with the chemical formula . Structurally, it is the acid derived from phosphoric acid with one oxygen atom replaced by sulfur atom, although it cannot be prepared from phosphoric acid. It is a colorless compound that is rarely isolated in pure form, but rather as a solution. The structure of the compound has not been reported, but two tautomers are reasonable: and .
Preparation
The compound has been prepared in a multistep process starting with the base hydrolysis of phosphorus pentasulfide to give dithiophosphate, which is isolated as its barium salt:
P4S10 + 6 Ba(OH)2 → 2 Ba3(PO2S2)2 + 2 H2S + 4 H2O
In a second stage, the barium salt is decomposed with sulfuric acid, precipitating barium sulfate and liberating free dithiophosphoric acid:
Ba3(PO2S2)2 + 3 H2SO4 → 3 BaSO4 + 2 H3PO2S2
Under controlled conditions, dithiophosphoric acid hydrolyses to give the monothio derivative:
H3PO2S2 + H2O → H3PO3S + H2S
References
Phosphorothioates
Acids | Thiophosphoric acid | [
"Chemistry"
] | 203 | [
"Acids",
"Inorganic compounds",
"Phosphorothioates",
"Functional groups",
"Inorganic compound stubs"
] |
36,066,636 | https://en.wikipedia.org/wiki/Thermodynamics%20of%20micellization | In colloidal chemistry, the critical micelle concentration (CMC) of a surfactant is one of the parameters in the Gibbs free energy of micellization. The concentration at which the monomeric surfactants self-assemble into thermodynamically stable aggregates is the CMC. The Krafft temperature of a surfactant is the lowest temperature required for micellization to take place. There are many parameters that affect the CMC. The interaction between the hydrophilic heads and the hydrophobic tails play a part, as well as the concentration of salt within the solution and surfactants.
Micelle
A micelle is an aggregation of surfactants or block copolymer in aqueous solution or organic solution, often spherical.
Surfactants
Surfactants are composed of a polar head group that is hydrophilic and a nonpolar tail group that is hydrophobic. The head groups can be anionic, cationic, zwitterionic, or nonionic. The tail group can be a hydrocarbon, fluorocarbon, or a siloxane. Extensive variation in the surfactant’s solution and interfacial properties is allowed through different molecular structures of surfactants.
Hydrophobic coagulation occurs when a sodium alkyl sulfate is added to a positively charged solution. The coagulation value is smaller when the alkyl chain length of the coagulator is longer. Hydrophobic coagulation also occurs when a negatively charged solution contains a cationic surfactant. The Coulomb attraction between the head groups and the surface competes favorably with the hydrophobic attraction for the entire tail.
Block copolymer
Block copolymers are interesting because they can "microphase separate" to form periodic nanostructures. Microphase separation is a situation similar to that of oil and water. Oil and water are immiscible; they phase separate. Due to incompatibility between the blocks, block copolymers undergo a similar phase separation. Because the blocks are covalently bonded to each other, they cannot demix macroscopically as water and oil do. In "microphase separation" the blocks instead form nanometer-sized structures.
Driving mechanism for micellization
The driving mechanism for micellization is the transfer of hydrocarbon chains from water into the oil-like micelle interior. This entropic effect is called the hydrophobic effect. Water molecules are highly ordered around a hydrocarbon chain; when the chain is transferred into a micelle, this order is released and the entropy of the surrounding water molecules increases. Compared to this entropy gain, the direct hydrophobic attraction between chains is relatively small. The CMC decreases as the length of the alkyl chain increases, since longer chains gain more from having their hydrocarbon chains hidden inside micelles.
Ionic micelles and salt concentration
At low surfactant concentrations, and for adsorption on hydrophilic surfaces, the driving force for adsorption is the attraction between the surface and the surfactant head-group. This means that at low concentrations the surfactant adsorbs with its head-group contacting the surface.
Depending on the type of head-group and surface, the attraction will have a short-range contribution for both non-ionic and ionic surfactants. Ionic surfactants will also experience a generic electrostatic interaction. If the surfactants and the surface are oppositely charged then the interaction will be attractive. If the surfactants and the surface are like charges then the interaction will be repulsive.
Aggregation is opposed due to the repulsion of the polar head groups as they come closer to each other. Hydration repulsion occurs because head groups have to be dehydrated as they come closer to each other. The head groups’ thermal fluctuations become smaller as they come closer together because they are confined by neighboring head groups. This causes their entropy to decrease and leads to a repulsion.
Gibbs free energy of micellization
In general, the Gibbs free energy of micellization can be approximated as:
$$\Delta G_{\text{mic}} = RT \ln(\mathrm{CMC}),$$
where $\Delta G_{\text{mic}}$ is the change in Gibbs free energy of micellization, $R$ is the universal gas constant, $T$ is the absolute temperature, and $\mathrm{CMC}$ is the critical micelle concentration, expressed in mole-fraction units.
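A small numeric sketch of this relation, with an illustrative CMC value (roughly that of sodium dodecyl sulfate in water) converted to a mole fraction:

```python
import math

R = 8.314        # gas constant, J mol^-1 K^-1
T = 298.15       # absolute temperature, K
cmc = 8.2e-3     # illustrative CMC in mol/L
x_cmc = cmc / 55.5   # mole fraction, using ~55.5 mol/L for water

dG_mic = R * T * math.log(x_cmc)   # J per mole of surfactant
print(f"Delta G_mic = {dG_mic / 1000:.1f} kJ/mol")   # about -21.9 kJ/mol
```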
Non-ionic micelles
Two methods to extract the Gibbs free energy from the value of the CMC exist: the Phillips method, based on the law of mass action, and the pseudo-phase separation model. The law of mass action postulates that micelle formation can be modeled as a chemical equilibrium process between the micelles $\mathrm{M}$ and their constituents, the surfactant monomers $\mathrm{S}$:
$$n\,\mathrm{S} \rightleftharpoons \mathrm{M},$$
where $n$ is the average number of surfactant monomers in solution that associate into a micelle, commonly denoted the aggregation number.
The equilibrium is characterized by an equilibrium constant defined by $K = [\mathrm{M}]/[\mathrm{S}]^n$, where $[\mathrm{M}]$ and $[\mathrm{S}]$ are the concentrations of micelles and free surfactant monomers, respectively. In combination with the law of conservation of mass, the system is fully specified by $C_{\text{tot}} = [\mathrm{S}] + n[\mathrm{M}]$, where $C_{\text{tot}}$ is the total surfactant concentration. Phillips defined the CMC as the point corresponding to the maximum change in gradient in an ideal property–concentration ($\phi$ against $C_{\text{tot}}$) relationship, $\left(\mathrm{d}^3\phi/\mathrm{d}C_{\text{tot}}^3\right)_{C_{\text{tot}}=\mathrm{CMC}} = 0$. By implicit differentiation three times with respect to $C_{\text{tot}}$ and equating to zero, it can be shown that the micellization constant $K$ is fixed by the aggregation number $n$ and the CMC. According to the Phillips method, the Gibbs free energy change of micellization per mole of surfactant is therefore given by:
$$\Delta G_{\text{mic}} = -\frac{RT}{n}\ln K.$$
The pseudo-phase separation model was originally derived on its own basis, but it has later been shown that it can be interpreted as an approximation to the mass-action model for large $n$. That is, for micelles behaving in accordance with the law of mass action, the pseudo-phase separation model is only an approximation and will only become asymptotically equal to the mass-action model as the micelle becomes a true macroscopic phase, i.e. for $n \to \infty$. However, the approximation that the aggregation number is large is in most cases sufficient:
$$\Delta G_{\text{mic}} \approx RT \ln(\mathrm{CMC}).$$
Ionic micelles
Ionic micelles are typically very affected by the salt concentration. In ionic micelles the monomers are typically fully ionized, but the high electric field strength at the surface of the micelles will cause adsorption of some proportion of the free counter-ions. In this case a chemical equilibrium process can be assumed between the charged micelles and their constituents, the surfactant (for example, bile salt) monomers $\mathrm{S}^-$ and bound counter-ions $\mathrm{C}^+$:
$$n\,\mathrm{S}^- + \beta n\,\mathrm{C}^+ \rightleftharpoons \mathrm{M}^{n(1-\beta)-},$$
where $n$ is the average aggregation number and $\beta$ is the average degree of counter-ion binding to the micelle. In this case, the Gibbs free energy is given by:
$$\Delta G_{\text{mic}} = -\frac{RT}{n}\ln K = -\frac{RT}{n}\ln\frac{[\mathrm{M}]}{[\mathrm{S}]^n\,[\mathrm{C}]_{\mathrm{CMC}}^{\beta n}},$$
where $\Delta G_{\text{mic}}$ is the Gibbs energy of micellization and $[\mathrm{C}]_{\mathrm{CMC}}$ is the free counter-ion concentration at the CMC. For large $n$, that is, in the limit when $n \to \infty$ and the micelles become a true macroscopic phase, the Gibbs free energy is usually approximated by:
$$\Delta G_{\text{mic}} \approx RT\left(\ln(\mathrm{CMC}) + \beta\ln[\mathrm{C}]_{\mathrm{CMC}}\right).$$
Dressed micelle model
In the dressed micelle model, the total Gibbs energy is broken down into several components accounting for the hydrophobic tail, the electrostatic repulsion of the head groups, and the interfacial energy on the surface of the micelle:
$$\Delta G_{\text{mic}} = \Delta G_{\text{HP}} + \Delta G_{\text{ES}} + \Delta G_{\text{IF}},$$
where the components of the total Gibbs micellization energy are hydrophobic (HP), electrostatic (ES), and interfacial (IF).
Effect of concentration and temperature
Solubility and cloud point
The cloud point is the specific temperature, at a given pressure, at which large groups of micelles begin to precipitate out into a quasi-separate phase. As the temperature is raised above the cloud point, the distinct surfactant phase forms densely packed micelle groups known as aggregates. The phase separation is reversible: it is controlled by enthalpy (which promotes aggregation/separation) above the cloud point, and by entropy (which promotes miscibility of micelles in water) below it. The cloud point is the point of equilibrium between these two free-energy contributions.
Critical micelle concentration
The critical micelle concentration (CMC) is the concentration of surfactants at which micellar aggregates become thermodynamically stable in an aqueous solution. Below the CMC there is not a high enough density of surfactant for aggregates to form spontaneously as a distinct phase. Above the CMC, the solubility of the surfactant within the aqueous solution has been exceeded: keeping all of the surfactant dispersed as monomers is no longer the lowest-energy state, and to decrease the free energy of the system the excess surfactant assembles into micelles. The CMC is commonly determined from the inflection point in a plot of a solution property, such as surface tension, against surfactant concentration.
Krafft temperature
The Krafft temperature is the temperature at which the CMC can be achieved. This temperature determines the relative solubility of surfactant in an aqueous solution: it is the minimum temperature the solution must reach for micellar aggregates to form, since below it the movement of particles in solution is too limited for any level of solubility to suffice. The Krafft temperature (Tk) is based on the concentration of counter-ions (Caq), typically supplied in the form of salt. Because Tk is fundamentally based on Caq, which is controlled by the surfactant and salt concentrations, different combinations of the respective parameters can be altered. However, Caq maintains the same value despite changes in the concentrations of surfactant and salt; thermodynamically speaking, the Krafft temperature therefore remains constant.
Surfactant packing parameter
Differences in shape
The shape of a surfactant molecule can be described by its surfactant packing parameter, $P$ (Israelachvili, 1976). The packing parameter takes into account the volume of the hydrophobic chain ($V_0$), the equilibrium area per molecule at the aggregate interface ($a_e$), and the length of the hydrophobic chain ($\ell_0$):
$$P = \frac{V_0}{a_e \ell_0}.$$
The packing parameter for a specific surfactant is not a constant. It is dependent on various conditions which affect each of the volume of the hydrophobic chain, the cross-sectional area of the hydrophilic head group, and the length of the hydrophobic chain. Things that can affect these include, but are not limited to, the properties of the solvent, the solvent temperature, and the ionic strength of the solvent.
Cone, wedge, and cylinder shaped surfactants
The shape of a micelle is directly dependent on the packing parameter of the surfactant. Surfactants with a packing parameter of $P \leq 1/3$ appear to have a cone-like shape which will pack together to form spherical micelles when in an aqueous environment (top in figure). Surfactants with a packing parameter of $1/3 < P \leq 1/2$ appear to have a wedge-like shape and will aggregate together in an aqueous environment to form cylindrical micelles (bottom in figure). Surfactants with a packing parameter of $P > 1/2$ appear to have a cylindrical shape and pack together to form a bilayer in an aqueous environment (middle in figure).
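As a worked example of the packing parameter, the sketch below uses Tanford's empirical relations for the tail volume and length of a saturated hydrocarbon chain; the head-group area is an assumed, typical value rather than a measured one.

```python
# Packing parameter P = V0 / (a_e * l0) for a 12-carbon tail.
n_c = 12
V0 = 27.4 + 26.9 * n_c    # Tanford tail volume, cubic angstroms
l0 = 1.5 + 1.265 * n_c    # Tanford extended tail length, angstroms
a_e = 70.0                # assumed head-group area, square angstroms

P = V0 / (a_e * l0)
print(f"P = {P:.2f}")      # ~0.30 <= 1/3, consistent with spherical micelles
```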
Data
References
Colloidal chemistry | Thermodynamics of micellization | [
"Chemistry"
] | 2,275 | [
"Colloidal chemistry",
"Surface science",
"Colloids"
] |
36,066,821 | https://en.wikipedia.org/wiki/Compaction%20of%20ceramic%20powders | Compaction of ceramic powders is a forming technique for ceramics in which granular ceramic materials are made cohesive through mechanical densification, either by hot or cold pressing. The resulting green part must later be sintered in a kiln. The compaction process permits an efficient production of parts to close tolerances with low drying shrinkage. It can be used for parts ranging widely in size and shape, and for both technical and nontechnical ceramics.
Background: traditional & advanced ceramics
The ceramics industry is broadly developed in the world. In Europe alone, the current investment is estimated at € 26 billion. Advanced ceramics are crucial for new technologies, particularly thermo-mechanical and bio-medical applications, while traditional ceramics have a worldwide market and have been suggested as materials to minimize the impact on the environment (when compared to other finishing materials).
The production process of ceramics
Up-to-date ceramic technology involves invention and design of new components and optimization of production processes of complex structures. Ceramics can be formed by a variety of different methods which can be divided into three main groups, depending on whether the starting materials involve a gas, a liquid, or a solid. Examples of methods involving gases are: chemical vapour deposition, directed metal oxidation and reaction bonding. Examples of methods involving liquids are: sol-gel process and polymer pyrolysis. Methods involving solids, especially powder methods, dominate ceramic forming and are extensively used in the industry.
The practical realization of ceramic products by powder methods requires the following steps: ceramic powder production, powder treatment, handling and processing, cold forming, sintering, and evaluation of the performance of the final product. Since these processes permit an efficient production of parts ranging widely in size and shape to close tolerances, there is an evident interest in industry. For instance, metallurgical, pharmaceutical, and traditional and advanced structural ceramics represent common applications.
Mechanics of forming of ceramic powders
It is a well-established fact that the performance of a ceramic component critically depends on the manufacturing process. Initial powder characteristics and processing, including cold forming and sintering, have a strong impact on the mechanical properties of the components as they may generate a defect population (microcracks, density gradients, pores, agglomerates) within the green and sintered compounds. The mechanical characteristics of the solid obtained after cold forming (the so-called ‘green body’) strongly affect the subsequent sintering process and thus the mechanical properties of the final piece.
Many technical, still unresolved difficulties arise in the forming process of ceramic materials. On the one hand, the compact should remain intact after ejection, be handleable without failure, and be essentially free of macro defects. On the other hand, defects of various nature are always present in the green bodies, negatively affecting local shrinkage during sintering, Fig. 1.
Defects can be caused by the densification process, which may involve highly inhomogeneous strain fields, or by mold ejection. Currently, there is a high production rejection rate, due to the fact that manufacturing technologies are mainly based on empirically engineered processes, rather than on rational and scientific methodologies.
The industrial technologies involved in the production of ceramics, with particular reference to tile and sanitaryware products, generate a huge amount of waste of material and energy. Consequently, the set-up of manufacturing processes is very costly and time consuming and not yet optimal in terms of quality of the final piece.
There is therefore a strong interest from the ceramic industry in the availability of tools capable of modelling and simulating: i) the powder compaction process and ii) the criticality of defects possibly present in the final piece after sintering. Recently, an EU IAPP research project has been financed with the aim of enhancing the mechanical modelling of ceramic forming in view of industrial applications.
During cold powder compaction, a granular material is made cohesive through mechanical densification, a process for which modelling requires the description of the transition from a granular to a dense and even a fully dense state (Fig. 2).
Since granular materials are characterized by mechanical properties almost completely different from those typical of dense solids, the mechanical modelling must describe a transition between two distinctly different states of a material. This is a scientific challenge addressed by Piccolroaz et al. in terms of plasticity theory.
A key point in their analysis is the use of the ‘Bigoni & Piccolroaz yield surface’, previously developed, see Fig. 3.
The mechanical model developed by Piccolroaz et al. (2006 a;b) permits the description of the forming process (Fig. 4).
The INTERCER2 research project aims to develop novel constitutive descriptions for ceramic powders and a more robust implementation in a numerical code.
See also
Powder Metallurgy
Plasticity
Yield surface
Ceramic forming techniques
Mechanical powder press
Notes
References
External links
https://ssmg.unitn.it/
https://bigoni.dicam.unitn.it/
https://apiccolroaz.dicam.unitn.it/
https://bigoni.dicam.unitn.it/compaction.html
http://www.substech.com/dokuwiki/doku.php?id=methods_of_shape_forming_ceramic_powders
http://intercer2.unitn.it/
Ceramic materials
Ceramic engineering
Powders | Compaction of ceramic powders | [
"Physics",
"Engineering"
] | 1,112 | [
"Materials",
"Powders",
"Ceramic materials",
"Ceramic engineering",
"Matter"
] |
36,069,663 | https://en.wikipedia.org/wiki/Australasian%20Institute%20of%20Mining%20and%20Metallurgy | The Australasian Institute of Mining and Metallurgy (AusIMM) provides services to professionals engaged in all facets of the global minerals sector and is based in Carlton, Victoria, Australia.
History
The Institute had its genesis in 1893 with the formation in Adelaide of the Australasian Institute of Mining Engineers drawing its inspiration from the success of the American Institute of Mining Engineers, and some impetus from the Mine Managers Association of Broken Hill. Office-holders were equally from South Australia and "The Hill", where the Institute established its headquarters.
This approach to the foundation of a federal organization was welcomed in mining districts of other Australian colonies, and branches were formed in Broken Hill, the Thames Goldfield (New Zealand), Ballarat, and elsewhere. Succeeding annual conferences were held at Ballarat, Hobart, Broken Hill and other mining centres. The 1926 conference was held in Otago, New Zealand.
In 1896 its headquarters were removed from Broken Hill to Melbourne, and in June 1919 adopted its present name.
In 1954 the institute applied for a royal charter, granted 1955.
The AusIMM represents more than 15 500 members drawn from all sections of the industry and supported by a network of branches and societies in Australasia and internationally.
Member grades and post-nominals
Some notable members
AIME
Sir Henry Ayers foundation president, 1893
Uriah Dudley foundation general secretary 1893–1897
David Lauder Stirling (c. 1871 – 30 August 1949); president 1894, secretary 1906–1941 or later; also secretary, Victorian Chamber of Mines 1898–1945
H. W. Ferd Kayser (mine manager Mount Bischoff Tin Mining Company), vice-president 1894, president 1898, 1899
Alexander Montgomery (government geologist in New Zealand, Tasmania, and Western Australia), president 1895
Ernest Lidgey geological surveyor in Victoria; conducted Australia's first geophysical surveys; president 1901
Samuel Henry McGowan (c. 1845 – 13 May 1921), accountant specializing in gold mining companies, mayor of Bendigo 1899–1900; president 1902
F. Danvers Power, lecturer at Sydney University, president 1897, 1904.
Robert C. Sticht general manager, Mount Lyell Mining & Railway Company, president 1905, 1915, vice-president 1909
G. D. Delprat (manager of the Broken Hill mine), president 1906
Dr. Alfred William Howitt, C.M.G., F.G.S., the eminent naturalist, was president 1907
Frank A. Moss, (general manager of Kalgurli Gold Mines), president 1907
C. F. Courtney (general manager of the Sulphide Corporation), president 1908
Richard Hamilton, (general manager of the Great Boulder Proprietary mine), president 1909, vice-president 1910
G. A. Richard (of Mount Morgan, Queensland), president 1910
Herman Carl Bellinger from US; mine manager, Cobar 1909–1914, president 1912
James Hebbard (manager of the Central Mine, Broken Hill), president 1913
John Warren (mining) (manager of Block 10, Broken Hill), vice-president 1894, president 1902
Hyman Herman (director of the Victorian geological survey), joined 1897, president 1914, remained councillor to 1959.
Robert Silvers Black, (general manager of Kalgurli Gold Mines), president 1917
J. W. Sutherland metallurgist at Lake View Consols and Golden Horse Shoe gold mines; president 1918
Professor D. B. Waters of Otago, New Zealand, vice-president 1917,1918 (absent for most of this period — he was with New Zealand Tunnelling Company in France).
AIMM
R. W. Chapman, vice-president 1906, president 1920
Colin Fraser (later Sir Colin), president 1923
H. W. Gepp, later Sir Herbert William Gepp, president 1924
Ernest W. Skeats (professor of geology, University of Melbourne), vice-president 1924, president 1925
David Lauder Stirling, general secretary 1922–45
R. M. Murray (general manager, Mount Lyell Mining & Railway Company), president 1927
Alfred Stephen Kenyon, treasurer 1897, secretary 1906, president 1928
E. C. Andrews (New South Wales Government Geologist), president 1929
William Edward Wainwright (general manager of Broken Hill South), president 1919, 1930, vice-president 1916–18, 1933, 1934
William Harley Wainwright, son of W. E. Wainwright (chief metallurgist, BHP), life member
Essington Lewis (managing director of BHP) vice-president 1932, president 1935
Andrew Fairweather, president 1932 (succeeded W. E. Wainwright at Broken Hill South mine and as General Manager)
Professor J. Neill Greenwood (dean of Melbourne University Faculty of Applied Science), president 1936,1937
Donald Yates, superintendent of Broken Hill Associated Smelters Pty., president 1937
Julius Kruttschnitt (general manager, Mount Isa Mines) president 1939
Oliver H. Woodward (general manager, North Mine, Broken Hill) active in tunnelling operations WWI, president 1940
Arthur H. P. Moline (1877–1965) (succeeded R. M. Murray as general manager, Mount Lyell, in 1944), president 1945
Asdruebal James Keast (general manager, Zinc Corporation; Australian Aluminium Production Commission 1951–55), president 1946, vice-president 1947
Frank R. Hockey / Francis Richard Hockey (general superintendent, BHP), president 1947, vice-president 1949,1950
F. F. Espie / Frank Fancett Espie (general superintendent, Western Mining Corporation), president 1948
Godfrey Bernard O'Malley, vice-president 1943–46
Maurice Alan Edgar Mawby (director of exploration, Zinc Corporation, Limited), vice-president 1950,1951, president 1953,1954
Ian Munro McLennan (General Manager, BHP), president 1951
Beryl Elaine Jacka MBE, typist 1936; assistant general secretary 1945–52, secretary 1952–1976
Gordon Colvin Lindesay Clark CMG
See also
British
North of England Institute of Mining and Mechanical Engineers (known as the Mining Institute) founded 1852
Institution of Mining Engineers founded 1889, incorporating the Mining Institute above
Institution of Mining and Metallurgy founded 1892
Institute of Materials, Minerals and Mining merger of IMM and Institute of Materials in 2002.
US
American Institute of Mining, Metallurgical, and Petroleum Engineers (originally American Institute of Mining Engineers founded 1871)
References
1893 establishments in Australia
Engineering societies based in Australia
Organizations established in 1893
Metallurgical organizations
Mining organisations in Australia
Organisations based in Victoria (state)
Metallurgical industry of Australia | Australasian Institute of Mining and Metallurgy | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,345 | [
"Metallurgical industry of Australia",
"Metallurgical organizations",
"Metallurgy",
"Metallurgical industry by country"
] |
29,420,546 | https://en.wikipedia.org/wiki/Bioresource%20Technology | Bioresource Technology is a peer-reviewed scientific journal published biweekly by Elsevier, covering the field of bioresource technology. The journal was established in 1979 as Agricultural Wastes and renamed to Biological Wastes in 1987, before obtaining its current title in 1991. It covers all areas concerning biomass, biological waste treatment, bioenergy, biotransformations and bioresource systems analysis, and technologies associated with conversion or production.
References
External links
Elsevier academic journals
Biweekly journals
English-language journals
Academic journals established in 1979
Biotechnology journals
Waste management journals | Bioresource Technology | [
"Biology",
"Environmental_science"
] | 120 | [
"Environmental science journals",
"Waste management journals",
"Biotechnology literature",
"Biotechnology journals"
] |
29,425,253 | https://en.wikipedia.org/wiki/Manifest%20covariance | In general relativity, a manifestly covariant equation is one in which all expressions are tensors. The operations of addition, tensor multiplication, tensor contraction, raising and lowering indices, and covariant differentiation may appear in the equation. Forbidden terms include but are not restricted to partial derivatives. Tensor densities, especially integrands and variables of integration, may be allowed in manifestly covariant equations if they are clearly weighted by the appropriate power of the determinant of the metric.
Writing an equation in manifestly covariant form is useful because it guarantees general covariance upon quick inspection. If an equation is manifestly covariant, and if it reduces to a correct, corresponding equation in special relativity when evaluated instantaneously in a local inertial frame, then it is usually the correct generalization of the special relativistic equation in general relativity.
Example
An equation may be Lorentz covariant even if it is not manifestly covariant. Consider the electromagnetic field tensor
$$F_{\mu\nu} = \frac{\partial A_\nu}{\partial x^\mu} - \frac{\partial A_\mu}{\partial x^\nu},$$
where $A_\mu$ is the electromagnetic four-potential in the Lorenz gauge. The equation above contains partial derivatives and is therefore not manifestly covariant. Note that the partial derivatives may be written in terms of covariant derivatives and Christoffel symbols as
$$\frac{\partial A_\nu}{\partial x^\mu} = \nabla_\mu A_\nu + \Gamma^\lambda_{\mu\nu} A_\lambda.$$
For a torsion-free metric assumed in general relativity, we may appeal to the symmetry of the Christoffel symbols
$$\Gamma^\lambda_{\mu\nu} = \Gamma^\lambda_{\nu\mu},$$
which allows the field tensor to be written in manifestly covariant form
$$F_{\mu\nu} = \nabla_\mu A_\nu - \nabla_\nu A_\mu.$$
See also
Lorentz covariance
Introduction to the mathematics of general relativity
References
General relativity
Tensors | Manifest covariance | [
"Physics",
"Engineering"
] | 326 | [
"General relativity",
"Tensors",
"Theory of relativity"
] |
29,427,253 | https://en.wikipedia.org/wiki/Taylor%E2%80%93Goldstein%20equation | The Taylor–Goldstein equation is an ordinary differential equation used in the fields of geophysical fluid dynamics, and more generally in fluid dynamics, in presence of quasi-2D flows. It describes the dynamics of the Kelvin–Helmholtz instability, subject to buoyancy forces (e.g. gravity), for stably stratified fluids in the dissipation-less limit. Or, more generally, the dynamics of internal waves in the presence of a (continuous) density stratification and shear flow. The Taylor–Goldstein equation is derived from the 2D Euler equations, using the Boussinesq approximation.
The equation is named after G.I. Taylor and S. Goldstein, who derived the equation independently from each other in 1931. The third independent derivation, also in 1931, was made by B. Haurwitz.
Formulation
The equation is derived by solving a linearized version of the Navier–Stokes equation, in the presence of gravity and a mean density gradient, for the perturbation velocity field
\[ \mathbf{u}' = \left(u'(x,z,t),\, 0,\, w'(x,z,t)\right), \]
where \(\mathbf{u} = (U(z), 0, 0)\) is the unperturbed or basic flow. The perturbation velocity has the wave-like solution \(\mathbf{u}' \propto \exp\big(ik(x - ct)\big)\) (real part understood). Using this knowledge, and the streamfunction representation
\[ u' = \frac{\partial \psi}{\partial z}, \qquad w' = -\frac{\partial \psi}{\partial x}, \qquad \psi = \tilde{\phi}(z)\, e^{ik(x - ct)} \]
for the flow, the following dimensional form of the Taylor–Goldstein equation is obtained:
\[ (U - c)\left(\frac{d^{2}\tilde{\phi}}{dz^{2}} - k^{2}\tilde{\phi}\right) - \tilde{\phi}\,\frac{d^{2}U}{dz^{2}} + \frac{N^{2}}{U - c}\,\tilde{\phi} = 0, \]
where \(N\) denotes the Brunt–Väisälä frequency. The eigenvalue parameter of the problem is \(c\). If the imaginary part of the wave speed \(c_i\) is positive, then the flow is unstable, and the small perturbation introduced to the system is amplified in time.
Note that a purely imaginary Brunt–Väisälä frequency results in a flow which is always unstable. This instability is known as the Rayleigh–Taylor instability.
No-slip boundary conditions
The relevant boundary conditions are, in the case of no-slip boundary conditions at the channel top and bottom, at \(z = z_1\) and \(z = z_2\):
\[ \tilde{\phi}(z_1) = \tilde{\phi}(z_2) = 0. \]
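A minimal numerical sketch (not from the article's references; the shear profile, stratification, grid, and wavenumber below are illustrative assumptions): with these boundary conditions, discretizing the equation turns it into a quadratic eigenvalue problem for the complex wave speed c, which can be linearized and solved directly.

```python
import numpy as np
from scipy.linalg import eig

# Illustrative setup: tanh shear layer, weak uniform stratification.
nz, k = 200, 0.45
z = np.linspace(-1.0, 1.0, nz + 2)[1:-1]        # interior points; phi = 0 at walls
dz = z[1] - z[0]
U = np.tanh(3 * z)                               # base flow U(z)
Upp = -18 * np.tanh(3 * z) / np.cosh(3 * z)**2   # U''(z)
N2 = np.full(nz, 0.02)                           # squared Brunt-Vaisala frequency

# Dirichlet boundaries are absorbed by truncating to interior points.
D2 = (np.diag(np.ones(nz - 1), 1) - 2 * np.eye(nz)
      + np.diag(np.ones(nz - 1), -1)) / dz**2
L = D2 - k**2 * np.eye(nz)

# Multiplying the Taylor-Goldstein equation by (U - c) gives
# (A0 + c A1 + c^2 A2) phi = 0, a quadratic eigenvalue problem in c.
A0 = np.diag(U**2) @ L - np.diag(U * Upp) + np.diag(N2)
A1 = -(2 * np.diag(U) @ L - np.diag(Upp))
A2 = L
Z, I = np.zeros((nz, nz)), np.eye(nz)

# Companion linearization, then a standard generalized eigensolve.
c, _ = eig(np.block([[A0, A1], [Z, I]]), np.block([[Z, -A2], [I, Z]]))
print("max Im(c):", c.imag.max())                # Im(c) > 0 signals instability
```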
Notes
References
Atmospheric thermodynamics
Atmospheric dynamics
Equations of fluid dynamics
Fluid dynamics
Oceanography
Buoyancy | Taylor–Goldstein equation | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 405 | [
"Equations of fluid dynamics",
"Hydrology",
"Applied and interdisciplinary physics",
"Equations of physics",
"Atmospheric dynamics",
"Oceanography",
"Chemical engineering",
"Piping",
"Fluid dynamics"
] |
43,175,401 | https://en.wikipedia.org/wiki/Vehicle%20rescheduling%20problem | The vehicle rescheduling problem (VRSP) is a combinatorial optimization and integer programming problem seeking to service customers on a trip after change of schedule such as vehicle break down or major delay. Proposed by Li, Mirchandani and Borenstein in 2007, the VRSP is an important problem in the fields of transportation and logistics.
Determining the optimal solution is an NP-complete problem in combinatorial optimization, so in practice heuristic and deterministic methods are used to find acceptably good solutions for the VRSP.
Overview
Several variations and specializations of the vehicle rescheduling problem exist:
Single Depot Vehicle Rescheduling Problem (SDVRSP): A number of trips need to be rescheduled due to delay, vehicle break down or for any other reason. The goal is to find optimal rescheduling of the existing fleet, using possibly extra vehicles from the depot, in order to minimise the delay and the operating costs. In the Single Depot variation, there is only one depot which contains all extra vehicles, and in which every vehicle starts and ends its schedule.
Multi Depot Vehicle Rescheduling Problem (MDVRSP): Similar to SDVRSP, except additional depots are introduced. Each depot has capacity constraints, as well as variable extra vehicles. Usually vehicle schedules have an additional constraint which requires that each vehicle returns to the depot where it started its schedule.
Open Vehicle Rescheduling Problem (OVRSP): Vehicles are not required to return to the depot.
Although VRSP is related to the Single Depot Vehicle Scheduling Problem and the Multi Depot Vehicle Scheduling Problem, there is a significant difference in runtime requirements: VRSP needs to be solved in near real-time to allow rescheduling during operations, while SDVSP and MDVSP are typically solved using long-running linear programming methods.
VRSP is also used in the transportation of goods, where routes must be rescheduled when demand changes substantially.
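A toy illustration of the heuristic approach (all data structures, times, and costs below are hypothetical, not from the cited literature): when a vehicle breaks down, its remaining trips can be greedily reassigned to compatible in-service vehicles, falling back on spare vehicles from the depot at a fixed penalty.

```python
from dataclasses import dataclass, field

@dataclass
class Trip:
    start: int   # departure time
    end: int     # arrival time

@dataclass
class Vehicle:
    trips: list = field(default_factory=list)

    def free_after(self):
        return self.trips[-1].end if self.trips else 0

def reschedule(orphans, fleet, spare_cost=100):
    """Greedy SDVRSP-style heuristic: best-fit insertion, spares as fallback."""
    total_cost = 0
    for trip in sorted(orphans, key=lambda t: t.start):
        # Vehicles already idle when the orphaned trip departs.
        ok = [v for v in fleet if v.free_after() <= trip.start]
        if ok:
            best = min(ok, key=lambda v: trip.start - v.free_after())
            total_cost += trip.start - best.free_after()   # idle-time cost
            best.trips.append(trip)
        else:
            fleet.append(Vehicle([trip]))                  # extra depot vehicle
            total_cost += spare_cost
    return total_cost

# Two trips orphaned by a breakdown; one vehicle still in service.
print(reschedule([Trip(40, 70), Trip(50, 90)], [Vehicle([Trip(0, 30)])]))  # 110
```

Production solvers replace this greedy rule with near-real-time integer programming or metaheuristics, but the shape of the objective (delay plus operating cost) is the same.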
See also
Combinatorial optimization
Vehicle routing problem
Fundamentals of Transportation/Timetabling and Scheduling
References
External links
Optibus – Commercial SaaS platform for solving VRSP in real-time
Ecolane – Commercial software for the Demand responsive transport
Optimal scheduling
Transportation planning
Vehicle operation
NP-complete problems | Vehicle rescheduling problem | [
"Mathematics",
"Engineering"
] | 464 | [
"Optimal scheduling",
"Industrial engineering",
"Computational problems",
"Mathematical problems",
"NP-complete problems"
] |
43,178,654 | https://en.wikipedia.org/wiki/Weil%E2%80%93Brezin%20Map | In mathematics, the Weil–Brezin map, named after André Weil and Jonathan Brezin, is a unitary transformation that maps a Schwartz function on the real line to a smooth function on the Heisenberg manifold. The Weil–Brezin map gives a geometric interpretation of the Fourier transform, the Plancherel theorem and the Poisson summation formula. The image of Gaussian functions under the Weil–Brezin map are nil-theta functions, which are related to theta functions. The Weil–Brezin map is sometimes referred to as the Zak transform, which is widely applied in the field of physics and signal processing; however, the Weil–Brezin Map is defined via Heisenberg group geometrically, whereas there is no direct geometric or group theoretic interpretation from the Zak transform.
Heisenberg manifold
The (continuous) Heisenberg group \(N\) is the 3-dimensional Lie group that can be represented by triples of real numbers \((x, y, t)\) with multiplication rule
\[ (x, y, t)\,(x', y', t') = (x + x',\; y + y',\; t + t' + x y'). \]
The discrete Heisenberg group \(\Gamma\) is the discrete subgroup of \(N\) whose elements are represented by the triples of integers. Considering that \(\Gamma\) acts on \(N\) on the left, the quotient manifold \(\Gamma \backslash N\) is called the Heisenberg manifold.
The Heisenberg group acts on the Heisenberg manifold on the right. The Haar measure on the Heisenberg group induces a right-translation-invariant measure on the Heisenberg manifold. The space of complex-valued square-integrable functions on the Heisenberg manifold has a right-translation-invariant orthogonal decomposition:
\[ L^{2}(\Gamma \backslash N) = \bigoplus_{n \in \mathbb{Z}} H_n, \]
where
\[ H_n = \left\{ f \in L^{2}(\Gamma \backslash N) : f\big(\Gamma(x, y, t + s)\big) = e^{2\pi i n s} f\big(\Gamma(x, y, t)\big) \right\}. \]
Definition
The Weil–Brezin map \(W : L^{2}(\mathbb{R}) \to H_1\) is the unitary transformation given by
\[ W f\big(\Gamma(x, y, t)\big) = \sum_{n \in \mathbb{Z}} f(x + n)\, e^{2\pi i n y}\, e^{2\pi i t} \]
for every Schwartz function \(f\), where convergence is pointwise.
The inverse of the Weil–Brezin map is given by
\[ \big(W^{-1} F\big)(x) = \int_{0}^{1} F\big(\Gamma(x, y, 0)\big)\, dy \]
for every smooth function \(F\) on the Heisenberg manifold that is in \(H_1\).
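A numerical sketch of the definition (an illustration under stated assumptions: the constant-modulus factor in the fibre coordinate t is dropped, and the Gaussian, grid, and truncation are arbitrary choices): sampling W f on the unit square and checking that the map preserves the L² norm, which is the unitarity used below.

```python
import numpy as np

def weil_brezin(f, xs, ys, n_terms=30):
    """Sample W f(x, y) = sum_n f(x + n) exp(2*pi*i*n*y) on a grid;
    the exp(2*pi*i*t) fibre factor is omitted as it has modulus one."""
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    return sum(f(X + n) * np.exp(2j * np.pi * n * Y)
               for n in range(-n_terms, n_terms + 1))

f = lambda x: np.exp(-np.pi * x**2)              # a Gaussian Schwartz function
xs = ys = np.linspace(0.0, 1.0, 300, endpoint=False)
W = weil_brezin(f, xs, ys)

# Unitarity: the mean of |Wf|^2 over the unit square should match the
# squared L2 norm of f on the line, which is 2**-0.5 for this Gaussian.
print(np.mean(np.abs(W)**2), 2**-0.5)
```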
Fundamental unitary representation of the Heisenberg group
For each real number \(\lambda \neq 0\), the fundamental unitary representation \(\pi_\lambda\) of the Heisenberg group is an irreducible unitary representation of \(N\) on \(L^{2}(\mathbb{R})\) defined by
\[ \big(\pi_\lambda(a, b, c) f\big)(x) = e^{2\pi i \lambda (c + b x)} f(x + a). \]
By the Stone–von Neumann theorem, this is the unique irreducible representation up to unitary equivalence satisfying the canonical commutation relation
\[ \pi_\lambda(a, 0, 0)\, \pi_\lambda(0, b, 0) = e^{2\pi i \lambda a b}\, \pi_\lambda(0, b, 0)\, \pi_\lambda(a, 0, 0). \]
The fundamental representation \(\pi_1\) of \(N\) on \(L^{2}(\mathbb{R})\) and the right translation \(R\) of \(N\) on \(H_1\) are intertwined by the Weil–Brezin map:
\[ W\, \pi_1(g) = R(g)\, W \qquad \text{for all } g \in N. \]
In other words, the fundamental representation \(\pi_1\) on \(L^{2}(\mathbb{R})\) is unitarily equivalent to the right translation \(R\) on \(H_1\) through the Weil–Brezin map.
Relation to Fourier transform
Let \(\sigma\) be the automorphism on the Heisenberg group given by
\[ \sigma(x, y, t) = (y,\, -x,\, t - x y). \]
It naturally induces a unitary operator \(\sigma^{*} : H_1 \to H_1\), and then the Fourier transform
\[ \mathcal{F} = W^{-1} \sigma^{*} W \]
as a unitary operator on \(L^{2}(\mathbb{R})\).
Plancherel theorem
The norm-preserving property of \(W\) and \(\sigma^{*}\), which is easily seen, yields the norm-preserving property of the Fourier transform, which is referred to as the Plancherel theorem.
Poisson summation formula
For any Schwartz function \(f\),
\[ \sum_{n \in \mathbb{Z}} f(n) = \sum_{n \in \mathbb{Z}} \hat{f}(n). \]
This is just the Poisson summation formula.
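A quick numerical check of the formula (illustrative only), using a dilated Gaussian whose Fourier transform is known in closed form:

```python
import numpy as np

a = 0.7                                                   # arbitrary width parameter
f = lambda x: np.exp(-np.pi * a * x**2)
fhat = lambda k: np.exp(-np.pi * k**2 / a) / np.sqrt(a)   # Fourier transform of f
n = np.arange(-50, 51)
print(f(n).sum(), fhat(n).sum())                          # both sums agree, ~1.222
```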
Relation to the finite Fourier transform
For each , the subspace can further be decomposed into right-translation-invariant orthogonal subspaces
where
.
The left translation is well-defined on , and are its eigenspaces.
The left translation is well-defined on , and the map
is a unitary transformation.
For each , and , define the map by
for every Schwartz function , where convergence is pointwise.
The inverse map is given by
for every smooth function on the Heisenberg manifold that is in .
Similarly, the fundamental unitary representation of the Heisenberg group is unitarily equivalent to the right translation on through :
.
For any ,
.
For each , let . Consider the finite dimensional subspace of generated by where
Then the left translations and act on and give rise to the irreducible representation of the finite Heisenberg group. The map acts on and gives rise to the finite Fourier transform
Nil-theta functions
Nil-theta functions are functions on the Heisenberg manifold that are analogous to the theta functions on the complex plane. The image of Gaussian functions under the Weil–Brezin Map are nil-theta functions. There is a model of the finite Fourier transform defined with nil-theta functions, and the nice property of the model is that the finite Fourier transform is compatible with the algebra structure of the space of nil-theta functions.
Definition of nil-theta functions
Let be the complexified Lie algebra of the Heisenberg group . A basis of is given by the left-invariant vector fields on :
These vector fields are well-defined on the Heisenberg manifold .
Introduce the notation . For each , the vector field on the Heisenberg manifold can be thought of as a differential operator on with the kernel generated by .
We call
the space of nil-theta functions of degree .
Algebra structure of nil-theta functions
The nil-theta functions with pointwise multiplication on form a graded algebra (here ).
Auslander and Tolimieri showed that this graded algebra is isomorphic to
,
and that the finite Fourier transform (see the preceding section #Relation to the finite Fourier transform) is an automorphism of the graded algebra.
Relation to Jacobi theta functions
Let be the Jacobi theta function. Then
.
Higher order theta functions with characteristics
An entire function on is called a theta function of order , period () and characteristic if it satisfies the following equations:
,
.
The space of theta functions of order , period and characteristic is denoted by .
.
A basis of is
.
These higher order theta functions are related to the nil-theta functions by
.
See also
Nilmanifold
Nilpotent group
Nilpotent Lie algebra
Weil representation
Theta representation
Oscillator representation
References
Harmonic analysis
Representation theory | Weil–Brezin Map | [
"Mathematics"
] | 1,155 | [
"Representation theory",
"Fields of abstract algebra"
] |
43,180,070 | https://en.wikipedia.org/wiki/Actinide%20chemistry | Actinide chemistry (or actinoid chemistry) is one of the main branches of nuclear chemistry that investigates the processes and molecular systems of the actinides. The actinides derive their name from the group 3 element actinium. The informal chemical symbol An is used in general discussions of actinide chemistry to refer to any actinide. All but one of the actinides are f-block elements, corresponding to the filling of the 5f electron shell; lawrencium, a d-block element, is also generally considered an actinide. In comparison with the lanthanides, also mostly f-block elements, the actinides show much more variable valence. The actinide series encompasses the 15 metallic chemical elements with atomic numbers from 89 to 103, actinium through lawrencium.
Main branches
Organoactinide chemistry
In contrast to the relatively early flowering of organotransition-metal chemistry (1955 to the present), the corresponding development of actinide organometallic chemistry has taken place largely within the past 15 or so years. During this period, 5f organometallic science has blossomed, and it is now apparent that the actinides have a rich, intricate, and highly informative organometallic chemistry. Intriguing parallels to and sharp differences from the d-block elements have emerged. Actinides can coordinate the organic active groups or bind to carbon by the covalent bonds.
Thermodynamics of actinides
The necessity of obtaining accurate thermodynamic quantities for the actinide elements and their compounds was recognized at the outset of the Manhattan Project, when a dedicated team of scientists and engineers initiated the program to exploit nuclear energy for military purposes. Since the end of World War II, both fundamental and applied objectives have motivated a great deal of further study of actinide thermodynamics.
Nanotechnology and supramolecular chemistry of actinides
The possibility of using unique properties of lanthanides in the nanotechnology is demonstrated. The origination of linear and nonlinear optical properties of lanthanide compounds with phthalocyanines, porphyrins, naphthalocyanines, and their analogs in solutions and condensed state and the prospects of obtaining novel materials on their basis are discussed. Based on the electronic structure and properties of lanthanides and their compounds, namely, optical and magnetic characteristics, electronic and ionic conductivity, and fluctuating valence, molecular engines are classified. High-speed storage engines or memory storage engines; photoconversion molecular engines based on Ln(II) and Ln(III); electrochemical molecular engines involving silicate and phosphate glasses; molecular engines whose operation is based on insulator – semiconductor, semiconductor – metal, and metal – superconductor types of conductivity phase transitions; solid electrolyte molecular engines; and miniaturized molecular engines for medical analysis are distinguished. It is shown that thermodynamically stable nanoparticles of LnxMy composition can be formed by d elements of the second halves of the series, i.e., those arranged after M = Mn, Tc, and Re.
Biological and environmental chemistry of actinides
Generally, ingested insoluble actinide compounds such as high-fired uranium dioxide and mixed oxide (MOX) fuel will pass through the digestive system with little effect since they cannot dissolve and be absorbed by the body. Inhaled actinide compounds, however, will be more damaging as they remain in the lungs and irradiate the lung tissue. Ingested low-fired oxides and soluble salts such as nitrate can be absorbed into the blood stream. If they are inhaled, then it is possible for the solid to dissolve and leave the lungs. Hence the dose to the lungs will be lower for the soluble form.
Radon and radium are not actinides—they are both radioactive daughters from the decay of uranium. Aspects of their biology and environmental behaviour are discussed at radium in the environment.
In India, a large amount of thorium ore can be found in the form of monazite in placer deposits of the Western and Eastern coastal dune sands, particularly in the Tamil Nadu coastal areas. The residents of this area are exposed to a naturally occurring radiation dose ten times higher than the worldwide average.
Thorium has been linked to liver cancer. In the past thoria (thorium dioxide) was used as a contrast agent for medical X-ray radiography but its use has been discontinued. It was sold under the name Thorotrast.
Uranium is about as abundant as arsenic or molybdenum. Significant concentrations of uranium occur in some substances such as phosphate rock deposits, and minerals such as lignite, and monazite sands in uranium-rich ores (it is recovered commercially from these sources).
Seawater contains about 3.3 parts per billion of uranium by weight as uranium(VI) forms soluble carbonate complexes. The extraction of uranium from seawater has been considered as a means of obtaining the element. Because of the very low specific activity of uranium the chemical effects of it upon living things can often outweigh the effects of its radioactivity.
Plutonium, like other actinides, readily forms a plutonium dioxide (plutonyl) core (PuO2). In the environment, this plutonyl core readily complexes with carbonate as well as other oxygen moieties (OH−, NO2−, NO3−, and SO42−) to form charged complexes which can be readily mobile with low affinities to soil.
Nuclear reactions
Some early evidence for nuclear fission was the formation of a short-lived radioisotope of barium which was isolated from neutron irradiated uranium (139Ba, with a half-life of 83 minutes and 140Ba, with a half-life of 12.8 days, are major fission products of uranium). At the time, it was thought that this was a new radium isotope, as it was then standard radiochemical practice to use a barium sulfate carrier precipitate to assist in the isolation of radium.
PUREX
The PUREX process is a liquid–liquid extraction ion-exchange method used to reprocess spent nuclear fuel, in order to extract primarily uranium and plutonium, independent of each other, from the other constituents. The current method of choice is to use the PUREX liquid–liquid extraction process which uses a tributyl phosphate/hydrocarbon mixture to extract both uranium and plutonium from nitric acid. This extraction is of the nitrate salts and is classed as being of a solvation mechanism. For example, the extraction of plutonium by an extraction agent (S) in a nitrate medium occurs by the following reaction.
Pu4+(aq) + 4 NO3−(aq) + 2 S(organic) → [Pu(NO3)4·2S](organic)
A complex bond is formed between the metal cation, the nitrates and the tributyl phosphate, and a model compound of a dioxouranium(VI) complex with two nitrates and two triethyl phosphates has been characterised by X-ray crystallography. After the dissolution step it is normal to remove the fine insoluble solids, because otherwise they will disturb the solvent extraction process by altering the liquid-liquid interface. It is known that the presence of a fine solid can stabilize an emulsion. Emulsions are often referred to as third phases in the solvent extraction community.
An organic solvent composed of 30% tributyl phosphate (TBP) in a hydrocarbon solvent, such as kerosene, is used to extract the uranium as UO2(NO3)2·2TBP complexes, and plutonium as similar complexes, from other fission products, which remain in the aqueous phase. The transuranium elements americium and curium also remain in the aqueous phase. The nature of the organic soluble uranium complex has been the subject of some research. A series of complexes of uranium with nitrate and trialkyl phosphates and phosphine oxides have been characterized.
Plutonium is separated from uranium by treating the kerosene solution with aqueous ferrous sulphamate, which selectively reduces the plutonium to the +3 oxidation state. The plutonium passes into the aqueous phase. The uranium is stripped from the kerosene solution by back-extraction into nitric acid at a concentration of ca. 0.2 mol/dm3.
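A back-of-the-envelope sketch of why counter-current staging separates so cleanly (the distribution ratios and stage count below are illustrative assumptions, not process data):

```python
def fraction_extracted(D, n_stages, flow_ratio=1.0):
    """Kremser-type estimate of the fraction of a solute carried into the
    organic phase after n counter-current stages; E = D * (organic/aqueous)."""
    E = D * flow_ratio
    if E == 1.0:
        return n_stages / (n_stages + 1)
    return (E**(n_stages + 1) - E) / (E**(n_stages + 1) - 1)

# U(VI) and Pu(IV) partition strongly into 30% TBP/kerosene from nitric acid,
# while typical fission products do not (hypothetical D values).
for name, D in [("U(VI)", 20.0), ("Pu(IV)", 10.0), ("fission product", 0.01)]:
    print(name, round(fraction_extracted(D, n_stages=4), 4))
```

Even a handful of stages drives the uranium and plutonium recovery toward 100% while leaving the poorly extracted species almost entirely in the aqueous raffinate.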
See also
Nuclear chemistry
Actinides in the environment
Important publications in nuclear chemistry
References
Nuclear chemistry
Chemistry
Actinides | Actinide chemistry | [
"Physics",
"Chemistry"
] | 1,759 | [
"Nuclear chemistry",
"nan",
"Nuclear physics"
] |
43,183,712 | https://en.wikipedia.org/wiki/Bioluminescent%20bacteria | Bioluminescent bacteria are light-producing bacteria that are predominantly present in sea water, marine sediments, the surface of decomposing fish and in the gut of marine animals. While not as common, bacterial bioluminescence is also found in terrestrial and freshwater bacteria. These bacteria may be free living (such as Vibrio harveyi) or in symbiosis with animals such as the Hawaiian Bobtail squid (Aliivibrio fischeri) or terrestrial nematodes (Photorhabdus luminescens). The host organisms provide these bacteria a safe home and sufficient nutrition. In exchange, the hosts use the light produced by the bacteria for camouflage, prey and/or mate attraction. Bioluminescent bacteria have evolved symbiotic relationships with other organisms in which both participants benefit each other equally. Bacteria also use luminescence reaction for quorum sensing, an ability to regulate gene expression in response to bacterial cell density.
History
Records of bioluminescence due to bacteria have existed for thousands of years. They appear in the folklore of many regions, including Scandinavia and the Indian subcontinent. Both Aristotle and Charles Darwin have described the phenomenon of the glowing oceans which is most likely due to these light-producing organisms. Since its discovery less than 30 years ago, the enzyme luciferase and its regulatory gene, lux, have led to major advances in molecular biology, through its use as a reporter gene. Luciferase was first purified by McElroy and Green in 1955. It was later discovered that there were two subunits to luciferase, called subunits α and β. The genes encoding these enzymes, luxA and luxB, respectively, were first isolated in the lux operon of Aliivibrio fisheri.
Purpose of bio-luminescence
The wide-ranged biological purposes of bio-luminescence include but are not limited to attraction of mates, defense against predators, and warning signals. In the case of bioluminescent bacteria, bio-luminescence mainly serves as a form of dispersal. It has been hypothesized that enteric bacteria (bacteria that survive in the guts of other organisms) - especially those prevalent in the depths of the ocean - employ bio-luminescence as an effective form of distribution. After making their way into the digestive tracts of fish and other marine organisms and being excreted in fecal pellets, bioluminescent bacteria are able to utilize their bio-luminescent capabilities to lure in other organisms and prompt ingestion of these bacterial-containing fecal pellets. The bio-luminescence of bacteria thereby ensures their survival, persistence, and dispersal as they are able to enter and inhabit other organisms.
Regulation of bio-luminescence
The regulation of bio-luminescence in bacteria is achieved through the regulation of the oxidative enzyme called luciferase. It is important that bio-luminescent bacteria decrease production rates of luciferase when the population is sparse in number in order to conserve energy. Thus, bacterial bioluminescence is regulated by means of chemical communication referred to as quorum sensing. Essentially, certain signaling molecules named autoinducers with specific bacterial receptors become activated when the population density of bacteria is high enough. The activation of these receptors leads to a coordinated induction of luciferase production that ultimately yields visible luminescence.
Biochemistry of bio-luminescence
The chemical reaction that is responsible for bio-luminescence is catalyzed by the enzyme luciferase. In the presence of oxygen, luciferase catalyzes the oxidation of an organic molecule called luciferin. Though bio-luminescence across a diverse range of organisms such as bacteria, insects, and dinoflagellates function in this general manner (utilizing luciferase and luciferin), there are different types of luciferin-luciferase systems. For bacterial bio-luminescence specifically, the biochemical reaction involves the oxidation of an aliphatic aldehyde by a reduced flavin mononucleotide. The products of this oxidation reaction include an oxidized flavin mononucleotide, a fatty acid chain, and energy in the form of a blue-green visible light.
Reaction: FMNH2 + O2 + RCHO → FMN + RCOOH + H2O + light
Evolution of bio-luminescence
Of all light emitters in the ocean, bio-luminescent bacteria are the most abundant and diverse. However, the distribution of bio-luminescent bacteria is uneven, which suggests evolutionary adaptations. The bacterial species in terrestrial genera such as Photorhabdus are bio-luminescent. On the other hand, marine genera with bio-luminescent species such as Vibrio and Shewanella oneidensis have different closely related species that are not light emitters. Nevertheless, all bio-luminescent bacteria share a common gene sequence: the enzymatic oxidation of aldehyde and reduced flavin mononucleotide by luciferase, which are contained in the lux operon. Bacteria from distinct ecological niches contain this gene sequence; therefore, the identical gene sequence evidently suggests that bio-luminescent bacteria result from evolutionary adaptations.
Use as laboratory tool
After the discovery of the lux operon, the use of bioluminescent bacteria as a laboratory tool is claimed to have revolutionized the area of environmental microbiology. The applications of bioluminescent bacteria include biosensors for detection of contaminants, measurement of pollutant toxicity and monitoring of genetically engineered bacteria released into the environment. Biosensors, created by placing a lux gene construct under the control of an inducible promoter, can be used to determine the concentration of specific pollutants. Biosensors are also able to distinguish between pollutants that are bioavailable and those that are inert and unavailable. For example, Pseudomonas fluorescens has been genetically engineered to be capable of degrading salicylate and naphthalene, and is used as a biosensor to assess the bioavailability of salicylate and naphthalene. Biosensors can also be used as an indicator of cellular metabolic activity and to detect the presence of pathogens.
Evolution
The light-producing chemistry behind bioluminescence varies across the lineages of bioluminescent organisms. Based on this observation, bioluminescence is believed to have evolved independently at least 40 times. In bioluminescent bacteria, the reclassification of the members of the Vibrio fischeri species group as a new genus, Aliivibrio, has led to increased interest in the evolutionary origins of bioluminescence. Among bacteria, the distribution of bioluminescent species is polyphyletic. For instance, while all species in the terrestrial genus Photorhabdus are luminescent, the genera Aliivibrio, Photobacterium, Shewanella and Vibrio contain both luminous and non-luminous species. Despite bioluminescence in bacteria not sharing a common origin, they all share a gene sequence in common. The appearance of the highly conserved lux operon in bacteria from very different ecological niches suggests a strong selective advantage despite the high energetic costs of producing light. DNA repair is thought to be the initial selective advantage for light production in bacteria. Consequently, the lux operon may have been lost in bacteria that evolved more efficient DNA repair systems but retained in those where visible light became a selective advantage. The evolution of quorum sensing is believed to have afforded further selective advantage for light production. Quorum sensing allows bacteria to conserve energy by ensuring that they do not synthesize light-producing chemicals unless a sufficient concentration are present to be visible.
Bacterial groups that exhibit bioluminescence
All bacterial species that have been reported to possess bioluminescence belong within the families Vibrionaceae, Shewanellaceae, or Enterobacteriaceae, all of which are assigned to the class Gammaproteobacteria.
(List from Dunlap and Henryk (2013), "Luminous Bacteria", The Prokaryotes)
Distribution
Bioluminescent bacteria are most abundant in marine environments during spring blooms when there are high nutrient concentrations. These light-emitting organisms are found mainly in coastal waters near the outflow of rivers, such as the northern Adriatic Sea, Gulf of Trieste, northwestern part of the Caspian Sea, coast of Africa and many more. These are known as milky seas. Bioluminescent bacteria are also found in freshwater and terrestrial environments but are less wide spread than in seawater environments. They are found globally, as free-living, symbiotic or parasitic forms and possibly as opportunistic pathogens. Factors that affect the distribution of bioluminescent bacteria include temperature, salinity, nutrient concentration, pH level and solar radiation. For example, Aliivibrio fischeri grows favourably in environments that have temperatures between 5 and 30 °C and a pH that is less than 6.8; whereas, Photobacterium phosphoreum thrives in conditions that have temperatures between 5 and 25 °C and a pH that is less than 7.0.
Genetic diversity
All bioluminescent bacteria share a common gene sequence: the lux operon characterized by the luxCDABE gene organization. LuxAB codes for luciferase while luxCDE codes for a fatty-acid reductase complex that is responsible for synthesizing aldehydes for the bioluminescent reaction. Despite this common gene organization, variations, such as the presence of other lux genes, can be observed among species. Based on similarities in gene content and organization, the lux operon can be organized into the following four distinct types: the Aliivibrio/Shewanella type, the Photobacterium type, the Vibrio/Candidatus Photodesmus type, and the Photorhabdus type. While this organization follows the genera classification level for members of Vibrionaceae (Aliivibrio, Photobacterium, and Vibrio), its evolutionary history is not known.
With the exception of the Photorhabdus operon type, all variants of the lux operon contain the flavin reductase-encoding luxG gene. Most of the Aliivibrio/Shewanella type operons contain additional luxI/luxR regulatory genes that are used for autoinduction during quorum sensing. The Photobacterium operon type is characterized by the presence of rib genes that code for riboflavin, and forms the lux-rib operon. The Vibrio/Candidatus Photodesmus operon type differs from both the Aliivibrio/Shewanella and the Photobacterium operon types in that the operon has no regulatory genes directly associated with it.
Mechanism
All bacterial luciferases are approximately 80 kDa heterodimers containing two subunits: α and β. The α subunit is responsible for light emission. The luxA and luxB genes encode the α and β subunits, respectively. In most bioluminescent bacteria, the luxA and luxB genes are flanked upstream by luxC and luxD and downstream by luxE.
The bioluminescent reaction is as follows:
FMNH2 + O2 + R-CHO → FMN + H2O + R-COOH + Light (~ 495 nm)
Molecular oxygen reacts with FMNH2 (reduced flavin mononucleotide) and a long-chain aldehyde to produce FMN (flavin mononucleotide), water and a corresponding fatty acid. The blue-green light emission of bioluminescence, such as that produced by Photobacterium phosphoreum and Vibrio harveyi, results from this reaction. Because light emission involves expending six ATP molecules for each photon, it is an energetically expensive process. For this reason, light emission is not constitutively expressed in bioluminescent bacteria; it is expressed only when physiologically necessary.
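The six-ATP energy budget quoted above is easy to put in physical terms (a rough sketch; the ~50 kJ/mol free energy of ATP hydrolysis under cellular conditions is an assumed textbook value):

```python
import scipy.constants as sc

wavelength = 495e-9                        # blue-green emission peak (m)
photon = sc.h * sc.c / wavelength          # energy per emitted photon (J)
atp = 50e3 / sc.Avogadro                   # ~50 kJ/mol per ATP in vivo (assumed)
print(f"photon: {photon:.2e} J, six ATP: {6 * atp:.2e} J")
# ~4.0e-19 J per photon against ~5.0e-19 J invested, so a large fraction of
# the expended free energy can end up in the emitted light.
```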
Quorum sensing
Bioluminescence in bacteria can be regulated through a phenomenon known as autoinduction or quorum sensing. Quorum sensing is a form of cell-to-cell communication that alters gene expression in response to cell density. Autoinducer is a diffusible pheromone produced constitutively by bioluminescent bacteria and serves as an extracellular signalling molecule. When the concentration of autoinducer secreted by bioluminescent cells in the environment reaches a threshold (above 107 cells per mL), it induces the expression of luciferase and other enzymes involved in bioluminescence. Bacteria are able to estimate their density by sensing the level of autoinducer in the environment and regulate their bioluminescence such that it is expressed only when there is a sufficiently high cell population. A sufficiently high cell population ensures that the bioluminescence produced by the cells will be visible in the environment.
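A minimal simulation of this threshold behaviour (all parameter values are arbitrary illustrative assumptions): cells grow logistically, autoinducer accumulates in proportion to cell density, and the lux genes switch on once the autoinducer concentration crosses a threshold.

```python
dt, T = 0.01, 20.0
r, K = 1.0, 1e9            # growth rate (1/h), carrying capacity (cells/mL)
k_ai, k_deg = 1e-9, 0.1    # autoinducer production and decay rates
threshold = 0.5            # autoinducer level that triggers the lux operon

N, AI, lit_at = 1e4, 0.0, None
for step in range(int(T / dt)):
    N += dt * r * N * (1 - N / K)          # logistic growth
    AI += dt * (k_ai * N - k_deg * AI)     # autoinducer accumulation and decay
    if lit_at is None and AI > threshold:
        lit_at = step * dt                 # luminescence switches on here
print(f"luminescence induced at t = {lit_at:.2f} h, N = {N:.2e} cells/mL")
```

The switch fires only late in growth, at high density, mirroring the observation that free-living cells at low density stay dark.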
A well known example of quorum sensing is that which occurs between Aliivibrio fischeri and its host. This process is regulated by LuxI and LuxR, encoded by luxI and luxR respectively. LuxI is autoinducer synthase that produces autoinducer (AI) while LuxR functions as both a receptor and transcription factor for the lux operon. When LuxR binds AI, LuxR-AI complex activates transcription of the lux operon and induces the expression of luciferase. Using this system, A. fischeri has shown that bioluminescence is expressed only when the bacteria are host-associated and have reached sufficient cell densities.
Another example of quorum sensing by bioluminescent bacteria is by Vibrio harveyi, which are known to be free-living. Unlike Aliivibrio fischeri, V. harveyi do not possess the luxI/luxR regulatory genes and therefore have a different mechanism of quorum sensing regulation. Instead, they use the system known as three-channel quorum sensing system. Vibrio use small non-coding RNAs called Qrr RNAs to regulate quorum sensing, using them to control translation of energy-costly molecules.
Role
The uses of bioluminescence and its biological and ecological significance for animals, including host organisms for bacteria symbiosis, have been widely studied. The biological role and evolutionary history for specifically bioluminescent bacteria still remains quite mysterious and unclear. However, there are continually new studies being done to determine the impacts that bacterial bioluminescence can have on our constantly changing environment and society. Aside from the many scientific and medical uses, scientists have also recently begun to come together with artists and designers to explore new ways of incorporating bioluminescent bacteria, as well as bioluminescent plants, into urban light sources to reduce the need for electricity. They have also begun to use bioluminescent bacteria as a form of art and urban design for the wonder and enjoyment of human society.
One explanation for the role of bacterial bioluminescence is from the biochemical aspect. Several studies have shown the biochemical roles of the luminescence pathway. It can function as an alternate pathway for electron flow under low oxygen concentration, which can be advantageous when no fermentable substrate is available. In this process, light emission is a side product of the metabolism.
Evidence also suggests that bacterial luciferase contributes to the resistance of oxidative stress. In laboratory culture, luxA and luxB mutants of Vibrio harveyi, which lacked luciferase activity, showed impairment of growth under high oxidative stress compared to wild type. The luxD mutants, which had an unaffected luciferase but were unable to produce luminescence, showed little or no difference. This suggests that luciferase mediates the detoxification of reactive oxygen.
Bacterial bioluminescence has also been proposed to be a source of internal light in photoreactivation, a DNA repair process carried out by photolyase. Experiments have shown that non-luminescent V. harveyi mutants are more sensitive to UV irradiation, suggesting the existence of a bioluminescent-mediated DNA repair system.
Another hypothesis, called the "bait hypothesis", is that bacterial bioluminescence attracts predators who will assist in their dispersal. They are either directly ingested by fish or indirectly ingested by zooplankton that will eventually be consumed by higher trophic levels. Ultimately, this may allow passage into the fish gut, a nutrient-rich environment where the bacteria can divide, be excreted, and continue their cycle. Experiments using luminescent Photobacterium leiognathi and non-luminescent mutants have shown that luminescence attracts zooplankton and fish, thus supporting this hypothesis.
Symbiosis with other organisms
The symbiotic relationship between the Hawaiian bobtail squid Euprymna scolopes and the marine gram-negative bacterium Aliivibrio fischeri has been well studied. The two organisms exhibit a mutualistic relationship in which bioluminescence produced by A. fischeri helps to attract prey to the squid host, which provides nutrient-rich tissues and a protected environment for A. fischeri. Bioluminescence provided by A. fischeri also aids in the defense of the squid E. scolopes by providing camouflage during its nighttime foraging activity. Following bacterial colonization, the specialized organs of the squid undergo developmental changes and a relationship becomes established. The squid expels 90% of the bacterial population each morning, because it no longer needs to produce bioluminescence in the daylight. This expulsion benefits the bacteria by aiding in their dissemination. A single expulsion by one bobtail squid produces enough bacterial symbionts to fill 10,000 m3 of seawater at a concentration that is comparable to what is found in coastal waters. Thus, in at least some habitats, the symbiotic relationship between A. fischeri and E. scolopes plays a key role in determining the abundance and distribution of E. scolopes. There is a higher abundance of A. fischeri in the vicinity of a population of E. scolopes and this abundance markedly decreases with increasing distance from the host's habitat.
Bioluminescent Photobacterium species also engage in mutually beneficial associations with fish and squid. Dense populations of P. kishitanii, P. leiogathi, and P. mandapamensis can live in the light organs of marine fish and squid, and are provided with nutrients and oxygen for reproduction in return for providing bioluminescence to their hosts, which can aid in sex-specific signaling, predator avoidance, locating or attracting prey, and schooling. Meyer-Rochow reported in 1976 that if the fish cannot obtain food and is starving, the light of its bioluminescent symbiont becomes increasingly dim until the light emission stops altogether.
See also
Bioluminescence
Bioluminescent shunt hypothesis
List of bioluminescent organisms
References
Further reading
Bioluminescence
Bacteria | Bioluminescent bacteria | [
"Chemistry",
"Biology"
] | 3,946 | [
"Luminescence",
"Bioluminescence",
"Prokaryotes",
"Bacteria",
"Biochemistry",
"Microorganisms"
] |
4,039,848 | https://en.wikipedia.org/wiki/Hunsdiecker%20reaction | The Hunsdiecker reaction (also called the Borodin reaction or the Hunsdiecker–Borodin reaction) is a name reaction in organic chemistry whereby silver salts of carboxylic acids react with a halogen to produce an organic halide. It is an example of both a decarboxylation and a halogenation reaction as the product has one fewer carbon atoms than the starting material (lost as carbon dioxide) and a halogen atom is introduced its place. A catalytic approach has been developed.
History
The reaction is named after Cläre Hunsdiecker and her husband Heinz Hunsdiecker, whose work in the 1930s developed it into a general method.
The reaction was first demonstrated by Alexander Borodin in 1861 in his reports of the preparation of methyl bromide (CH3Br) from silver acetate (CH3CO2Ag).
Three decades later, Angelo Simonini, working as a student of Adolf Lieben at the University of Vienna, investigated the reactions of silver carboxylates with iodine. He found that the products formed are determined by the stoichiometry within the reaction mixture. Using a carboxylate-to-iodine ratio of 1:1 leads to an alkyl iodide product, in line with Borodin's findings and the modern understanding of the Hunsdiecker reaction. However, a 2:1 ratio favours the formation of an ester product that arises from decarboxylation of one carboxylate and coupling the resulting alkyl chain with the other.
Using a 3:2 ratio of reactants leads to the formation of a 1:1 mixture of both products. These processes are sometimes known as the Simonini reaction rather than as modifications of the Hunsdiecker reaction.
3 RCO2Ag + 2 I2 → R–I + RCO2R + 2 CO2 + 3 AgI
Reaction mechanism
In terms of reaction mechanism, the Hunsdiecker reaction is believed to involve organic radical intermediates. The silver salt 1 reacts with bromine to form the acyl hypohalite intermediate 2. Formation of the diradical pair 3 allows for radical decarboxylation to form the diradical pair 4, which recombines to form the organic halide 5. The trend in the yield of the resulting halide is primary > secondary > tertiary.
Variations
The reaction cannot be performed in protic solvents, as these induce decomposition of the intermediate acetyl hypohalite.
Other counterions than silver typically have slow reaction rates. The toxic relativistic metals (mercury, thallium, and lead) are preferred: inert counterions, such as the alkali metals, have only rarely led to reported success. The Kochi reaction is a variation on the Hunsdiecker reaction developed by Jay Kochi that uses lead(IV) acetate and lithium chloride (lithium bromide can also be used) to effect the halogenation and decarboxylation.
In the presence of multiple bonds, the intermediate acetyl hypohalite prefers to add across the multiple bond, producing an α-haloester. Steric considerations suppress this tendency in α,β-unsaturated carboxylic acids, which instead polymerize (see below).
Mercuric oxide and bromine convert 3-chlorocyclobutanecarboxylic acid to 1-bromo-3-chlorocyclobutane. This is known as Cristol-Firth modification. The 1,3-dihalocyclobutanes were key precursors to propellanes. The reaction has been applied to the preparation of ω-bromo esters with chain lengths between five and seventeen carbon atoms, with the preparation of methyl 5-bromovalerate published in Organic Syntheses as an exemplar.
Reaction with α,β-unsaturated carboxylic acids
For unsaturated compounds, the radical conditions associated with the Hunsdiecker reaction can also induce polymerization instead of decarboxylation. Consequently, reactions with α,β-unsaturated carboxylic acids typically give low yields. Kuang et al. have found that an alternate radical halogenating agent, N-halosuccinimide, combined with a lithium acetate catalyst, gives a higher yield of β-halostyrenes. The reaction also improves in the presence of microwave irradiation, which preferentially synthesizes (E)-β-arylvinyl halides.
For a green, metal-free reaction, tetrabutylammonium trifluoroacetate serves as an alternative catalyst. However, it only exhibits yields comparable to the original lithium acetate when performed with micellar surfactants.
See also
Barton decarboxylation
Barton–McCombie deoxygenation
References
External links
Animation of the reaction mechanism
1861 in science
1861 introductions
Free radical reactions
Halogenation reactions
Name reactions
Alexander Borodin | Hunsdiecker reaction | [
"Chemistry"
] | 1,018 | [
"Name reactions",
"Free radical reactions",
"Organic reactions"
] |
4,040,947 | https://en.wikipedia.org/wiki/Maupertuis%27s%20principle | In classical mechanics, Maupertuis's principle (named after Pierre Louis Maupertuis, 1698 – 1759) states that the path followed by a physical system is the one of least length (with a suitable interpretation of path and length). It is a special case of the more generally stated principle of least action. Using the calculus of variations, it results in an integral equation formulation of the equations of motion for the system.
Mathematical formulation
Maupertuis's principle states that the true path of a system described by \(N\) generalized coordinates \(\mathbf{q} = (q_1, \ldots, q_N)\) between two specified states \(\mathbf{q}_1\) and \(\mathbf{q}_2\) is a minimum or a saddle point of the abbreviated action functional,
\[ \mathcal{S}_0[\mathbf{q}] \;=\; \int \mathbf{p} \cdot d\mathbf{q}, \]
where \(\mathbf{p} = (p_1, \ldots, p_N)\) are the conjugate momenta of the generalized coordinates, defined by the equation
\[ p_j \;=\; \frac{\partial L}{\partial \dot{q}_j}, \]
where \(L(\mathbf{q}, \dot{\mathbf{q}}, t)\) is the Lagrangian function for the system. In other words, any first-order perturbation of the path results in (at most) second-order changes in \(\mathcal{S}_0\). Note that the abbreviated action \(\mathcal{S}_0\) is a functional (i.e. a function from a vector space into its underlying scalar field), which in this case takes as its input a function (i.e. the paths between the two specified states).
Jacobi's formulation
For many systems, the kinetic energy \(T\) is quadratic in the generalized velocities \(\dot{\mathbf{q}}\),
\[ T \;=\; \frac{1}{2} \sum_{ij} M_{ij}(\mathbf{q})\, \dot{q}_i \dot{q}_j, \]
although the mass tensor \(M_{ij}\) may be a complicated function of the generalized coordinates \(\mathbf{q}\). For such systems, a simple relation relates the kinetic energy, the generalized momenta and the generalized velocities,
\[ 2T \;=\; \sum_j p_j \dot{q}_j, \]
provided that the potential energy \(V(\mathbf{q})\) does not involve the generalized velocities. By defining a normalized distance or metric \(ds\) in the space of generalized coordinates
\[ ds^2 \;=\; \sum_{ij} M_{ij}(\mathbf{q})\, dq_i\, dq_j, \]
one may immediately recognize the mass tensor as a metric tensor. The kinetic energy may be written in a massless form
\[ T \;=\; \frac{1}{2} \left( \frac{ds}{dt} \right)^2, \]
or,
\[ 2T\, dt \;=\; \sqrt{2T}\; ds. \]
Therefore, the abbreviated action can be written
\[ \mathcal{S}_0 \;=\; \int \mathbf{p} \cdot d\mathbf{q} \;=\; \int 2T\, dt \;=\; \int \sqrt{2T}\; ds \;=\; \int \sqrt{2\big(E - V(\mathbf{q})\big)}\; ds, \]
since the kinetic energy \(T = E - V(\mathbf{q})\) equals the (constant) total energy \(E\) minus the potential energy \(V(\mathbf{q})\). In particular, if the potential energy is a constant, then Jacobi's principle reduces to minimizing the path length in the space of the generalized coordinates, which is equivalent to Hertz's principle of least curvature.
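A numerical illustration of Jacobi's formulation (a sketch under stated assumptions: unit mass, a toy quadratic potential, and a crude discretization): fix the endpoints and the energy, and minimize the abbreviated action over the interior points of a discretized path.

```python
import numpy as np
from scipy.optimize import minimize

V = lambda p: 0.5 * (p[..., 0]**2 + p[..., 1]**2)   # toy 2-D potential, unit mass
E = 2.0                                             # fixed total energy
a, b, n = np.array([-1.0, 0.0]), np.array([1.0, 0.0]), 30

def abbreviated_action(flat):
    pts = np.vstack([a, flat.reshape(-1, 2), b])    # endpoints held fixed
    mid = 0.5 * (pts[1:] + pts[:-1])                # segment midpoints
    ds = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    # S0 = integral of sqrt(2(E - V)) ds, evaluated segment by segment.
    return np.sum(np.sqrt(2.0 * np.clip(E - V(mid), 1e-12, None)) * ds)

x0 = np.linspace(a, b, n + 2)[1:-1] + [0.0, 0.1]    # slightly bent initial path
res = minimize(abbreviated_action, x0.ravel(), method="BFGS")
print("minimal abbreviated action:", res.fun)
```

The minimizer returns only the shape of the trajectory; the time parametrization would have to be recovered separately from energy conservation, which is exactly the contrast with Hamilton's principle drawn below.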
Comparison with Hamilton's principle
Hamilton's principle and Maupertuis's principle are occasionally confused with each other and both have been called the principle of least action. They differ from each other in three important ways:
their definition of the action: Hamilton's principle uses \(\mathcal{S} = \int L\, dt\), the time integral of the Lagrangian, whereas Maupertuis's principle uses the abbreviated action \(\mathcal{S}_0 = \int \mathbf{p} \cdot d\mathbf{q}\), the integral of the generalized momenta along the path;
the solution that they determine: Hamilton's principle determines the trajectory \(\mathbf{q}(t)\) as a function of time, whereas Maupertuis's principle determines only the shape of the trajectory in the generalized coordinates;
and the constraints on the variation: Hamilton's principle holds the end times and endpoint states fixed while allowing the energy to vary, whereas Maupertuis's principle holds the total energy and the endpoint states fixed while placing no constraint on the elapsed time.
History
Maupertuis was the first to publish a principle of least action, as a way of adapting Fermat's principle for waves to a corpuscular (particle) theory of light. Pierre de Fermat had explained Snell's law for the refraction of light by assuming light follows the path of shortest time, not distance. This troubled Maupertuis, since he felt that time and distance should be on an equal footing: "why should light prefer the path of shortest time over that of distance?" Maupertuis defined his action as \(\int v\, ds\), which was to be minimized over all paths connecting two specified points. Here \(v\) is the velocity of light in the corpuscular theory. Fermat had minimized \(\int ds/u\), where \(u\) is the wave velocity; the two velocities are reciprocal, so the two forms are equivalent.
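The reciprocity argument can be made explicit (a short restatement, writing v for the corpuscle speed and u for the wave speed):

```latex
\text{Maupertuis:}\quad \delta \int v \, ds = 0,
\qquad
\text{Fermat:}\quad \delta \int dt = \delta \int \frac{ds}{u} = 0.
% Under the corpuscular assumption v \propto 1/u, the two integrands are
% proportional, so both variational principles select the same refracted path.
```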
Koenig's claim
In 1751, Maupertuis's priority for the principle of least action was challenged in print (Nova Acta Eruditorum of Leipzig) by an old acquaintance, Johann Samuel Koenig, who quoted a 1707 letter purportedly from Gottfried Wilhelm Leibniz to Jakob Hermann that described results similar to those derived by Leonhard Euler in 1744.
Maupertuis and others demanded that Koenig produce the original of the letter to authenticate its having been written by Leibniz. Leibniz died in 1716 and Hermann in 1733, so neither could vouch for Koenig. Koenig claimed to have the letter copied from the original owned by Samuel Henzi, and gave no clue as to the whereabouts of the original, as Henzi had been executed in 1749 for organizing the Henzi conspiracy to overthrow the aristocratic government of Bern. Subsequently, the Berlin Academy under Euler's direction declared the letter to be a forgery and that Maupertuis could continue to claim priority for having invented the principle. Curiously, Voltaire got involved in the quarrel by composing Diatribe du docteur Akakia ("Diatribe of Doctor Akakia") to satirize Maupertuis's scientific theories (not limited to the principle of least action). While this work damaged Maupertuis's reputation, his claim to priority for least action remains secure.
See also
Analytical mechanics
Hamilton's principle
Gauss's principle of least constraint (also describes Hertz's principle of least curvature)
Hamilton–Jacobi equation
References
Pierre Louis Maupertuis, Accord de différentes loix de la nature qui avoient jusqu'ici paru incompatibles (original 1744 French text); Accord between different laws of Nature that seemed incompatible (English translation)
Leonhard Euler, Methodus inveniendi/Additamentum II (original 1744 Latin text); Methodus inveniendi/Appendix 2 (English translation)
Pierre Louis Maupertuis, Les loix du mouvement et du repos déduites d'un principe metaphysique (original 1746 French text); Derivation of the laws of motion and equilibrium from a metaphysical principle (English translation)
Leonhard Euler, Exposé concernant l'examen de la lettre de M. de Leibnitz (original 1752 French text); Investigation of the letter of Leibniz (English translation)
König J. S. "De universali principio aequilibrii et motus", Nova Acta Eruditorum, 1751, 125–135, 162–176.
J. J. O'Connor and E. F. Robertson, "The Berlin Academy and forgery", (2003), at The MacTutor History of Mathematics archive.
C. I. Gerhardt, (1898) "Über die vier Briefe von Leibniz, die Samuel König in dem Appel au public, Leide MDCCLIII, veröffentlicht hat", Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften, I, 419–427.
W. Kabitz, (1913) "Über eine in Gotha aufgefundene Abschrift des von S. König in seinem Streite mit Maupertuis und der Akademie veröffentlichten, seinerzeit für unecht erklärten Leibnizbriefes", Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften, II, 632–638.
L. D. Landau and E. M. Lifshitz, (1976) Mechanics, 3rd. ed., Pergamon Press, pp. 140–143. (hardcover) and (softcover)
G. C. J. Jacobi, Vorlesungen über Dynamik, gehalten an der Universität Königsberg im Wintersemester 1842–1843. A. Clebsch (ed.) (1866); Reimer; Berlin. 290 pages, available online Œuvres complètes volume 8 at Gallica-Math from the Gallica Bibliothèque nationale de France.
H. Hertz, (1896) Principles of Mechanics, in Miscellaneous Papers, vol. III, Macmillan.
Calculus of variations
Hamiltonian mechanics
Mathematical principles | Maupertuis's principle | [
"Physics",
"Mathematics"
] | 1,588 | [
"Mathematical principles",
"Theoretical physics",
"Classical mechanics",
"Hamiltonian mechanics",
"Dynamical systems"
] |
4,043,742 | https://en.wikipedia.org/wiki/Physics%20beyond%20the%20Standard%20Model | Physics beyond the Standard Model (BSM) refers to the theoretical developments needed to explain the deficiencies of the Standard Model, such as the inability to explain the fundamental parameters of the standard model, the strong CP problem, neutrino oscillations, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself: the Standard Model is inconsistent with that of general relativity, and one or both theories break down under certain conditions, such as spacetime singularities like the Big Bang and black hole event horizons.
Theories that lie beyond the Standard Model include various extensions of the standard model through supersymmetry, such as the Minimal Supersymmetric Standard Model (MSSM) and Next-to-Minimal Supersymmetric Standard Model (NMSSM), and entirely novel explanations, such as string theory, M-theory, and extra dimensions. As these theories tend to reproduce the entirety of current phenomena, the question of which theory is the right one, or at least the "best step" towards a Theory of Everything, can only be settled via experiments, and is one of the most active areas of research in both theoretical and experimental physics.
Problems with the Standard Model
Despite being the most successful theory of particle physics to date, the Standard Model is not perfect. A large share of the published output of theoretical physicists consists of proposals for various forms of "Beyond the Standard Model" new physics proposals that would modify the Standard Model in ways subtle enough to be consistent with existing data, yet address its imperfections materially enough to predict non-Standard Model outcomes of new experiments that can be proposed.
Phenomena not explained
The Standard Model is inherently an incomplete theory. There are fundamental physical phenomena in nature that the Standard Model does not adequately explain:
Gravity. The standard model does not explain gravity. The approach of simply adding a graviton to the Standard Model does not recreate what is observed experimentally without other modifications, as yet undiscovered, to the Standard Model. Moreover, the Standard Model is widely considered to be incompatible with the most successful theory of gravity to date, general relativity.
Dark matter. Assuming that general relativity and Lambda CDM are true, cosmological observations tell us the standard model explains about 5% of the mass-energy present in the universe. About 26% should be dark matter (the remaining 69% being dark energy) which would behave just like other matter, but which only interacts weakly (if at all) with the Standard Model fields. Yet, the Standard Model does not supply any fundamental particles that are good dark matter candidates.
Dark energy. As mentioned, the remaining 69% of the universe's energy should consist of the so-called dark energy, a constant energy density for the vacuum. Attempts to explain dark energy in terms of vacuum energy of the standard model lead to a mismatch of 120 orders of magnitude.
Neutrino oscillations. According to the Standard Model, neutrinos do not oscillate. However, experiments and astronomical observations have shown that neutrino oscillation does occur. These are typically explained by postulating that neutrinos have mass. Neutrinos do not have mass in the Standard Model, and mass terms for the neutrinos can be added to the Standard Model by hand, but these lead to new theoretical problems. For example, the mass terms need to be extraordinarily small and it is not clear if the neutrino masses would arise in the same way that the masses of other fundamental particles do in the Standard Model. There are also other extensions of the Standard Model for neutrino oscillations which do not assume massive neutrinos, such as Lorentz-violating neutrino oscillations.
Matter–antimatter asymmetry. The universe is made out of mostly matter. However, the standard model predicts that matter and antimatter should have been created in (almost) equal amounts if the initial conditions of the universe did not involve disproportionate matter relative to antimatter. Yet, there is no mechanism in the Standard Model to sufficiently explain this asymmetry.
Experimental results not explained
No experimental result is accepted as definitively contradicting the Standard Model at the 5σ level, widely considered to be the threshold of a discovery in particle physics. Because every experiment contains some degree of statistical and systematic uncertainty, and the theoretical predictions themselves are also almost never calculated exactly and are subject to uncertainties in measurements of the fundamental constants of the Standard Model (some of which are tiny and others of which are substantial), it is to be expected that some of the hundreds of experimental tests of the Standard Model will deviate from it to some extent, even if there were no new physics to be discovered.
At any given moment there are several experimental results standing that significantly differ from a Standard Model-based prediction. In the past, many of these discrepancies have been found to be statistical flukes or experimental errors that vanish as more data has been collected, or when the same experiments were conducted more carefully. On the other hand, any physics beyond the Standard Model would necessarily first appear in experiments as a statistically significant difference between an experiment and the theoretical prediction. The task is to determine which is the case.
In each case, physicists seek to determine if a result is merely a statistical fluke or experimental error on the one hand, or a sign of new physics on the other. More statistically significant results cannot be mere statistical flukes but can still result from experimental error or inaccurate estimates of experimental precision. Frequently, experiments are tailored to be more sensitive to experimental results that would distinguish the Standard Model from theoretical alternatives.
Some of the most notable examples include the following:
B meson decay etc. – results from a BaBar experiment may suggest a surplus over Standard Model predictions of a type of particle decay \(\bar{B} \to D^{(*)} \tau^- \bar{\nu}_\tau\). In this, an electron and positron collide, resulting in a B meson and an antimatter B̄ meson, which then decays into a D meson and a tau lepton as well as a tau antineutrino. While the level of certainty of the excess (3.4σ in statistical jargon) is not enough to declare a break from the Standard Model, the results are a potential sign of something amiss and are likely to affect existing theories, including those attempting to deduce the properties of Higgs bosons. In 2015, LHCb reported observing a 2.1σ excess in the same ratio of branching fractions. The Belle experiment also reported an excess. In 2017 a meta-analysis of all available data reported a cumulative 5σ deviation from SM.
Neutron lifetime puzzle - Free neutrons are not stable but decay after some time. Currently there are two methods used to measure this lifetime ("bottle" versus "beam") that give different values not within each other's error margin. The lifetime from the bottle method is about 877.7 s, roughly 10 seconds below the beam method value of about 887.7 s.
Theoretical predictions not observed
Observation at particle colliders of all of the fundamental particles predicted by the Standard Model has been confirmed. The Higgs boson is predicted by the Standard Model's explanation of the Higgs mechanism, which describes how the weak SU(2) gauge symmetry is broken and how fundamental particles obtain mass; it was the last particle predicted by the Standard Model to be observed. On July 4, 2012, CERN scientists using the Large Hadron Collider announced the discovery of a particle consistent with the Higgs boson, with a mass of about 125 GeV/c2. A Higgs boson was confirmed to exist on March 14, 2013, although efforts to confirm that it has all of the properties predicted by the Standard Model are ongoing.
A few hadrons (i.e. composite particles made of quarks) whose existence is predicted by the Standard Model, and which can be produced only at very high energies and at very low rates, have not yet been definitively observed, and "glueballs" (i.e. composite particles made of gluons) have also not yet been definitively observed. Some very rare particle decays predicted by the Standard Model have also not yet been definitively observed because insufficient data is available to make a statistically significant observation.
Unexplained relations
Koide formula – an unexplained empirical equation remarked upon by Yoshio Koide in 1981, and later by others. It relates the masses of the three charged leptons: Q = (m_e + m_μ + m_τ) / (√m_e + √m_μ + √m_τ)² = 2/3. The Standard Model does not predict lepton masses (they are free parameters of the theory). However, the value of the Koide formula being equal to 2/3 within experimental errors of the measured lepton masses suggests the existence of a theory which is able to predict lepton masses.
The CKM matrix, if interpreted as a rotation matrix in a 3-dimensional vector space, "rotates" a vector composed of square roots of down-type quark masses into a vector of square roots of up-type quark masses, up to vector lengths, a result due to Kohzo Nishida.
The sum of the squares of the Yukawa couplings of all Standard Model fermions is approximately 0.984, which is very close to 1. To put it another way, the sum of the squares of the fermion masses is very close to half of the squared Higgs vacuum expectation value. This sum is dominated by the top quark.
The sum of the squares of the boson masses (that is, of the W, Z, and Higgs bosons) is also very close to half of the squared Higgs vacuum expectation value; the ratio is approximately 1.004.
Consequently, the sum of the squared masses of all Standard Model particles is very close to the squared Higgs vacuum expectation value; the ratio is approximately 0.994.
It is unclear if these empirical relationships represent any underlying physics; according to Koide, the rule he discovered "may be an accidental coincidence".
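A quick numerical check of the relations above is straightforward. The following sketch uses approximate PDG mass values; all inputs here are approximate, illustrative numbers rather than precision data, and small deviations from the ratios quoted above reflect that choice of inputs.

```python
# Numerical check of the Koide relation and the mass-sum ratios above.
from math import sqrt

# Charged-lepton masses in MeV/c^2 (approximate PDG values)
m_e, m_mu, m_tau = 0.511, 105.658, 1776.86
Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2
print(f"Koide Q = {Q:.6f} (2/3 = {2/3:.6f})")  # prints Q ~ 0.666661

# Boson and top-quark masses in GeV/c^2, and the Higgs vacuum
# expectation value v, all approximate
m_W, m_Z, m_H, m_top = 80.38, 91.19, 125.25, 172.7
v = 246.22
half_v2 = v ** 2 / 2

print("fermion ratio:", m_top ** 2 / half_v2)                # ~ 0.98 (top dominates)
print("boson ratio:", (m_W**2 + m_Z**2 + m_H**2) / half_v2)  # ~ 1.00
```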
Theoretical problems
Some features of the standard model are added in an ad hoc way. These are not problems per se (i.e. the theory works fine with the ad hoc insertions), but they imply a lack of understanding. These contrived features have motivated theorists to look for more fundamental theories with fewer parameters. Some of the contrivances are:
Hierarchy problem – the standard model introduces particle masses through a process known as spontaneous symmetry breaking caused by the Higgs field. Within the standard model, the mass of the Higgs particle gets some very large quantum corrections due to the presence of virtual particles (mostly virtual top quarks). These corrections are much larger than the actual mass of the Higgs. This means that the bare mass parameter of the Higgs in the standard model must be fine-tuned in such a way as to almost completely cancel the quantum corrections. This level of fine-tuning is deemed unnatural by many theorists.
Number of parameters – the standard model depends on 19 numerical parameters. Their values are known from experiment, but the origin of the values is unknown. Some theorists have tried to find relations between different parameters, for example, between the masses of particles in different generations, or to calculate particle masses, such as in asymptotic safety scenarios.
Quantum triviality – suggests that it may not be possible to create a consistent quantum field theory involving elementary scalar Higgs particles. This is sometimes called the Landau pole problem.
Strong CP problem – it can be argued theoretically that the standard model should contain a term in the strong interaction that breaks CP symmetry, causing slightly different interaction rates for matter vs. antimatter. Experimentally, however, no such violation has been found, implying that the coefficient of this term – if any – would be suspiciously close to zero.
Additional experimental results
Research from experimental data on the cosmological constant, LIGO noise, and pulsar timing suggests that it is very unlikely that there are any new particles with masses much higher than those which can be found in the standard model or at the Large Hadron Collider. However, this research has also indicated that quantum gravity or perturbative quantum field theory will become strongly coupled before 1 PeV, leading to other new physics at the TeV scale.
Grand unified theories
The standard model has three gauge symmetries: the colour SU(3), the weak isospin SU(2), and the weak hypercharge U(1) symmetry, corresponding to the three fundamental forces. Due to renormalization the coupling constants of each of these symmetries vary with the energy at which they are measured. Around 10^16 GeV these couplings become approximately equal. This has led to speculation that above this energy the three gauge symmetries of the standard model are unified in one single gauge symmetry with a simple gauge group, and just one coupling constant. Below this energy the symmetry is spontaneously broken to the standard model symmetries. Popular choices for the unifying group are the special unitary group in five dimensions SU(5) and the special orthogonal group in ten dimensions SO(10).
Theories that unify the standard model symmetries in this way are called Grand Unified Theories (or GUTs), and the energy scale at which the unified symmetry is broken is called the GUT scale. Generically, grand unified theories predict the creation of magnetic monopoles in the early universe, and instability of the proton. Neither of these has been observed, and this absence of observation puts limits on the possible GUTs.
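The approximate meeting of the couplings can be illustrated with a one-loop running of the inverse couplings. In the sketch below, the inverse couplings at the Z mass and the one-loop beta coefficients (with GUT-normalized hypercharge) are approximate textbook values, so this is an illustration rather than a precision calculation.

```python
# One-loop running of the three Standard Model gauge couplings.
from math import log, pi

M_Z = 91.19  # GeV
alpha_inv = {"U1": 59.0, "SU2": 29.6, "SU3": 8.5}  # 1/alpha_i at M_Z (approx.)
b = {"U1": 41 / 10, "SU2": -19 / 6, "SU3": -7.0}   # one-loop beta coefficients

def inverse_coupling(group: str, mu: float) -> float:
    """One-loop running of 1/alpha_i from M_Z up to the scale mu (in GeV)."""
    return alpha_inv[group] - b[group] / (2 * pi) * log(mu / M_Z)

for mu in (1e3, 1e13, 1e16):
    values = {g: round(inverse_coupling(g, mu), 1) for g in alpha_inv}
    print(f"mu = {mu:.0e} GeV: {values}")
```

At one loop the three lines approach one another in the range of roughly 10^13 to 10^17 GeV but, in the non-supersymmetric Standard Model, do not meet at a single point.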
Supersymmetry
Supersymmetry extends the Standard Model by adding another class of symmetries to the Lagrangian. These symmetries exchange fermionic particles with bosonic ones. Such a symmetry predicts the existence of supersymmetric particles, abbreviated as sparticles, which include the sleptons, squarks, neutralinos and charginos. Each particle in the Standard Model would have a superpartner whose spin differs by 1/2 from the ordinary particle. Due to the breaking of supersymmetry, the sparticles are much heavier than their ordinary counterparts; they are so heavy that existing particle colliders may not be powerful enough to produce them.
Neutrinos
In the standard model, neutrinos cannot spontaneously change flavor. Measurements however indicated that neutrinos do spontaneously change flavor, in what is called neutrino oscillations.
Neutrino oscillations are usually explained using massive neutrinos. In the standard model, neutrinos have exactly zero mass, as the standard model only contains left-handed neutrinos. With no suitable right-handed partner, it is impossible to add a renormalizable mass term to the standard model.
These measurements only give the mass differences between the different flavours. The best constraint on the absolute mass of the neutrinos comes from precision measurements of tritium decay, providing an upper limit of 2 eV, which makes them at least five orders of magnitude lighter than the other particles in the standard model.
This necessitates an extension of the standard model, which not only needs to explain how neutrinos get their mass, but also why the mass is so small.
One approach to add masses to the neutrinos, the so-called seesaw mechanism, is to add right-handed neutrinos and have these couple to left-handed neutrinos with a Dirac mass term. The right-handed neutrinos have to be sterile, meaning that they do not participate in any of the standard model interactions. Because they have no charges, the right-handed neutrinos can act as their own anti-particles, and have a Majorana mass term. Like the other Dirac masses in the standard model, the neutrino Dirac mass is expected to be generated through the Higgs mechanism, and is therefore unpredictable. The standard model fermion masses differ by many orders of magnitude; the Dirac neutrino mass has at least the same uncertainty. On the other hand, the Majorana mass for the right-handed neutrinos does not arise from the Higgs mechanism, and is therefore expected to be tied to some energy scale of new physics beyond the standard model, for example the Planck scale.
Therefore, any process involving right-handed neutrinos will be suppressed at low energies. The correction due to these suppressed processes effectively gives the left-handed neutrinos a mass that is inversely proportional to the right-handed Majorana mass, a mechanism known as the see-saw.
The presence of heavy right-handed neutrinos thereby explains both the small mass of the left-handed neutrinos and the absence of the right-handed neutrinos in observations. However, due to the uncertainty in the Dirac neutrino masses, the right-handed neutrino masses can lie almost anywhere. For example, they could be as light as a keV and be dark matter; they could have a mass in the LHC energy range and lead to observable lepton number violation; or they could be near the GUT scale, linking the right-handed neutrinos to the possibility of a grand unified theory.
The mass terms mix neutrinos of different generations. This mixing is parameterized by the PMNS matrix, which is the neutrino analogue of the CKM quark mixing matrix. Unlike the quark mixing, which is almost minimal, the mixing of the neutrinos appears to be almost maximal. This has led to various speculations of symmetries between the various generations that could explain the mixing patterns.
The mixing matrix could also contain several complex phases that break CP invariance, although there has been no experimental probe of these. These phases could potentially create a surplus of leptons over anti-leptons in the early universe, a process known as leptogenesis. This asymmetry could then at a later stage be converted into an excess of baryons over anti-baryons, and explain the matter-antimatter asymmetry in the universe.
The light neutrinos are disfavored as an explanation for the observation of dark matter, based on considerations of large-scale structure formation in the early universe. Simulations of structure formation show that they are too hot – that is, their kinetic energy is large compared to their mass – while formation of structures similar to the galaxies in our universe requires cold dark matter. The simulations show that neutrinos can at best explain a few percent of the missing mass in dark matter. However, the heavy, sterile, right-handed neutrinos are a possible candidate for a dark matter WIMP.
There are however other explanations for neutrino oscillations which do not necessarily require neutrinos to have masses, such as Lorentz-violating neutrino oscillations.
Preon models
Several preon models have been proposed to address the unsolved problem of why there are three generations of quarks and leptons. Preon models generally postulate some additional new particles which are further postulated to be able to combine to form the quarks and leptons of the standard model. One of the earliest preon models was the Rishon model.
To date, no preon model is widely accepted or fully verified.
Theories of everything
Theoretical physics continues to strive toward a theory of everything, a theory that fully explains and links together all known physical phenomena, and predicts the outcome of any experiment that could be carried out in principle.
In practical terms the immediate goal in this regard is to develop a theory which would unify the Standard Model with General Relativity in a theory of quantum gravity. Additional features, such as overcoming conceptual flaws in either theory or accurate prediction of particle masses, would be desired.
The challenges in putting together such a theory are not just conceptual; they include the experimental aspects of the very high energies needed to probe exotic realms.
Several notable attempts in this direction are supersymmetry, loop quantum gravity, and string theory.
Supersymmetry
Loop quantum gravity
Theories of quantum gravity such as loop quantum gravity and others are thought by some to be promising candidates for the mathematical unification of quantum field theory and general relativity, requiring less drastic changes to existing theories. However, recent work places stringent limits on the putative effects of quantum gravity on the speed of light, and disfavours some current models of quantum gravity.
String theory
Extensions, revisions, replacements, and reorganizations of the Standard Model exist in an attempt to correct for these and other issues. String theory is one such reinvention, and many theoretical physicists think that such theories are the next theoretical step toward a true Theory of Everything.
Among the numerous variants of string theory, M-theory, whose mathematical existence was first proposed at a String Conference in 1995 by Edward Witten, is believed by many to be a proper "ToE" candidate, notably by physicists Brian Greene and Stephen Hawking. Though a full mathematical description is not yet known, solutions to the theory exist for specific cases. Recent works have also proposed alternate string models, some of which lack the various harder-to-test features of M-theory (e.g. the existence of Calabi–Yau manifolds, many extra dimensions, etc.) including works by well-published physicists such as Lisa Randall.
See also
Antimatter tests of Lorentz violation
Beyond black holes
Fundamental physical constants in the standard model
Higgsless model
Holographic principle
Little Higgs
Lorentz-violating neutrino oscillations
Minimal Supersymmetric Standard Model
Neutrino Minimal Standard Model
Peccei–Quinn theory
Preon
Standard-Model Extension
Supergravity
Seesaw mechanism
Supersymmetry
Superfluid vacuum theory
String theory
Technicolor (physics)
Theory of everything
Unsolved problems in physics
Unparticle physics
Footnotes
References
Further reading
External resources
Standard Model Theory @ SLAC
Scientific American Apr 2006
LHC. Nature July 2007
Les Houches Conference, Summer 2005
Particle physics
Physical cosmology
Unsolved problems in physics | Physics beyond the Standard Model | [
"Physics",
"Astronomy"
] | 4,534 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Unsolved problems in physics",
"Astrophysics",
"Particle physics",
"Physics beyond the Standard Model",
"Physical cosmology"
] |
4,044,234 | https://en.wikipedia.org/wiki/Higgs%20sector | In particle physics, the Higgs sector is the collection of quantum fields and/or particles that are responsible for the Higgs mechanism, i.e. for the spontaneous symmetry breaking of the Higgs field. The word "sector" refers to a subgroup of the total set of fields and particles.
See also
Higgs boson
Hidden sector
References
Standard Model
Symmetry | Higgs sector | [
"Physics",
"Mathematics"
] | 74 | [
"Standard Model",
"Particle physics",
"Geometry",
"Particle physics stubs",
"Symmetry"
] |
4,044,299 | https://en.wikipedia.org/wiki/Spin%E2%80%93charge%20separation | In condensed matter physics, spin–charge separation is an unusual behavior of electrons in some materials in which they 'split' into three independent particles, the spinon, the orbiton and the holon (or chargon). The electron can always be theoretically considered as a bound state of the three, with the spinon carrying the spin of the electron, the orbiton carrying the orbital degree of freedom and the chargon carrying the charge, but in certain conditions they can behave as independent quasiparticles.
The theory of spin–charge separation originates with the work of Sin-Itiro Tomonaga who developed an approximate method for treating one-dimensional interacting quantum systems in 1950. This was then developed by Joaquin Mazdak Luttinger in 1963 with an exactly solvable model which demonstrated spin–charge separation. In 1981 F. Duncan M. Haldane generalized Luttinger's model to the Tomonaga–Luttinger liquid concept whereby the physics of Luttinger's model was shown theoretically to be a general feature of all one-dimensional metallic systems. Although Haldane treated spinless fermions, the extension to spin-½ fermions and associated spin–charge separation was so clear that the promised follow-up paper did not appear.
Spin–charge separation is one of the most unusual manifestations of the concept of quasiparticles. This property is counterintuitive, because neither the spinon, with zero charge and spin half, nor the chargon, with charge minus one and zero spin, can be constructed as combinations of the electrons, holes, phonons and photons that are the constituents of the system. It is an example of fractionalization, the phenomenon in which the quantum numbers of the quasiparticles are not multiples of those of the elementary particles, but fractions.
The same theoretical ideas have been applied in the framework of ultracold atoms. In a two-component Bose gas in 1D, strong interactions can produce a maximal form of spin–charge separation.
Observation
Building on physicist F. Duncan M. Haldane's 1981 theory, experts from the Universities of Cambridge and Birmingham proved experimentally in 2009 that a mass of electrons artificially confined in a small space together will split into spinons and holons due to the intensity of their mutual repulsion (from having the same charge). A team of researchers working at the Advanced Light Source (ALS) of the U.S. Department of Energy's Lawrence Berkeley National Laboratory observed the peak spectral structures of spin–charge separation three years prior.
References
External links
Observation of Spin-Charge Separation in One-Dimensional SrCuO2
Distinct spinon and holon dispersions in photoemission spectral functions from one-dimensional SrCuO2 : Abstract
Quasiparticles
Condensed matter physics | Spin–charge separation | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 577 | [
"Matter",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Quasiparticles",
"Subatomic particles"
] |
4,045,128 | https://en.wikipedia.org/wiki/Programming%20model | A programming model is an execution model coupled to an API or a particular pattern of code. In this style, there are actually two execution models in play: the execution model of the base programming language and the execution model of the programming model. An example is Spark where Java is the base language, and Spark is the programming model. Execution may be based on what appear to be library calls. Other examples include the POSIX Threads library and Hadoop's MapReduce. In both cases, the execution model of the programming model is different from that of the base language in which the code is written. For example, the C programming language has no behavior in its execution model for input/output or thread behavior. But such behavior can be invoked from C syntax, by making what appears to be a call to a normal C library.
What distinguishes a programming model from a normal library is that the behavior of the call cannot be understood in terms of the language the program is written in. For example, the behavior of calls to the POSIX thread library cannot be understood in terms of the C language. The reason is that the call invokes an execution model that is different from the execution model of the language. This invocation of an outside execution model is the defining characteristic of a programming model, in contrast to a programming language.
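Python's threading module provides an analogous illustration (an analogue chosen here for brevity, not an example from the original text): the base language's sequential execution model says nothing about what the call below does, because the behavior comes from the operating system's thread execution model, just as POSIX thread calls do in a C program.

```python
import threading

def worker(name: str) -> None:
    print(f"hello from {name}")

# This looks like an ordinary library call, but the resulting behavior
# (a second, concurrently scheduled flow of control) is defined by the
# operating system's thread execution model, not by Python's own
# sequential execution model.
t = threading.Thread(target=worker, args=("a second thread",))
t.start()
t.join()
```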
In parallel computing, the execution model often must expose features of the hardware in order to achieve high performance. The large amount of variation in parallel hardware creates a corresponding need for a similarly large number of parallel execution models. It is impractical to make a new language for each execution model, so it is common practice to invoke the behaviors of the parallel execution model via an API. Thus, most of the programming effort is done via parallel programming models rather than parallel languages. The terminology around such programming models tends to focus on the details of the hardware that inspired the execution model, which has fostered the mistaken belief that a programming model applies only when an execution model is closely matched to hardware features.
References
Computer programming | Programming model | [
"Technology",
"Engineering"
] | 423 | [
"Software engineering",
"Computer programming",
"Computers"
] |
34,985,593 | https://en.wikipedia.org/wiki/Fractal%20catalytic%20model | A fractal catalytic model is a mathematical representation of chemical catalysis in an environment with fractal characteristics.
References
Fractals
Catalysis | Fractal catalytic model | [
"Chemistry",
"Mathematics"
] | 33 | [
"Catalysis",
"Mathematical analysis",
"Functions and mappings",
"Mathematical analysis stubs",
"Mathematical objects",
"Fractals",
"Mathematical relations",
"Chemical reaction stubs",
"Chemical kinetics",
"Chemical process stubs"
] |
34,991,512 | https://en.wikipedia.org/wiki/Great%20South%20Australian%20Coastal%20Upwelling%20System | The Great South Australian Coastal Upwelling System is a seasonal upwelling system in the eastern Great Australian Bight, extending from Ceduna, South Australia, to Portland, Victoria, over a distance of about . Upwelling events occur in the austral summer (from November to May) when seasonal winds blow from the southeast. These winds blow parallel to the shoreline at certain areas of the coast, which forces coastal waters offshore via Ekman transport and draws up cold, nutrient-rich waters from the ocean floor.
Because the deep water carries abundant nutrients up from the ocean floor, the upwelling area differs from the rest of the Great Australian Bight, especially the areas offshore of Western Australia and the Nullarbor Plain in South Australia, which are generally nutrient-poor. Every summer, the upwelling sustains a bountiful ecosystem that attracts blue whales and supports rich fisheries.
The Great South Australian Coastal Upwelling System (GSACUS) is Australia's only deep-reaching coastal upwelling system, with nutrient-enriched water stemming from depths exceeding .
Recently, a new upwelling centre has been discovered on the western shelf of Tasmania. Since this new upwelling centre is located outside South Australian waters, the entire upwelling system should rather be called the Great Southern Australian Coastal Upwelling System.
Oceanographic processes
During the austral summer, high-pressure systems over the Great Australian Bight cause southeasterly winds to blow over the coasts of Victoria and South Australia. When winds blow parallel to the shoreline, Ekman transport pushes water to the left of the wind direction (in the southern hemisphere), which in this case is westward and offshore. To replace the water moving offshore, cold waters from the ocean floor rise to the surface. During upwelling events, local sea surface temperature drops by 2-3 degrees Celsius.
Key upwelling centres form in three different locations, described in the sub-sections below; the Kangaroo Island and Eyre Peninsula centres are linked by the same upwelling process. Upwelling events occur nearly simultaneously across the three separate centres, appearing within a few days of each other, despite spanning a distance of approximately . While the Bonney Upwelling, where the strongest and most reliable upwelling events occur, was reported and explored over 30 years ago, the full extent of the upwelling system was discovered only as recently as 2004.
Bonney Upwelling
The Bonney Upwelling is the largest and most predictable upwelling in the GSACUS. It stretches from Portland, Victoria to Robe, South Australia. The continental shelf is narrow offshore of the "Bonney Coast" - only about from the shore to the continental slope - and deep water is funneled to the surface through a series of submarine canyons.
Kangaroo Island and Eyre Peninsula
Upwelling at Kangaroo Island and the Eyre Peninsula is different than at the Bonney Upwelling. Here, the continental shelf is generally much wider than at the Bonney Coast - up to wide off the Eyre Peninsula - and water is not drawn directly from the seafloor to the surface. Instead, field data and hydrodynamic modelling suggest that the upwelling follows from a chain of processes. This chain of processes starts in the deep submarine canyons of the Murray Canyon Group, located south of Kangaroo Island, where localized sub-surface upwelling brings a pool of cold water from the abyssal plains up to the continental shelf. This dense-water pool, named the Kangaroo Island Pool, drifts along the shelf bottom just offshore of Kangaroo Island and the Eyre Peninsula. When a classical wind-driven upwelling event occurs, normally two to three times a summer, cold water is upwelled from the pool, not directly from the ocean floor.
Ecology
Extensive upwelling of nutrient-rich water makes the GSACUS an important marine hot spot on Australia's southern shelves. During upwelling events, the abundance of the GSACUS ecosystem can approach that of some of the world’s most productive upwelling centers, such as those offshore of Peru, California, and Namibia.
During upwelling events, surface chlorophyll a concentrations, an indicator of phytoplankton abundance, increase tenfold. Phytoplankton blooms bring about swarms of krill, which in turn attract blue whales. Blue whales are found in various locations off the southeast coast of Australia, but most predominantly in the Bonney Upwelling region, which is one of 12 identified blue whale feeding sites worldwide. Marine biologist Peter Gill estimates that 100 blue whales visit the Bonney Upwelling area every year, ranging over a stretch of ocean from Robe, South Australia to Cape Otway in Victoria. The feeding grounds may extend further northwest, encompassing the rest of the GSACUS, but incomplete whale surveys are insufficient to establish their true range.
Other marine life that thrives in the upwelling includes filter feeders like sponges, bryozoans, and corals. These animals feed predators such as seabirds, fishes, Australian fur seals, and penguins. The upwelling plays also an important role in the life cycle of juvenile southern bluefin tuna (Thunnus maccoyii), which accumulate in the eastern Great Australian Bight during the upwelling season and feed on sardines (Sardinops sagax) and anchovies (Engraulis australis). Furthermore, the many dead organisms that fall to the continental shelf support populations of southern rock lobster and giant crab.
Economic importance
The GSACUS supports a productive fishery, and local fishers recognize the bounty that the upwelling provides them. Every November, Portland, Victoria, hosts an Upwelling Festival to celebrate the abundance of the Bonney Upwelling, and to begin the summer fishing season.
Humans have exploited the GSACUS for thousands of years. Oral histories of local Aboriginal tribes indicate a close connection to the ocean, and these tribes may have eaten beached whales. The Convincing Ground massacre, which took place near Portland, Victoria in 1829, arose over a dispute between European whalers and the Gunditjmara people over ownership of a beached whale. Beginning in the mid-1840s, whaling and sealing were established as organized industries off the Bonney Coast.
Today, southern rock lobster (referred to locally as crayfish) and trawling are the most important fishing industries in the Bonney Upwelling, whereas the upwelling off the Eyre Peninsula supports a large sardine fishery, operating chiefly out of Port Lincoln, South Australia.
Conservation
Next to the Great Barrier Reef, the GSACUS and its ecosystem can be regarded as one of Australia's natural wonders. Due to its importance as a blue whale feeding and aggregation site, in 2002 the Bonney Upwelling was listed as critical habitat "requiring effective protection from user impacts" under Australia's Environment Protection and Biodiversity Conservation Act 1999. In addition to the blue whale, one species of shark is listed as critically endangered, and five bird and two whale species are listed as endangered.
Significant reserves of natural gas are present beneath the Bonney Coast, and are undergoing exploration. Fears have been expressed that expanded gas drilling may threaten whales through noise pollution and ship collisions.
See also
Upwelling
Blue whale
References
External links
Photographs of the Bonney Upwelling
Images of marine life in the Bonney Upwelling
Beaches of South Australia
Ocean currents
Aquatic ecology
Oceanography
Fisheries science
Great Australian Bight | Great South Australian Coastal Upwelling System | [
"Physics",
"Chemistry",
"Biology",
"Environmental_science"
] | 1,532 | [
"Ocean currents",
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Ecosystems",
"Aquatic ecology",
"Fluid dynamics"
] |
44,639,771 | https://en.wikipedia.org/wiki/Flory%E2%80%93Stockmayer%20theory | Flory–Stockmayer theory is a theory governing the cross-linking and gelation of step-growth polymers. The Flory–Stockmayer theory represents an advancement from the Carothers equation, allowing for the identification of the gel point for polymer synthesis not at stoichiometric balance. The theory was initially conceptualized by Paul Flory in 1941 and then was further developed by Walter Stockmayer in 1944 to include cross-linking with an arbitrary initial size distribution.
The Flory–Stockmayer theory was the first theory investigating percolation processes. Flory–Stockmayer theory is a special case of random graph theory of gelation.
History
Gelation occurs when a polymer forms large interconnected polymer molecules through cross-linking. In other words, polymer chains are cross-linked with other polymer chains to form an infinitely large molecule, interspersed with smaller complex molecules, shifting the polymer from a liquid to a network solid or gel phase. The Carothers equation is an effective method for calculating the degree of polymerization for stoichiometrically balanced reactions. However, the Carothers equation is limited to branched systems, describing the degree of polymerization only at the onset of cross-linking. The Flory–Stockmayer Theory allows for the prediction of when gelation occurs using percent conversion of initial monomer and is not confined to cases of stoichiometric balance. Additionally, the Flory–Stockmayer Theory can be used to predict whether gelation is possible through analyzing the limiting reagent of the step-growth polymerization.
Flory’s assumptions
In creating the Flory–Stockmayer Theory, Flory made three assumptions that affect the accuracy of this model. These assumptions were:
All functional groups on a branch unit are equally reactive
All reactions occur between A and B
There are no intramolecular reactions
As a result of these assumptions, a conversion slightly higher than that predicted by the Flory–Stockmayer Theory is commonly needed to actually create a polymer gel. Since steric hindrance effects prevent each functional group from being equally reactive and intramolecular reactions do occur, the gel forms at slightly higher conversion.
Flory postulated that his treatment can also be applied to chain-growth polymerization mechanisms, as the three criteria stated above are satisfied under the assumptions that (1) the probability of chain termination is independent of chain length, and (2) multifunctional co-monomers react randomly with growing polymer chains.
General case
The Flory–Stockmayer Theory predicts the gel point for a system consisting of three types of monomer units:
linear units with two A groups,
linear units with two B groups,
branched A units carrying f A groups each.
The following definitions are used to formally define the system:
f is the number of reactive functional groups on the branch unit (i.e. the functionality of that branch unit),
p_A is the probability that an A group has reacted (conversion of A groups),
p_B is the probability that a B group has reacted (conversion of B groups),
ρ is the ratio of the number of A groups on branch units to the total number of A groups,
r is the ratio between the total number of A groups and the total number of B groups, so that p_B = r p_A.
The theory states that gelation occurs only if α > α_c, where
α_c = 1/(f − 1)
is the critical value for cross-linking, and α is presented as a function of p_A,
α = r p_A² ρ / (1 − r p_A² (1 − ρ)),
or, alternatively, as a function of p_B,
α = p_B² ρ / (r − p_B² (1 − ρ)).
One may now substitute the expressions for α into the definition of α_c and obtain the critical values of the conversions that admit gelation. Thus gelation occurs if
p_A ≥ 1/√(r(1 + ρ(f − 2))),
or, alternatively, the same condition for p_B reads
p_B ≥ √(r/(1 + ρ(f − 2))).
Both inequalities are equivalent, and one may use whichever is more convenient, for instance depending on whether the conversion p_A or p_B is resolved analytically.
Trifunctional A monomer with difunctional B monomer
Since all the A functional groups are from the trifunctional monomer, ρ = 1 and
α_c = 1/2.
Therefore, gelation occurs when
r p_A² > 1/2,
or when
p_A > 1/√(2r).
Similarly, gelation occurs when
p_B > √(r/2).
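The gel-point conditions above are easy to evaluate numerically. The following sketch uses symbol names from the definitions in this article and assumes the reconstructed critical-conversion formula p_A = 1/√(r(1 + ρ(f − 2))).

```python
# Gel-point conversion of A groups according to Flory-Stockmayer theory.
from math import sqrt

def critical_conversion_A(f: int, rho: float, r: float) -> float:
    """Critical conversion p_A of A groups at which gelation sets in."""
    return 1.0 / sqrt(r * (1.0 + rho * (f - 2)))

# Trifunctional A monomer with difunctional B monomer (f = 3, rho = 1)
# at stoichiometric balance (r = 1): gelation at p_A = 1/sqrt(2).
print(critical_conversion_A(f=3, rho=1.0, r=1.0))  # ~ 0.7071
```

For the trifunctional/difunctional system at stoichiometric balance this reproduces the familiar critical conversion p_A = 1/√2 ≈ 0.707.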
References
Polymer chemistry | Flory–Stockmayer theory | [
"Chemistry",
"Materials_science",
"Engineering"
] | 808 | [
"Materials science",
"Polymer chemistry"
] |
44,651,466 | https://en.wikipedia.org/wiki/Hamilton%27s%20optico-mechanical%20analogy | Hamilton's optico-mechanical analogy is a conceptual parallel between trajectories in classical mechanics and wavefronts in optics, introduced by William Rowan Hamilton around 1831. It may be viewed as linking Huygens' principle of optics with Maupertuis' principle of mechanics.
While Hamilton discovered the analogy in 1831, it was not applied practically until Hans Busch used it to explain electron beam focusing in 1925. According to Cornelius Lanczos, the analogy has been important in the development of ideas in quantum physics. Erwin Schrödinger cites the analogy in the very first sentence of his paper introducing his wave mechanics. Later in the body of his paper he says:
Quantitative and formal analysis based on the analogy use the Hamilton–Jacobi equation; conversely the analogy provides an alternative and more accessible path for introducing the Hamilton–Jacobi equation approach to mechanics. The orthogonality of mechanical trajectories characteristic of geometrical optics to the optical wavefronts characteristic of a full wave equation, resulting from the variational principle, leads to the corresponding differential equations.
Hamilton's analogy
The propagation of light can be considered in terms of rays and wavefronts in ordinary physical three-dimensional space. The wavefronts are two-dimensional curved surfaces; the rays are one-dimensional curved lines.
Hamilton's analogy amounts to two interpretations of a figure like the one shown here. In the optical interpretation, the green wavefronts are lines of constant phase and the orthogonal red lines are the rays of geometrical optics. In the mechanical interpretation, the green lines denote constant values of action derived by applying Hamilton's principle to mechanical motion and the red lines are the orthogonal object trajectories.
The orthogonality of the wavefronts to rays (or equal-action surfaces to trajectories) means we can compute one set from the other set. This explains how Kirchhoff's diffraction formula predicts a wave phenomenon – diffraction – using only geometrical ray tracing. Rays traced from the source to an aperture give a wavefront that becomes sources for rays reaching the diffraction pattern where they are summed using complex phases from the orthogonal wavefronts.
The wavefronts and rays or the equal-action surfaces and trajectories are dual objects linked by orthogonality.
On one hand, a ray can be regarded as the orbit of a particle of light. It successively punctures the wave surfaces. The successive punctures can be regarded as defining the trajectory of the particle.
On the other hand, a wave-front can be regarded as a level surface of displacement of some quantity, such as electric field intensity, hydrostatic pressure, particle number density, oscillatory phase, or probability amplitude. Then the physical meaning of the rays is less evident.
Huygens' principle; Fermat's principle
The Hamilton optico-mechanical analogy is closely related to Fermat's principle and thus to the Huygens–Fresnel principle. Fermat's principle states that the rays between wavefronts will take the path least time; the concept of successive wavefronts derives from Huygens principle.
Extended Huygens' principle
Going beyond ordinary three-dimensional physical space, one can imagine a higher dimensional abstract configuration "space", with a dimension a multiple of 3. In this space, one can imagine again rays as one-dimensional curved lines. Now the wavefronts are hypersurfaces of dimension one less than the dimension of the space. Such a multi-dimensional space can serve as a configuration space for a multi-particle system.
Classical limit of the Schrödinger equation
Albert Messiah considers a classical limit of the Schrödinger equation. He finds there an optical analogy. The trajectories of his particles are orthogonal to the surfaces of equal phase. He writes "In the language of optics, the latter are the wave fronts, and the trajectories of the particles are the rays. Hence the classical approximation is equivalent to the geometric optics approximation: we find once again, as a consequence of the Schrödinger equation, the basic postulate of the theory of matter waves."
History
Hamilton's optico-mechanical analogy played a critical part in the thinking of Schrödinger, one of the originators of quantum mechanics. Section 1 of his paper published in December 1926 is titled "The Hamiltonian analogy between mechanics and optics". Section 1 of the first of his four lectures on wave mechanics delivered in 1928 is titled "Derivation of the fundamental idea of wave mechanics from Hamilton's analogy between ordinary mechanics and geometrical optics".
In a brief paper in 1923, de Broglie wrote: "Dynamics must undergo the same evolution that optics has undergone when undulations took the place of purely geometrical optics." In his 1924 thesis, though Louis de Broglie did not name the optico-mechanical analogy, he wrote in his introduction,
In the opinion of Léon Rosenfeld, a close colleague of Niels Bohr, "... Schrödinger [was] inspired by Hamilton's beautiful comparison of classical mechanics and geometrical optics ..."
The first textbook in English on wave mechanics devotes the second of its two chapters to "Wave mechanics in relation to ordinary mechanics". It opines "... de Broglie and Schrödinger have turned this false analogy into a true one by using the natural Unit or Measure of Action, h, .... ... We must now go into Hamilton's theory in more detail, for when once its true meaning is grasped the step to wave mechanics is but a short one—indeed now, after the event, almost seems to suggest itself."
According to one textbook, "The first part of our problem, namely, the establishment of a system of first-order equations satisfying the spacetime symmetry condition, can be solved in a very simple way, with the help of the analogy between mechanics and optics, which was the starting point for the development of wave mechanics and which can still be used—with reservations—as a source of inspiration."
Recently the concept has been extended to wavelength dependent regime.
References
Bibliography of cited sources
Arnold, V.I. (1974/1978). Mathematical Methods of Classical Mechanics, translated by K. Vogtmann, A. Weinstein, Springer, Berlin, .
Biggs, H.F. (1927). Wave Mechanics. An Introductory Sketch, Oxford University Press, London.
de Broglie, L. (1923). Waves and quanta, Nature 112: 540.
de Broglie, L., Recherches sur la théorie des quanta (Researches on the quantum theory), Thesis (Paris), 1924; de Broglie, L., Ann. Phys. (Paris) 3, 22 (1925). English translation by A.F. Kracklauer
Cohen, R.S, Stachel, J.J., editors, (1979). Selected papers of Léon Rosenfeld, D. Reidel Publishing Company, Dordrecht, .
Jammer, M. (1966). The Conceptual Development of Quantum Mechanics, McGraw–Hill, New York.
Frenkel, J. (1934). Wave mechanics. Advanced General Theory, Oxford University Press, London.
Kemble, E.C. (1937). The Fundamental Principles of Quantum Mechanics, with Elementary Applications, McGraw–Hill, New York.
Hamilton, W.R., (1834). On the application to dynamics of a general mathematical method previously applied to optics, British Association Report, pp. 513–518, reprinted in The Mathematical Papers of Sir William Rowan Hamilton (1940), ed. A.W. Conway, A.J. McConnell, volume 2, Cambridge University Press, London.
Lanczos, C. (1949/1970). The Variational Principles of Mechanics, 4th edition, University of Toronto Press, Toronto, .
Messiah, A. (1961). Quantum Mechanics, volume 1, translated by G.M. Temmer from the French Mécanique Quantique, North-Holland, Amsterdam.
Rosenfeld, L. (1971). Men and ideas in the history of atomic theory, Arch. Hist. Exact Sci., 7: 69–90. Reprinted on pp. 266–296 of Cohen, R.S, Stachel, J.J. (1979).
Schrödinger, E. (1926). An undulatory theory of the mechanics of atoms and molecules, Phys. Rev., second series 28 (6): 1049–1070.
Schrödinger, E. (1926/1928). Collected papers on Wave Mechanics, translated by J.F. Shearer and W.M. Deans from the second German edition, Blackie & Son, London.
Schrödinger, E. (1928). Four Lectures on Wave Mechanics. Delivered at the Royal Institution, London, on 5th, 7th, 12th, and 14th March, 1928, Blackie & Son, London.
Synge, J.L. (1954). Geometrical Mechanics and de Broglie Waves, Cambridge University Press, Cambridge UK.
Hamiltonian mechanics
Quantum mechanics
Wave mechanics | Hamilton's optico-mechanical analogy | [
"Physics",
"Mathematics"
] | 1,907 | [
"Physical phenomena",
"Theoretical physics",
"Classical mechanics",
"Quantum mechanics",
"Waves",
"Wave mechanics",
"Hamiltonian mechanics",
"Dynamical systems"
] |
37,506,123 | https://en.wikipedia.org/wiki/Normalized%20compression%20distance | Normalized compression distance (NCD) is a way of measuring the similarity between two objects, be it two documents, two letters, two emails, two music scores, two languages, two programs, two pictures, two systems, two genomes, to name a few. Such a measurement should not be application dependent or arbitrary. A reasonable definition for the similarity between two objects is how difficult it is to transform them into each other.
It can be used in information retrieval and data mining for cluster analysis.
Information distance
We assume that the objects one talks about are finite strings of 0s and 1s. Thus we mean string similarity. Every computer file is of this form, that is, if an object is a file in a computer it is of this form. One can define the information distance between strings x and y as the length of the shortest program that computes x from y and vice versa. This shortest program is in a fixed programming language. For technical reasons one uses the theoretical notion of Turing machines. Moreover, to express the length of the program one uses the notion of Kolmogorov complexity. Then it has been shown that
E(x, y) = max{K(x|y), K(y|x)},
up to logarithmic additive terms which can be ignored. This information distance is shown to be a metric
(it satisfies the metric inequalities up to a logarithmic additive term), and it is universal (it minorizes
every computable distance, as computed for example from features, up to a constant additive term).
Normalized information distance (similarity metric)
The information distance is absolute, but if we want to express similarity, then we are more interested in relative ones. For example, if two strings of length 1,000,000 differ by 1000 bits, then we consider those strings to be relatively more similar than two strings of 1000 bits that differ by 1000 bits. Hence we need to normalize to obtain a similarity metric. This way one obtains the normalized information distance (NID),
NID(x, y) = max{K(x|y), K(y|x)} / max{K(x), K(y)},
where K(x|y) is the algorithmic information of x given y as input. The NID is called "the" similarity metric, since the function NID(x, y) has been shown to satisfy the basic requirements for a metric distance measure. However, it is not computable or even semicomputable.
Normalized compression distance
While the NID metric is not computable, it has an abundance of applications. Simply approximating the Kolmogorov complexity by real-world compressors, with Z(x) denoting the binary length of the file x compressed with compressor Z (for example "gzip", "bzip2", "PPMZ"), makes the NID easy to apply. Vitanyi and Cilibrasi rewrote the NID to obtain the Normalized Compression Distance (NCD),
NCD_Z(x, y) = (Z(xy) − min{Z(x), Z(y)}) / max{Z(x), Z(y)}.
The NCD is actually a family of distances parametrized with the compressor Z. The better Z is, the closer the NCD approaches the NID, and the better the results are.
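A minimal sketch of the NCD in Python, using the standard-library gzip module as the stand-in for the real-world compressor Z; any off-the-shelf compressor could be substituted, and the example strings are made up for illustration.

```python
# Normalized compression distance with gzip as the compressor Z.
import gzip

def Z(data: bytes) -> int:
    """Length, in bytes, of data compressed with gzip."""
    return len(gzip.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    zx, zy, zxy = Z(x), Z(y), Z(x + y)
    return (zxy - min(zx, zy)) / max(zx, zy)

s1 = b"the quick brown fox jumps over the lazy dog " * 20
s2 = b"the quick brown fox leaps over the lazy cat " * 20
s3 = b"an entirely different sentence about something else " * 20

print(ncd(s1, s2))  # small value: the strings share most of their structure
print(ncd(s1, s3))  # larger value: little shared structure
```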
Applications
The normalized compression distance has been used to fully automatically reconstruct language and phylogenetic trees. It can also be used for new applications of general clustering and classification of natural data in arbitrary domains, for clustering of heterogeneous data, and for anomaly detection across domains. The NID and NCD have been applied to numerous subjects, including music classification, to analyze network traffic and cluster computer worms and viruses, authorship attribution, gene expression dynamics, predicting useful versus useless stem cells, critical networks, image registration, question-answer systems.
Performance
Researchers from the data-mining community use NCD and variants as "parameter-free, feature-free" data-mining tools. One group has experimentally tested a closely related metric on a large variety of sequence benchmarks. Comparing their compression method with 51 major methods found in 7 major data-mining conferences over the past decade, they established superiority of the compression method for clustering heterogeneous data and for anomaly detection, and competitiveness in clustering domain data.
NCD has an advantage of being robust to noise. However, although NCD appears "parameter-free", practical questions include which compressor to use in computing the NCD and other possible problems.
Comparison with the Normalized Relative Compression (NRC)
In order to measure the information of a string relative to another there is the need to rely on relative semi-distances (NRC). These are measures that do not need to respect symmetry and triangle-inequality distance properties. Although the NCD and the NRC seem very similar, they address different questions. The NCD measures how similar both strings are, mostly using the information content, while the NRC indicates the fraction of a target string that cannot be constructed using information from another string. For a comparison, with application to the evolution of primate genomes, see the references.
Normalized Google distance
Objects can be given literally, like the literal four-letter genome of a mouse, or the literal text of War and Peace by Tolstoy. For simplicity we take it that all meaning of the object is represented by the literal object itself. Objects can also be given by name, like "the four-letter genome of a mouse," or "the text of `War and Peace' by Tolstoy." There are also objects that cannot be given literally, but only by name, and that acquire their meaning from their contexts in background common knowledge in humankind, like "home" or "red."
We are interested in semantic similarity. Using code-word lengths obtained from the page-hit counts returned by Google from the web, we obtain a semantic distance using the NCD formula and viewing Google as a compressor useful for data mining, text comprehension, classification, and translation. The associated NCD, called the normalized Google distance (NGD), can be rewritten as
NGD(x, y) = (max{log f(x), log f(y)} − log f(x, y)) / (log N − min{log f(x), log f(y)}),
where f(x) denotes the number of pages containing the search term x, and f(x, y) denotes the number of pages containing both x and y, as returned by Google or any search engine capable of returning an aggregate page count. The number N can be set to the number of pages indexed, although it is more proper to count each page according to the number of search terms or phrases it contains. As a rule of thumb, one can multiply the number of pages by, say, a thousand.
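The NGD is equally simple to evaluate once the hit counts are available. In the sketch below, the counts f_x, f_y, f_xy and the index size n are hypothetical stand-ins, not real search-engine data.

```python
# Normalized Google distance from (hypothetical) page-hit counts.
from math import log

def ngd(f_x: float, f_y: float, f_xy: float, n: float) -> float:
    """Normalized Google distance for two search terms."""
    lx, ly, lxy = log(f_x), log(f_y), log(f_xy)
    return (max(lx, ly) - lxy) / (log(n) - min(lx, ly))

# Terms that co-occur on many pages come out semantically close (NGD near 0).
print(ngd(f_x=1e8, f_y=8e7, f_xy=5e7, n=5e10))  # ~ 0.11
```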
See also
Word2vec
References
External links
Efficient Estimation of Word Representations in Vector Space
M. Li and P. Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications, Springer-Verlag, New York, 4th Edition 2019
Statistical distance | Normalized compression distance | [
"Physics"
] | 1,306 | [
"Physical quantities",
"Statistical distance",
"Distance"
] |
37,507,962 | https://en.wikipedia.org/wiki/BioSphere%20Plastic | BioSphere Plastic is a manufacturer of biodegradable additives. The company states that their product enhances the biodegradation of synthetic polymers by addition of their technology. It has a capacity at or around 5,800 metric tons a year and has filed for patent protection over their additive worldwide.
BioSphere Plastic released their test reports showing biodegradability of Polyethylene and Polypropylene for public review in July 2012.
BioSphere Plastic scientists have studied microbial biodegradation of polymers and have used this information to increase biodegradation of plastic made with their plastic additives.
Offices
BioSphere Plastic offices are located in Portland, OR, USA, Santiago, Chile, and Bangkok, Thailand.
See also
List of companies based in Oregon
References
External links
Biosphere Plastic - official website
Companies based in Portland, Oregon
Bioplastics
Biodegradable materials | BioSphere Plastic | [
"Physics",
"Chemistry"
] | 181 | [
"Biodegradation",
"Biodegradable materials",
"Materials",
"Matter"
] |
37,508,586 | https://en.wikipedia.org/wiki/Langmuir%20states | In quantum mechanics Langmuir states are certain quantum states of Helium that in the classical limit correspond to two parallel circular orbits of electrons one above the other and with the nucleus in between. They are constructed in analogy to circular states of Hydrogen when the
electron has the maximum angular momentum and moves on the circle. Because of the magic value of the Helium nucleus charge 2e the triangle nucleus-electron-electron which sweeps the configuration space during the circular motion is equilateral.
References
Quantum mechanics | Langmuir states | [
"Physics"
] | 100 | [
"Theoretical physics",
"Quantum mechanics"
] |
28,075,395 | https://en.wikipedia.org/wiki/Quantum%20topology | Quantum topology is a branch of mathematics that connects quantum mechanics with low-dimensional topology.
Dirac notation provides a viewpoint of quantum mechanics which becomes amplified into a framework that can embrace the amplitudes associated with topological spaces and the related embedding of one space within another such as knots and links in three-dimensional space. This bra–ket notation of kets and bras can be generalised, becoming maps of vector spaces associated with topological spaces that allow tensor products.
Topological entanglement involving linking and braiding can be intuitively related to quantum entanglement.
See also
Topological quantum field theory
Reshetikhin–Turaev invariant
References
External links
Quantum Topology, a journal published by EMS Publishing House
Quantum mechanics
Topology | Quantum topology | [
"Physics",
"Mathematics"
] | 148 | [
"Theoretical physics",
"Quantum mechanics",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Quantum physics stubs"
] |
28,076,584 | https://en.wikipedia.org/wiki/Marine%20Technology%20Society | The Marine Technology Society (MTS) is a professional society that serves an international community of approximately 2,000 ocean engineers, technologists, policy-makers, and educators. The goal of the society, which was founded in 1963, is to promote awareness, understanding, advancement and application of marine technology. The association is based in Washington, District of Columbia, United States.
Background
The society consists of 29 technical disciplines and presently has 17 sections, including overseas sections in Japan, Korea and Norway. In addition, MTS has 23 student sections at colleges and universities with related fields of study.
The flagship publication of the society is the MTS Journal. The journal is published 4 times annually and primarily features themed issues consisting of invited papers. The journal has a current Scopus Cite Score of 1.6.
MTS sponsors several conferences of note, including the OCEANS Conference (co-sponsored with IEEE/OES), Underwater Intervention (co-sponsored with ADCI), the Dynamic Positioning Conference, the biennial Buoy Workshop (co-sponsored with the Office of Naval Research), and the hot-topic workshop series TechSurge.
In 1969 the group held its annual convention in Miami Beach. The convention was addressed by Spiro Agnew, who was then Vice President of the United States.
In 1993 the laser line scan, a U.S. Navy photography secret, made its debut at the society sponsored trade show in New Orleans.
In 2023 the MATE Remotely Operated Vehicle (ROV) Competition joined MTS as a fully integrated program within the Society. For more than 20 years, the MATE ROV Competition has given children, youth, and young adults an inclusive platform to think critically about real-world problems in a way that strengthens communication, builds peer-to-peer community, and inspires entrepreneurship. Since its inauguration, the annual competition has reached more than 20,000 students in 46 regions around the world.
References
External links
Engineering societies based in the United States
Marine engineering organizations
Organizations based in Maryland
Oceanography | Marine Technology Society | [
"Physics",
"Engineering",
"Environmental_science"
] | 410 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Marine engineering",
"Marine engineering organizations"
] |
28,081,325 | https://en.wikipedia.org/wiki/Carbon%20carousel | A carbon carousel is made of CO2-sorbent panels that are spun around in the air to remove carbon dioxide. After spinning the panels are placed in a regeneration chamber where the carbon dioxide is desorbed.
References
Carbon dioxide
Carbon capture and storage | Carbon carousel | [
"Chemistry",
"Engineering"
] | 53 | [
"Greenhouse gases",
"Geoengineering",
"Carbon capture and storage",
"Carbon dioxide"
] |
28,081,441 | https://en.wikipedia.org/wiki/%CE%94-Carotene | δ-Carotene (delta-carotene) or ε,ψ-carotene is a form of carotene with an ε-ring at one end, and the other uncyclized, labelled ψ (psi). It is an intermediate synthesis product in some photosynthetic plants between lycopene and α-carotene (β,ε-carotene) or ε-carotene (ε,ε-carotene). δ-Carotene is fat soluble. Delta-carotene contains an alpha-ionone instead of a beta-ionone ring; this conversion is carried out by the gene Del which shifts the position of the double bond in the ring structure. The formation delta-carotene under the presence of the Del gene is sensitive to high temperatures.
References
Carotenoids
Tetraterpenes
Cyclohexenes | Δ-Carotene | [
"Chemistry",
"Biology"
] | 187 | [
"Biomarkers",
"Biotechnology stubs",
"Carotenoids",
"Biochemistry stubs",
"Biochemistry"
] |
44,921,963 | https://en.wikipedia.org/wiki/Second%20covariant%20derivative | In the math branches of differential geometry and vector calculus, the second covariant derivative, or the second order covariant derivative, of a vector field is the derivative of its derivative with respect to another two tangent vector fields.
Definition
Formally, given a (pseudo-)Riemannian manifold (M, g) associated with a vector bundle E → M, let ∇ denote the Levi-Civita connection given by the metric g, and denote by Γ(E) the space of the smooth sections of the total space E. Denote by T*M the cotangent bundle of M. Then the second covariant derivative can be defined as the composition of the two ∇s as follows:
∇² : Γ(E) → Γ(T*M ⊗ E) → Γ(T*M ⊗ T*M ⊗ E).
For example, given vector fields u, v, w, a second covariant derivative can be written as
(∇²_{u,v} w)^a = u^c v^b ∇_c ∇_b w^a
by using abstract index notation. It is also straightforward to verify that
∇_u(∇_v w) = ∇²_{u,v} w + ∇_{∇_u v} w.
Thus
∇²_{u,v} w = ∇_u(∇_v w) − ∇_{∇_u v} w.
When the torsion tensor is zero, so that [u, v] = ∇_u v − ∇_v u, we may use this fact to write the Riemann curvature tensor as
R(u, v) w = ∇²_{u,v} w − ∇²_{v,u} w.
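In a local coordinate basis the same objects can be written out with Christoffel symbols; the following standard expressions are supplied as a sketch and are not taken from the original article.

```latex
% First and second covariant derivatives of a vector field w in coordinates:
\nabla_b w^c = \partial_b w^c + \Gamma^c{}_{bd}\, w^d ,
\qquad
\nabla_a \nabla_b w^c
  = \partial_a \left( \nabla_b w^c \right)
  + \Gamma^c{}_{ad}\, \nabla_b w^d
  - \Gamma^d{}_{ab}\, \nabla_d w^c .
```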
Similarly, one may also obtain the second covariant derivative of a function f as
∇²_{u,v} f = u(v(f)) − (∇_u v)(f).
Again, for the torsion-free Levi-Civita connection, and for any vector fields u and v, when we feed the function f into both sides of
∇_u v − ∇_v u = [u, v],
we find
(∇_u v)(f) − (∇_v u)(f) = [u, v](f) = u(v(f)) − v(u(f)).
This can be rewritten as
u(v(f)) − (∇_u v)(f) = v(u(f)) − (∇_v u)(f),
so we have
∇²_{u,v} f = ∇²_{v,u} f.
That is, the value of the second covariant derivative of a function is independent of the order of taking derivatives.
Notes
Tensors in general relativity
Riemannian geometry | Second covariant derivative | [
"Physics",
"Engineering"
] | 304 | [
"Tensors",
"Physical quantities",
"Tensor physical quantities",
"Tensors in general relativity",
"Relativity stubs",
"Theory of relativity"
] |
33,428,711 | https://en.wikipedia.org/wiki/Analysis%20Situs%20%28book%29 | Analysis Situs is a book by the Princeton mathematician Oswald Veblen, published in 1922. It is based on his 1916 lectures at the Cambridge Colloquium of the American Mathematical Society. The book, which went into a second edition in 1931, was the first English-language textbook on topology, and served for many years as the standard reference for the domain. Its contents were based on the work of Henri Poincaré as well as Veblen's own work with his former student and colleague, James Alexander.
Among the many innovations in the book were the first definition of a topological manifold and systematic treatments of Betti numbers, torsion, the fundamental group, and the topological classification problem.
References
Bibliography
1922 non-fiction books
Mathematics textbooks
Topology | Analysis Situs (book) | [
"Physics",
"Mathematics"
] | 157 | [
"Topology stubs",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
33,429,507 | https://en.wikipedia.org/wiki/Handan%E2%80%93Changzhi%20railway | The Handan–Changzhi railway or Hanchang railway (), is a major railroad in northern China for the transportation of coal. The railway is named after its terminal cities, Handan in Hebei Province and Changzhi in Shanxi Province. The line is in length and was built from 1971 to 1983.
Route
The Handan–Changzhi railway traverses the Taihang Mountains and has 19 tunnels. The line is used to carry coal from Shanxi to eastern China and for export overseas via ports in Shandong. The original carrying capacity of the line was raised from 13.9 million tons to 19 million in 1998 and again to 35 million tons in 2007.
Rail connections
Changzhi: Taiyuan–Jiaozuo railway, Shanxi–Henan–Shandong railway
Handan: Beijing–Guangzhou railway, Handan–Jinan railway
See also
List of railways in China
References
Railway lines in China
Mining railways
Rail transport in Hebei
Rail transport in Shanxi
Railway lines opened in 1984
Coal in China | Handan–Changzhi railway | [
"Engineering"
] | 201 | [
"Mining equipment",
"Mining railways"
] |
33,429,735 | https://en.wikipedia.org/wiki/Software%20reliability%20testing | Software reliability testing is a field of software testing that relates to testing software's ability to function, under given environmental conditions, for a particular amount of time. Software reliability testing helps discover many problems in software design and functionality.
Overview
Software reliability is the probability that software will work properly in a specified environment and for a given amount of time. Using the following formula, the probability of failure is calculated by testing a sample of all available input states:

Probability = Number of failing cases / Total number of cases under consideration

A related measure is defined by Mean Time Between Failures (MTBF) = Mean Time To Failure (MTTF) + Mean Time To Repair (MTTR).

The set of all possible input states is called the input space. To find the reliability of software, we need to find the output space from the given input space and the software.
For reliability testing, data is gathered from various stages of development, such as the design and operating stages. The tests are limited by restrictions such as cost and time. Statistical samples are obtained from the software products to test for the reliability of the software. Once sufficient data is gathered, statistical studies are done. Time constraints are handled by applying fixed dates or deadlines for the tests to be performed. After this phase, the design of the software is frozen and the actual implementation phase starts. As there are restrictions on cost and time, the data is gathered carefully so that each data point has some purpose and gets its expected precision.
To achieve satisfactory results from reliability testing, one must take care of certain reliability characteristics.
For example, Mean Time to Failure (MTTF) is measured in terms of three factors:
operating time,
number of on off cycles,
and calendar time.
If the restrictions are on operating time, or if the focus is on the first factor (operating time) for improvement, then one can apply compressed-time acceleration to reduce the testing time. If the focus is on calendar time (i.e. if there are predefined deadlines), then intensified stress testing is used.
Measurement
Software availability is measured in terms of mean time between failures (MTBF).
MTBF consists of mean time to failure (MTTF) and mean time to repair (MTTR). MTTF is the difference of time between two consecutive failures and MTTR is the time required to fix the failure.
Steady state availability represents the percentage of time the software is operational.
For example, if MTTF = 1000 hours for a software product, then the software should work for 1000 hours of continuous operation.
For the same software, if MTTR = 2 hours, then the steady state availability = MTTF / (MTTF + MTTR) = 1000 / 1002 ≈ 99.8%.
Accordingly, software reliability is measured in terms of the failure rate (λ), where λ = 1 / MTTF.
Reliability for software is a number between 0 and 1. Reliability increases when errors or bugs from the program are removed. There are many software reliability growth models (SRGM) (see List of software reliability models), including logarithmic, polynomial, exponential, power, and S-shaped models.
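As a small illustration (a minimal sketch using the example values above, not a prescribed procedure), the availability and failure-rate formulas can be computed directly:

```python
# MTBF = MTTF + MTTR, steady-state availability = MTTF / MTBF, lambda = 1 / MTTF.
mttf_hours = 1000.0   # mean time to failure (example from the text)
mttr_hours = 2.0      # mean time to repair (example from the text)

mtbf_hours = mttf_hours + mttr_hours
availability = mttf_hours / mtbf_hours   # fraction of time operational
failure_rate = 1.0 / mttf_hours          # failures per hour

print(f"MTBF = {mtbf_hours} h")
print(f"Steady-state availability = {availability:.4f} ({availability:.2%})")
print(f"Failure rate = {failure_rate:.4f} failures/hour")
```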
Objectives of reliability testing
The main objective of reliability testing is to test software performance under given conditions, without any type of corrective measure, using known fixed procedures, and considering the software's specifications.
Secondary objectives
The secondary objectives of reliability testing are:
To find the perceptual structure of repeating failures.
To find the number of failures occurring in a specified amount of time.
To find the mean life of the software.
To discover the main cause of failure.
Checking the performance of different units of software after taking preventive actions.
Points for defining objectives
Some restrictions on creating objectives include:
Behaviour of the software should be defined in given conditions.
The objective should be feasible.
Time constraints should be provided.
Importance of reliability testing
The application of computer software has crossed into many different fields, with software being an essential part of industrial, commercial and military systems. Because of its many applications in safety-critical systems, software reliability is now an important research area. Although software engineering became the fastest developing technology of the last century, there is no complete, scientific, quantitative measure to assess it. Software reliability testing is being used as a tool to help assess these software engineering technologies.
To improve the performance of software product and software development process, a thorough assessment of reliability is required. Testing software reliability is important because it is of great use for software managers and practitioners.
To verify the reliability of the software via testing:
A sufficient number of test cases should be executed for a sufficient amount of time to get a reasonable estimate of how long the software will execute without failure. Long duration tests are needed to identify defects (such as memory leakage and buffer overflows) that take time to cause a fault or failure to occur.
The distribution of test cases should match the actual or planned operational profile of the software. The more often a function or subset of the software is executed, the greater the percentage of test cases that should be allocated to that function or subset.
Types of reliability testing
Software reliability testing includes feature testing, load testing, and regression testing.
Feature test
Feature testing checks the features provided by the software and is conducted in the following steps:
Each operation in the software is executed once.
Interaction between the two operations is reduced and
Each operation is checked for its proper execution.
The feature test is followed by the load test.
Load test
This test is conducted to check the performance of the software under maximum work load. Any software performs better up to some amount of workload, after which the response time of the software starts degrading. For example, a web site can be tested to see how many simultaneous users it can support without performance degradation. This testing mainly helps for Databases and Application servers. Load testing also requires software performance testing, which checks how well some software performs under workload.
Regression test
Regression testing is used to check if any new bugs have been introduced through previous bug fixes. Regression testing is conducted after every change or update in the software features. This testing is periodic, depending on the length and features of the software.
Test planning
Reliability testing is more costly compared to other types of testing. Thus while doing reliability testing, proper management and planning is required. This plan includes testing process to be implemented, data about its environment, test schedule, test points, etc.
Problems in designing test cases
Some common problems that occur when designing test cases include:
Test cases can be designed simply by selecting only valid input values for each field in the software. When changes are made to a particular module, the previously chosen values may not actually test the new features introduced after the older version of the software.
There may be some critical runs in the software which are not handled by any existing test case. Therefore, it is necessary to ensure that all possible types of test cases are considered through careful test case selection.
Reliability enhancement through testing
Studies during the development and design of software help improve the reliability of a product. Reliability testing is essentially performed to eliminate the failure modes of the software. Life testing of the product should always be done after the design part is finished, or at least after the complete design is finalized.
Failure analysis and design improvement are achieved through testing.
Reliability growth testing
This testing is used to check new prototypes of the software which are initially supposed to fail frequently. The causes of failure are detected and actions are taken to reduce defects.
Suppose T is the total accumulated test time for the prototype and n(T) is the number of failures from the start up to time T. On a log-log scale, the graph of the cumulative failure rate n(T)/T against T is a straight line:

ln(n(T)/T) = b − α ln(T)   (eq. 1)

This graph is called a Duane plot, and from it one can estimate how much reliability can be gained after further cycles of test and fix. Solving eq. 1 for n(T) gives

n(T) = K·T^(1−α)

where K is e^b. If the value of α in the equation is zero, the failure rate n(T)/T stays constant and reliability cannot be improved, no matter how long the prototype is tested. For α greater than zero, the cumulative failure rate falls as the accumulated test time T increases, which indicates that the test-and-fix cycles are improving reliability.
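The straight-line fit in eq. 1 can be sketched as follows (a minimal sketch with synthetic failure data; the numbers are illustrative assumptions, not measurements):

```python
# Duane plot fit: ln(n/T) = b - alpha*ln(T), so n(T) = K * T**(1 - alpha), K = e^b.
import numpy as np

# Synthetic cumulative test times (hours) and cumulative failure counts (assumed)
T = np.array([10.0, 50.0, 100.0, 500.0, 1000.0, 5000.0])
n = np.array([8, 25, 40, 120, 190, 550], dtype=float)

log_T = np.log(T)
log_rate = np.log(n / T)

# Least-squares straight line: log_rate = b - alpha * log_T
slope, b = np.polyfit(log_T, log_rate, 1)
alpha = -slope
K = np.exp(b)

print(f"alpha = {alpha:.3f} (> 0 means reliability growth)")
print(f"n(T) ~ {K:.2f} * T**{1 - alpha:.3f}")
```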
Designing test cases for current release
If new features are being added to the current version of software, then writing a test case for that operation is done differently.
First plan how many new test cases are to be written for current version.
If the new feature is part of any existing feature, then share the test cases of new and existing features among them.
Finally combine all test cases from current version and previous one and record all the results.
There is a predefined rule to calculate the count of new test cases for the software. If N is the probability of occurrence of new operations for the new release of the software, R is the probability of occurrence of used operations in the current release, and T is the number of all previously used test cases, then the required number of new test cases can be calculated from N, R and T.
Reliability evaluation based on operational testing
The method of operational testing is used to test the reliability of software. Here one checks how the software works in its relevant operational environment. The main problem with this type of evaluation is constructing such an operational environment. This type of simulation is practiced in some industries, such as the nuclear industry and aviation. Predicting future reliability is a part of reliability evaluation.
There are two techniques used for operational testing to test the reliability of software:
Steady state reliability estimation In this case, we use feedback from delivered software products. Depending on those results, we can predict the future reliability for the next version of product. This is similar to sample testing for physical products.
Reliability growth based prediction This method uses documentation of the testing procedure. For example, consider a developed software and that we are creating different new versions of that software. We consider data on the testing of each version and based on the observed trend, we predict the reliability of the new version of software.
Reliability growth assessment and prediction
In the assessment and prediction of software reliability, we use the reliability growth model. During operation of the software, any data about its failure is stored in statistical form and is given as input to the reliability growth model. Using this data, the reliability growth model can evaluate the reliability of software.
Much data about reliability growth model is available with probability models claiming to represent failure process. But there is no model which is best suited for all conditions. Therefore, we must choose a model based on the appropriate conditions.
Reliability estimation based on failure-free working
In this case, the reliability of the software is estimated with assumptions like the following:
If a defect is found, it will be fixed by someone.
Fixing the defect will not have any effect on the reliability of the software.
Each fix in the software is accurate.
See also
Software testing
Load testing
Regression testing
Reliability engineering
List of software reliability models
References
External links
Mean Time Between Failure
Software Life Testing
Software testing | Software reliability testing | [
"Engineering"
] | 2,115 | [
"Software engineering",
"Software testing"
] |
33,431,450 | https://en.wikipedia.org/wiki/Modern%20searches%20for%20Lorentz%20violation | Modern searches for Lorentz violation are scientific studies that look for deviations from Lorentz invariance or symmetry, a set of fundamental frameworks that underpin modern science and fundamental physics in particular. These studies try to determine whether violations or exceptions might exist for well-known physical laws such as special relativity and CPT symmetry, as predicted by some variations of quantum gravity, string theory, and some alternatives to general relativity.
Lorentz violations concern the fundamental predictions of special relativity, such as the principle of relativity, the constancy of the speed of light in all inertial frames of reference, and time dilation, as well as the predictions of the standard model of particle physics. To assess and predict possible violations, test theories of special relativity and effective field theories (EFT) such as the Standard-Model Extension (SME) have been invented. These models introduce Lorentz and CPT violations through spontaneous symmetry breaking caused by hypothetical background fields, resulting in some sort of preferred frame effects. This could lead, for instance, to modifications of the dispersion relation, causing differences between the maximal attainable speed of matter and the speed of light.
Both terrestrial and astronomical experiments have been carried out, and new experimental techniques have been introduced. No Lorentz violations have been measured thus far, and exceptions in which positive results were reported have been refuted or lack further confirmations. For discussions of many experiments, see Mattingly (2005). For a detailed list of results of recent experimental searches, see Kostelecký and Russell (2008–2013). For a recent overview and history of Lorentz violating models, see Liberati (2013).
Assessing Lorentz invariance violations
Early models assessing the possibility of slight deviations from Lorentz invariance have been published between the 1960s and the 1990s. In addition, a series of test theories of special relativity and effective field theories (EFT) for the evaluation and assessment of many experiments have been developed, including:
The parameterized post-Newtonian formalism is widely used as a test theory for general relativity and alternatives to general relativity, and can also be used to describe Lorentz violating preferred frame effects.
The Robertson-Mansouri-Sexl framework (RMS) contains three parameters, indicating deviations in the speed of light with respect to a preferred frame of reference.
The c2 framework (a special case of the more general THεμ framework) introduces a modified dispersion relation and describes Lorentz violations in terms of a discrepancy between the speed of light and the maximal attainable speed of matter, in presence of a preferred frame.
Doubly special relativity (DSR) preserves the Planck length as an invariant minimum length-scale, yet without having a preferred reference frame.
Very special relativity describes space-time symmetries that are certain proper subgroups of the Poincaré group. It was shown that special relativity is only consistent with this scheme in the context of quantum field theory or CP conservation.
Noncommutative geometry (in connection with Noncommutative quantum field theory or the Noncommutative standard model) might lead to Lorentz violations.
Lorentz violations are also discussed in relation to Alternatives to general relativity such as Loop quantum gravity, Emergent gravity, Einstein aether theory, Hořava–Lifshitz gravity.
However, the Standard-Model Extension (SME) in which Lorentz violating effects are introduced by spontaneous symmetry breaking, is used for most modern analyses of experimental results. It was introduced by Kostelecký and colleagues in 1997 and the following years, containing all possible Lorentz and CPT violating coefficients not violating gauge symmetry. It includes not only special relativity, but the standard model and general relativity as well. Models whose parameters can be related to SME and thus can be seen as special cases of it, include the older RMS and c2 models, the Coleman-Glashow model confining the SME coefficients to dimension 4 operators and rotation invariance, and the Gambini-Pullin model or the Myers-Pospelov model corresponding to dimension 5 or higher operators of SME.
Speed of light
Terrestrial
Many terrestrial experiments have been conducted, mostly with optical resonators or in particle accelerators, by which deviations from the isotropy of the speed of light are tested. Anisotropy parameters are given, for instance, by the Robertson-Mansouri-Sexl test theory (RMS). This allows for distinction between the relevant orientation and velocity dependent parameters. In modern variants of the Michelson–Morley experiment, the dependence of light speed on the orientation of the apparatus and the relation of longitudinal and transverse lengths of bodies in motion is analyzed. Also modern variants of the Kennedy–Thorndike experiment, by which the dependence of light speed on the velocity of the apparatus and the relation of time dilation and length contraction is analyzed, have been conducted; the most recently reached limit for the Kennedy–Thorndike test is 7×10−12. The current precision, by which an anisotropy of the speed of light can be excluded, is at the 10−17 level. This is related to the relative velocity between the Solar System and the rest frame of the cosmic microwave background radiation of ~368 km/s (see also Resonator Michelson–Morley experiments).
In addition, the Standard-Model Extension (SME) can be used to obtain a larger number of isotropy coefficients in the photon sector. It uses the even- and odd-parity coefficients (3×3 matrices) κ̃e−, κ̃o+ and κ̃tr. They can be interpreted as follows: κ̃e− represents anisotropic shifts in the two-way (forward and backwards) speed of light, κ̃o+ represents anisotropic differences in the one-way speed of counterpropagating beams along an axis, and κ̃tr represents isotropic (orientation-independent) shifts in the one-way phase velocity of light. It was shown that such variations in the speed of light can be removed by suitable coordinate transformations and field redefinitions, though the corresponding Lorentz violations cannot be removed, because such redefinitions only transfer those violations from the photon sector to the matter sector of SME. While ordinary symmetric optical resonators are suitable for testing even-parity effects and provide only tiny constraints on odd-parity effects, asymmetric resonators have also been built for the detection of odd-parity effects. For additional coefficients in the photon sector leading to birefringence of light in vacuum, which cannot be redefined as the other photon effects, see the section on vacuum birefringence below.
Another type of test of the related one-way light speed isotropy in combination with the electron sector of the SME was conducted by Bocquet et al. (2010). They searched for fluctuations in the 3-momentum of photons during Earth's rotation, by measuring the Compton scattering of ultrarelativistic electrons on monochromatic laser photons in the frame of the cosmic microwave background radiation, as originally suggested by Vahe Gurzadyan and Amur Margarian (for details on that 'Compton Edge' method and its analysis, see the references).
Solar System
Besides terrestrial tests also astrometric tests using Lunar Laser Ranging (LLR), i.e. sending laser signals from Earth to Moon and back, have been conducted. They are ordinarily used to test general relativity and are evaluated using the Parameterized post-Newtonian formalism. However, since these measurements are based on the assumption that the speed of light is constant, they can also be used as tests of special relativity by analyzing potential distance and orbit oscillations. For instance, Zoltán Lajos Bay and White (1981) demonstrated the empirical foundations of the Lorentz group and thus special relativity by analyzing the planetary radar and LLR data.
In addition to the terrestrial Kennedy–Thorndike experiments mentioned above, Müller & Soffel (1995) and Müller et al. (1999) tested the RMS velocity dependence parameter by searching for anomalous distance oscillations using LLR. Since time dilation is already confirmed to high precision, a positive result would prove that light speed depends on the observer's velocity and length contraction is direction dependent (like in the other Kennedy–Thorndike experiments). However, no anomalous distance oscillations have been observed, with a RMS velocity dependence limit of , comparable to that of Hils and Hall (1990, see table above on the right).
Vacuum dispersion
Another effect often discussed in connection with quantum gravity (QG) is the possibility of dispersion of light in vacuum (i.e. the dependence of light speed on photon energy), due to Lorentz-violating dispersion relations. This effect should be strong at energy levels comparable to, or beyond, the Planck energy of about 1.22×1019 GeV, while being extraordinarily weak at energies accessible in the laboratory or observed in astrophysical objects. In an attempt to observe a weak dependence of speed on energy, light from distant astrophysical sources such as gamma ray bursts and distant galaxies has been examined in many experiments. In particular, the Fermi-LAT group was able to show that no energy dependence and thus no observable Lorentz violation occurs in the photon sector even beyond the Planck energy, which excludes a large class of Lorentz-violating quantum gravity models.
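To give a sense of scale, the arrival-time delay searched for in such tests can be estimated as Δt ≈ (E_photon/E_QG)·(D/c) for a linear modification of the dispersion relation. The following is a rough order-of-magnitude sketch (the photon energy and source distance are illustrative assumptions, and cosmological corrections are ignored):

```python
# Order-of-magnitude photon time delay for a linear Lorentz-violating dispersion.
E_photon_GeV = 30.0    # high-energy gamma-ray photon (illustrative)
E_QG_GeV = 1.22e19     # Planck energy scale
D_meters = 1.0e25      # gigaparsec-scale source distance (illustrative)
c = 3.0e8              # speed of light, m/s

delta_t = (E_photon_GeV / E_QG_GeV) * (D_meters / c)
print(f"Expected delay ~ {delta_t:.3e} s")   # ~ 0.1 s scale
```

A delay of this order is resolvable against the millisecond-scale variability of gamma ray bursts, which is why such sources can constrain energy scales up to and beyond the Planck energy.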
Vacuum birefringence
Lorentz violating dispersion relations due to the presence of an anisotropic space might also lead to vacuum birefringence and parity violations. For instance, the polarization plane of photons might rotate due to velocity differences between left- and right-handed photons. In particular, gamma ray bursts, galactic radiation, and the cosmic microwave background radiation are examined. The SME coefficients k(3)AF and k(5) for Lorentz violation are given, where 3 and 5 denote the mass dimensions employed. The latter corresponds to ξ in the EFT of Myers and Pospelov via k(5) = ξ/mPl, with mPl being the Planck mass.
Maximal attainable speed
Threshold constraints
Lorentz violations could lead to differences between the speed of light and the limiting or maximal attainable speed (MAS) of any particle, whereas in special relativity the speeds should be the same. One possibility is to investigate otherwise forbidden effects at threshold energy in connection with particles having a charge structure (protons, electrons, neutrinos). This is because the dispersion relation is assumed to be modified in Lorentz violating EFT models such as SME. Depending on which of these particles travels faster or slower than the speed of light, effects such as the following can occur:
Photon decay at superluminal speed. These (hypothetical) high-energy photons would quickly decay into other particles, which means that high energy light cannot propagate over long distances. So the mere existence of high energy light from astronomic sources constrains possible deviations from the limiting velocity.
Vacuum Cherenkov radiation at superluminal speed of any particle (protons, electrons, neutrinos) having a charge structure. In this case, emission of Bremsstrahlung can occur, until the particle falls below threshold and subluminal speed is reached again. This is similar to the known Cherenkov radiation in media, in which particles are traveling faster than the phase velocity of light in that medium. Deviations from the limiting velocity can be constrained by observing high energy particles of distant astronomic sources that reach Earth.
The rate of synchrotron radiation could be modified, if the limiting velocity between charged particles and photons is different.
The Greisen–Zatsepin–Kuzmin limit could be evaded by Lorentz violating effects. However, recent measurements indicate that this limit really exists.
Since astronomic measurements also contain additional assumptions – like the unknown conditions at the emission or along the path traversed by the particles, or the nature of the particles –, terrestrial measurements provide results of greater clarity, even though the bounds are wider (the following bounds describe maximal deviations between the speed of light and the limiting velocity of matter):
Clock comparison and spin coupling
By this kind of spectroscopy experiments – sometimes called Hughes–Drever experiments as well – violations of Lorentz invariance in the interactions of protons and neutrons are tested by studying the energy levels of those nucleons in order to find anisotropies in their frequencies ("clocks"). Using spin-polarized torsion balances, also anisotropies with respect to electrons can be examined. Methods used mostly focus on vector spin interactions and tensor interactions, and are often described in CPT odd/even SME terms (in particular parameters of bμ and cμν). Such experiments are currently the most sensitive terrestrial ones, because the precision by which Lorentz violations can be excluded lies at the 10−33 GeV level.
These tests can be used to constrain deviations between the maximal attainable speed of matter and the speed of light, in particular with respect to the parameters of cμν that are also used in the evaluations of the threshold effects mentioned above.
Time dilation
The classic time dilation experiments such as the Ives–Stilwell experiment, the Moessbauer rotor experiments, and the time dilation of moving particles, have been enhanced by modernized equipment. For example, the Doppler shift of lithium ions traveling at high speeds is evaluated by using saturated spectroscopy in heavy ion storage rings. For more information, see Modern Ives–Stilwell experiments.
The current precision with which time dilation is measured (using the RMS test theory) is at the ~10−8 level. It was shown that Ives-Stilwell type experiments are also sensitive to the isotropic light speed coefficient of the SME, as introduced above. Chou et al. (2010) even managed to measure a frequency shift of ~10−16 due to time dilation, namely at everyday speeds such as 36 km/h.
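That figure follows from the low-speed expansion of the time-dilation factor, γ − 1 ≈ v²/(2c²). A quick sketch (standard special-relativity arithmetic, using the speed quoted in the text):

```python
# Fractional frequency shift from time dilation at an everyday speed.
v = 36.0 / 3.6        # 36 km/h expressed in m/s
c = 2.998e8           # speed of light, m/s

fractional_shift = v**2 / (2.0 * c**2)
print(f"Fractional frequency shift ~ {fractional_shift:.2e}")  # ~ 5.6e-16
```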
CPT and antimatter tests
Another fundamental symmetry of nature is CPT symmetry. It was shown that CPT violations lead to Lorentz violations in quantum field theory (even though there are nonlocal exceptions). CPT symmetry requires, for instance, the equality of masses and the equality of decay rates between matter and antimatter.
Modern tests by which CPT symmetry has been confirmed are mainly conducted in the neutral meson sector. In large particle accelerators, direct measurements of mass differences between top- and antitop-quarks have been conducted as well.
Using SME, also additional consequences of CPT violation in the neutral meson sector can be formulated. Other SME related CPT tests have been performed as well:
Using Penning traps in which individual charged particles and their counterparts are trapped, Gabrielse et al. (1999) examined cyclotron frequencies in proton-antiproton measurements, and couldn't find any deviation down to 9·10−11.
Hans Dehmelt et al. tested the anomaly frequency, which plays a fundamental role in the measurement of the electron's gyromagnetic ratio. They searched for sidereal variations, and differences between electrons and positrons as well. Eventually they found no deviations, thereby establishing bounds of 10−24 GeV.
Hughes et al. (2001) examined the spectrum of muons for sidereal signals, and found no Lorentz violation down to 10−23 GeV.
The "Muon g-2" collaboration of the Brookhaven National Laboratory searched for deviations in the anomaly frequency of muons and anti-muons, and for sidereal variations under consideration of Earth's orientation. Also here, no Lorentz violations could be found, with a precision of 10−24 GeV.
Other particles and interactions
Third generation particles have been examined for potential Lorentz violations using SME. For instance, Altschul (2007) placed upper limits on Lorentz violation of the tau of 10−8, by searching for anomalous absorption of high energy astrophysical radiation. In the BaBar experiment (2007), the D0 experiment (2015), and the LHCb experiment (2016), searches have been made for sidereal variations during Earth's rotation using B mesons (thus bottom quarks) and their antiparticles. No Lorentz or CPT violating signals were found, with upper limits in the range 10−15 − 10−14 GeV.
Also top quark pairs have been examined in the D0 experiment (2012). They showed that the cross section production of these pairs doesn't depend on sidereal time during Earth's rotation.
Lorentz violation bounds on Bhabha scattering have been given by Charneski et al. (2012). They showed that differential cross sections for the vector and axial couplings in QED become direction dependent in the presence of Lorentz violation. They found no indication of such an effect, placing upper limits on Lorentz violations of .
Gravitation
The influence of Lorentz violation on gravitational fields and thus general relativity was analyzed as well. The standard framework for such investigations is the Parameterized post-Newtonian formalism (PPN), in which Lorentz violating preferred frame effects are described by the parameters (see the PPN article on observational bounds on these parameters). Lorentz violations are also discussed in relation to Alternatives to general relativity such as Loop quantum gravity, Emergent gravity, Einstein aether theory or Hořava–Lifshitz gravity.
Also SME is suitable to analyze Lorentz violations in the gravitational sector. Bailey and Kostelecky (2006) constrained Lorentz violations down to by analyzing the perihelion shifts of Mercury and Earth, and down to in relation to solar spin precession. Battat et al. (2007) examined Lunar Laser Ranging data and found no oscillatory perturbations in the lunar orbit. Their strongest SME bound excluding Lorentz violation was . Iorio (2012) obtained bounds at the level by examining Keplerian orbital elements of a test particle acted upon by Lorentz-violating gravitomagnetic accelerations. Xie (2012) analyzed the advance of periastron of binary pulsars, setting limits on Lorentz violation at the level.
Neutrino tests
Neutrino oscillations
Although neutrino oscillations have been experimentally confirmed, the theoretical foundations are still controversial, as it can be seen in the discussion related to sterile neutrinos. This makes predictions of possible Lorentz violations very complicated. It is generally assumed that neutrino oscillations require a certain finite mass. However, oscillations could also occur as a consequence of Lorentz violations, so there are speculations as to how much those violations contribute to the mass of the neutrinos.
Additionally, a series of investigations have been published in which a sidereal dependence of the occurrence of neutrino oscillations was tested, which could arise if there were a preferred background field. This, possible CPT violations, and other coefficients of Lorentz violations in the framework of SME have been tested, and bounds at various GeV levels for the validity of Lorentz invariance have been achieved.
Neutrino speed
Since the discovery of neutrino oscillations, it is assumed that their speed is slightly below the speed of light. Direct velocity measurements indicated an upper limit for relative speed differences between light and neutrinos of , see measurements of neutrino speed.
Also indirect constraints on neutrino velocity, on the basis of effective field theories such as SME, can be achieved by searching for threshold effects such as Vacuum Cherenkov radiation. For example, neutrinos should exhibit Bremsstrahlung in the form of electron-positron pair production. Another possibility in the same framework is the investigation of the decay of pions into muons and neutrinos. Superluminal neutrinos would considerably delay those decay processes. The absence of those effects indicate tight limits for velocity differences between light and neutrinos.
Velocity differences between neutrino flavors can be constrained as well. A comparison between muon- and electron-neutrinos by Coleman & Glashow (1998) gave a negative result, with bounds <6.
Reports of alleged Lorentz violations
Open reports
LSND, MiniBooNE
In 2001, the LSND experiment observed a 3.8σ excess of antineutrino interactions in neutrino oscillations, which contradicts the standard model. First results of the more recent MiniBooNE experiment appeared to exclude this data above an energy scale of 450 MeV, but they had checked neutrino interactions, not antineutrino ones. In 2008, however, they reported an excess of electron-like neutrino events between 200 and 475 MeV. And in 2010, when carried out with antineutrinos (as in LSND), the result was in agreement with the LSND result, that is, an excess at the energy scale from 450 to 1250 MeV was observed. Whether those anomalies can be explained by sterile neutrinos, or whether they indicate Lorentz violations, is still discussed and subject to further theoretical and experimental research.
Solved reports
In 2011 the OPERA Collaboration published (in a non-peer reviewed arXiv preprint) the results of neutrino measurements, according to which neutrinos were traveling slightly faster than light. The neutrinos apparently arrived early by ~60 ns. The standard deviation was 6σ, clearly beyond the 5σ limit necessary for a significant result. However, in 2012 it was found that this result was due to measurement errors. The result was consistent with the speed of light; see Faster-than-light neutrino anomaly.
In 2010, MINOS reported differences between the disappearance (and thus the masses) of neutrinos and antineutrinos at the 2.3 sigma level. This would violate CPT symmetry and Lorentz symmetry. However, in 2011 MINOS updated their antineutrino results; after evaluating additional data, they reported that the difference is not as great as initially thought. In 2012, they published a paper in which they reported that the difference is now removed.
In 2007, the MAGIC Collaboration published a paper in which they claimed a possible energy dependence of the speed of photons from the galaxy Markarian 501. They admitted that a possible energy-dependent emission effect could have caused this result as well.
However, the MAGIC result was superseded by the substantially more precise measurements of the Fermi-LAT group, which couldn't find any effect even beyond the Planck energy. For details, see section Dispersion.
In 1997, Nodland & Ralston claimed to have found a rotation of the polarization plane of light coming from distant radio galaxies. This would indicate an anisotropy of space.
This attracted some interest in the media. However, some criticisms immediately appeared, which disputed the interpretation of the data and alluded to errors in the publication.
More recent studies have not found any evidence for this effect (see section on Birefringence).
See also
Tests of special relativity
Phenomenological quantum gravity
References
External links
Kostelecký: Background information on Lorentz and CPT violation
Roberts, Schleif (2006); Relativity FAQ: What is the experimental basis of special relativity?
Physics experiments
Tests of special relativity | Modern searches for Lorentz violation | [
"Physics"
] | 4,830 | [
"Experimental physics",
"Physics experiments"
] |
33,432,143 | https://en.wikipedia.org/wiki/Hydrometeor%20loading | Hydrometeor loading is the induced drag effect on the atmosphere from a falling hydrometeor. When falling at terminal velocity, the value of this drag is equal to g·rh, where g is the acceleration due to gravity and rh is the mixing ratio of the hydrometeors. Hydrometeor loading has a net-negative effect on the atmospheric buoyancy equations. As the hydrometeor falls toward the surface, the surrounding air provides resistance against the acceleration due to gravity, and the air in the vicinity of the hydrometeor becomes denser. The increased weight of the atmosphere can support a present downdraft or even cause a downdraft to occur. Hydrometeor loading can also lead to increased pressure inside a mesohigh in a thunderstorm.
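As a quick numeric illustration (a minimal sketch; the mixing ratio value is an assumption chosen for illustration), the loading term g·rh can be evaluated directly:

```python
# Negative buoyancy contribution from hydrometeor loading: loading = g * r_h.
g = 9.81          # acceleration due to gravity, m/s^2
r_h = 0.01        # hydrometeor mixing ratio, kg water per kg air (assumed)

loading = g * r_h   # effective downward acceleration, m/s^2
print(f"Hydrometeor loading = {loading:.3f} m/s^2 of negative buoyancy")
```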
References
Fluid dynamics
Precipitation | Hydrometeor loading | [
"Chemistry",
"Engineering"
] | 171 | [
"Piping",
"Chemical engineering",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
33,434,229 | https://en.wikipedia.org/wiki/Creep-testing%20machine | A creep-testing machine measures the alteration of a material after it has undergone stresses.
Engineers use creep machines to determine the stability and behaviour of a material when put through ordinary stresses. The machines determine how much strain (deformation) an object undergoes under a given load, so engineers and researchers are able to determine which materials to use.
The device generates a time-dependent creep curve by recording the steady rate of creep against the time it takes for the material to change.
Creep
Creep is the tendency of a material to change form over time after facing high temperature and stress. Creep increases with temperature, and it is more common when a material is exposed to high temperatures for a long time or is near its melting point.
Creep machines are used to understand the creep of materials and determine which type can do the job better, which is important when making and designing materials for everyday uses. They most commonly test the creep of alloys and plastics for the understanding of their properties and advantages of one material's use over another.
Background
The first creep-testing machines were created in 1948 in Britain to test materials for aircraft, to see how they would withstand high altitude, temperature and pressure. The machines were first developed to further calculate and understand the steady rate of creep in materials.
Design
Researchers look to test objects with a creep machine to understand the process of metallurgy and the physical mechanical properties of a metal, test the development of alloys, receive data from the loads that are derived and to find out whether a sample or material is within the boundary of what they are testing. The basic design of a creep machine is the furnace, loading device and support structure.
The main type of creep testing machine is a constant load creep testing machine. The constant load creep machine consists of a loading platform, foundation, fixture devices and furnace. The fixture devices are the grips and pull rods.
Load platform or load hanger is where the object will endure pressure at a constant rate.
Grips hold the material in a certain position. Position is important because if the alignment is off, the machine will deliver inaccurate creep readings.
Dial Gauge is used to measure the strain. It is the object that captures the movement of the object in the machine. The load beam transfers the movement from the grip to the dial gauge.
Heating Chamber is what surrounds the object and maintains the temperature.
Applications
Creep machines are most commonly used in experiments to determine how efficient and stable a material is. The machine is used by students and companies to create a creep curve on how much pressure and stress a material can handle. The machine is able to calculate the stress rate, time and pressure.
Creep testing has three different applications in the industry:
Displacement-limited applications: the size must be precise and there must be little error or tendency to change. This is most commonly found in turbine rotors in jet engines.
Rupture-limited applications: the material must not break, but its dimensions may vary as it undergoes creep. High-pressure tubes are examples.
Stress-relaxation-limited applications: the tension at the beginning becomes more relaxed, and the tension will continue to relax as time goes by, as in cable wires and bolts.
Graphing of creep
Creep is dependent on time, so the curve that the machine generates is a time vs. strain graph. The slope of a creep curve is the creep rate dε/dt. The trend of the curve is an upward slope. The graphs are important for learning the trends of the alloys or materials used, and producing the creep-time graph makes it easier to determine the better material for a specific application.
Stages of creep
There are three stages of creep:
Primary Creep: the initial creep stage where the slope is rising rapidly at first in a short amount of time. After a certain amount of time has elapsed, the slope will begin to slowly decrease from its initial rise.
Steady State Creep: the creep rate is constant so the line on the curve shows a straight line that is a steady rate.
Tertiary Creep: the last stage of creep when the object that is being subjected to pressure is going to reach its breaking point. In this stage, the object's creep continuously increases until the object breaks. The slope of this stage is very steep for most materials.
By examining the three stages above, scientists are able to determine the temperature and interval in which an object will be disturbed once exposed to the load. Some materials have a very small secondary creep state and may go straight from the primary creep to the tertiary creep state. This is dependent on the properties of the material that is being tested. This is important to note because going straight to the tertiary state causes the material to break faster from its form.
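The three stages can be visualised with a simple strain-time model (a minimal sketch; the coefficients and functional forms are illustrative assumptions, not measured material data):

```python
# Three-stage creep curve: decaying primary term, constant-rate secondary
# (steady-state) term, and an accelerating tertiary term toward rupture.
import numpy as np

t = np.linspace(0.0, 100.0, 1001)        # time, hours
eps0 = 0.001                              # instantaneous strain on loading
primary = 0.002 * t**(1.0 / 3.0)          # decelerating primary creep
secondary = 5.0e-5 * t                    # steady-state creep (constant rate)
tertiary = 1.0e-9 * t**4                  # accelerating creep toward rupture

strain = eps0 + primary + secondary + tertiary
creep_rate = np.gradient(strain, t)       # slope of the creep curve, d(eps)/dt

print(f"Minimum (near steady-state) creep rate ~ {creep_rate.min():.2e} per hour")
```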
A linear graph would denote that the material under stress deforms at a constant rate, which would make it harder to track what level of stress an object can handle. This would also mean that the material would not have distinct stages, making an object's breaking point less predictable. This is a disadvantage to scientists and engineers when trying to determine the level of creep the object can handle.
References
Measuring instruments | Creep-testing machine | [
"Technology",
"Engineering"
] | 1,027 | [
"Measuring instruments"
] |
29,432,015 | https://en.wikipedia.org/wiki/Productivity-improving%20technologies | The productivity-improving technologies are the technological innovations that have historically increased productivity.
Productivity is often measured as the ratio of (aggregate) output to (aggregate) input in the production of goods and services. Productivity is increased by lowering the amount of labor, capital, energy or materials that go into producing any given amount of economic goods and services. Increases in productivity are largely responsible for the increase in per capita living standards.
History
Productivity-improving technologies date back to antiquity, with rather slow progress until the late Middle Ages. Important examples of early to medieval European technology include the water wheel, the horse collar, the spinning wheel, the three-field system (after 1500 the four-field system—see crop rotation) and the blast furnace.
Technological progress was aided by literacy and the diffusion of knowledge that accelerated after the spinning wheel spread to Western Europe in the 13th century. The spinning wheel increased the supply of rags used for pulp in paper making, whose technology reached Sicily sometime in the 12th century. Cheap paper was a factor in the development of the movable type printing press, which led to a large increase in the number of books and titles published. Books on science and technology eventually began to appear, such as the mining technical manual De Re Metallica, which was the most important technology book of the 16th century and was the standard chemistry text for the next 180 years.
Francis Bacon (1561–1626) is known for the scientific method, which was a key factor in the scientific revolution. Bacon stated that the technologies that distinguished Europe of his day from the Middle Ages were paper and printing, gunpowder and the magnetic compass, known as the four great inventions, which had origins in China. Other Chinese inventions included the horse collar, cast iron, an improved plow and the seed drill.
Mining and metal refining technologies played a key role in technological progress. Much of our understanding of fundamental chemistry evolved from ore smelting and refining, with De re metallica being the leading chemistry text. Railroads evolved from mine carts and the first steam engines were designed specifically for pumping water from mines. The significance of the blast furnace goes far beyond its capacity for large scale production of cast iron. The blast furnace was the first example of continuous production and is a countercurrent exchange process, various types of which are also used today in chemical and petroleum refining. Hot blast, which recycled what would have otherwise been waste heat, was one of engineering's key technologies. It had the immediate effect of dramatically reducing the energy required to produce pig iron, but reuse of heat was eventually applied to a variety of industries, particularly steam boilers, chemicals, petroleum refining and pulp and paper.
Before the 17th century scientific knowledge tended to stay within the intellectual community, but by this time it became accessible to the public in what is called "open science". Near the beginning of the Industrial Revolution came publication of the Encyclopédie, written by numerous contributors and edited by Denis Diderot and Jean le Rond d'Alembert (1751–72). It contained many articles on science and was the first general encyclopedia to provide in depth coverage on the mechanical arts, but is far more recognized for its presentation of thoughts of the Enlightenment.
Economic historians generally agree that, with certain exceptions such as the steam engine, there is no strong linkage between the 17th century scientific revolution (Descartes, Newton, etc.) and the Industrial Revolution. However, an important mechanism for the transfer of technical knowledge was scientific societies, such as The Royal Society of London for Improving Natural Knowledge, better known as the Royal Society, and the Académie des Sciences. There were also technical colleges, such as the École Polytechnique. Scotland was the first place where science was taught (in the 18th century) and was where Joseph Black discovered heat capacity and latent heat and where his friend James Watt used knowledge of heat to conceive the separate condenser as a means to improve the efficiency of the steam engine.
Probably the first period in history in which economic progress was observable after one generation was during the British Agricultural Revolution in the 18th century. However, technological and economic progress did not proceed at a significant rate until the English Industrial Revolution in the late 18th century, and even then productivity grew about 0.5% annually. High productivity growth began during the late 19th century in what is sometimes called the Second Industrial Revolution. Most major innovations of the Second Industrial Revolution were based on the modern scientific understanding of chemistry, electromagnetic theory and thermodynamics and other principles known to the profession of engineering.
Major sources of productivity growth in economic history
New forms of energy and power
Before the industrial revolution the only sources of power were water, wind and muscle. Most good water power sites (those not requiring massive modern dams) in Europe were developed during the medieval period. In the 1750s John Smeaton, the "father of civil engineering," significantly improved the efficiency of the water wheel by applying scientific principles, thereby adding badly needed power for the Industrial Revolution. However, water wheels remained costly, relatively inefficient and not well suited to very large power dams. Benoît Fourneyron's highly efficient turbine, developed in the late 1820s, eventually replaced waterwheels. Fourneyron-type turbines can operate at 95% efficiency and are used in today's large hydro-power installations. Hydro-power continued to be the leading source of industrial power in the United States until past the mid 19th century because of abundant sites, but steam power overtook water power in the UK decades earlier.
In 1711 a Newcomen steam engine was installed for pumping water from a mine, a job that typically was done by large teams of horses, of which some mines used as many as 500. Animals convert feed to work at an efficiency of about 5%; while this was much more than the less than 1% efficiency of the early Newcomen engine, coal mines had low-quality coal with little market value available. Fossil fuel energy first exceeded all animal and water power in 1870. The role of energy and machines in replacing physical work is discussed in Ayres-Warr (2004, 2009).
While steamboats were used in some areas, as recently as the late 19th century thousands of workers pulled barges. Until the late 19th century most coal and other minerals were mined with picks and shovels, and crops were harvested and grain threshed using animal power or by hand. Heavy loads like 382-pound bales of cotton were handled on hand trucks until the early 20th century.
Excavation was done with shovels until the late 19th century when steam shovels came into use. It was reported that a laborer on the western division of the Erie Canal was expected to dig 5 cubic yards per day in 1860; however, by 1890 only 3-1/2 yards per day were expected. Today's large electric shovels have buckets that can hold 168 cubic meters (220 cubic yards) and consume the power of a city of 100,000.
Dynamite, a safe-to-handle blend of nitroglycerin and diatomaceous earth, was patented in 1867 by Alfred Nobel. Dynamite increased the productivity of mining, tunneling, road building, construction and demolition, and made projects such as the Panama Canal possible.
Steam power was applied to threshing machines in the late 19th century. There were steam engines that moved around on wheels under their own power, which were used for supplying temporary power to stationary farm equipment such as threshing machines. These were called road engines, and Henry Ford, after seeing one as a boy, was inspired to build an automobile. Steam tractors were used but never became popular.
With internal combustion came the first mass-produced tractors (Fordson). Tractors replaced horses and mules for pulling reapers and combine harvesters, but in the 1930s self-powered combines were developed. Output per man-hour in growing wheat rose by a factor of about 10 from the end of World War II until about 1985, largely because of powered machinery, but also because of increased crop yields. Labor productivity in corn growing showed a similar but even larger increase.
See below:Mechanized agriculture.
One of the greatest periods of productivity growth coincided with the electrification of factories which took place between 1900 and 1930 in the U.S. See: Mass production: Factory electrification.
Energy efficiency
In engineering and economic history the most important types of energy efficiency were in the conversion of heat to work, the reuse of heat, and the reduction of friction. There was also a dramatic reduction in the energy required to transmit electronic signals, both voice and data.
Conversion of heat to work
The early Newcomen steam engine was about 0.5% efficient and was improved to slightly over 1% by John Smeaton before Watt's improvements, which increased thermal efficiency to 2%. In 1900, generating a kilowatt-hour of electricity took 7 lbs of coal.
Electrical generation was the sector with the highest productivity growth in the U.S. in the early twentieth century. After the turn of the century, large central stations with high-pressure boilers and efficient steam turbines replaced reciprocating steam engines, and by 1960 it took 0.9 lb of coal per kilowatt-hour. Counting the improvements in mining and transportation, the total improvement was by a factor greater than 10. Today's steam turbines have efficiencies in the 40% range. Most electricity today is produced by thermal power stations using steam turbines.
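The thermal efficiencies implied by those heat rates can be estimated with back-of-the-envelope arithmetic (a minimal sketch; the assumed coal heating value of 12,500 Btu/lb is an illustrative figure, not from the text):

```python
# Thermal efficiency implied by a heat rate in lb of coal per kWh.
BTU_PER_KWH = 3412.0
coal_btu_per_lb = 12500.0     # assumed heating value of steam coal, Btu/lb

for year, lb_per_kwh in [(1900, 7.0), (1960, 0.9)]:
    heat_in = lb_per_kwh * coal_btu_per_lb          # Btu burned per kWh produced
    efficiency = BTU_PER_KWH / heat_in              # useful output / heat input
    print(f"{year}: {lb_per_kwh} lb/kWh -> ~{efficiency:.1%} thermal efficiency")

print(f"Coal per kWh improved by a factor of {7.0 / 0.9:.1f}")
```

Under that assumption the figures work out to roughly 4% efficiency in 1900 and 30% in 1960, consistent with the trajectory from the 2% Watt-era engines toward the 40% range of modern steam turbines.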
The Newcomen and Watt engines operated near atmospheric pressure and used atmospheric pressure, in the form of a vacuum caused by condensing steam, to do work. Higher pressure engines were light enough, and efficient enough to be used for powering ships and locomotives. Multiple expansion (multi-stage) engines were developed in the 1870s and were efficient enough for the first time to allow ships to carry more freight than coal, leading to great increases in international trade.
The first important diesel ship was the MS Selandia launched in 1912. By 1950 one-third of merchant shipping was diesel powered. Today the most efficient prime mover is the two stroke marine diesel engine developed in the 1920s, now ranging in size to over 100,000 horsepower with a thermal efficiency of 50%.
Steam locomotives, which used up to 20% of U.S. coal production, were replaced by diesel locomotives after World War II, saving a great deal of energy and reducing the manpower needed for handling coal, boiler water and mechanical maintenance.
Improvements in steam engine efficiency caused a large increase in the number of steam engines and the amount of coal used, as noted by William Stanley Jevons in The Coal Question. This is called the Jevons paradox.
Electrification and the pre-electric transmission of power
Electricity consumption and economic growth are strongly correlated. Per capita electric consumption correlates almost perfectly with economic development.
Electrification was the first technology to enable long-distance transmission of power with minimal power losses. Electric motors did away with line shafts for distributing power and dramatically increased the productivity of factories. Very large central power stations created economies of scale and were much more efficient at producing power than reciprocating steam engines. Electric motors greatly reduced the capital cost of power compared to steam engines.
The main forms of pre-electric power transmission were line shafts, hydraulic power networks and pneumatic and wire rope systems. Line shafts were the common form of power transmission in factories from the earliest industrial steam engines until factory electrification. Line shafts limited factory arrangement and suffered from high power losses. Hydraulic power came into use in the mid 19th century. It was used extensively in the Bessemer process and for cranes at ports, especially in the UK. London and a few other cities had hydraulic utilities that provided pressurized water for industrial use over a wide area.
Pneumatic power began being used in industry and in mining and tunneling in the last quarter of the 19th century. Common applications included rock drills and jackhammers. Wire ropes supported by large grooved wheels were able to transmit power with low loss over a distance of a few miles or kilometers. Wire rope systems appeared shortly before electrification.
Reuse of heat
Recovery of heat for industrial processes was first widely used as hot blast in blast furnaces to make pig iron in 1828. Later heat reuse included the Siemens-Martin process which was first used for making glass and later for steel with the open hearth furnace. (See: Iron and steel below). Today heat is reused in many basic industries such as chemicals, oil refining and pulp and paper, using a variety of methods such as heat exchangers in many processes. Multiple-effect evaporators use vapor from a high temperature effect to evaporate a lower temperature boiling fluid. In the recovery of kraft pulping chemicals the spent black liquor can be evaporated five or six times by reusing the vapor from one effect to boil the liquor in the preceding effect. Cogeneration is a process that uses high pressure steam to generate electricity and then uses the resulting low pressure steam for process or building heat.
Industrial processes have undergone numerous minor improvements which collectively made significant reductions in energy consumption per unit of production.
Reducing friction
Reducing friction was one of the major reasons for the success of railroads compared to wagons. This was demonstrated on an iron plate covered wooden tramway in 1805 at Croydon, U.K.
“ A good horse on an ordinary turnpike road can draw two thousand pounds, or one ton. A party of gentlemen were invited to witness the experiment, that the superiority of the new road might be established by ocular demonstration. Twelve wagons were loaded with stones, till each wagon weighed three tons, and the wagons were fastened together. A horse was then attached, which drew the wagons with ease, six miles in two hours, having stopped four times, in order to show he had the power of starting, as well as drawing his great load.”
Better lubrication, such as from petroleum oils, reduced friction losses in mills and factories. Anti-friction bearings were developed using alloy steels and precision machining techniques available in the last quarter of the 19th century. Anti-friction bearings were widely used on bicycles by the 1880s. Bearings began being used on line shafts in the decades before factory electrification; the plain bearings used before then were largely responsible for the shafts' high power losses, which were commonly 25 to 30% and often as much as 50%.
Lighting efficiency
Electric lights were far more efficient than oil or gas lighting and did not generate smoke or fumes, nor as much heat. Electric light extended the work day, making factories, businesses and homes more productive, and it was not the fire hazard that oil and gas lighting were.
The efficiency of electric lights has continuously improved from the first incandescent lamps to tungsten filament lights. The fluorescent lamp, which became commercial in the late 1930s, is much more efficient than incandescent lighting. Light-emitting diodes (LEDs) are highly efficient and long lasting.
Infrastructures
The relative energy required to move one tonne-km by various modes of transport is: pipelines = 1 (basis), water 2, rail 3, road 10, air 100.
Roads
Unimproved roads were extremely slow, costly for transport and dangerous. In the 18th century layered gravel began being increasingly used, with the three-layer Macadam coming into use in the early 19th century. These roads were crowned to shed water and had drainage ditches along the sides. The top layer of stones eventually crushed to fines and smoothed the surface somewhat. The lower layers were of small stones that allowed good drainage. Importantly, they offered less resistance to wagon wheels, and horses' hooves did not sink in the mud. Plank roads also came into use in the U.S. in the 1810s–1820s. Improved roads were costly, and although they cut the cost of land transportation in half or more, they were soon overtaken by railroads as the major transportation infrastructure.
Ocean shipping and inland waterways
Sailing ships could transport goods over 3,000 miles for the cost of moving them 30 miles by wagon. A horse that could pull a one-ton wagon could pull a 30-ton barge. During the English or First Industrial Revolution, supplying coal to the furnaces at Manchester was difficult because there were few roads and because of the high cost of using wagons. However, canal barges were known to be workable, and this was demonstrated by building the Bridgewater Canal, which opened in 1761, bringing coal from Worsley to Manchester. The Bridgewater Canal's success started a frenzy of canal building that lasted until the appearance of railroads in the 1830s.
Railroads
Railroads greatly reduced the cost of overland transportation. It is estimated that by 1890 the cost of wagon freight was U.S. 24.5 cents/ton-mile versus 0.875 cents/ton-mile by railroad, for a decline of 96%.
Electric street railways (trams, trolleys or streetcars) were the final phase of railroad building, from the late 1890s through the first two decades of the 20th century. Street railways were displaced by motor buses and automobiles after 1920.
Motorways
Highways with internal combustion powered vehicles completed the mechanization of overland transportation. When trucks appeared c. 1920 the price of transporting farm goods to market or to rail stations was greatly reduced. Motorized highway transport also reduced inventories.
The high productivity growth in the U.S. during the 1930s was in large part due to the highway building program of that decade.
Pipelines
Pipelines are the most energy efficient means of transportation. Iron and steel pipelines came into use during the latter part of the 19th century, but only became a major infrastructure during the 20th century. Centrifugal pumps and centrifugal compressors are efficient means of pumping liquids and natural gas.
Mechanization
Mechanized agriculture
The seed drill is a mechanical device for spacing and planting seed at the appropriate depth. It originated in ancient China before the 1st century BC. Saving seed was extremely important at a time when yields were measured in terms of seeds harvested per seed planted, which was typically between 3 and 5. The seed drill also saved planting labor. Most importantly, the seed drill meant crops were grown in rows, which reduced competition between plants and increased yields. It was reinvented in 16th century Europe based on verbal descriptions and crude drawings brought back from China. Jethro Tull patented a version in 1700; however, it was expensive and unreliable. Reliable seed drills appeared in the mid 19th century.
From the beginning of agriculture, threshing was done by hand with a flail, requiring a great deal of labor. The threshing machine (c. 1794) simplified the operation and allowed it to use animal power. By the 1860s threshing machines were widely introduced and ultimately displaced as much as a quarter of agricultural labor.
In Europe, many of the displaced workers were driven to the brink of starvation.
Before c. 1790 a worker could harvest 1/4 acre per day with a scythe. In the early 1800s the grain cradle was introduced, significantly increasing the productivity of hand labor. It was estimated that each of Cyrus McCormick's horse-pulled reapers (Ptd. 1834) freed up five men for military service in the U.S. Civil War. By 1890 two men and two horses could cut, rake and bind 20 acres of wheat per day. In the 1880s the reaper and threshing machine were combined into the combine harvester. These machines required large teams of horses or mules to pull them. Over the entire 19th century the output per man-hour for producing wheat rose by about 500% and for corn about 250%.
Farm machinery and higher crop yields reduced the labor to produce 100 bushels of corn from 35 to 40 hours in 1900 to 2 hours 45 minutes in 1999. The conversion of agricultural mechanization to internal combustion power began after 1915. The horse population began to decline in the 1920s after the conversion of agriculture and transportation to internal combustion. In addition to saving labor, this freed up much land previously used for supporting draft animals.
The peak years for tractor sales in the U.S. were the 1950s. There was a large surge in horsepower of farm machinery in the 1950s.
Industrial machinery
The most important mechanical devices before the Industrial Revolution were water and wind mills. Water wheels date to Roman times and windmills somewhat later. Water and wind power were first used for grinding grain into flour, but were later adapted to power trip hammers for pounding rags into pulp for making paper and for crushing ore. Just before the Industrial Revolution water power was applied to bellows for iron smelting in Europe. (Water powered blast bellows were used in ancient China.) Wind and water power were also used in sawmills.
The technology of building mills and mechanical clocks was important to the development of the machines of the Industrial Revolution.
The spinning wheel was a medieval invention that increased thread making productivity by a factor greater than ten. One of the early developments that preceded the Industrial Revolution was the stocking frame (loom) of c. 1589. Later in the Industrial Revolution came the flying shuttle, a simple device that doubled the productivity of weaving. Spinning thread had been a limiting factor in cloth making, requiring 10 spinners using the spinning wheel to supply one weaver. With the spinning jenny a spinner could spin eight threads at once. The water frame (Ptd. 1768) adapted water power to spinning, but it could only spin one thread at a time. The water frame was easy to operate and many could be located in a single building. The spinning mule (1779) allowed a large number of threads to be spun by a single machine using water power. A change in consumer preference for cotton at the time of increased cloth production resulted in the invention of the cotton gin (Ptd. 1794). Steam power eventually was used as a supplement to water during the Industrial Revolution, and both were used until electrification. A graph of productivity of spinning technologies can be found in Ayres (1989), along with much other data related to this article.
With a cotton gin (Ptd. 1794), in one day a man could remove the seed from as much upland cotton as would previously have taken a woman two months to process at one pound per day using a roller gin.
An early example of a large productivity increase by special purpose machines is the c. 1803 Portsmouth Block Mills. With these machines 10 men could produce as many blocks as 110 skilled craftsmen.
In the 1830s, several technologies came together to allow an important shift in wooden building construction. The circular saw (1777), cut nail machines (1794), and steam engine allowed slender pieces of lumber such as 2"×4"s to be efficiently produced and then nailed together in what became known as balloon framing (1832). This was the beginning of the decline of the ancient method of timber frame construction with wooden joinery.
Following mechanization in the textile industry was mechanization of the shoe industry.
The sewing machine, invented and improved during the early 19th century and produced in large numbers by the 1870s, increased productivity by more than 500%. The sewing machine was an important productivity tool for mechanized shoe production.
With the widespread availability of machine tools, improved steam engines and inexpensive transportation provided by railroads, the machinery industry became the largest sector (by value added) of the U.S. economy by the last quarter of the 19th century, leading to an industrial economy.
The first commercially successful glass bottle blowing machine was introduced in 1905. The machine, operated by a two-man crew working 12-hour shifts, could produce 17,280 bottles in 24 hours, compared to 2,880 bottles made by a crew of six men and boys working in a shop for a day. The cost of making bottles by machine was 10 to 12 cents per gross compared to $1.80 per gross by the manual glassblowers and helpers.
Machine tools
Machine tools, which cut, grind and shape metal parts, were another important mechanical innovation of the Industrial Revolution. Before machine tools it was prohibitively expensive to make precision parts, an essential requirement for many machines and interchangeable parts. Historically important machine tools are the screw-cutting lathe, milling machine and metal planer (metalworking), which all came into use between 1800 and 1840. However, around 1900, it was the combination of small electric motors, specialty steels and new cutting and grinding materials that allowed machine tools to mass-produce steel parts. Production of the Ford Model T required 32,000 machine tools.
Modern manufacturing began around 1900 when machines, aided by electric, hydraulic and pneumatic power, began to replace hand methods in industry. An early example is the Owens automatic glass bottle blowing machine, which reduced labor in making bottles by over 80%. See also: Mass production#Factory electrification
Mining
Large mining machines, such as steam shovels, appeared in the mid-nineteenth century, but were restricted to rails until the widespread introduction of continuous track and pneumatic tires in the late 19th and early 20th centuries. Until then mining work was done mostly with pneumatic drills, jackhammers, picks and shovels.
Coal seam undercutting machines appeared around 1890 and were used for 75% of coal production by 1934. Coal loading was still being done manually with shovels around 1930, but mechanical pick up and loading machines were coming into use. The use of the coal boring machine improved productivity of sub-surface coal mining by a factor of three between 1949 and 1969.
A transition is currently under way from labor-intensive methods of mining to greater mechanization and even automated mining.
Mechanized materials handling
Bulk materials handling
Dry bulk materials handling systems use a variety of stationary equipment such as conveyors, stackers, reclaimers and mobile equipment such as power shovels and loaders to handle high volumes of ores, coal, grains, sand, gravel, crushed stone, etc. Bulk materials handling systems are used at mines, for loading and unloading ships and at factories that process bulk materials into finished goods, such as steel and paper mills.
Mechanical stokers for feeding coal to locomotives were in use in the 1920s. A completely mechanized and automated coal handling and stoking system was first used to feed pulverized coal to an electric utility boiler in 1921.
Liquids and gases are handled with centrifugal pumps and compressors, respectively.
Conversion to powered material handling increased during World War I as shortages of unskilled labor developed and unskilled wages rose relative to skilled labor.
A noteworthy use of conveyors was Oliver Evans's automatic flour mill built in 1785.
Around 1900 various types of conveyors (belt, slat, bucket, screw or auger), overhead cranes and industrial trucks began being used for handling materials and goods in various stages of production in factories. See: Types of conveyor systems and Mass production.
A well-known application of conveyors is Ford Motor Co.'s assembly line (c. 1913), although Ford used various industrial trucks, overhead cranes, slides and whatever devices were necessary to minimize the labor of handling parts throughout the factory.
Cranes
Cranes are an ancient technology but they became widespread following the Industrial Revolution. Industrial cranes were used to handle heavy machinery at the Nasmyth, Gaskell and Company (Bridgewater Foundry) in the late 1830s. Hydraulic powered cranes became widely used in the late 19th century, especially at British ports. Some cities, such as London, had public utility hydraulic networks to power them. Steam cranes were also used in the late 19th century, but were usually restricted to rails. Electric cranes, especially the overhead type, were introduced in factories at the end of the 19th century. Continuous track (caterpillar tread) was developed in the late 19th century.
The important categories of cranes are:
Overhead crane or bridge crane: travels on rails and has trolleys that move the hoist to any position inside the crane frame. Widely used in factories.
Mobile crane: usually gasoline or diesel powered, traveling on wheels (on- or off-road), rails or continuous track. Widely used in construction, mining, excavation and bulk materials handling.
Fixed crane: mounted in a fixed position but usually able to rotate a full circle. The most familiar example is the tower crane used to erect tall buildings.
In the early 20th century, electric operated cranes and motorized mobile loaders such as forklifts were used. Today non-bulk freight is containerized.
Palletization
Handling goods on pallets was a significant improvement over using hand trucks or carrying sacks or boxes by hand and greatly speeded up loading and unloading of trucks, rail cars and ships. Pallets can be handled with pallet jacks or forklift trucks which began being used in industry in the 1930s and became widespread by the 1950s. Loading docks built to architectural standards allow trucks or rail cars to load and unload at the same elevation as the warehouse floor.
Piggyback rail
Piggyback is the transporting of trailers or entire trucks on rail cars, which is a more fuel efficient means of shipping and saves loading, unloading and sorting labor. Wagons had been carried on rail cars in the 19th century, with horses in separate cars. Trailers began being carried on rail cars in the U.S. in 1956. Piggyback was 1% of freight in 1958, rising to 15% in 1986.
Containerization
Either loading or unloading break bulk cargo on and off ships typically took several days. It was strenuous and somewhat dangerous work. Losses from damage and theft were high. The work was erratic and most longshoremen had a lot of unpaid idle time. Sorting and keeping track of break bulk cargo was also time-consuming, and holding it in warehouses tied up capital.
Old style ports with warehouses were congested and many lacked efficient transportation infrastructure, adding to costs and delays in port.
By handling freight in standardized containers in compartmentalized ships, either loading or unloading could typically be accomplished in one day. Ships can be filled more efficiently with containers than with break bulk cargo because containers can be stacked several high, doubling the freight capacity for a given size ship.
Loading and unloading labor for containers is a fraction of break bulk, and damage and theft are much lower. Also, many items shipped in containers require less packaging.
Containerization with small boxes was used in both world wars, particularly WW II, but became commercial in the late 1950s. Containerization left large numbers of warehouses at wharves in port cities vacant, freeing up land for other development. See also: Intermodal freight transport
Work practices and processes
Division of labor
Before the factory system much production took place in the household, such as spinning and weaving, and was for household consumption. This was partly due to the lack of transportation infrastructures, especially in America.
Division of labor was practiced in antiquity but became increasingly specialized during the Industrial Revolution, so that instead of a shoemaker cutting out leather as part of the operation of making a shoe, a worker would do nothing but cut out leather. In Adam Smith's famous example of a pin factory, workers each doing a single task were far more productive than craftsmen each making entire pins.
Starting before and continuing into the industrial revolution, much work was subcontracted under the putting out system (also called the domestic system) whereby work was done at home. Putting out work included spinning, weaving, leather cutting and, less commonly, specialty items such as firearms parts. Merchant capitalists or master craftsmen typically provided the materials and collected the work pieces, which were made into finished product in a central workshop.
Factory system
During the industrial revolution much production took place in workshops, which were typically located in the rear or upper level of the same building where the finished goods were sold. These workshops used tools and sometimes simple machinery, which was usually hand or animal powered. The master craftsman, foreman or merchant capitalist supervised the work and maintained quality. Workshops grew in size but were displaced by the factory system in the early 19th century. Under the factory system capitalists hired workers and provided the buildings, machinery and supplies and handled the sale of the finished products.
Interchangeable parts
Changes to traditional work processes that were done after analyzing the work and making it more systematic greatly increased the productivity of labor and capital. This was the changeover from the European system of craftsmanship, where a craftsman made a whole item, to the American system of manufacturing which used special purpose machines and machine tools that made parts with precision to be interchangeable. The process took decades to perfect at great expense because interchangeable parts were more costly at first. Interchangeable parts were achieved by using fixtures to hold and precisely align parts being machined, jigs to guide the machine tools and gauges to measure critical dimensions of finished parts.
Scientific management
Other work processes involved minimizing the number of steps in individual tasks, such as bricklaying, by performing time and motion studies to determine the one best method. The system became known as Taylorism after Frederick Winslow Taylor, its best-known developer, and also as scientific management after his work The Principles of Scientific Management.
Standardization
Standardization and interchangeability are considered to be main reasons for U.S. exceptionalism. Standardization was part of the change to interchangeable parts, but was also facilitated by the railroad industry and mass-produced goods. Railroad track gauge standardization and standards for rail cars allowed inter-connection of railroads. Railway time formalized time zones. Industrial standards included screw sizes and threads and later electrical standards. Shipping container standards were loosely adopted in the late 1960s and formally adopted ca. 1970. Today there are vast numbers of technical standards. Commercial standards include such things as bed sizes. Architectural standards cover numerous dimensions including stairs, doors, counter heights and other designs to make buildings safe, functional and, in some cases, allow a degree of interchangeability.
Rationalized factory layout
Electrification allowed the placement of machinery such as machine tools in a systematic arrangement along the flow of the work. Electrification was a practical way to motorize conveyors to transfer parts and assemblies to workers, which was a key step leading to mass production and the assembly line.
Modern business management
Business administration, which includes management practices and accounting systems, is another important form of work practices. As the size of businesses grew in the second half of the 19th century they began being organized by departments and managed by professional managers, as opposed to being run by sole proprietors or partners.
Business administration as we know it was developed by railroads, which had to keep track of trains, railcars, equipment, personnel and freight over large territories.
Modern business enterprise (MBE) is the organization and management of businesses, particularly large ones. MBEs employ professionals who use knowledge-based techniques in such areas as engineering, research and development, information technology, business administration, finance and accounting. MBEs typically benefit from economies of scale.
"Before railroad accounting we were moles burrowing in the dark." Andrew Carnegie
Continuous production
Continuous production is a method by which a process operates without interruption for long periods, perhaps even years. Continuous production began with blast furnaces in ancient times and became popular with mechanized processes following the invention of the Fourdrinier paper machine during the Industrial Revolution, which was the inspiration for continuous rolling. It began being widely used in chemical and petroleum refining industries in the late nineteenth and early twentieth centuries. It was later applied to direct strip casting of steel and other metals.
Early steam engines did not supply power at a constant enough load for many continuous applications ranging from cotton spinning to rolling mills, restricting their power source to water. Advances in steam engines such as the Corliss steam engine and the development of control theory led to more constant engine speeds, which made steam power useful for sensitive tasks such as cotton spinning. AC motors, which run at constant speed even with load variations, were well suited to such processes.
Scientific agriculture
Reducing losses of agricultural products to spoilage, insects and rats contributed greatly to productivity. Much hay stored outdoors was lost to spoilage before indoor storage or some means of coverage became common. Pasteurization of milk allowed it to be shipped by railroad.
Keeping livestock indoors in winter reduces the amount of feed needed. Also, feeding chopped hay and ground grains, particularly corn (maize), was found to improve digestibility. The amount of feed required to produce a kilogram of live weight chicken fell from 5 kg in 1930 to 2 kg by the late 1990s, and the time required fell from three months to six weeks.
The Green Revolution increased crop yields by a factor of 3 for soybeans and between 4 and 5 for corn (maize), wheat, rice and some other crops. Using data for corn (maize) in the U.S., yields increased about 1.7 bushels per acre per year from the early 1940s until the first decade of the 21st century, when concern was being expressed about reaching the limits of photosynthesis. Because the yield increase is constant in absolute terms, the annual percentage increase has declined from over 5% in the 1940s to 1% today: a 1.7-bushel gain is about 5% of a 1940s yield near 35 bushels per acre but only about 1% of a present-day yield near 170 bushels per acre. So while yields for a time outpaced population growth, yield growth now lags population growth.
High yields would not be possible without significant applications of fertilizer, particularly nitrogen fertilizer, which was made affordable by the Haber-Bosch ammonia process. In many parts of Asia nitrogen fertilizer is applied in amounts subject to diminishing returns, although this still gives a slight increase in yield. Crops in Africa are in general starved for NPK, and much of the world's soils are deficient in zinc, which leads to deficiencies in humans.
The greatest period of agricultural productivity growth in the U.S. occurred from World War II until the 1970s.
Land is considered a form of capital, but otherwise has received little attention from modern economists relative to its importance as a factor of productivity, although it was important in classical economics. However, higher crop yields effectively multiplied the amount of land.
New materials, processes and de-materialization
Iron and steel
The process of making cast iron was known before the 3rd century AD in China. Cast iron production reached Europe in the 14th century and Britain around 1500. Cast iron was useful for casting into pots and other implements, but was too brittle for making most tools. However, cast iron had a lower melting temperature than wrought iron and was much easier to make with primitive technology. Wrought iron was the material used for making many hardware items, tools and other implements. Before cast iron was made in Europe, wrought iron was made in small batches by the bloomery process, which was never used in China. Wrought iron could be made from cast iron more cheaply than it could be made with a bloomery.
The inexpensive process for making good quality wrought iron was puddling, which became widespread after 1800. Puddling involved stirring molten cast iron until portions of it sufficiently decarburized to form globs of hot wrought iron that were then removed and hammered into shapes. Puddling was extremely labor-intensive. It was used until the introduction of the Bessemer and open hearth processes in the mid and late 19th century, respectively.
Blister steel was made from wrought iron by packing wrought iron in charcoal and heating it for several days. (See: Cementation process.) The blister steel could be heated and hammered with wrought iron to make shear steel, which was used for cutting edges like scissors, knives and axes. Shear steel was of non-uniform quality, and a better process was needed for producing watch springs, a popular luxury item in the 18th century. The successful process was crucible steel, which was made by melting wrought iron and blister steel in a crucible.
Production of steel and other metals was hampered by the difficulty in producing sufficiently high temperatures for melting. An understanding of thermodynamic principles, such as recapturing heat from flue gas to preheat combustion air, known as hot blast, resulted in much higher energy efficiency and higher temperatures. Preheated combustion air was used in iron production and in the open hearth furnace. In 1780, before the introduction of hot blast in 1829, making pig iron required seven times its weight in coke. Coke consumption was 35 hundredweight per short ton of pig iron in 1900, falling to 13 in 1950; by 1970 the most efficient blast furnaces used 10 hundredweight.
Steel has much higher strength than wrought iron and allowed long span bridges, high rise buildings, automobiles and other items. Steel also made superior threaded fasteners (screws, nuts, bolts), nails, wire and other hardware items. Steel rails lasted over 10 times longer than wrought iron rails.
The Bessemer and open hearth processes were much more efficient than making steel by the puddling process because they used the carbon in the pig iron as a source of heat. The Bessemer (patented in 1855) and the Siemens-Martin (c. 1865) processes greatly reduced the cost of steel. By the end of the 19th century, the Gilchrist-Thomas "basic" process had reduced production costs by 90% compared to the puddling process of mid-century.
Today a variety of alloy steels are available that have superior properties for special applications like automobiles, pipelines and drill bits. High speed or tool steels, whose development began in the late 19th century, allowed machine tools to cut steel at much higher speeds. High speed steel and even harder materials were an essential component of mass production of automobiles.
Some of the most important specialty materials are steam turbine and gas turbine blades, which have to withstand extreme mechanical stress and high temperatures.
The size of blast furnaces grew greatly over the 20th century, and innovations like additional heat recovery and pulverized coal injection, which displaced coke, increased energy efficiency.
Bessemer steel became brittle with age because nitrogen was introduced when air was blown in. The Bessemer process was also restricted to certain ores (low phosphate hematite). By the end of the 19th century the Bessemer process was displaced by the open hearth furnace (OHF). After World War II the OHF was displaced by the basic oxygen furnace (BOF), which used oxygen instead of air and required about 35–40 minutes to produce a batch of steel compared to 8 to 9 hours for the OHF. The BOF also was more energy efficient.
By 1913, 80% of steel was being made from molten pig iron directly from the blast furnace, eliminating the step of casting the "pigs" (ingots) and remelting.
The continuous wide strip rolling mill, developed by ARMCO in 1928, was the most important development in the steel industry during the inter-war years. Continuous wide strip rolling started with a thick, coarse ingot. It produced a smoother sheet with more uniform thickness, which was better for stamping and gave a nice painted surface. It was good for automotive body steel and appliances. It used only a fraction of the labor of the discontinuous process and was safer because it did not require continuous handling. Continuous rolling was made possible by improved sectional speed control. (See: Automation, process control and servomechanisms.)
After 1950 continuous casting contributed to productivity of converting steel to structural shapes by eliminating the intermittent step of making slabs, billets (square cross-section) or blooms (rectangular) which then usually have to be reheated before rolling into shapes. Thin slab casting, introduced in 1989, reduced labor to less than one hour per ton. Continuous thin slab casting and the BOF were the two most important productivity advancements in 20th-century steel making.
As a result of these innovations, between 1920 and 2000 labor requirements in the steel industry decreased by a factor of 1,000, from more than 3 worker-hours per tonne to just 0.003.
Sodium carbonate (soda ash) and related chemicals
The sodium compounds carbonate, bicarbonate and hydroxide are important industrial chemicals used in making important products like glass and soap. Until the invention of the Leblanc process in 1791, sodium carbonate was made, at high cost, from the ashes of seaweed and the plant barilla. The Leblanc process was replaced by the Solvay process beginning in the 1860s. With the widespread availability of inexpensive electricity, much sodium is now produced along with chlorine by electro-chemical processes.
Cement
Cement is the binder for concrete, which is one of the most widely used construction materials today because of its low cost, versatility and durability. Portland cement, which was invented in 1824–1825, is made by calcining limestone and other naturally occurring minerals in a kiln. A great advance was the perfection of rotary cement kilns in the 1890s, the method still being used today. Reinforced concrete, which is suitable for structures, began being used in the early 20th century.
Paper
Paper was made one sheet at a time by hand until the development of the Fourdrinier paper machine (c. 1801), which made a continuous sheet. Paper making was severely limited by the supply of cotton and linen rags from the time of the invention of the printing press until the development of wood pulp (c. 1850s) in response to a shortage of rags. The sulfite process for making wood pulp started operation in Sweden in 1874. Paper made from sulfite pulp had superior strength properties compared to the previously used ground wood pulp (c. 1840). The kraft (Swedish for strong) pulping process was commercialized in the 1930s. Pulping chemicals are recovered and internally recycled in the kraft process, also saving energy and reducing pollution. Kraft paperboard is the material from which the outer layers of corrugated boxes are made. Until kraft corrugated boxes were available, packaging consisted of poor quality paper and paperboard boxes along with wood boxes and crates. Corrugated boxes require much less labor to manufacture than wooden boxes and offer good protection to their contents. Shipping containers reduce the need for packaging.
Rubber and plastics
Vulcanized rubber made the pneumatic tire possible, which in turn enabled the development of on and off-road vehicles as we know them. Synthetic rubber became important during the Second World War when supplies of natural rubber were cut off.
Rubber inspired a class of chemicals known as elastomers, some of which are used by themselves or in blends with rubber and other compounds for seals and gaskets, shock absorbing bumpers and a variety of other applications.
Plastics can be inexpensively made into everyday items and have significantly lowered the cost of a variety of goods including packaging, containers, parts and household piping.
Optical fiber
Optical fiber began to replace copper wire in the telephone network during the 1980s. Optical fibers have very small diameters, allowing many to be bundled in a cable or conduit. Optical fiber is also an energy efficient means of transmitting signals.
Oil and gas
Seismic exploration, beginning in the 1920s, uses reflected sound waves to map subsurface geology to help locate potential oil reservoirs. This was a great improvement over previous methods, which involved mostly luck and good knowledge of geology, although luck continued to be important in several major discoveries. Rotary drilling was a faster and more efficient way of drilling oil and water wells. It became popular after being used for the initial discovery of the East Texas field in 1930.
Hard materials for cutting
Numerous new hard materials were developed for cutting edges, such as those used in machining. Mushet steel, developed in 1868, was a forerunner of high speed steel, which was developed by a team led by Frederick Winslow Taylor at Bethlehem Steel Company around 1900. High speed steel held its hardness even when it became red hot. It was followed by a number of modern alloys.
From 1935 to 1955 machining cutting speeds increased from 120–200 ft/min to 1,000 ft/min due to harder cutting edges, causing machining costs to fall by 75%.
One of the most important new hard materials for cutting is tungsten carbide.
Dematerialization
Dematerialization is the reduction in the use of materials in manufacturing, construction, packaging or other uses. In the U.S. the quantity of raw materials per unit of output has decreased by approximately 60% since 1900. In Japan the reduction has been 40% since 1973.
Dematerialization is made possible by substitution with better materials and by engineering to reduce weight while maintaining function. Modern examples are plastic beverage containers replacing glass and paperboard, plastic shrink wrap used in shipping and lightweight plastic packing materials. Dematerialization has been occurring in the U.S. steel industry, where consumption peaked in 1973 on both an absolute and per capita basis. At the same time, per capita steel consumption grew globally through outsourcing of manufacturing to developing countries. Cumulative global GDP or wealth has grown in direct proportion to energy consumption since 1970, while the Jevons paradox posits that efficiency improvement leads to increased energy consumption. Access to energy globally constrains dematerialization.
Communications
Telegraphy
The telegraph appeared around the beginning of the railroad era and railroads typically installed telegraph lines along their routes for communicating with the trains.
Teleprinters appeared in 1910 and had replaced between 80 and 90% of Morse code operators by 1929. It is estimated that one teletypist replaced 15 Morse code operators.
Telephone
The early use of telephones was primarily for business. Monthly service cost about one third of the average worker's earnings. The telephone along with trucks and the new road networks allowed businesses to reduce inventory sharply during the 1920s.
Telephone calls were handled by operators using switchboards until the automatic switchboard was introduced in 1892. By 1929, 31.9% of the Bell system was automatic.
Automatic telephone switching originally used electro-mechanical switches controlled by vacuum tube devices, which consumed a large amount of electricity. Call volume eventually grew so fast that it was feared the telephone system would consume all electricity production, prompting Bell Labs to begin research on the transistor.
Radio frequency transmission
After WWII microwave transmission began being used for long-distance telephony and transmitting television programming to local stations for rebroadcast.
Fiber optics
The diffusion of telephony to households was mature by the arrival of fiber-optic communications in the late 1970s. Fiber optics greatly increased the transmission capacity of information over previous copper wires and further lowered the cost of long-distance communication.
Communications satellites
Communications satellites came into use in the 1960s and today carry a variety of information including credit card transaction data, radio, television and telephone calls. The Global Positioning System (GPS) operates on signals from satellites.
Facsimile (fax)
Fax (short for facsimile) machines of various types had been in existence since the early 1900s but became widespread beginning in the mid-1970s.
Home economics: Public water supply, household gas supply and appliances
Before public water was supplied to households, someone had to haul up to 10,000 gallons of water per year to the average household.
Natural gas began being supplied to households in the late 19th century.
Household appliances followed household electrification in the 1920s, with consumers buying electric ranges, toasters, refrigerators and washing machines. As a result of appliances and convenience foods, time spent on meal preparation and clean up, laundry and cleaning decreased from 58 hours/week in 1900 to 18 hours/week by 1975. Less time spent on housework allowed more women to enter the labor force.
Automation, process control and servomechanisms
Automation refers to automatic control, in which a process is run with minimal operator intervention. Levels of automation include mechanical methods, electrical relays, feedback control with a controller, and computer control. Common applications of automation are for controlling temperature, flow and pressure. Automatic speed control is important in many industrial applications, especially in sectional drives, such as those found in metal rolling and paper drying.
The earliest applications of process control were mechanisms that adjusted the gap between millstones for grinding grain and that kept windmills facing into the wind. The centrifugal governor used for adjusting the millstones was copied by James Watt for controlling the speed of steam engines in response to changes in the heat load to the boiler; however, if the load on the engine changed, the governor only held the speed steady at the new rate. It took much development work to achieve the degree of steadiness necessary to operate textile machinery. A mathematical analysis of control theory was first developed by James Clerk Maxwell. Control theory was developed to its "classical" form by the 1950s. See: Control theory#History
Factory electrification brought simple electrical controls such as ladder logic, whereby push buttons could be used to activate relays to engage motor starters. Other controls such as interlocks, timers and limit switches could be added to the circuit.
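A minimal sketch of the kind of logic such relay circuits implement, expressed here as Boolean operations in Python purely for illustration; real ladder logic is drawn as relay rungs rather than written as code, and the variable names are hypothetical:

```python
# One scan of a classic start/stop "seal-in" rung, the common
# ladder-logic pattern for engaging a motor starter.
def scan_rung(start_button: bool, stop_button: bool, motor: bool) -> bool:
    # Rung logic: (START pressed OR motor already running) AND STOP not pressed.
    # The motor contact wired in parallel with START "seals in" the circuit,
    # so the motor keeps running after the momentary START button is released.
    return (start_button or motor) and not stop_button

motor = False
motor = scan_rung(start_button=True, stop_button=False, motor=motor)   # START pressed: motor on
motor = scan_rung(start_button=False, stop_button=False, motor=motor)  # START released: motor stays on
motor = scan_rung(start_button=False, stop_button=True, motor=motor)   # STOP pressed: motor off
print(motor)  # False
```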
Today automation usually refers to feedback control. An example is cruise control on a car, which applies continuous correction when a sensor detects that the controlled variable (speed in this example) deviates from a set-point, responding in a corrective manner to hold the setting. Process control is the usual form of automation that allows industrial operations like oil refineries, steam plants generating electricity or paper mills to be run with a minimum of manpower, usually from a number of control rooms.
The need for instrumentation grew with the rapidly growing central electric power stations after the First World War. Instrumentation was also important for heat treating ovens, chemical plants and refineries. Common instrumentation was for measuring temperature, pressure or flow. Readings were typically recorded on circle charts or strip charts. Until the 1930s control was typically "open loop", meaning that it did not use feedback. Operators made various adjustments by such means as turning handles on valves. If done from a control room, a message could be sent to an operator in the plant by a color-coded light, letting him know whether to increase or decrease whatever was being controlled. The signal lights were operated by a switchboard, which soon became automated. Automatic control became possible with the feedback controller, which sensed the measured variable, computed its deviation from the setpoint, and perhaps also the deviation's rate of change and its time-weighted accumulation, and automatically applied a calculated adjustment. A stand-alone controller may use a combination of mechanical, pneumatic, hydraulic or electronic analogs to manipulate the controlled device. The tendency was to use electronic controls after these were developed, but today the tendency is to use a computer to replace individual controllers.
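The controller behavior just described is essentially the proportional-integral-derivative (PID) controller: a proportional term acting on the deviation, an integral term on its time-weighted accumulation, and a derivative term on its rate of change. A minimal discrete-time sketch in Python; the gains and the toy process response are hypothetical values chosen only for illustration:

```python
class PIDController:
    """Minimal discrete-time PID feedback controller."""

    def __init__(self, kp: float, ki: float, kd: float, setpoint: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0    # time-weighted accumulation of the deviation
        self.prev_error = 0.0  # remembered to estimate the rate of change

    def update(self, measured: float, dt: float) -> float:
        error = self.setpoint - measured             # deviation from setpoint
        self.integral += error * dt                  # accumulate deviation over time
        derivative = (error - self.prev_error) / dt  # rate of change of deviation
        self.prev_error = error
        # The calculated adjustment applied to the controlled device.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: hold a temperature at a setpoint of 100; gains are illustrative only.
pid = PIDController(kp=2.0, ki=0.5, kd=0.1, setpoint=100.0)
temperature = 90.0
for _ in range(5):
    adjustment = pid.update(temperature, dt=1.0)
    temperature += 0.1 * adjustment  # crude stand-in for the process response
```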
By the late 1930s feedback control was gaining widespread use. Feedback control was an important technology for continuous production.
Automation of the telephone system allowed dialing local numbers instead of having calls placed through an operator. Further automation allowed callers to place long-distance calls by direct dial. Eventually almost all operators were replaced with automation.
Machine tools were automated with numerical control (NC) in the 1950s. This soon evolved into computerized numerical control (CNC).
Servomechanisms are commonly position or speed control devices that use feedback. Understanding of these devices is covered in control theory. Control theory was successfully applied to steering ships in the 1890s, but after meeting with personnel resistance it was not widely implemented for that application until after the First World War. Servomechanisms are extremely important in providing automatic stability control for airplanes and in a wide variety of industrial applications.
Industrial robots were used on a limited scale from the 1960s but began their rapid growth phase in the mid-1980s after the widespread availability of microprocessors used for their control. By 2000 there were over 700,000 robots worldwide.
Computers, data processing and information technology
Unit record equipment
Early electric data processing was done by running punched cards through tabulating machines, the holes in the cards allowing electrical contact to increment electronic counters. Tabulating machines were in a category called unit record equipment, through which the flow of punched cards was arranged in a program-like sequence to allow sophisticated data processing. Unit record equipment was widely used before the introduction of computers.
The usefulness of tabulating machines was demonstrated by compiling the 1890 U.S. census, allowing the census to be processed in less than a year and with great labor savings compared to the estimated 13 years by the previous manual method.
Stored-program computers
The first digital computers were more productive than tabulating machines, but not by a great amount. Early computers used thousands of vacuum tubes (thermionic valves) which used a lot of electricity and constantly needed replacing. By the 1950s the vacuum tubes were replaced by transistors which were much more reliable and used relatively little electricity. By the 1960s thousands of transistors and other electronic components could be manufactured on a silicon semiconductor wafer as integrated circuits, which are universally used in today's computers.
Computers used paper tape and punched cards for data and programming input until the 1980s when it was still common to receive monthly utility bills printed on a punched card that was returned with the customer's payment.
In 1973 IBM introduced point of sale (POS) terminals in which electronic cash registers were networked to the store mainframe computer. By the 1980s bar code readers were added. These technologies automated inventory management. Wal-Mart was an early adopter of POS. The Bureau of Labor Statistics estimated that bar code scanners at checkout increased ringing speed by 30% and reduced labor requirements of cashiers and baggers by 10–15%.
Data storage became better organized after the development of relational database software that allowed data to be stored in different tables. For example, a theoretical airline may have numerous tables such as: airplanes, employees, maintenance contractors, caterers, flights, airports, payments, tickets, etc. each containing a narrower set of more specific information than would a flat file, such as a spreadsheet. These tables are related by common data fields called keys. (See: Relational model) Data can be retrieved in various specific configurations by posing a query without having to pull up a whole table. This, for example, makes it easy to find a passenger's seat assignment by a variety of means such as ticket number or name, and provide only the queried information. See: SQL
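A minimal illustration of this idea using Python's built-in sqlite3 module; the airline tables follow the hypothetical example above, and all table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two related tables; ticket_number is the key that relates them.
cur.execute("CREATE TABLE passengers (ticket_number TEXT PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE seat_assignments (ticket_number TEXT, flight TEXT, seat TEXT)")
cur.execute("INSERT INTO passengers VALUES ('T1001', 'A. Smith')")
cur.execute("INSERT INTO seat_assignments VALUES ('T1001', 'UA100', '14C')")

# Retrieve only the queried information (the seat) by joining on the key,
# without pulling up either whole table.
cur.execute("""
    SELECT s.flight, s.seat
    FROM passengers p
    JOIN seat_assignments s ON p.ticket_number = s.ticket_number
    WHERE p.name = 'A. Smith'
""")
print(cur.fetchone())  # ('UA100', '14C')
```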
Since the mid-1990s, interactive web pages have allowed users to access various servers over the Internet to engage in e-commerce such as online shopping, paying bills, trading stocks, managing bank accounts and renewing auto registrations. This is the ultimate form of back office automation because the transaction information is transferred directly to the database.
Computers also greatly increased productivity of the communications sector, especially in areas like the elimination of telephone operators. In engineering, computers replaced manual drafting with CAD, with a 500% average increase in a draftsman's output. Software was developed for calculations used in designing electronic circuits, stress analysis, heat and material balances. Process simulation software has been developed for both steady state and dynamic simulation, the latter able to give the user a very similar experience to operating a real process like a refinery or paper mill, allowing the user to optimize the process or experiment with process modifications.
Automated teller machines (ATMs) became popular in recent decades and self checkout at retailers appeared in the 1990s.
The Airline Reservations System and banking are areas where computers are practically essential. Modern military systems also rely on computers.
In 1959 Texaco's Port Arthur refinery became the first chemical plant to use digital process control.
Computers did not revolutionize manufacturing because automation, in the form of control systems, had already been in existence for decades, although computers did allow more sophisticated control, which led to improved product quality and process optimization. See: Productivity paradox.
Semiconductor device fabrication
Semiconductor device fabrication is a lengthy, costly and intricate process, and as of 2022 one of the most expensive industries. Since the 1960s, both governments (e.g., the U.S.) and private businesses have pursued various approaches and investigated many technologies to speed up the production process and increase design and fabrication productivity.
Electronic design automation (EDA) software tools have had a major impact on the delivery and success of many modern electronic devices and products. As semiconductor integration grew and VLSI devices emerged, it became impossible to keep pace (see also Moore's law) without specialized tools. EDA software tools are widely applied in the modern photomask fabrication process, which was previously done by hand. They have provided a continuous increase in the design and prototyping productivity of ASIC/FPGA/DRAM devices and have cut time-to-market significantly. In 2003, three generations of EDA suites (I, II and III) were compared in terms of logic gates designed per man-year between 1979 and 1995; productivity grew roughly a hundredfold from generation I to generation III. Thanks to ever-evolving EDA, designing a complex ASIC now takes about the same time that a much less complex one took years ago.
Advances in photolithography technologies, such as krypton fluoride (KrF) excimer lasers, also helped to boost production rates and lower unit costs, despite the expense of the equipment itself.
Long term decline in productivity growth
"The years 1929–1941 were, in the aggregate, the most technologically progressive of any comparable period in U.S. economic history." Alexander J. Field
"As industrialization has proceeded, its effects, relatively speaking, have become less, not more, revolutionary"...."There has, in effect, been a general progression in industrial commodities from a deficiency to a surplus of capital relative to internal investments". Alan Sweezy, 1943
U.S. productivity growth has been in long-term decline since the early 1970s, with the exception of a 1996–2004 spike caused by an acceleration of Moore's law semiconductor innovation. Part of the early decline was attributed to increased governmental regulation since the 1960s, including stricter environmental regulations. Part of the decline in productivity growth is due to exhaustion of opportunities, especially as the traditionally high-productivity sectors decline in size. Robert J. Gordon considered productivity to be "one big wave" that crested and is now receding to a lower level, while M. King Hubbert called the phenomenon of the great productivity gains preceding the Great Depression a "one time event."
Because of reduced population growth in the U.S. and a peaking of productivity growth, sustained U.S. GDP growth has never returned to the 4% plus rates of the pre-World War I decades.
The computer and computer-like semiconductor devices used in automation are the most significant productivity-improving technologies developed in the final decades of the twentieth century; however, their contribution to overall productivity growth was disappointing. Most of the productivity growth occurred in the computer industry and related industries. Economist Robert J. Gordon is among those who questioned whether computers lived up to the great innovations of the past, such as electrification. This issue is known as the productivity paradox. Gordon's (2013) analysis of productivity in the U.S. gives two possible surges in growth, one during 1891–1972 and the second in 1996–2004, due to the acceleration in Moore's law-related technological innovation.
Improvements in productivity affected the relative sizes of various economic sectors by reducing prices and employment. Agricultural productivity released labor at a time when manufacturing was growing. Manufacturing productivity growth peaked with factory electrification and automation, but still remains significant. However, as the relative size of the manufacturing sector shrank the government and service sectors, which have low productivity growth, grew.
Improvement in living standards
Chronic hunger and malnutrition were the norm for the majority of the population of the world, including England and France, until the latter part of the 19th century. Until about 1750, in large part due to malnutrition, life expectancy in France was about 35 years, and only slightly higher in England. The U.S. population of the time was adequately fed, much taller, and had life expectancies of 45–50 years.
The gains in standards of living have been accomplished largely through increases in productivity. In the U.S. the amount of personal consumption that could be bought with one hour of work was about $3.00 in 1900 and increased to about $22 by 1990, measured in 2010 dollars. For comparison, a U.S. worker today earns more (in terms of buying power) working for ten minutes than subsistence workers, such as the English mill workers that Friedrich Engels wrote about in 1844, earned in a 12-hour day.
Decline in work week
As a result of productivity increases, the work week declined considerably over the 19th century. By the 1920s the average work week in the U.S. was 49 hours, but the work week was reduced to 40 hours (after which overtime premium was applied) as part of the National Industrial Recovery Act of 1933.
The push towards implementing a four-day week continues in the contemporary workplace because of the various benefits it may yield.
See also
Accelerating change
Democracy and economic growth
Great divergence
Historical school of economics
Kondratiev wave
List of countries by GNI per capita growth
Productivity software
Working hours
References
Sources and further reading
Link, Stefan J. Forging Global Fordism: Nazi Germany, Soviet Russia, and the Contest over the Industrial Order (2020) excerpt
External links
Productivity and Costs – Bureau of Labor Statistics United States Department of Labor: contains international comparisons of productivity rates, historical and present
Productivity Statistics – Organisation for Economic Co-operation and Development
Greenspan Speech
OECD estimates of labour productivity levels
Miller, Doug, Towards Sustainable Labour Costing in UK Fashion Retail (February 5, 2013)
Production economics
Manufacturing
Economic growth | Productivity-improving technologies | [
"Engineering"
] | 13,227 | [
"Manufacturing",
"Mechanical engineering"
] |
29,434,769 | https://en.wikipedia.org/wiki/Alternative%20Energy%20Institute | Alternative Energy Institute (also known as AEI) was West Texas A&M University's alternative energy research branch. Formed in 1977, the program was nationally and internationally recognized, and along with research provides education and outreach around the U.S. and the globe.
History
AEI was founded at West Texas State University (now West Texas A&M University) in 1977 by Dr. Vaughn Nelson, Dr. Earl Gilmore and Dr. Robert Barieau in the aftermath of the 1973 oil crisis. The physics department at West Texas State was already experimenting with wind power, and these three individuals took the initiative to found a department to concentrate upon the study of wind. The basic goals of the department were:
To test wind turbine designs.
To improve on current aerodynamic designs.
To teach the public about the state of wind and solar technology.
First Decade: 1977 - 1987
Initially, much of the organization's focus was on small wind turbine research and improving blade designs. At this time they installed test turbines and water pumping applications throughout Texas. These projects allowed AEI to develop and improve upon blade design theory and production. During this period the organization also provided consulting in Latin America, Jamaica, Hawaii, and Europe. There, AEI trained villages and groups in wind energy systems.
At this time AEI operated from three locations: one off-campus and two on-campus. At these locations they customized testing on blade designs, turbine generator units, and complete designs.
Second Decade: 1987 - 1997
During this decade the organization focused on green building projects. The most notable of these was AEI's Solar Energy Building. Finished in 1993, the building served as the main site for AEI's operations for seventeen years. The building covered all of the organization's energy usage, including an on-site 10 kW Bergey wind turbine installation, and 3 kW of photovoltaics.
Several electric vans were donated to the organization at this time, two of which were maintained for several years. These vans were used to collect data and complete local wind energy projects, as well as to give campus and test site tours.
Starting in 1995, AEI began working with the Texas General Land Office to provide Texas Wind Data to the public. While the GLO data sites have since been decommissioned, the organization still collects, analyzes and publishes Texas Wind Data for the general public.
Now: 2022 - Present
AEI is currently focusing on developing a new degree plan at WTAMU as well as continuing its research on green energy systems. In terms of turbine testing, the organization focuses on small blade and turbine testing, particularly innovative horizontal and vertical axis designs.
In 2010, the AEI test site was moved to the Nance Ranch. At this time, the organization's offices were also moved, to WTAMU's Palo Duro Research Facility.
In the late 90s, AEI also began developing a Fortran program called ROTOR. The program could predict theoretical power curves for blade designs and produce on-screen and printed output. The program has been modified a few times since then and is still being used today.
During this time, AEI's Wind Data program has greatly expanded. In addition to working with wind farmers to provide data for the public, the organization also analyzes and publishes data for private organizations. In total, the organization now collects data from 75 sites scattered across Texas. 50 of these data sites are archived online, 31 of which offer data for public use by researchers and developers.
Education
Courses
Since 2009, AEI has been offering online alternative energy courses at WTAMU on Wind energy, Solar energy and Renewable energy. Currently, WT offers one course per semester, with alternating subjects. The courses are taught by AEI staff and are open to WT students and those seeking certification.
In addition to the online courses offered by WTAMU, AEI has also authored renewable energy textbooks
and educational CDs. The CDs cover the subjects of wind energy, wind turbines, solar energy and wind water pumping. Some CDs are also available in Spanish.
Seminars & Symposiums
Windy Land Owners
Started in 1989, AEI has been giving annual Windy Land Owners seminars. Designed to teach land owners and other interested parties general information about the wind industry, most of the seminars took place in the states surrounding Texas. Due to increased interest, AEI began giving seminars in Texas starting in 2001.
Topics covered at these seminars include:
Wind farm basics
Wind resources in Texas
Potential problem and contract considerations
Starting in 2009, AEI also began offering presentations from the WLO Seminars online for general information.
WEATS
AEI's Solar Energy Building
Launched in 1998, The Wind Energy Applications Training Symposium (WEATS) is an internationally acclaimed workshop for the Native American community. Designed for project planners, developers, utility officials, and engineers directly involved with energy projects, it is both a good resource for networking and developing practical knowledge.
Topics covered at this symposium include:
Practical knowledge and analytical tools for conducting project pre-feasibility and identification analysis
Implementation of small and large wind energy projects
Lectures from National Renewable Energy Laboratory and local experts about the capabilities of the technology and the economic and financial aspects of sustainable project development
Site visits
Community Involvement
In addition to its seminars and workshops, AEI also regularly offers consulting to potential wind farmers and hosts tours of its research facilities.
As part of its community outreach, the organization also presents at the Caprock Science Fair and other local schools, informing students about wind energy via displays, demonstrations and brochures at the elementary, junior high and high school levels.
See also
List of energy storage projects
References
External links
Alternative Energy Institute Official Web Site (Windenergy.org)
Renewable Wind Test Center Official Web Site (Windtestcenter.org)
West Texas A&M University
West Texas A&M University
Energy infrastructure in Texas
Sustainable energy
Energy research institutes
Research institutes in Texas
Research institutes established in 1977 | Alternative Energy Institute | [
"Engineering"
] | 1,176 | [
"Energy research institutes",
"Energy organizations"
] |
23,316,940 | https://en.wikipedia.org/wiki/Isobar%20%28nuclide%29 | Isobars are atoms (nuclides) of different chemical elements that have the same number of nucleons. Correspondingly, isobars differ in atomic number (or number of protons) but have the same mass number. An example of a series of isobars is 40S, 40Cl, 40Ar, 40K, and 40Ca. While the nuclei of these nuclides all contain 40 nucleons, they contain varying numbers of protons and neutrons.
The term "isobars" (originally "isobares") for nuclides was suggested by British chemist Alfred Walter Stewart in 1918. It is derived .
Mass
The same mass number implies neither the same mass of nuclei, nor equal atomic masses of corresponding nuclides. From the Weizsäcker formula for the mass of a nucleus:
m(A, Z) = Z·m_p + N·m_n − a_V·A + a_S·A^(2/3) + a_C·Z(Z−1)/A^(1/3) + a_A·(A−2Z)²/A − δ(A, Z)
where the mass number A equals the sum of the atomic number Z and the number of neutrons N, and m_p, m_n, a_V, a_S, a_C, a_A are constants, one can see that the mass depends on A and Z non-linearly, even for a constant mass number. For odd A, it is admitted that δ = 0 and the mass dependence on Z is convex (or on N — it does not matter which, for a constant A). This explains that beta decay is energetically favorable for neutron-rich nuclides, and positron decay is favorable for strongly neutron-deficient nuclides. Both decay modes do not change the mass number, hence an original nucleus and its daughter nucleus are isobars. In both aforementioned cases, a heavier nucleus decays to its lighter isobar.
For even A the pairing term δ has the form:
δ(A, Z) = (−1)^Z · a_P · A^(−1/2)
where a_P is another constant. This term, subtracted from the mass expression above, is positive for even-even nuclei and negative for odd-odd nuclei. This means that even-even nuclei, which do not have a strong neutron excess or neutron deficiency, have higher binding energy than their odd-odd isobar neighbors. It implies that even-even nuclei are (relatively) lighter and more stable. The difference is especially strong for small A. This effect is also predicted (qualitatively) by other nuclear models and has important consequences.
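The parabolic dependence of isobar mass on Z, and the even-odd pairing shift, can be made concrete numerically. The sketch below is not a precision model: the coefficient values are rough textbook fits in MeV and are assumptions of this example, not authoritative constants.

# Semi-empirical (Weizsaecker) mass formula sketch for comparing isobars.
A_V, A_S, A_C, A_A, A_P = 15.8, 18.3, 0.714, 23.2, 12.0   # illustrative, MeV
M_P, M_N = 938.272, 939.565        # proton/neutron rest masses, MeV/c^2

def binding_energy(A, Z):
    """Binding energy in MeV; the pairing term depends on the parity of Z."""
    pairing = 0.0
    if A % 2 == 0:                 # pairing contributes only for even A
        pairing = A_P / A**0.5 if Z % 2 == 0 else -A_P / A**0.5
    return (A_V*A - A_S*A**(2/3) - A_C*Z*(Z - 1)/A**(1/3)
            - A_A*(A - 2*Z)**2 / A + pairing)

def mass(A, Z):
    return Z*M_P + (A - Z)*M_N - binding_energy(A, Z)

# Compare the A = 40 isobars mentioned above: the lightest isobar is the
# most stable against beta decay.
for Z, name in [(16, "40S"), (17, "40Cl"), (18, "40Ar"), (19, "40K"), (20, "40Ca")]:
    print(name, round(mass(40, Z), 2), "MeV/c^2")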
Stability
The Mattauch isobar rule states that if two adjacent elements on the periodic table have isotopes of the same mass number, at least one of these isobars must be a radionuclide (radioactive). In cases of three isobars of sequential elements where the first and last are stable (this is often the case for even-even nuclides, see above), branched decay of the middle isobar may occur. For instance, radioactive iodine-126 has almost equal probabilities for two decay modes: positron emission, leading to tellurium-126, and beta emission, leading to xenon-126.
No observationally stable isobars exist for mass numbers 5 (decays to helium-4 plus a proton or neutron), 8 (decays to two helium-4 nuclei), 147, 151, as well as for 209 and above. Two observationally stable isobars exist for 36, 40, 46, 50, 54, 58, 64, 70, 74, 80, 84, 86, 92, 94, 96, 98, 102, 104, 106, 108, 110, 112, 114, 120, 122, 123, 124, 126, 132, 134, 136, 138, 142, 154, 156, 158, 160, 162, 164, 168, 170, 176, 180 (including a meta state), 192, 196, 198 and 204.
In theory, no two stable nuclides have the same mass number (since no two nuclides that have the same mass number are both stable to beta decay and double beta decay), and no stable nuclides exist for mass numbers 5, 8, 143–155, 160–162, and ≥ 165, since in theory, the beta-decay stable nuclides for these mass numbers can undergo alpha decay.
See also
Isotopes (nuclides having the same number of protons)
Isotones (nuclides having the same number of neutrons)
Nuclear isomers (different excited states of the same nuclide)
Magic number (physics)
Electron capture
Bibliography
References
Nuclear physics | Isobar (nuclide) | [
"Physics"
] | 872 | [
"Nuclear physics"
] |
26,220,007 | https://en.wikipedia.org/wiki/ISO%2026262 | ISO 26262, titled "Road vehicles – Functional safety", is an international standard for functional safety of electrical and/or electronic systems that are installed in serial production road vehicles (excluding mopeds), defined by the International Organization for Standardization (ISO) in 2011, and revised in 2018.
Overview of the Standard
Functional safety features form an integral part of each automotive product development phase, ranging from the specification, to design, implementation, integration, verification, validation, and production release. The standard ISO 26262 is an adaptation of the Functional Safety standard IEC 61508 for Automotive Electric/Electronic Systems. ISO 26262 defines functional safety for automotive equipment applicable throughout the lifecycle of all automotive electronic and electrical safety-related systems.
The first edition (ISO 26262:2011), published on 11 November 2011, was limited to electrical and/or electronic systems installed in "series production passenger cars" with a maximum gross weight of 3,500 kg. The second edition (ISO 26262:2018), published in December 2018, extended the scope from passenger cars to all road vehicles except mopeds.
The standard aims to address possible hazards caused by the malfunctioning behaviour of electronic and electrical systems in vehicles. Although entitled "Road vehicles – Functional safety" the standard relates to the functional safety of Electrical and Electronic systems as well as that of systems as a whole or of their mechanical subsystems.
Like its parent standard, IEC 61508, ISO 26262 is a risk-based safety standard, where the risk of hazardous operational situations is qualitatively assessed and safety measures are defined to avoid or control systematic failures and to detect or control random hardware failures, or mitigate their effects.
Goals of ISO 26262:
Provides an automotive safety lifecycle (management, development, production, operation, service, decommissioning) and supports tailoring the necessary activities during these lifecycle phases.
Covers functional safety aspects of the entire development process (including such activities as requirements specification, design, implementation, integration, verification, validation, and configuration).
Provides an automotive-specific risk-based approach for determining risk classes (Automotive Safety Integrity Levels, ASILs).
Uses ASILs for specifying the items' necessary safety requirements for achieving an acceptable residual risk.
Provides requirements for validation and confirmation measures to ensure a sufficient and acceptable level of safety is being achieved.
Parts of ISO 26262
ISO 26262:2018 consists of twelve parts, ten normative parts (parts 1 to 9 and 12) and two guidelines (parts 10 and 11):
Vocabulary
Management of functional safety
Concept phase
Product development at the system level
Product development at the hardware level
Product development at the software level
Production, operation, service and decommissioning
Supporting processes
Automotive Safety Integrity Level (ASIL)-oriented and safety-oriented analysis
Guidelines on ISO 26262
Guidelines on application of ISO 26262 to semiconductors
Adaptation of ISO 26262 for motorcycles
In comparison, ISO 26262:2011 consisted of just 10 parts, with slightly different naming:
Part 7 was named just Production and operation
Part 10 was named Guideline ... instead of Guidelines ...
Parts 11 and 12 did not exist.
Part 1: Vocabulary
ISO 26262 specifies a vocabulary (a Project Glossary) of terms, definitions, and abbreviations for application in all parts of the standard.
Of particular importance is the careful definition of fault, error, and failure as these terms are key to the standard’s definitions of functional safety processes, particularly in the consideration that "A fault can manifest itself as an error ... and the error can ultimately cause a failure". A resulting malfunction that has a hazardous effect represents a loss of functional safety.
Note: In contrast to other Functional Safety standards and the updated ISO 26262:2018, Fault Tolerance was not explicitly defined in ISO 26262:2011 – since it was assumed impossible to comprehend all possible faults in a system.
Note: ISO 26262 does not use the IEC 61508 term Safe failure fraction (SFF). The terms single point faults metric and latent faults metric are used instead.
Part 2: Management of functional safety
ISO 26262 provides a standard for functional safety management for automotive applications, defining standards for overall organizational safety management as well as standards for a safety life cycle for the development and production of individual automotive products. The ISO 26262 safety life cycle described in the next section operates on the following safety management concepts:
Parts 3-7: Safety Life Cycle
Processes within the ISO 26262 safety life cycle identify and assess hazards (safety risks), establish specific safety requirements to reduce those risks to acceptable levels, and manage and track those safety requirements to produce reasonable assurance that they are accomplished in the delivered product. These safety-relevant processes may be viewed as being integrated or running in parallel with a managed requirements life cycle of a conventional Quality Management System:
An item (a particular automotive system product) is identified and its top level system functional requirements are defined.
A comprehensive set of hazardous events are identified for the item.
An ASIL is assigned to each hazardous event.
A safety goal is determined for each hazardous event, inheriting the ASIL of the hazard.
A vehicle level functional safety concept defines a system architecture to ensure the safety goals.
Safety goals are refined into lower-level safety requirements. (In general, each safety requirement inherits the ASIL of its parent safety requirement/goal. However, subject to constraints, the inherited ASIL may be lowered by decomposition of a requirement into redundant requirements implemented by sufficiently independent redundant components.)
Safety requirements are allocated to architectural components (subsystems, hardware components, software components). (In general, each component should be developed in compliance with standards and processes suggested/required for the highest ASIL of the safety requirements allocated to it.)
The architectural components are then developed and validated in accord with the allocated safety (and functional) requirements.
Part 8: Supporting Processes
ISO 26262 defines objectives for integral processes that are supportive to the Safety Life Cycle processes, but are continuously active throughout all phases, and also defines additional considerations that support accomplishment of general process objectives.
Controlled corporate interfaces for flow down of objectives, requirements, and controls to all suppliers in distributed developments
Explicit specification of safety requirements and their management throughout the Safety Life Cycle
Configuration control of work products, with formal unique identification and reproducibility of the configurations that provides for traceability between dependent work products and identification of all changes in configuration
Formal change management, including management of impact of changes on safety requirements, for the purposes of assurance of removal of detected defects as well as for product change without introduction of hazards
Planning, control, and reporting of the verification of work products, including review, analysis, and testing, with regression analysis of detected defects to their source
Planned identification and management of all documentation (work products) produced through all phases of the Safety Life Cycle to facilitate continuous management of functional safety and safety assessment
Confidence in software tools (qualification of software tools for the intended and actual use)
Qualification of previously developed software and hardware components for integration in the currently developed ASIL item
Use of service history evidence to argue that an item has proven sufficiently safe in use for the intended ASIL
Part 9: Automotive Safety Integrity Level (ASIL)-oriented and safety-oriented analysis
Automotive Safety Integrity Level refers to an abstract classification of inherent safety risk in an automotive system or elements of such a system. ASIL classifications are used within ISO 26262 to express the level of risk reduction required to prevent a specific hazard, with ASIL D representing the highest hazard level and ASIL A the lowest. The ASIL assessed for a given hazard is then assigned to the safety goal set to address that hazard and is then inherited by the safety requirements derived from that goal.
ASIL Assessment Overview
The determination of ASIL is the result of hazard analysis and risk assessment. In the context of ISO 26262, a hazard is assessed based on the relative impact of hazardous effects related to a system, as adjusted for relative likelihoods of the hazard manifesting those effects. That is, each hazardous event is assessed in terms of severity of possible injuries within the context of the relative amount of time a vehicle is exposed to the possibility of the hazard happening as well as the relative likelihood that a typical driver can act to prevent the injury.
ASIL Assessment Process
At the beginning of the safety life cycle, hazard analysis and risk assessment is performed, resulting in assessment of ASIL to all identified hazardous events and safety goals.
Each hazardous event is classified according to the severity (S) of injuries it can be expected to cause, ranging from S0 (no injuries) through S1 (light and moderate injuries) and S2 (severe injuries, survival probable) to S3 (life-threatening injuries where survival is uncertain, and fatal injuries):
Risk Management recognizes that consideration of the severity of a possible injury is modified by how likely the injury is to happen; that is, for a given hazard, a hazardous event is considered a lower risk if it is less likely to happen. Within the hazard analysis and risk assessment process of this standard, the likelihood of an injurious hazard is further classified according to a combination of
exposure (E) (the relative expected frequency of the operational conditions in which the injury can possibly happen) and
control (C) (the relative likelihood that the driver can act to prevent the injury).
In terms of these classifications, an Automotive Safety Integrity Level D hazardous event (abbreviated ASIL D) is defined as an event having reasonable possibility of causing a life-threatening (survival uncertain) or fatal injury, with the injury being physically possible in most operating conditions, and with little chance the driver can do something to prevent the injury. That is, ASIL D is the combination of S3, E4, and C3 classifications. For each single reduction in any one of these classifications from its maximum value (excluding reduction of C1 to C0), there is a single-level reduction in the ASIL from D. [For example, a hypothetical uncontrollable (C3) fatal injury (S3) hazard could be classified as ASIL A if the hazard has a very low probability (E1).] The ASIL level below A is the lowest level, QM. QM refers to the standard's consideration that below ASIL A; there is no safety relevance and only standard Quality Management processes are required.
These Severity, Exposure, and Control definitions are informative, not prescriptive, and effectively leave some room for subjective variation or discretion between various automakers and component suppliers. In response, the Society of Automotive Engineers (SAE) has issued J2980 – Considerations for ISO 26262 ASIL Hazard Classification – to provide more explicit guidance for assessing Exposure, Severity and Controllability for a given hazard.
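The single-step-reduction pattern described above lends itself to a compact lookup. The sketch below illustrates that pattern only; the integer encoding of S, E and C and the derived table are simplifications of this example, not a reproduction of the standard's normative tables.

def asil(s: int, e: int, c: int) -> str:
    """Derive an ASIL from severity S1-S3, exposure E1-E4, controllability C1-C3.

    Encodes the pattern described above: S3 + E4 + C3 gives ASIL D, and each
    single-level reduction in any classification lowers the ASIL one level.
    """
    if not (1 <= s <= 3 and 1 <= e <= 4 and 1 <= c <= 3):
        raise ValueError("expected S1-S3, E1-E4, C1-C3")
    total = s + e + c                    # maximum possible is 10
    levels = {10: "D", 9: "C", 8: "B", 7: "A"}
    return levels.get(total, "QM")       # anything below A is quality-managed

print(asil(3, 4, 3))  # D: life-threatening, most conditions, uncontrollable
print(asil(3, 1, 3))  # A: the same hazard, but with very low exposure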
See also
Automotive Safety Integrity Level, comparison with other safety level systems
ARP4754 (Guidelines For Development Of Civil Aircraft and Systems)
DO-178C (Aerospace)
IEC 61508 (Industrial/General, ISO 26262 is an adaption with minor differences)
ISO 60730 (Household)
References
External links
ISO 26262-1:2011(en) (Road vehicles — Functional safety — Part 1: Vocabulary) at ISO Online Browsing Platform (OBP)
ISO 26262-1:2018(en) (Road vehicles — Functional safety — Part 1: Vocabulary) at ISO Online Browsing Platform (OBP)
26262
Automotive standards
International standards
Automotive safety
Safety engineering | ISO 26262 | [
"Engineering"
] | 2,306 | [
"Safety engineering",
"Systems engineering"
] |
26,220,783 | https://en.wikipedia.org/wiki/Characteristic%20admittance | Characteristic admittance is the mathematical inverse of the characteristic impedance.
The general expression for the characteristic admittance of a transmission line is:
Y_0 = √( (G + jωC) / (R + jωL) )
where
is the resistance per unit length,
is the inductance per unit length,
is the conductance of the dielectric per unit length,
is the capacitance per unit length,
is the imaginary unit, and
is the angular frequency.
The current and voltage phasors on the line are related by the characteristic admittance as:
I^+ / V^+ = Y_0 = −I^− / V^−
where the superscripts + and − represent forward- and backward-traveling waves, respectively.
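For a numerical illustration, the expression above can be evaluated directly. The RLGC values in the example below are illustrative choices of this sketch, not measurements of a real line.

# Characteristic admittance of a transmission line from its RLGC parameters.
import cmath

def characteristic_admittance(R, L, G, C, f):
    """Y0 = sqrt((G + jwC) / (R + jwL)) at frequency f in hertz."""
    w = 2 * cmath.pi * f
    return cmath.sqrt((G + 1j*w*C) / (R + 1j*w*L))

# A nearly lossless 50-ohm-like line: sqrt(C/L) is about 0.02 siemens.
Y0 = characteristic_admittance(R=0.1, L=250e-9, G=1e-6, C=100e-12, f=1e6)
print(Y0, 1 / Y0)   # 1/Y0 is the characteristic impedance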
See also
Characteristic impedance
References
Electricity
Physical quantities
Distributed element circuits | Characteristic admittance | [
"Physics",
"Mathematics",
"Engineering"
] | 128 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Electronic engineering",
"Distributed element circuits",
"Physical properties"
] |
47,790,450 | https://en.wikipedia.org/wiki/Mothur | mothur is an open source software package for bioinformatics data processing. The package is frequently used in the analysis of DNA from uncultured microbes. mothur is capable of processing data generated from several DNA sequencing methods including 454 pyrosequencing, Illumina HiSeq and MiSeq, Sanger, PacBio, and IonTorrent. The first release of mothur occurred in 2009 and was announced in a publication in the journal Applied and Environmental Microbiology. As of October 26, 2022, the article announcing mothur had been cited by around 15,000 other research studies.
External links
References
Free bioinformatics software
Computational biology | Mothur | [
"Biology"
] | 144 | [
"Computational biology"
] |
47,793,176 | https://en.wikipedia.org/wiki/Design%20of%20plastic%20components | Injection molding has been one of the most popular ways of fabricating plastic parts for decades. Injection-molded parts are used in automotive interior parts, electronic housings, housewares, medical equipment, compact discs, and even doghouses. Below are rule-based standard guidelines that can be consulted when designing parts for injection molding with manufacturability in mind.
Geometric considerations
The most common guidelines refer to the specification of various relationships between geometric parameters which result in easier or better manufacturability. Some of these are as follows:
Wall Thickness
Non-uniform wall sections can contribute to warpage and stresses in molded parts. Sections which are too thin have a higher chance of breakage in handling, may restrict the flow of material and may trap air causing a defective part. Too heavy a wall thickness, on the other hand, will slow the curing cycle and add to material cost and increase cycle time.
Generally, thinner walls are more feasible with small parts rather than with large ones. The limiting factor in wall thinness is the tendency for the plastic material in thin walls to cool and solidify before the mold is filled. The shorter the material flow, the thinner the wall can be. Walls also should be as uniform in thickness as possible to avoid warpage from uneven shrinkage. When changes in wall thickness are unavoidable, the transition should be gradual and not abrupt.
Some plastics are more sensitive to wall thickness than others; for example, acetal and ABS plastics max out at around 0.12 in. (3 mm) thick, acrylic can go to 0.5 in. (12 mm), polyurethane to 0.75 in. (18 mm), and certain fiber-reinforced plastics to 1 in. (25 mm) or more. Even so, designers should recognize that very thick cross sections can increase the likelihood of cosmetic defects like sink.
Draft angles
Draft angle design is an important factor when designing plastic parts. Because of shrinkage of plastic material, injection molded parts have a tendency to shrink onto a core. This creates higher contact pressure on the core surface and increases friction between the core and the part, thus making ejection of the part from the mold difficult. Hence, draft angles should be designed properly to assist in part ejection. This also reduces cycle time and improves productivity. Draft angles should be used on interior and exterior walls of the part along the pulling direction.
The minimum allowable draft angle is harder to quantify. Plastic material suppliers and molders are the authority on what is the lowest acceptable draft. In most instances, 1 degree per side will be sufficient, but between 2 and 5 degrees per side would be preferable. If the design is not compatible with 1 degree, then allow for 0.5 degree on each side. Even a small draft angle, such as 0.25 degree, is preferable to none at all.
Radius at corners
Generously rounded corners provide a number of advantages. There is less stress concentration on the part and on the tool. Because of sharp corners, material flow is not smooth and tends to be difficult to fill, reduces tooling strength and causes stress concentration. Parts with radii and fillets are more economical and easier to produce, reduce chipping, simplify mold construction and add strength to molded part with good appearance.
General design guidelines for corners in injection molding suggest that corner radii should be at least one-half the wall thickness. It is recommended to avoid sharp corners and use generous fillets and radii whenever required. During injection molding, the molten plastic has to navigate turns or corners. Rounded corners will ease plastic flow, so engineers should generously radius the corners of all parts. In contrast, sharp inside corners result in molded-in stress, particularly during the cooling process, when the top of the part tries to shrink and the material pulls against the corners. Moreover, the first rule of plastic design, i.e. uniform wall thickness, will be obeyed. As the plastic goes around a well-proportioned corner, it will not be subjected to area increases and abrupt changes in direction. Cavity packing pressure stays consistent. This leads to a strong, dimensionally stable corner that will resist post-mold warpage.
Hole depth to diameter ratio
Core pins are used to produce holes in plastic parts. Through holes are easier to produce than blind holes which don't go through the entire part. Blind holes are created by pins that are supported at only one end; hence such pins should not be long. Longer pins will deflect more and be pushed by the pressure of the molten plastic material during molding. It is recommended that hole depth-to-diameter ratio should not be more than 2.
Feature Based Rules
Ribs
Rib features help in strengthening the molded part without adding to wall thickness. In some cases, they can also act as decorative features. Ribs also provide alignment in mating parts or provide stopping surfaces for assemblies. However, projections like ribs can create cavity filling, venting, and ejection problems. These problems become more troublesome for taller ribs. Ribs need to be designed in correct proportion to avoid defects such as short shots and provide the required strength. Thick and deep ribs can cause sink marks and filling problems respectively. Deep ribs can also lead to ejection problems. If ribs are too long or too wide, supporting ribs may be required. It is better to use a number of smaller ribs instead of one large rib.
Recommended values for parameters: Generally, the rib height is recommended to be not more than 2.5 to 3 times the nominal wall thickness. Similarly, rib thickness at its base should be around 0.4 to 0.6 times the nominal wall thickness.
Minimum base radius for ribs: A fillet of a certain minimum radius value should be provided at the base of a rib to reduce stress. However, the radius should not be so large that it results in thick sections. The radius eliminates a sharp corner and stress concentration. Flow and cooling are also improved. Fillet radius at the base of ribs should be between 0.25 and 0.4 times the nominal wall thicknesses of the part.
Draft angle for ribs: Draft angle design is an important factor when designing plastic parts. Such parts may have a greater tendency to shrink onto a core. This creates higher contact pressure on the core surface and increases friction between the core and the part, thus making ejection of the part from the mold difficult. Hence, draft angles should be designed properly to assist in part ejection. This also reduces cycle time and improves productivity. Draft angles should be used on interior or exterior walls of the part along the pulling direction. It is recommended that draft angle for rib should be around 1 to 1.5 deg. Minimum draft should be 0.5 per side.
Spacing between two parallel ribs: Mold wall thickness gets affected due to spacing between various features in the plastic model. If features like ribs are placed close to each other or the walls of the parts, thin areas are created which can be hard to cool and can affect quality. If the mold wall is too thin, it is also difficult to manufacture and can also result in a lower life for the mold due to problems like hot blade creation and differential cooling. It is recommended that spacing between ribs should be at least 2 times the nominal wall.
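The rib proportions quoted above can be collected into a simple design-rule check. The sketch below encodes only the ratios stated in this section; real thresholds should come from the material supplier and molder.

def check_rib(nominal_wall, rib_height, rib_thickness, base_radius,
              draft_deg, spacing):
    """Flag rib dimensions that fall outside the guidelines quoted above."""
    issues = []
    if rib_height > 3.0 * nominal_wall:
        issues.append("rib height exceeds 3x nominal wall")
    if not 0.4 * nominal_wall <= rib_thickness <= 0.6 * nominal_wall:
        issues.append("rib base thickness outside 0.4-0.6x nominal wall")
    if not 0.25 * nominal_wall <= base_radius <= 0.4 * nominal_wall:
        issues.append("base fillet radius outside 0.25-0.4x nominal wall")
    if draft_deg < 0.5:
        issues.append("draft below 0.5 degree per side")
    if spacing < 2.0 * nominal_wall:
        issues.append("rib spacing below 2x nominal wall")
    return issues or ["rib dimensions within guidelines"]

# Example: a 2.0 mm nominal wall with a rib that is slightly too tall.
print(check_rib(2.0, rib_height=7.0, rib_thickness=1.0,
                base_radius=0.6, draft_deg=1.0, spacing=5.0))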
Boss
Boss, a basic design element in plastics, is typically cylindrical and used as a mounting fixture, location point, reinforcement feature or spacer. Under service conditions, bosses are often subjected to loadings not encountered in other sections of a component.
Minimum radius at base of boss: Provide a generous radius at the base of the boss for strength and ample draft for easy part removal from the mold. A fillet of a certain minimum radius value should be provided at the base of boss to reduce stress. The intersection of the base of the boss with the nominal wall is typically stressed and stress concentration increases if no radii are provided. Also, the radius at the base of the boss should not exceed a maximum value to avoid thick sections. The radius at base of boss provides strength and ample draft for easy removal from the mold. It is recommended that the radius at the base of boss should be 0.25 to 0.5 times the nominal wall thickness.
Boss height to outer diameter ratio: A tall boss with the included draft will generate a material mass and thick section at the base. In addition, the core pin will be difficult to cool, can extend the cycle time and affect the cored hole dimensionally. It is recommended that the height of a boss be less than 3 times its outer diameter.
Minimum radius at tip of boss: Bosses are features added to the nominal wall thickness of the component and are usually used to facilitate mechanical assembly. Under service conditions, bosses are often subjected to loadings not encountered in other sections of a component. A fillet of a certain minimum radius value should be provided at the tip of boss to reduce stress.
Wall thickness of boss: Wall thicknesses for bosses should be less than 60 percent of the nominal wall to minimize sinking. However, if the boss is not in a visible area, then the wall thickness can be increased to allow for increased stresses imposed by self-tapping screws. It is recommended that wall thickness of boss should be around 0.6 times of nominal wall thickness depending on the material.
Radius at base of hole in boss: Bosses find use in many part designs as points for attachment and assembly. The most common variety consists of cylindrical projections with holes designed to receive screws, threaded inserts, or other types of fastening hardware. Providing a radius on the core pin helps in avoiding a sharp corner. This not only helps molding but also reduces stress concentration. It is recommended that the radius at base of hole in boss should be 0.25 to 0.5 times the nominal wall thickness.
Minimum draft for boss inner and outer diameter: An appropriate draft on the outer diameter of a boss helps easy ejection from the mold. Draft is required on the walls of boss to permit easy withdrawal from the mold. Similarly, designs may require a minimum taper on the ID of a boss for proper engagement with a fastener. Draft is required on the walls of boss to permit easy withdrawal from the mold. It is recommended that minimum draft on outer surface of the boss should be greater than or equal to 0.5 degree and on inner surface it should be greater than 0.25 degrees.
Spacing between bosses: When bosses are placed very close to each other, it results in creating thin areas which are hard to cool and can affect the quality and productivity. Also, if the mold wall is too thin, it is very difficult to manufacture and often results in a lower life for the mold, due to problems like hot blade creation and differential cooling. It is recommended that spacing between bosses should be at least 2 times the nominal wall thickness.
Standalone boss: Bosses and other thick sections should be cored. It is good practice to attach the boss to the sidewall. In this case the material flow is uniform and provides additional load distribution for the part. For better rigidity and material flow, the general guideline suggests that boss should be connected to nearest side wall.
Undercut detection
Undercuts should be avoided for ease of manufacturing. Undercuts typically require additional mechanisms for manufacture, adding to mold cost and complexity. In addition, the part must have room to flex and deform. Clever part design or minor design concessions often can eliminate complex mechanisms for undercuts. Undercuts may require additional time for unloading molds. It is recommended that undercuts on a part should be avoided to the extent possible.
Fillet
Sharp corners increase stress concentrations, which are prone to air entrapment, air voids, and sink marks, weakening the structural integrity of the plastic part. They should be eliminated using radii wherever possible.
It is recommended that an inside radius be a minimum of one times the thickness.
At corners, the suggested inside radius is 0.5 times the material thickness and the outside radius is 1.5 times the material thickness. A bigger radius should be used if part design allows.
Holes
Holes can be possibly made on slides but can result in generation of weld lines.
Minimum spacing between 2 holes or a hole and a sidewall should be equal to the diameter of the hole.
The hole should be located at a minimum distance of 3 times the diameter from the edge of a part, to minimize stresses.
A through hole is preferred over a blind hole because core pin that produces a hole can be supported at both ends and is less likely to bend.
Holes in the bottom of a part are better than holes in side, which require retractable core pins.
Depth of blind holes should not be more than 2 times the diameter.
Steps should be used to increase the depth of a deep blind hole.
For through holes, cutout sections in the part can shorten the length of a small-diameter pin.
Use overlapping and offset mold cavity projections instead of core pins to produce holes parallel to the die parting line (perpendicular to the mold movement direction).
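A minimal sketch collecting the hole guidelines above into one check; the dimensions in the example call are arbitrary.

def check_hole(diameter, depth, edge_distance, neighbor_spacing, blind=True):
    """Apply the hole placement guidelines listed above; return warnings."""
    issues = []
    if blind and depth > 2.0 * diameter:
        issues.append("blind hole deeper than 2x diameter; consider steps")
    if edge_distance < 3.0 * diameter:
        issues.append("hole closer than 3x diameter to the part edge")
    if neighbor_spacing < diameter:
        issues.append("spacing to nearest hole or sidewall below one diameter")
    return issues or ["hole placement within guidelines"]

print(check_hole(diameter=4.0, depth=10.0, edge_distance=10.0,
                 neighbor_spacing=5.0))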
Simulation
The design of injection moulded components can be further improved and optimised by using injection moulding simulation software such as Autodesk Moldflow and SolidWorks Plastics. This software works with components designed in CAD to simulate how a polymer behaves when it enters an injection mould cavity. It can predict how the molten material flows and freezes, flag any part geometry that is too thin or too thick, and show whether any weaknesses are created in the plastic from defects such as weld lines.
When simulation is undertaken in the design phase of a project, in advance of the tool being manufactured, it can help to identify the problems discussed above and allow the designer to iteratively modify and re-simulate the design to make improvements. Use of simulation in the design phase can help to reduce problems with the physical mould and therefore reduce time to market, reduce the use of material and energy, prevent surface defects such as sink and flow marks, and reduce the time taken to inject, cool and eject the part, improving the injection moulding machine's output rate.
References
Injection molding
Industrial design | Design of plastic components | [
"Engineering"
] | 2,898 | [
"Industrial design",
"Design engineering",
"Design"
] |
47,793,259 | https://en.wikipedia.org/wiki/Intermuscular%20coherence | Intermuscular coherence is a measure to quantify correlations between the activity of two muscles, which is often assessed using electromyography. The correlations in muscle activity are quantified in the frequency domain, and are therefore referred to as intermuscular coherence.
History
The synchronisation of motor units of a single muscle in animals and humans has been known for decades. The early studies that investigated the relationship of EMG activity used time-domain cross-correlation to quantify common input. The explicit notion of the presence of synchrony between motor units of two different muscles was reported at a later time. In the 1990s, coherence analysis was introduced to examine the frequency content of common input.
Physiology
Intermuscular coherence can be used to investigate the neural circuitry involved in motor control. Correlated muscle activity indicates common input to the motor unit pools of both muscles and reflects shared neural pathways (including cortical, subcortical and spinal) that contribute to muscle activity and movement. The strength of intermuscular coherence is dependent on the relationship between muscles and is generally stronger between muscle pairs that are anatomically and functionally closely related. Intermuscular coherence can therefore be used to identify impairments in motor pathways.
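In practice, intermuscular coherence is computed from spectral estimates of two EMG recordings. The sketch below uses SciPy's Welch-based coherence on synthetic signals; the shared 20 Hz drive and all parameter choices are assumptions of the example, not a recommended analysis pipeline.

# Estimating intermuscular coherence between two surface EMG channels.
import numpy as np
from scipy.signal import coherence

fs = 1000.0                       # sampling rate in Hz
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
common_drive = np.sin(2 * np.pi * 20 * t)           # shared neural input
emg1 = common_drive + rng.standard_normal(t.size)   # muscle 1 plus noise
emg2 = common_drive + rng.standard_normal(t.size)   # muscle 2 plus noise

# A coherence peak appears at the frequency of the common input.
f, cxy = coherence(emg1, emg2, fs=fs, nperseg=1024)
print("peak coherence %.2f at %.1f Hz" % (cxy.max(), f[cxy.argmax()]))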
See also
Corticomuscular coherence
Corticocortical coherence
References
External links
NeuroSpec Toolbox for MATLAB
Neuroscience
Neurophysiology | Intermuscular coherence | [
"Biology"
] | 293 | [
"Neuroscience"
] |
47,796,972 | https://en.wikipedia.org/wiki/Gordon%20decomposition | In mathematical physics, the Gordon decomposition (named after Walter Gordon) of the Dirac current is a splitting of the charge or particle-number current into a part that arises from the motion of the center of mass of the particles and a part that arises from gradients of the spin density. It makes explicit use of the Dirac equation and so it applies only to "on-shell" solutions of the Dirac equation.
Original statement
For any solution ψ of the massive Dirac equation,
(iγ^μ ∂_μ − m)ψ = 0,
the Lorentz covariant number-current j^μ = ψ̄ γ^μ ψ may be expressed as
ψ̄ γ^μ ψ = (i/2m)( ψ̄ ∂^μ ψ − (∂^μ ψ̄) ψ ) + (1/2m) ∂_ν ( ψ̄ σ^{μν} ψ ),
where
σ^{μν} = (i/2)[γ^μ, γ^ν]
is the spinor generator of Lorentz transformations, and
ψ̄ = ψ† γ^0
is the Dirac adjoint.
The corresponding momentum-space version for plane wave solutions u(p) and ū(p′) obeying
(γ^μ p_μ − m) u(p) = 0 and ū(p′)(γ^μ p′_μ − m) = 0
is
ū(p′) γ^μ u(p) = ū(p′) [ (p + p′)^μ / (2m) + i σ^{μν} (p′ − p)_ν / (2m) ] u(p),
where σ^{μν} and the adjoint are as above.
Proof
One sees from Dirac's equation that
ψ̄ γ^μ (i γ^ν ∂_ν ψ) = m ψ̄ γ^μ ψ
and, from the adjoint of Dirac's equation,
(−i ∂_ν ψ̄ γ^ν) γ^μ ψ = m ψ̄ γ^μ ψ.
Adding these two equations yields
2m ψ̄ γ^μ ψ = i ( ψ̄ γ^μ γ^ν ∂_ν ψ − (∂_ν ψ̄) γ^ν γ^μ ψ ).
From Dirac algebra, one may show that Dirac matrices satisfy
γ^μ γ^ν = g^{μν} − i σ^{μν} and γ^ν γ^μ = g^{μν} + i σ^{μν}.
Using this relation,
2m ψ̄ γ^μ ψ = i ( ψ̄ ∂^μ ψ − (∂^μ ψ̄) ψ ) + ∂_ν ( ψ̄ σ^{μν} ψ ),
which amounts to just the Gordon decomposition, after some algebra.
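The momentum-space identity can also be verified numerically with explicit Dirac matrices. The sketch below assumes the Dirac representation, metric signature (+,−,−,−), m = 1 and arbitrary example momenta; none of these choices come from the article.

# Numerical check of the momentum-space Gordon identity.
import numpy as np

I2 = np.eye(2)
Z2 = np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

g0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in sigma]
eta = np.diag([1.0, -1.0, -1.0, -1.0])
m = 1.0

def spinor(p3, xi):
    """Positive-energy solution u(p) of (gamma.p - m) u = 0."""
    E = np.sqrt(m**2 + p3 @ p3)
    sp = sum(p3[i] * sigma[i] for i in range(3))
    u = np.concatenate([np.sqrt(E + m) * xi, (sp @ xi) / np.sqrt(E + m)])
    return u, np.array([E, *p3])

def sigma_mn(mu, nu):
    return 0.5j * (gammas[mu] @ gammas[nu] - gammas[nu] @ gammas[mu])

u, p = spinor(np.array([0.3, -0.1, 0.4]), np.array([1.0, 0.0], complex))
ur, pr = spinor(np.array([-0.2, 0.5, 0.1]), np.array([0.0, 1.0], complex))
ubar = ur.conj() @ g0                 # Dirac adjoint as a row vector
q_lo = eta @ (pr - p)                 # covariant components (p' - p)_nu

for mu in range(4):
    lhs = ubar @ gammas[mu] @ u
    rhs = (pr + p)[mu] / (2 * m) * (ubar @ u) + sum(
        1j * q_lo[nu] * (ubar @ sigma_mn(mu, nu) @ u) for nu in range(4)
    ) / (2 * m)
    assert np.isclose(lhs, rhs), (mu, lhs, rhs)
print("Gordon identity holds for all four components")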
Utility
The second, spin-dependent, part of the current coupled to the photon field A_μ yields, up to an ignorable total divergence,
(e/4m) F_{μν} ( ψ̄ σ^{μν} ψ ),
that is, an effective Pauli moment term with the Dirac magnetic moment e/2m.
Massless generalization
This decomposition of the current into a particle number-flux (first term) and bound spin contribution (second term) requires m ≠ 0.
If one assumes instead that the given solution has energy E, so that its time dependence is exp(−iEt), one might obtain a decomposition that is valid for both massive and massless cases.
Using the Dirac equation again, one finds that
Here , and
with
so that
where σ is the vector of Pauli matrices.
With the particle-number density identified with ρ = ψ†ψ, and for a near plane-wave
solution of finite extent, one may interpret the first term in the decomposition as the current ρv, due to particles moving at speed v = p/E.
The second term is the current due to the gradients in the intrinsic magnetic moment density. The magnetic moment itself is found by integrating by parts to show that
For a single massive particle in its rest frame, where p = 0, the magnetic moment reduces to
μ = (eg/2m) S,
where S is the particle's spin and g = 2 is the Dirac value of the gyromagnetic ratio.
For a single massless particle obeying the right-handed Weyl equation, the spin-1/2 is locked to the direction of its kinetic momentum and the magnetic moment becomes
Angular momentum density
For both the massive and massless cases, one also has an expression for the momentum density as part of the symmetric Belinfante–Rosenfeld stress–energy tensor
Using the Dirac equation one may evaluate to find the energy density to be , and the momentum density,
If one used the non-symmetric canonical energy-momentum tensor
one would not find the bound spin-momentum contribution.
By an integration by parts one finds that the spin contribution to the total angular momentum is
This is what is expected, so the division by 2 in the spin contribution to the momentum density is necessary. The absence of a division by 2 in the formula for the current reflects the gyromagnetic ratio of the electron. In other words, a spin-density gradient is twice as effective at making an electric current as it is at contributing to the linear momentum.
Spin in Maxwell's equations
Motivated by the Riemann–Silberstein vector form of Maxwell's equations, Michael Berry uses the Gordon strategy to obtain gauge-invariant expressions for the intrinsic spin angular-momentum density for solutions to Maxwell's equations.
He assumes that the solutions are monochromatic and uses the phasor expressions , . The time average of the Poynting vector momentum density is then given by
We have used Maxwell's equations in passing from the first to the second and third lines, and in expressions such as E*·(∇)E the scalar product is between the fields, so that the vector character is determined by the ∇.
As
and for a fluid with intrinsic angular momentum density we have
these identities suggest that the spin density can be identified as either
or
The two decompositions coincide when the field is paraxial. They also coincide when the field is a pure helicity state – i.e. when where the helicity takes the values for light that is right or left circularly polarized respectively. In other cases they may differ.
References
Equations of physics | Gordon decomposition | [
"Physics",
"Mathematics"
] | 861 | [
"Mathematical objects",
"Equations of physics",
"Equations"
] |
36,072,745 | https://en.wikipedia.org/wiki/Minkowski%27s%20second%20theorem | In mathematics, Minkowski's second theorem is a result in the geometry of numbers about the values taken by a norm on a lattice and the volume of its fundamental cell.
Setting
Let K be a closed convex centrally symmetric body of positive finite volume in n-dimensional Euclidean space ℝ^n. The gauge or distance Minkowski functional g attached to K is defined by
g(x) = inf { λ ∈ ℝ : λ > 0, x ∈ λK }.
Conversely, given a norm g on ℝ^n we define K to be
K = { x ∈ ℝ^n : g(x) ≤ 1 }.
Let Γ be a lattice in ℝ^n. The successive minima of K or g on Γ are defined by setting the k-th successive minimum λ_k to be the infimum of the numbers λ such that λK contains k linearly-independent vectors of Γ. We have 0 < λ_1 ≤ λ_2 ≤ … ≤ λ_n < ∞.
Statement
The successive minima satisfy
(2^n / n!) vol(ℝ^n / Γ) ≤ λ_1 λ_2 ⋯ λ_n vol(K) ≤ 2^n vol(ℝ^n / Γ).
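The bounds can be checked numerically for small examples by brute-force enumeration. The sketch below takes K to be the closed unit disc in the plane, so the gauge is the Euclidean norm; the lattice basis and the enumeration range are arbitrary choices of the sketch.

# Brute-force check of Minkowski's second theorem in the plane.
import itertools
import math
import numpy as np

basis = np.array([[3.0, 1.0], [0.0, 0.5]])   # columns are lattice basis vectors
covol = abs(np.linalg.det(basis))            # volume of a fundamental cell

# Enumerate small lattice vectors and sort them by Euclidean norm.
coeffs = [c for c in itertools.product(range(-8, 9), repeat=2) if c != (0, 0)]
vecs = sorted((basis @ np.array(c) for c in coeffs), key=np.linalg.norm)

lam1 = np.linalg.norm(vecs[0])               # first successive minimum
lam2 = next(np.linalg.norm(v) for v in vecs  # first vector independent of the
            if abs(np.linalg.det(np.column_stack([vecs[0], v]))) > 1e-9)

vol_K = math.pi                              # area of the unit disc
product = lam1 * lam2 * vol_K
lower = 2**2 / math.factorial(2) * covol
upper = 2**2 * covol
print(lam1, lam2)
print(lower <= product <= upper)             # True for this example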
Proof
A basis of linearly independent lattice vectors b_1, b_2, …, b_n can be defined by g(b_j) = λ_j.
The lower bound is proved by considering the convex polytope with vertices at ±b_j/λ_j, which has an interior enclosed by K and a volume which is 2^n/(n! λ_1 λ_2 ⋯ λ_n) times an integer multiple of a primitive cell of the lattice (as seen by scaling the polytope by λ_j along each basis vector to obtain 2^n n-simplices with lattice point vectors).
To prove the upper bound, consider functions sending points in to the centroid of the subset of points in that can be written as for some real numbers . Then the coordinate transform has a Jacobian determinant . If and are in the interior of and (with ) then with , where the inclusion in (specifically the interior of ) is due to convexity and symmetry. But lattice points in the interior of are, by definition of , always expressible as a linear combination of , so any two distinct points of cannot be separated by a lattice vector. Therefore, must be enclosed in a primitive cell of the lattice (which has volume ), and consequently .
References
Hermann Minkowski | Minkowski's second theorem | [
"Mathematics"
] | 342 | [
"Geometry of numbers",
"Geometry",
"Theorems in geometry",
"Mathematical problems",
"Mathematical theorems",
"Number theory"
] |
36,074,588 | https://en.wikipedia.org/wiki/S-Glutathionylation | S-Glutathionylation is the posttranslational modification of protein cysteine residues by the addition of glutathione, the most abundant and important low-molecular-mass thiol within most cell types.
Protein S-glutathionylation is involved in
oxidative stress
nitrosative stress
preventing irreversible oxidation of protein thiols
control of cell-signalling pathways by modulating protein function
References
Post-translational modification | S-Glutathionylation | [
"Chemistry"
] | 100 | [
"Post-translational modification",
"Gene expression",
"Biochemical reactions"
] |
36,075,414 | https://en.wikipedia.org/wiki/Computable%20topology | Computable topology is a discipline in mathematics that studies the topological and algebraic structure of computation. Computable topology is not to be confused with algorithmic or computational topology, which studies the application of computation to topology.
Topology of lambda calculus
As shown by Alan Turing and Alonzo Church, the λ-calculus is strong enough to describe all mechanically computable functions (see Church–Turing thesis). Lambda-calculus is thus effectively a programming language, from which other languages can be built. For this reason when considering the topology of computation it is common to focus on the topology of λ-calculus. Note that this is not necessarily a complete description of the topology of computation, since functions which are equivalent in the Church-Turing sense may still have different topologies.
The topology of λ-calculus is the Scott topology, and when restricted to continuous functions the type free λ-calculus amounts to a topological space reliant on the tree topology. Both the Scott and Tree topologies exhibit continuity with respect to the binary operators of application ( f applied to a = fa ) and abstraction ((λx.t(x))a = t(a)) with a modular equivalence relation based on a congruency. The λ-algebra describing the algebraic structure of the lambda-calculus is found to be an extension of the combinatory algebra, with an element introduced to accommodate abstraction.
Type free λ-calculus treats functions as rules and does not differentiate functions and the objects which they are applied to, meaning λ-calculus is type free. A by-product of type free λ-calculus is an effective computability equivalent to general recursion and Turing machines. The set of λ-terms can be considered a functional topology in which a function space can be embedded, meaning λ mappings within the space X are such that λ:X → X. Introduced November 1969, Dana Scott's untyped set theoretic model constructed a proper topology for any λ-calculus model whose function space is limited to continuous functions. The result of a Scott continuous λ-calculus topology is a function space built upon a programming semantic allowing fixed point combinatorics, such as the Y combinator, and data types. By 1971, λ-calculus was equipped to define any sequential computation and could be easily adapted to parallel computations. The reducibility of all computations to λ-calculus allows these λ-topological properties to become adopted by all programming languages.
Computational algebra from λ-calculus algebra
Based on the operators within lambda calculus, application and abstraction, it is possible to develop an algebra whose group structure uses application and abstraction as binary operators. Application is defined as an operation between lambda terms producing a λ-term, e.g. the application of λ onto the lambda term a produces the lambda term λa. Abstraction incorporates undefined variables by denoting λx.t(x) as the function assigning the variable a to the lambda term with value t(a) via the operation ((λ x.t(x))a = t(a)). Lastly, an equivalence relation emerges which identifies λ-terms modulo convertible terms, an example being beta normal form.
Scott topology
The Scott topology is essential in understanding the topological structure of computation as expressed through the λ-calculus. Scott found that after constructing a function space using λ-calculus one obtains a Kolmogorov space, a topological space which exhibits pointwise convergence, in short the product topology. It is the ability of self homeomorphism as well as the ability to embed every space into such a space, denoted Scott continuous, as previously described which allows Scott's topology to be applicable to logic and recursive function theory. Scott approaches his derivation using a complete lattice, resulting in a topology dependent on the lattice structure. It is possible to generalise Scott's theory with the use of complete partial orders. For this reason a more general understanding of the computational topology is provided through complete partial orders. We will re-iterate to familiarize ourselves with the notation to be used during the discussion of Scott topology.
Complete partial orders are defined as follows:
First, given the partially ordered set D=(D,≤), a nonempty subset X ⊆ D is directed if ∀ x,y ∈ X ∃ z ∈ X where x≤ z & y ≤ z.
D is a complete partial order (cpo) if:
⋅ Every directed X ⊆D has a supremum, and:
∃ bottom element ⊥ ∈ D such that ∀ x ∈ D ⊥ ≤ x.
We are now able to define the Scott topology over a cpo (D, ≤ ).
O ⊆ D is open if:
for x ∈ O, and x ≤ y, then y ∈ O, i.e. O is an upper set.
for a directed set X ⊆ D, and supremum(X) ∈ O, then X ∩ O ≠ ∅.
Using the Scott topological definition of open it is apparent that all topological properties are met.
⋅∅ and D, i.e. the empty set and whole space, are open.
⋅Arbitrary unions of open sets are open:
Proof: Assume U_i is open where i ∈ I, I being the index set. We define U = ∪{ U_i ; i ∈ I }. Take b as an element of the upper set of U, therefore a ≤ b for some a ∈ U. It must be that a ∈ U_i for some i, and likewise b ∈ upset(U_i). U must therefore be upper as well, since b ∈ U_i ⊆ U.
Likewise, if D is a directed set with a supremum in U, then by assumption sup(D) ∈ U_i where U_i is open. Thus there is a b ∈ D where b ∈ U_i ⊆ U. The union of open sets is therefore open.
⋅Open sets under intersection are open:
Proof: Given two open sets, U and V, we define W = U∩V. If W = ∅ then W is open. If non-empty, say b ∈ upset(W) (the upper set of W), then for some a ∈ W, a ≤ b. Since a ∈ U∩V, and b is an element of the upper set of both U and V, then b ∈ W.
Finally, if D is a directed set with a supremum in W, then by assumption sup(D) ∈ U and sup(D) ∈ V. So there are a, b ∈ D with a ∈ U and b ∈ V. Since D is directed there is c ∈ D with a ≤ c and b ≤ c; and since U and V are upper sets, c ∈ U ∩ V = W as well.
Though not shown here, it is the case that a map f : D → E between cpos is continuous if and only if f(sup(X)) = sup(f(X)) for all directed X ⊆ D, where f(X) = {f(x) | x ∈ X} and the second supremum is taken in E.
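In a finite poset every directed subset contains its supremum, so the second condition above is automatic for upper sets and Scott-openness reduces to a simple check. The example cpo below (subsets of {0, 1} under inclusion, with bottom the empty set) is an illustration only.

# Scott-openness check in a finite poset.
points = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]

def leq(a, b):
    return a <= b            # the partial order: subset inclusion

def is_scott_open(O):
    """For a finite poset, Scott-open is equivalent to being an upper set."""
    return all(y in O for x in O for y in points if leq(x, y))

print(is_scott_open({frozenset({0}), frozenset({0, 1})}))  # True: upper set
print(is_scott_open({frozenset()}))                        # False: not upper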
Before we begin explaining that application, as common to λ-calculus, is continuous within the Scott topology, we require a certain understanding of the behavior of supremums over continuous functions as well as the conditions necessary for the product of spaces to be continuous, namely:
With F a directed family of maps, the pointwise supremum sup(F) is well defined and continuous.
If F is directed and D is a cpo, then the function space is a cpo where sup(F)(x) = sup({f(x) | f ∈ F}).
We now show the continuity of application. Using the definition of application as follows:
Ap: [D → D′] × D → D′, where Ap(f,x) = f(x).
Ap is continuous with respect to the Scott topology on the product ([D → D′] × D):
Proof: λx.f(x) = f is continuous. Let h = λ f.f(x). For directed F
h(sup(F)) = sup(F)(x)
= sup( {f(x) | f ∈ F} )
= sup( {h(f) | f ∈ F} )
= sup( h(F) )
By definition of Scott continuity h has been shown continuous. All that is now required to prove is that application is continuous when its separate arguments are continuous — in our case, the maps f and h.
Now abstracting our argument, take g = λx.f(x, y_0) and d = λy.f(x_0, y), with x_0 and y_0 fixed arguments of D and D′ respectively. Then for a directed X ⊆ D
g(sup(X)) = f( sup(X), y_0 )
= f( sup( {(x, y_0) | x ∈ X} ) )
(since f is continuous and {(x, y_0) | x ∈ X} is directed)
= sup( {f(x, y_0) | x ∈ X} )
= sup( g(X) )
g is therefore continuous. The same process can be taken to show d is continuous.
It has now been shown application is continuous under the Scott topology.
In order to demonstrate the Scott topology is a suitable fit for λ-calculus it is necessary to prove abstraction remains continuous over the Scott topology. Once completed it will have been shown that the mathematical foundation of λ-calculus is a well defined and suitable candidate functional paradigm for the Scott topology.
With f ∈ [D × D′ → D″] we define (Λf)(x) = λy ∈ D′ . f(x,y). We will show:
(i) Λf is continuous, meaning Λf ∈ [D → [D′ → D″]]
(ii) the map λ : f ↦ Λf is continuous.
Proof (i): Let X ⊆ D be directed, then
(Λf)(sup(X)) = λ y.f( sup(X), y )
= λ y. sup( {f(x,y) | x ∈ X} )
= sup( {λ y.f(x,y) | x ∈ X} )
= sup( (Λf)(X) )
Proof (ii): Defining L = λ, then for F directed
L(sup(F)) = λ x λ y. (sup(F))(x,y)
= λ x λ y. sup( {f(x,y) | f ∈ F} )
= sup( {λ x λ y.f(x,y) | f ∈ F} )
= sup( L(F) )
It has now been demonstrated how and why the λ-calculus defines the Scott topology.
Böhm trees and computational topology
Böhm trees, easily represented graphically, express the computational behavior of a lambda term. It is possible to predict the functionality of a given lambda expression from reference to its correlating Böhm tree. Böhm trees can be seen as somewhat analogous to continued fractions, where the Böhm tree of a given λ-term plays a role similar to the continued fraction expansion of a real number; what is more, the Böhm tree corresponding to a sequence in normal form is finite, similar to the rational subset of the reals.
Böhm trees are defined by a mapping of elements within a sequence of numbers with ordering (≤, lh) and a binary operator * to a set of symbols. The Böhm tree is then a relation among a set of symbols through a partial mapping ψ.
Informally Böhm trees may be conceptualized as follows:
Given: Σ = { ⊥ } ∪ { λ x_{1} ⋯ x_{n} . y | n ∈ ℕ, and x_{1}, …, x_{n}, y are variables }, and denoting BT(M) as the Böhm tree for a lambda term M, we then have:
BT(M) = ⊥ if M is unsolvable (therefore a single node)
If M is solvable, with head normal form λ x_{1} ⋯ x_{n} . y M_{1} ⋯ M_{m}:
BT(M) = λ x_{1} ⋯ x_{n} . y
/ \
BT(M_{1}) ⋯ BT(M_{m})
More formally:
Σ is defined as a set of symbols. The Böhm tree of a λ term M, denoted BT(M), is the Σ labelled tree defined as follows:
If M is unsolvable:
BT(M)(⟨⟩) = ⊥, and BT(M) is undefined elsewhere
If M is solvable, where M has principal head normal form λ x_{1} ⋯ x_{n} . y M_{0} ⋯ M_{m−1}:
BT(M)(⟨⟩) = λ x_{1} ⋯ x_{n} . y
BT(M)(⟨k⟩ * σ) = BT(M_{k})(σ) for k < m
= undefined for k ≥ m
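The definition above can be animated for small examples with a head-reduction interpreter. The sketch below is illustrative only: solvability is undecidable in general, so a fuel bound stands in for ⊥, substitution is naive (the examples involve no variable capture), and a depth cutoff truncates potentially infinite trees.

# Terms are tuples: ('var', x), ('lam', x, body), ('app', f, a).
def subst(t, x, s):
    """Naive substitution t[x := s]; assumes no variable capture occurs."""
    kind = t[0]
    if kind == 'var':
        return s if t[1] == x else t
    if kind == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def head_normal_form(t, fuel=100):
    """Head-reduce to lambda x1..xn. y M1 .. Mm, or None if fuel runs out."""
    binders = []
    while fuel > 0:
        while t[0] == 'lam':                 # strip leading binders
            binders.append(t[1])
            t = t[2]
        spine, args = t, []                  # unwind the application spine
        while spine[0] == 'app':
            args.append(spine[2])
            spine = spine[1]
        if spine[0] == 'var':                # head variable: done
            return binders, spine[1], list(reversed(args))
        t = subst(spine[2], spine[1], args[-1])   # one head beta step
        for a in reversed(args[:-1]):
            t = ('app', t, a)
        fuel -= 1
    return None

def boehm_tree(t, fuel=100, depth=3):
    if depth == 0:
        return '...'
    hnf = head_normal_form(t, fuel)
    if hnf is None:
        return 'bottom'                      # unsolvable within the fuel bound
    binders, head, args = hnf
    label = ('lambda ' + ' '.join(binders) + '. ' + head) if binders else head
    return (label, [boehm_tree(a, fuel, depth - 1) for a in args])

# Omega = (lambda x. x x)(lambda x. x x) is unsolvable: its tree is bottom.
omega = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
print(boehm_tree(('app', omega, omega)))
# lambda x y. x ((lambda z. z) y) has the finite tree lambda x y. x -- y.
term = ('lam', 'x', ('lam', 'y',
        ('app', ('var', 'x'), ('app', ('lam', 'z', ('var', 'z')), ('var', 'y')))))
print(boehm_tree(term))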
We may now move on to show that Böhm trees act as suitable mappings from the tree topology to the scott topology. Allowing one to see computational constructs, be it within the Scott or tree topology, as Böhm tree formations.
Böhm tree and tree topology
It is found that Böhm tree's allow for a continuous mapping from the tree topology to the Scott topology. More specifically:
We begin with the cpo B = (B, ⊆) of Böhm trees under the Scott topology, with the ordering of Böhm trees denoted M ⊆ N, meaning M, N are trees and M results from N. The tree topology on the set Ɣ of λ-terms is the smallest one making the map
BT : Ɣ → B
continuous.
An equivalent definition would be to say the open sets of Ɣ are the inverse images BT^{−1}(O), where O is Scott open in B.
The applicability of Böhm trees and the tree topology has many interesting consequences for λ-terms expressed topologically:
Normal forms are found to exist as isolated points.
Unsolvable λ-terms are compactification points.
Application and abstraction, similar to the Scott topology, are continuous on the tree topology.
Algebraic structure of computation
New methods of interpretation of the λ-calculus are not only interesting in themselves but allow new modes of thought concerning the behaviors of computer science. The binary operator within the λ-algebra A is application. Application is denoted ⋅ and is said to give the structure (A, ⋅). A combinatory algebra allows for the application operator and acts as a useful starting point, but remains insufficient for the λ-calculus in being unable to express abstraction. The λ-algebra becomes a combinatory algebra M combined with a syntactic operator λ* that transforms a term B(x, ȳ), with constants in M, into C(ȳ) ≡ λ*x.B(x, ȳ). It is also possible to define an extensional model to circumvent the need of the λ* operator by allowing ∀x (fx = gx) ⇒ f = g. The construction of the λ-algebra through the introduction of an abstraction operator proceeds as follows:
To construct an algebra which allows for solutions to equations such as axy = xyy, with a = λxy.xyy, there is need for the combinatory algebra. Relevant attributes of the combinatory algebra are:
Within combinatory algebra there exist applicative structures. An applicative structure W is a combinatory algebra provided:
W is non-trivial, meaning W has cardinality > 1
W exhibits combinatory completeness (see completeness of the S-K basis). More specifically: for every term A in the set of terms of W, with the free variables of A among x_{1}, …, x_{n}:
∃ f such that f·x_{1}·⋯·x_{n} = A
The combinatory algebra is:
Never commutative
Not associative.
Never finite.
Never recursive.
Combinatory algebras remain unable to act as the algebraic structure for λ-calculus, the lack of recursion being a major disadvantage. However, the existence of an applicative term A(x, ȳ) provides a good starting point to build a λ-calculus algebra. What is needed is the introduction of a lambda term, i.e. to include λx.A(x, ȳ).
We begin by exploiting the fact that within a combinatory algebra M, with A(x, ȳ) within the set of terms, then:
∃ b ∈ M s.t. bx = A(x, ȳ).
We then require b to have a dependence on ȳ, resulting in:
B(ȳ)x = A(x, ȳ).
B(ȳ) becomes equivalent to a λ term, and is therefore suitably defined as follows: B(ȳ) ≡ λ*x.A(x, ȳ).
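One concrete way to realize such a λ* operator is bracket abstraction over the S and K combinators, sketched below. This is a classical construction satisfying (λ*x.A)x = A, offered as an illustration under our own term encoding rather than as the specific operator the text defines.

```python
# Bracket abstraction: S·x·y·z = x·z·(y·z), K·x·y = x, I = S·K·K.
S, K = ('const', 'S'), ('const', 'K')

def lam_star(x, t):
    """λ*x.t for terms built from ('var', v), ('const', c) and ('app', a, b)."""
    if t == ('var', x):
        return ('app', ('app', S, K), K)          # I = S K K
    if t[0] in ('var', 'const'):                  # x not free in t
        return ('app', K, t)
    return ('app', ('app', S, lam_star(x, t[1])), lam_star(x, t[2]))

def reduce(t):
    """Contract S/K redexes (terminates on the examples used here)."""
    if t[0] != 'app':
        return t
    t = ('app', reduce(t[1]), reduce(t[2]))
    if t[1][0] == 'app' and t[1][1] == K:         # K a b -> a
        return reduce(t[1][2])
    if t[1][0] == 'app' and t[1][1][0] == 'app' and t[1][1][1] == S:
        a, b, c = t[1][1][2], t[1][2], t[2]       # S a b c -> a c (b c)
        return reduce(('app', ('app', a, c), ('app', b, c)))
    return t

f, a = ('const', 'f'), ('const', 'a')
lx = lam_star('x', ('app', f, ('var', 'x')))      # λ*x. f·x
print(reduce(('app', lx, a)))                     # ('app', f, a), i.e. (λ*x.A)x = A
```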
A pre-λ-algebra (pλA) can now be defined.
pλA is an applicative structure W = (X,⋅) such that for each term A within the set of terms of W and for every x there is a term λ*x.A ∈ T(W) (T(W) ≡ the terms of W), where FV(λ*x.A) = FV(A) − {x} (FV denoting the set of free variables). W must also demonstrate:
(λ*x.A)x = A
λ*x.A ≡ λ*y.A[x:=y] provided y is not a free variable of A
(λ*x.A)[y:=z] ≡ λ*x.(A[y:=z]) provided y, z ≠ x and z is not a free variable of A
Before defining the full λ-algebra we must introduce the following definition for the set of λ-terms within W, denoted Ɣ(W), with the following requirements:
a ∈ Ɣ(W) for a ∈ W
x ∈ Ɣ(W) for x a variable
M, N ∈ Ɣ(W) ⇒ (MN) ∈ Ɣ(W)
M ∈ Ɣ(W) ⇒ (λx.M) ∈ Ɣ(W)
A mapping from the terms within Ɣ(W) to all λ terms within W, denoted * : Ɣ(W) → T(W), can then be designed as follows:
(MN)* = M* N*
(λx.M)* = λ*x.M*
We now define λ(W) to denote the extension after evaluating the terms within Ɣ(W), e.g.
λx.(λy.y)x = λx.x in λ(W).
Finally we obtain the full λ-algebra through the following definition:
(1) A λ-algebra is a pλA W such that for M,N ∈ Ɣ(W):
λ(W) ⊢ M = N ⇒ W ⊨ M = N.
Though arduous, the foundation has been set for a proper algebraic framework for which the λ-calculus, and therefore computation, may be investigated in a group theoretic manner.
References
Computational topology
Computational complexity theory
Computational science
Computational fields of study | Computable topology | [
"Mathematics",
"Technology"
] | 3,568 | [
"Computational topology",
"Computational fields of study",
"Applied mathematics",
"Computational mathematics",
"Computational science",
"Topology",
"Computing and society"
] |
36,079,854 | https://en.wikipedia.org/wiki/Extended%20discrete%20element%20method | The extended discrete element method (XDEM) is a numerical technique that extends the dynamics of granular material or particles as described through the classical discrete element method (DEM) (Cundall and Strack) by additional properties such as the thermodynamic state, stress/strain or electro-magnetic field for each particle. Contrary to a continuum mechanics concept, the XDEM aims at resolving the particulate phase with its various processes attached to the particles. While the discrete element method predicts position and orientation in space and time for each particle, the extended discrete element method additionally estimates properties such as internal temperature and/or species distribution or mechanical impact with structures.
History
Molecular dynamics, developed in the late 1950s by Alder et al. and in the early 1960s by Rahman, may be regarded as a first step toward the extended discrete element method, although the forces due to collisions between particles were replaced by energy potentials, e.g. Lennard-Jones potentials of molecules and atoms, as long-range forces to determine interaction.
Similarly, the fluid dynamic interaction of particles suspended in a flow was investigated. The drag forces exerted on the particles by the relative velocity between them and the flow were treated as additional forces acting on the particles. Models of these multiphase flow phenomena, which include a solid (particulate) phase and a gaseous or liquid phase, resolve the particulate phase by discrete methods while the gas or liquid flow is described by continuous methods, and the approach is therefore labelled the combined continuum and discrete model (CCDM), as applied by Kawaguchi et al., Hoomans, Xu 1997 and Xu 1998. Due to the discrete description of the solid phase, constitutive relations are omitted, which leads to a better understanding of the fundamentals. This was also concluded by Zhu 2007 et al. and Zhu 2008 et al. during a review of particulate flows modelled with the CCDM approach. The approach has seen major development over the last two decades; it describes the motion of the solid phase by the Discrete Element Method (DEM) on an individual particle scale, while the remaining phases are treated by the Navier-Stokes equations. Thus, the method is recognized as an effective tool to investigate the interaction between a particulate and a fluid phase, as reviewed by Yu and Xu, Feng and Yu and Deen et al. Based on the CCDM methodology the characteristics of spouted and fluidised beds are predicted by Gryczka et al.
The theoretical foundation for the XDEM was developed in 1999 by Peters, who described the incineration of a moving wood bed on a forward-acting grate. The concept was later also employed by Simsek et al. to predict the furnace process of a grate-firing system. Applications to the complex processes of a blast furnace have been attempted by Shungo et al. Numerical simulation of fluid injection into a gaseous environment is nowadays adopted by a large number of CFD codes such as Simcenter STAR-CCM+, Ansys and AVL-Fire. Droplets of a spray are treated by a zero-dimensional approach to account for heat and mass transfer to the fluid phase.
Methodology
Many engineering problems involve both continuous and discrete phases, and those problems cannot be simulated accurately by continuous or discrete approaches alone. XDEM provides a solution for some of those engineering applications.
Although research and development of numerical methods in each of the discrete and continuous solver domains is still progressing, software tools are available. In order to couple discrete and continuous approaches, two major strategies are available:
Monolithic approach: The equations describing multi-physics phenomena are solved simultaneously by a single solver producing a complete solution.
Partitioned or staggered approach: The equations describing multi-physics phenomena are solved sequentially by appropriately tailored and distinct solvers with passing the results of one analysis as a load to the other.
The former approach requires a solver that handles all physical problems involved, and therefore requires a larger implementation effort. However, there exist scenarios for which it is difficult to arrange the coefficients of the combined differential equations in one matrix.
The latter, partitioned, approach couples a number of solvers representing individual domains of physics, and offers advantages over a monolithic concept. It encompasses a larger degree of flexibility because it can use many solvers. Furthermore, it allows more modular software development. However, partitioned simulations require stable and accurate coupling algorithms.
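A hypothetical sketch of this staggered pattern is given below; the solver classes, field names and exchange coefficients are all our own inventions for illustration and are not part of the XDEM software.

```python
# Staggered (partitioned) coupling: two solvers advanced in turn, each
# receiving the other's latest result as a load.
class FluidSolver:
    def __init__(self):
        self.gas_temperature = 800.0              # K
    def step(self, dt, heat_to_particles):
        # toy energy balance: heat given to particles cools the gas
        self.gas_temperature -= dt * 0.01 * heat_to_particles

class ParticleSolver:
    def __init__(self, n):
        self.temps = [300.0] * n                  # K, one value per particle
        self.h = 0.5                              # made-up exchange coefficient
    def heat_from_gas(self, gas_temperature):
        return sum(self.h * (gas_temperature - tp) for tp in self.temps)
    def step(self, dt, gas_temperature):
        self.temps = [tp + dt * self.h * (gas_temperature - tp)
                      for tp in self.temps]

fluid, particles, dt = FluidSolver(), ParticleSolver(n=100), 0.01
for _ in range(1000):
    q = particles.heat_from_gas(fluid.gas_temperature)
    fluid.step(dt, q)                             # one analysis ...
    particles.step(dt, fluid.gas_temperature)     # ... loads the other
```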
Within the staggered concept of XDEM, continuous fields are described by the solution of the respective continuous (conservation) equations. Properties of individual particles such as temperature are also resolved by solving respective conservation equations that yield both a spatial and temporal internal distribution of relevant variables. Each individual particle within XDEM is thus governed by the major conservation principles, with their respective equations and the variables solved for.
The solution of these equations in principle defines a three-dimensional and transient field of the relevant variables such as temperature or species. However, the application of these conservation principles to a large number of particles usually restricts the resolution to at most one representative dimension and time, due to CPU time consumption. Experimental evidence, at least in reaction engineering, supports the assumption of one-dimensionality, as pointed out by Man and Byeong, while the importance of a transient behaviour is stressed by Lee et al.
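For a single particle, the one-dimensional transient resolution described above amounts to solving a conservation equation inside the particle; a minimal explicit finite-difference sketch for heat conduction, with made-up material values, is:

```python
# 1D transient heat conduction dT/dt = alpha * d2T/dx2 across a particle.
import numpy as np

alpha, n, L = 1e-7, 21, 1e-3          # diffusivity (m^2/s), nodes, size (m)
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha              # below the explicit stability limit 0.5
T = np.full(n, 300.0)                 # initial internal temperature (K)

for _ in range(2000):
    T[-1] = 800.0                     # surface heated by the surrounding gas
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0] = T[1]                       # symmetry condition at the centre
print(f"centre temperature after {2000 * dt:.2f} s: {T[0]:.1f} K")
```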
Applications
Problems that involve both a continuous and a discrete phase are important in applications as diverse as the pharmaceutical industry (e.g. drug production), the agriculture, food and processing industries, mining, construction and agricultural machinery, metals manufacturing, energy production and systems biology. Some predominant examples are coffee, corn flakes, nuts, coal, sand, renewable fuels (e.g. biomass for energy production) and fertilizer.
Initially, such studies were limited to simple flow configurations, as pointed out by Hoomans; however, Chu and Yu demonstrated that the method could be applied to a complex flow configuration consisting of a fluidized bed, conveyor belt and a cyclone. Similarly, Zhou et al. applied the CCDM approach to the complex geometry of a fuel-rich/lean burner for pulverised coal combustion in a plant, and Chu et al. modelled the complex flow of air, water, coal and magnetite particles of different sizes in a dense medium cyclone (DMC).
The CCDM approach has also been applied to fluidised beds, as reviewed by Rowe and Nienow and Feng and Yu, and applied by Feng and Yu to the chaotic motion of particles of different sizes in a gas fluidized bed. Kafui et al. describe discrete particle-continuum fluid modelling of gas-solid fluidised beds. Further applications of XDEM include thermal conversion of biomass on backward and forward acting grates. Heat transfer in thermal/reacting particulate systems was also solved and investigated, as comprehensively reviewed by Peng et al. The deformation of a conveyor belt due to impacting granular material discharged over a chute represents an application in the field of stress/strain analysis.
References
Numerical differential equations
Computational physics | Extended discrete element method | [
"Physics"
] | 1,409 | [
"Computational physics"
] |
36,080,526 | https://en.wikipedia.org/wiki/KcsA%20potassium%20channel | KcsA (K channel of streptomyces A) is a prokaryotic potassium channel from the soil bacterium Streptomyces lividans that has been studied extensively in ion channel research. The pH activated protein possesses two transmembrane segments and a highly selective pore region, responsible for the gating and shuttling of K+ ions out of the cell. The amino acid sequence found in the selectivity filter of KcsA is highly conserved among both prokaryotic and eukaryotic K+ voltage channels; as a result, research on KcsA has provided important structural and mechanistic insight on the molecular basis for K+ ion selection and conduction. As one of the most studied ion channels to this day, KcsA is a template for research on K+ channel function and its elucidated structure underlies computational modeling of channel dynamics for both prokaryotic and eukaryotic species.
History
KcsA was the first potassium ion channel to be characterized using x-ray crystallography by Roderick MacKinnon and his colleagues in 1998. In the years leading up to this, research on the structure of K+ channels was centered on the use of small toxin binding to reveal the location of the pore and selectivity filter among channel residues. MacKinnon's group theorized the tetrameric arrangement of the transmembrane segments, and even suggested the presence of pore-forming "loops" in the filter region, made of short segments of amino acids that interact with K+ ions passing through the channel. The discovery of strong sequence homology between KcsA and other channels in the Kv family, including the Shaker protein, attracted the attention of the scientific community, especially as the K+ channel signature sequence began to appear in other prokaryotic genes. The simplicity of the two transmembrane helices in KcsA, as opposed to the six in many eukaryotic ion channels, also provided a method to understand the mechanisms of K+ channel conduction at a more rudimentary level, thereby providing even greater impetus for the study of KcsA.
The crystal structure of KcsA was solved by the MacKinnon group in 1998, after the discovery that removal of the C-terminus cytoplasmic domain of the native protein (residues 126–158) increases the stability of crystallized samples. A model of KcsA at 3.2 Å resolution was produced that confirmed the tetrameric arrangement of the protein around a central pore, with one helix of each subunit facing the inside axis and the other facing outwards. Three years later, a higher resolution model was produced by Morais-Cabral and Zhou after monoclonal Fab fragments were attached to KcsA crystals to further stabilize the channel. In the early 2000s, evidence emerged for the occupation of the selectivity filter by two K+ ions during the transport process, based on energy and electrostatic calculations made to model the pore region. Continued investigation of the various opened and closed, inactive and active conformations of KcsA by other imaging methods such as ssNMR and EPR has since provided even more insight into channel structure and the forces gating the switch from channel inactivation to conduction.
In 2007, Riek et al. showed that the channel opening that results from titrating the ion channel from pH 7 to pH 4 corresponds to conformational changes in two regions: transition to the ion-exchanging state of the selectivity filter, and the opening of the arrangement of TM2 at the C-terminus. This model explains the ability of KcsA to simultaneously select for K+ ions while also gating electrical conductance. In 2011, the crystal structure of full-length KcsA was resolved, revealing that hindrance by the previously truncated residues permits only straightforward expansion of the intercellular ion passage region of the protein. This research provides a more detailed look into the motion of separate channel regions during ion conduction. In the present day, KcsA studies are focused on using the prokaryotic channel as a model for the channel dynamics of larger eukaryotic K+ channels, including hERG.
Structure
The structure of KcsA is that of an inverted cone, with a central pore running down the center made up of two transmembrane helices (the outer-helix M1 and the inner-helix M2), which span the lipid bilayer. The channel itself is a tetramer composed of four identical, single-domain subunits (each with two α-helices) arranged so that one M2 helix faces the central pore, while the other M1 helix faces the lipid membrane. The inner helices are tilted by about 25° in relation to the lipid membrane and are slightly kinked, opening up to face the outside of the cell like a flower. These two TM helices are linked by a reentrant loop, dispersed symmetrically around a common axis corresponding to the central pore. The pore region spans approximately 30 amino acid residues and can be divided into three parts: a selectivity filter near the extracellular side, a dilated water-filled cavity at the center, and a closed gate near the cytoplasmic side formed by four packed M2 helices. This architecture is found to be highly conserved in the potassium channel family in both eukaryotes and prokaryotes.
The overall length of the pore is 45 Å, and its diameter varies considerably within the distinct regions of the inner tunnel. Travelling from the intracellular region outwards, the pore begins with a gate region formed by M2 helices at 18 Å in diameter, and then opens into a wide cavity (~10 Å across) near the middle of the membrane. In these regions, K+ ions are in contact with surrounding water molecules, but when they enter the channel from the selectivity filter at the top, the cavity is so narrow that K+ ions must shed any hydrating waters in order to enter the cell. With regard to the amino acid composition of the pore-lining residues within KcsA, the side chains lining the internal pore and cavity are predominantly hydrophobic, but within the selectivity filter polar amino acids are present that contact the dehydrated K+ ions.
Selectivity filter
The wider end of the cone corresponds to the extracellular mouth of the channel, made up of pore helices plus a selectivity filter that is formed by a TVGYG sequence (Threonine, Valine, Glycine, Tyrosine, Glycine) characteristic of potassium channels. Within this region, coordination between the TVGYG amino acids and incoming K+ ions allows for conduction of ions through the channel. The selectivity filter of KcsA contains four ion binding sites, although it is proposed that only two of these four positions are occupied at one time. The selectivity filter is about 3 Å in diameter, though molecular dynamics simulations suggest the filter is flexible. The presence of TVGYG in the filter region of KcsA is conserved even in more complex eukaryotic channels, thus making KcsA an optimal system for studying K+ channel conductance across species.
Function
The KcsA channel is considered a model channel because the KcsA structure provides a framework for understanding K+ channel conduction, which has three parts: potassium selectivity, channel gating by pH sensitivity, and voltage-gated channel inactivation. K+ ion permeation occurs at the upper selectivity filter region of the pore, while pH gating arises from the protonation of transmembrane helices at the end of the pore. At low pH, the M2 helix is protonated, shifting the ion channel from closed to open conformation. As ions flow through the channel, voltage gating mechanisms are thought to induce interactions between Glu71 and Asp80 in the selectivity filter, which destabilize the conductive conformation and facilitate entry into a long-lived nonconducting state that resembles the C-type inactivation of voltage-dependent channels.
In the nonconducting conformation of KcsA at pH 7, K+ is bound tightly to coordinating oxygens of the selectivity filter and the four TM2 helices converge near the cytoplasmic junction to block the passage of any potassium ions. At pH 4, however, KcsA undergoes millisecond-timescale conformational exchanges between the permeating and nonpermeating states of the filter and between the open and closed conformations of the M2 helices. While these distinct conformational changes occur in separate regions of the channel, the molecular behavior of each region is linked by both electrostatic interactions and allostery. The dynamics of this exchange of stereochemical configurations in the filter provides the physical basis for simultaneous K+ conductance and gating.
K+ selectivity
The sequence TVGYG is especially important for maintaining the potassium specificity of KcsA. The glycines in this selectivity filter sequence have dihedral angles that allow carbonyl oxygen atoms in the protein backbone of the filter to point in one direction, toward the ions along the pore. The glycines and threonine coordinate with the K+ ion, while the side-chains of valine and tyrosine are directed into the protein core to impose geometric constraint on the filter. As a result, the KcsA tetramer harbors four equally spaced K+ binding sites, each site composed of a cage formed by eight oxygen atoms that sit on the vertices of a cube. The oxygen atoms that surround K+ ions in the filter are arranged like the water molecules that encircle hydrated K+ ions in the cavity of the channel; this suggests that oxygen coordination and binding sites in the selectivity filter pay the energetic cost of K+ dehydration. Because the Na+ ion is too small for these K+-sized binding sites, the dehydration energy is not compensated and thus the filter selects against other extraneous ions. Additionally, the KcsA channel is blocked by Cs+ ions, and gating requires the presence of Mg2+ ions.
pH Sensitivity
The pH-dependent conductance of KcsA indicates that the opening of the ion channel occurs when the protein is exposed to a more acidic environment. NMR studies performed by the Riek group show that pH sensitivity occurs in both the C-terminal TM2 region of the protein and the Tyr78 and Gly79 residues in the selectivity filter. There is evidence to suggest that the main pH sensor is in the cytoplasmic domain: exchanging negatively charged amino acids for neutral ones made the KcsA channel insensitive to pH even though there were no amino-acid changes in the transmembrane region. In addition, histidine is one of the few amino acids with a side chain titratable between pH 6 and 7; histidines are absent in the transmembrane and extracellular segments of TM2 but present at KcsA's C-terminus. This highlights a possible mechanism for the slow opening of KcsA, which is particularly pH sensitive, especially as the conformational propagation of the channel-opening signal from the C-terminus to the selectivity filter could be important in coordinating the structural changes needed for conductance along the entire pore.
NMR studies also suggest that a complex hydrogen bond network between Tyr78, Gly79, Glu71 and Asp80 exists in the KcsA filter region, and further acts as a pH-sensitive trigger for conductance. The mutation of key residues in the region, including E71A, results in a large energy cost of 4 kcal mol−1, equivalent to the loss of the hydrogen bond between Glu71 and Tyr78 and the water-mediated hydrogen bond between Glu71 and Asp80 in KcsA(E71A). These studies further highlight the role of pH gating in KcsA channel function.
Voltage Gating
In 2006, the Perozo group proposed a mechanistic explanation for the effects of voltage fields on KcsA gating. After a depolarizing current is applied to the channel, the reorientation of Glu71 towards the intracellular pore occurs, thereby disrupting the Glu71-Asp80 carboxyl-carboxylate pair that initially stabilizes the selectivity filter. The collapse of the filter region prevents entry into, or facilitates exit from, the inactivated state. Glu71, a key part of the selectivity filter signature sequence that is conserved among K+ ion channels, plays a pivotal role in gating, as its ability to reorient itself in the direction of the transmembrane voltage field provides an explanation for voltage gating events in KcsA. The orientation of amino acids in the filter region might play a significant physiological role in modulating potassium fluxes in eukaryotes and prokaryotes under steady-state conditions.
Research
Function
The precise mechanism of potassium channel selectivity continues to be studied and debated, and multiple models are used to describe different aspects of the selectivity. Models explaining selectivity via the field strength concept, developed by George Eisenman from Coulomb's law, have been applied to KcsA. An alternative explanation for the selectivity of KcsA is based on the close-fit model (also known as the snug-fit model) developed by Francisco Bezanilla and Armstrong: the main chain carbonyl oxygen atoms that make up the selectivity filter are held at a precise position that allows them to substitute for the water molecules in the hydration shell of a potassium ion, but they are too far from a sodium ion. Further work has studied thermodynamic differences in ion binding, topological considerations, and the number of continuous ion binding sites.
In addition, a major limitation of crystal structure studies and simulations has yet to be discussed: the best resolved and most applied crystal structure of KcsA appears to be that of the 'closed' form of the channel. This is reasonable, as the closed state of the channel is favored at neutral pH, at which the crystal structure was solved by X-ray crystallography. However, the dynamic behavior of KcsA makes analysis of the channel difficult, as a crystal structure inevitably provides a static, spatially and temporally averaged image of a channel. To bridge the gap between molecular structure and physiological behavior, an understanding of the atomic resolution dynamics of potassium channels is required.
Applications
Due to the high sequence similarity between the pore of KcsA and those of other eukaryotic K+ ion channel proteins, KcsA has provided important insight into the behavior of other important voltage-conducting proteins such as the Drosophila-derived Shaker and the human hERG potassium channel. KcsA has been used in mutagenesis studies to model the interactions between hERG and various drug compounds. Such tests can screen for drug-hERG channel interactions that cause acquired long QT syndrome, and are essential for determining the cardiac safety of new medications. In addition, homology models based on the closed-state KcsA crystal structure have been generated computationally to construct a multiple-state representation of the hERG cardiac K+ channel. Such models reveal the flexibility of the hERG channel and can consistently predict the binding affinity of a set of diverse ion channel-interacting ligands. Analysis of the complex ligand-hERG structures can be used to guide the synthesis of drug analogs with reduced hERG liability, based on drug structure and docking potential.
See also
Calcium channel
Potassium channel
Sodium channel
References
Ion channels
Bacterial proteins | KcsA potassium channel | [
"Chemistry"
] | 3,207 | [
"Neurochemistry",
"Ion channels"
] |
31,868,890 | https://en.wikipedia.org/wiki/Profile%20diagram | In Unified Modeling Language, in the field of software engineering, a profile diagram operates at the metamodel level to show stereotypes as classes with the «stereotype» stereotype, and profiles as packages with the «profile» stereotype. The extension relation (solid line with closed, filled arrowhead) indicates what metamodel element a given stereotype is extending.
History
The profile diagram did not exist in UML 1; other diagram types were used for this purpose. It was introduced with UML 2 to display the usage of profiles.
See also
UML diagrams
References
External links
Christoph Kecher: "UML 2.0 - Das umfassende Handbuch" Galileo Computing, 2006,
Unified Modeling Language diagrams
Systems Modeling Language | Profile diagram | [
"Engineering"
] | 150 | [
"Systems engineering",
"Systems Modeling Language"
] |
31,870,479 | https://en.wikipedia.org/wiki/Syntomic%20topology | In algebraic geometry, the syntomic topology is a Grothendieck topology introduced by Fontaine and Messing (1987).
Mazur defined a morphism to be syntomic if it is flat and locally a complete intersection. The syntomic topology is generated by surjective syntomic morphisms of affine schemes.
References
External links
Explanation of the word "syntomic" by Barry Mazur.
Algebraic geometry | Syntomic topology | [
"Mathematics"
] | 87 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
41,676,644 | https://en.wikipedia.org/wiki/Enduring%20Quests%20and%20Daring%20Visions | Enduring Quests and Daring Visions is a vision for astrophysics programs chartered by then-Director of NASA's Astrophysics Division, Paul Hertz, and released in late 2013. It lays out long-term goals and missions over a 30-year horizon. Goals include mapping the Cosmic Microwave Background, finding Earth-like exoplanets, probing deeper into space-time, studying the large-scale structure of the universe and extreme physics, and looking back farther in time. The panel that produced the vision included many notable American astrophysicists, among them Chryssa Kouveliotou, Eric Agol, Natalie Batalha, Misty Bentz, Alan Dressler, Scott Gaudi, Olivier Guyon, Enectali Figueroa-Feliciano, Feryal Ozel, Aki Roberge, Amber Straughn, and Joan Centrella.
Examples of discussed missions include:
Astro-H (Hitomi)
Black Hole Mapper
CMB Polarization Surveyor
Cosmic Dawn
Euclid
ExoEarth Mapper
Gaia
Gravitational Wave Surveyor/Mapper
Habitable Exoplanet Imaging Mission (HabEx)
Far-Infrared Surveyor (later renamed the Origins Space Telescope)
JEM-EUSO
James Webb Space Telescope (JWST)
Large UV Optical Infrared Surveyor (LUVOIR)
Nancy Grace Roman Space Telescope
Neutron Star Interior Composition Explorer (NICER)
Transiting Exoplanet Survey Satellite (TESS)
X-Ray Surveyor (later renamed the Lynx X-ray Observatory)
References
External links
Enduring Quests and Daring Visions (NASA) (.pdf)
Astrophysics
2013 in outer space | Enduring Quests and Daring Visions | [
"Physics",
"Astronomy"
] | 326 | [
"Astronomical sub-disciplines",
"Astrophysics"
] |
41,681,462 | https://en.wikipedia.org/wiki/Von%20Neumann%E2%80%93Wigner%20interpretation | The von Neumann–Wigner interpretation, also described as "consciousness causes collapse", is an interpretation of quantum mechanics in which consciousness is postulated to be necessary for the completion of the process of quantum measurement.
Background: observation in quantum mechanics
In the orthodox Copenhagen interpretation, quantum mechanics predicts only the probabilities for different observed experimental outcomes. What constitutes an observer or an observation is not directly specified by the theory, and the behavior of a system under measurement and observation is completely different from its usual behavior: the wavefunction that describes a system spreads out into an ever-larger superposition of different possible situations. However, during observation, the wavefunction describing the system collapses to one of several options. If there is no observation, this collapse does not occur, and none of the options ever become less likely.
It can be predicted using quantum mechanics, absent a collapse postulate, that an observer observing a quantum superposition will turn into a superposition of different observers seeing different things. The observer will have a wavefunction which describes all the possible outcomes. Still, in actual experience, an observer never senses a superposition, but always senses that one of the outcomes has occurred with certainty. This apparent conflict between a wavefunction description and classical experience is called the problem of observation (see Measurement problem).
The interpretation
In his 1932 book The Mathematical Foundations of Quantum Mechanics, John von Neumann argued that the mathematics of quantum mechanics allows the collapse of the wave function to be placed at any position in the causal chain from the measurement device to the "subjective perception" of the human observer. In 1939, Fritz London and Edmond Bauer argued for the latter boundary (consciousness). In the 1960s, Eugene Wigner reformulated the "Schrödinger's cat" thought experiment as "Wigner's friend" and proposed that the consciousness of an observer is the demarcation line that precipitates collapse of the wave function, independent of any realist interpretation. See Consciousness and measurement. The mind is postulated to be non-physical and the only true measurement apparatus.
This interpretation has been summarized thus:
The rules of quantum mechanics are correct but there is only one system which may be treated with quantum mechanics, namely the entire material world. There exist external observers which cannot be treated within quantum mechanics, namely human (and perhaps animal) minds, which perform measurements on the brain causing wave function collapse.
Henry Stapp has argued for the concept as follows:
From the point of view of the mathematics of quantum theory it makes no sense to treat a measuring device as intrinsically different from the collection of atomic constituents that make it up. A device is just another part of the physical universe... Moreover, the conscious thoughts of a human observer ought to be causally connected most directly and immediately to what is happening in his brain, not to what is happening out at some measuring device... Our bodies and brains thus become ... parts of the quantum mechanically described physical universe. Treating the entire physical universe in this unified way provides a conceptually simple and logically coherent theoretical foundation...
Objections to the interpretation
There are other possible solutions to the "Wigner's friend" thought experiment, which do not require consciousness to be different from other physical processes. Moreover, Wigner actually shifted to those interpretations (and away from "consciousness causes collapse") in his later years. This was partly because he was embarrassed that "consciousness causes collapse" can lead to a kind of solipsism, but also because he decided that he had been wrong to try to apply quantum physics at the scale of everyday life (specifically, he rejected his initial idea of treating macroscopic objects as isolated systems).
This interpretation relies upon an interactionist form of dualism that is inconsistent with the materialism commonly used to understand the brain and accepted by most scientists. (Materialism assumes that consciousness has no special role in relation to quantum mechanics.) The measurement problem notwithstanding, critics point to a causal closure of physics, suggesting a problem with how consciousness and matter might interact, reminiscent of objections to Descartes' substance dualism.
The interpretation has also been criticized for not explaining which things have sufficient consciousness to collapse the wave function. Also, it posits an important role for the conscious mind, and it has been questioned how this could be the case for the earlier universe, before consciousness had evolved or emerged. It has been argued that "[consciousness causes collapse] does not allow sensible discussion of Big Bang cosmology or biological evolution". For example, Roger Penrose remarked: "[T]he evolution of conscious life on this planet is due to appropriate mutations having taken place at various times. These, presumably, are quantum events, so they would exist only in linearly superposed form until they finally led to the evolution of a conscious being—whose very existence depends on all the right mutations having 'actually' taken place!" Others further suppose a universal mind (see also panpsychism and panexperientialism). Other researchers have expressed similar objections to the introduction of any subjective element in the collapse of the wavefunction.
Testability
It has been argued that the results of delayed-choice quantum eraser experiments empirically falsify this interpretation. However, the argument was shown to be invalid because an interference pattern would only be visible after post-measurement detections were correlated through use of a coincidence counter; if that was not true, the experiment would allow signaling into the past. The delayed-choice quantum eraser experiment has also been used to argue for support of this interpretation, but, as with other arguments, none of the cited references prove or falsify this interpretation.
The central role played by consciousness in this interpretation naturally calls for use of psychological experiments to verify or falsify it. One such approach relies on explaining the empirical presentiment effect quantum mechanically. Another approach makes use of the psychological priming effect to design an appropriate test. Both methods claim verification success.
Reception
A poll was conducted at a quantum mechanics conference in 2011 using 33 participants (including physicists, mathematicians, and philosophers). Researchers found that 6% of participants (2 of the 33) indicated that they believed the observer "plays a distinguished physical role (e.g., wave-function collapse by consciousness)". This poll also states that 55% (18 of the 33) indicated that they believed the observer "plays a fundamental role in the application of the formalism but plays no distinguished physical role". They also mention that "Popular accounts have sometimes suggested that the Copenhagen interpretation attributes such a role to consciousness. In our view, this is to misunderstand the Copenhagen interpretation."
Views of the pioneers of quantum mechanics
Many of the originators of quantum mechanical theory held that humans can effectively interrogate nature through interacting with it, and that in this regard quantum mechanics is not different from classical mechanics. In addition, Werner Heisenberg maintained that wave function collapse, "The discontinuous change in the probability function", takes place when the result of a measurement is registered in the mind of an observer. However, this is because he understood the probability function as an artifact of human knowledge: he also argued that the reality of the material transition from "possible" to "actual" was mind-independent. Albert Einstein, who believed in realism, and did not accept the theoretical completeness of quantum mechanics, similarly appealed for the merely epistemic conception of the wave function:
[I advocate] that one conceives of the psi-function [i.e., wavefunction] only as an incomplete description of a real state of affairs, where the incompleteness of the description is forced by the fact that observation of the state is only able to grasp part of the real factual situation. Then one can at least escape the singular conception that observation (conceived as an act of consciousness) influences the real physical state of things; the change in the psi-function through observation then does not correspond essentially to the change in a real matter of fact but rather to the alteration in our knowledge of this matter of fact.
Bohr also took an active interest in the philosophical implications of quantum theories such as his complementarity principle. He believed quantum theory offers a complete description of nature, albeit one that is simply ill-suited for everyday experiences – which are better described by classical mechanics and probability. Bohr never specified a demarcation line above which objects cease to be quantum and become classical. He believed that it was not a question of physics, but one of philosophy or convenience.
See also
Interpretations of quantum mechanics
Measurement in quantum mechanics
Quantum mind
Quantum Zeno effect
Wigner's friend
References
External links
PHYSICS TODAY: "Is the moon there when nobody looks? Reality and the quantum theory" (pdf)
"Quantum Cosmology and the Hard Problem of the Conscious Brain" (pdf)
Mindful Sensationalism: A Quantum Framework for Consciousness.
Brian Josephson on QM and consciousness
Quantum Enigma from Oxford University Press
"Critique of Quantum Enigma Physics encounters Consciousness", by Michael Nauenberg
Quantum mind
Interpretations of quantum mechanics
Observation
Philosophical problems
Philosophy of physics
John von Neumann | Von Neumann–Wigner interpretation | [
"Physics"
] | 1,862 | [
"Philosophy of physics",
"Applied and interdisciplinary physics",
"Quantum mechanics",
"Quantum mind",
"Interpretations of quantum mechanics"
] |
38,954,098 | https://en.wikipedia.org/wiki/Blocking%20set | In geometry, specifically projective geometry, a blocking set is a set of points in a projective plane that every line intersects and that does not contain an entire line. The concept can be generalized in several ways. Instead of talking about points and lines, one could deal with n-dimensional subspaces and m-dimensional subspaces, or even more generally, objects of type 1 and objects of type 2 when some concept of intersection makes sense for these objects. A second way to generalize would be to move into more abstract settings than projective geometry. One can define a blocking set of a hypergraph as a set that meets all edges of the hypergraph.
Definition
In a finite projective plane π of order n, a blocking set is a set of points of π that every line intersects and that contains no line completely. Under this definition, if B is a blocking set, then the complementary set of points, π\B, is also a blocking set. A blocking set B is minimal if the removal of any point of B leaves a set which is not a blocking set. A blocking set of smallest size is called a committee. Every committee is a minimal blocking set, but not all minimal blocking sets are committees. Blocking sets exist in all projective planes except for the smallest projective plane of order 2, the Fano plane.
It is sometimes useful to drop the condition that a blocking set does not contain a line. Under this extended definition, and since, in a projective plane every pair of lines meet, every line would be a blocking set. Blocking sets which contained lines would be called trivial blocking sets, in this setting.
Examples
In any projective plane of order n (each line contains n + 1 points), the points on the lines forming a triangle without the vertices of the triangle (3(n - 1) points) form a minimal blocking set (if n = 2 this blocking set is trivial) which in general is not a committee.
Another general construction in an arbitrary projective plane of order n is to take all except one point, say P, on a given line and then one point on each of the other lines through P, making sure that these points are not all collinear (this last condition can not be satisfied if n = 2.) This produces a minimal blocking set of size 2n.
A projective triangle β of side m in PG(2,q) consists of 3(m - 1) points, m on each side of a triangle, such that the vertices A, B and C of the triangle are in β, and the following condition is satisfied: If point P on line AB and point Q on line BC are both in β, then the point of intersection of PQ and AC is in β.
A projective triad δ of side m is a set of 3m - 2 points, m of which lie on each of three concurrent lines such that the point of concurrency C is in δ and the following condition is satisfied: If a point P on one of the lines and a point Q on another line are in δ, then the point of intersection of PQ with the third line is in δ.
Theorem: In PG(2,q) with q odd, there exists a projective triangle of side (q + 3)/2 which is a blocking set of size 3(q + 1)/2.
Using homogeneous coordinates, let the vertices of the triangle be A = (1,0,0), B = (0,1,0) and C = (0,0,1). The points, other than the vertices, on side AB have coordinates of the form (-c, 1, 0), those on BC have coordinates (0,1,a) and those on AC have coordinates (1,0,b) where a, b and c are elements of the finite field GF(q). Three points, one on each of these sides, are collinear if and only if a = bc. By choosing all of the points where a, b and c are nonzero squares of GF(q), the condition in the definition of a projective triangle is satisfied.
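The construction just given is easy to verify by brute force for a small odd prime; the following sketch (our own check, assuming q prime so that inverses come from Fermat's little theorem) builds the projective triangle in PG(2,11) and confirms that it has size 3(q + 1)/2, meets every line, and contains no line.

```python
# Verify the projective-triangle blocking set in PG(2, q) for odd prime q.
q = 11
squares = {(t * t) % q for t in range(1, q)}      # nonzero squares mod q

def norm(p):                                      # canonical projective point
    for lead in p:
        if lead % q:
            inv = pow(lead, q - 2, q)
            return tuple(v * inv % q for v in p)

points = {norm(p) for p in
          [(1, y, z) for y in range(q) for z in range(q)]
          + [(0, 1, z) for z in range(q)] + [(0, 0, 1)]}

B = {norm(v) for v in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]}   # vertices A, B, C
B |= {norm((-c % q, 1, 0)) for c in squares}               # side AB
B |= {norm((0, 1, a)) for a in squares}                    # side BC
B |= {norm((1, 0, b)) for b in squares}                    # side AC

assert len(B) == 3 * (q + 1) // 2
for line in points:                    # by duality, lines are coefficient triples
    on_line = {p for p in points
               if sum(u * v for u, v in zip(p, line)) % q == 0}
    assert on_line & B and not on_line <= B   # blocked, and no line inside B
print("projective triangle blocks every line of PG(2,%d)" % q)
```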
Theorem: In PG(2,q) with q even, there exists a projective triad of side (q + 2)/2 which is a blocking set of size (3q + 2)/2.
The construction is similar to the above, but since the field is of characteristic 2, squares and non-squares need to be replaced by elements of absolute trace 0 and absolute trace 1. Specifically, let C = (0,0,1). Points on the line X = 0 have coordinates of the form (0,1,a), and those on the line Y = 0 have coordinates of the form (1,0,b). Points of the line X = Y have coordinates which may be written as (1,1,c). Three points, one from each of these lines, are collinear if and only if a = b + c. By selecting all the points on these lines where a, b and c are the field elements with absolute trace 0, the condition in the definition of a projective triad is satisfied.
Theorem: In PG(2,p), with p a prime, there exists a projective triad of side (p + 1)/2 which is a blocking set of size (3p+ 1)/2.
Size
One typically searches for small blocking sets. The minimum size of a blocking set of π is called τ(π).
In the Desarguesian projective plane of order q, PG(2,q), the size of a blocking set B is bounded:
q + √q + 1 ≤ |B| ≤ q² − √q.
When q is a square the lower bound is achieved by any Baer subplane and the upper bound comes from the complement of a Baer subplane.
A more general result can be proved,
Any blocking set in a projective plane π of order n has at least n + √n + 1 points. Moreover, if this lower bound is met, then n is necessarily a square and the blocking set consists of the points in some Baer subplane of π.
An upper bound for the size of a minimal blocking set has the same flavor,
Any minimal blocking set in a projective plane π of order n has at most n√n + 1 points. Moreover, if this upper bound is reached, then n is necessarily a square and the blocking set consists of the points of some unital embedded in π.
When n is not a square less can be said about the smallest sized nontrivial blocking sets. One well known result due to Aart Blokhuis is:
Theorem: A nontrivial blocking set in PG(2,p), p a prime, has size at least 3(p + 1)/2.
In these planes a projective triangle which meets this bound exists.
History
Blocking sets originated in the context of economic game theory in a 1956 paper by Moses Richardson. Players were identified with points in a finite projective plane and minimal winning coalitions were lines. A blocking coalition was defined as a set of points containing no line but intersecting every line. In 1958, J. R. Isbell studied these games from a non-geometric viewpoint. Jane W. DiPaola studied the minimum blocking coalitions in all the projective planes of order ≤ 9 in 1969.
In hypergraphs
Let H = (V, E) be a hypergraph, so that V is a set of elements and E is a collection of subsets of V, called (hyper)edges. A blocking set of H is a subset S of V that has nonempty intersection with each hyperedge.
Blocking sets are sometimes also called "hitting sets" or "vertex covers".
Also the term "transversal" is used, but in some contexts a transversal of H is a subset T of V that meets each hyperedge in exactly one point.
A "two-coloring" of H is a partition of V
into two subsets (color classes) C₁ and C₂ such that no edge is monochromatic, i.e., no edge is contained entirely within C₁ or within C₂. Now both C₁ and C₂ are blocking sets.
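In executable form, finding a minimum blocking set of a hypergraph is the (NP-hard) hitting set problem. The brute-force sketch below, with a made-up example hypergraph, illustrates the definition:

```python
# Smallest blocking set (hitting set) of a tiny hypergraph by exhaustion.
from itertools import combinations

def min_blocking_set(X, edges):
    for r in range(len(X) + 1):                  # try subsets by size
        for S in combinations(sorted(X), r):
            if all(set(S) & set(e) for e in edges):
                return set(S)

X = {1, 2, 3, 4, 5}
edges = [{1, 2}, {2, 3, 4}, {4, 5}, {1, 5}]
print(min_blocking_set(X, edges))                # {1, 4}, one minimum hitting set
```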
Complete k-arcs
In a projective plane a complete k-arc is a set of k points, no three collinear, which can not be extended to a larger arc (thus, every point not on the arc is on a secant line of the arc–a line meeting the arc in two points.)
Theorem: Let K be a complete k-arc in Π = PG(2,q) with k < q + 2. The dual in Π of the set of secant lines of K is a blocking set, B, of size k(k - 1)/2.
Rédei blocking sets
In any projective plane of order q, for any nontrivial blocking set B (with b = |B|, the size of the blocking set) consider a line meeting B in n points. Since no line is contained in B, there must be a point, P, on this line which is not in B. The q other lines through P must each contain at least one point of B in order to be blocked. Thus, b ≥ n + q. If for some line equality holds in this relation, the blocking set is called a blocking set of Rédei type and the line a Rédei line of the blocking set (note that n will be the largest number of collinear points in B). Not all blocking sets are of Rédei type, but many of the smaller ones are. These sets are named after László Rédei whose monograph on Lacunary polynomials over finite fields was influential in the study of these sets.
Affine blocking sets
A set of points in the finite Desarguesian affine space AG(n,q) that intersects every hyperplane non-trivially, i.e., every hyperplane is incident with some point of the set, is called an affine blocking set. Identify the space with GF(q)ⁿ by fixing a coordinate system. Then it is easily shown that the set of points lying on the coordinate axes forms a blocking set of size n(q − 1) + 1. Jean Doyen conjectured in a 1976 Oberwolfach conference that this is the least possible size of a blocking set.
This was proved by R. E. Jamison in 1977, and independently by A. E. Brouwer, A. Schrijver in 1978 using the so-called polynomial method. Jamison proved the following general covering result from which the bound on affine blocking sets follows using duality:
Let V be an n-dimensional vector space over GF(q). Then the number of k-dimensional cosets required to cover all vectors except the zero vector is at least q^{n−k} + k(q − 1) − 1. Moreover, this bound is sharp.
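The coordinate-axes construction is simple to verify by brute force in the plane case; the sketch below (our own check, for q = 7) confirms that the 2(q − 1) + 1 points on the axes of AG(2,q) meet every line, matching the n(q − 1) + 1 bound at n = 2.

```python
# Axes of AG(2, q) as an affine blocking set.
q = 7
axes = {(x, 0) for x in range(q)} | {(0, y) for y in range(q)}
assert len(axes) == 2 * (q - 1) + 1

def lines(q):
    for m in range(q):                # lines y = m*x + c
        for c in range(q):
            yield {(x, (m * x + c) % q) for x in range(q)}
    for a in range(q):                # vertical lines x = a
        yield {(a, y) for y in range(q)}

assert all(l & axes for l in lines(q))
print("axes of AG(2,%d) block all %d lines" % (q, q * q + q))
```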
Notes
References
C. Berge, Graphs and hypergraphs, North-Holland, Amsterdam, 1973. (Defines .)
P. Duchet, Hypergraphs, Chapter 7 in: Handbook of Combinatorics, North-Holland, Amsterdam, 1995.
Combinatorics
Hypergraphs
Finite geometry
Projective geometry | Blocking set | [
"Mathematics"
] | 2,200 | [
"Discrete mathematics",
"Combinatorics"
] |
38,956,801 | https://en.wikipedia.org/wiki/Reinventing%20Gravity | Reinventing Gravity: A Scientist Goes Beyond Einstein is a science text by John W. Moffat, which explains his controversial theory of gravity.
Moffat's theory
Moffat's work culminates in his nonsymmetric gravitational theory and scalar–tensor–vector gravity (now called MOG). His theory explains galactic rotation curves without invoking dark matter. He proposes a variable speed of light approach to cosmological problems, which posits that G/c is constant through time, but that G and c separately have not been. Moreover, the speed of light c may have been much higher (at least a trillion trillion times the normal speed of light) during the early moments of the Big Bang. His recent work on inhomogeneous cosmological models purports to explain certain anomalous effects in the CMB data, and to account for the recently discovered acceleration of the expansion of the universe.
The theory is based on an action principle and postulates the existence of a vector field, while elevating the three constants of the theory to scalar fields. In the weak-field approximation, STVG produces a Yukawa-like modification of the gravitational force due to a point source. Intuitively, this result can be described as follows: far from a source gravity is stronger than the Newtonian prediction, but at shorter distances, it is counteracted by a repulsive fifth force due to the vector field.
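In the weak-field regime this behaviour is often written as a Yukawa-corrected point-source acceleration. The sketch below uses one commonly quoted form with illustrative parameter values; treat the exact formula and the values of alpha and mu as assumptions of the sketch rather than as Moffat's definitive expression.

```python
# a(r) = (G_N M / r^2) * (1 + alpha - alpha * exp(-mu*r) * (1 + mu*r))
import math

G_N, M, alpha, mu = 1.0, 1.0, 8.89, 0.04      # illustrative units and values

def mog_acceleration(r):
    yukawa = alpha * math.exp(-mu * r) * (1 + mu * r)
    return G_N * M / r**2 * (1 + alpha - yukawa)

def newton(r):
    return G_N * M / r**2

for r in (1.0, 10.0, 100.0):
    print(f"r={r:>6}: a_MOG / a_Newton = {mog_acceleration(r) / newton(r):.3f}")
# near the source the ratio is ~1 (the repulsive fifth force cancels the
# enhancement); far away it tends to 1 + alpha, i.e. stronger than Newton.
```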
Reception
The book was positively reviewed in EE Times, Physics World and Publishers Weekly.
See also
Einstein Wrote Back, another book by Moffat
References
Popular physics books
Theories of gravity
2008 non-fiction books | Reinventing Gravity | [
"Physics"
] | 334 | [
"Theoretical physics",
"Theories of gravity"
] |
38,959,571 | https://en.wikipedia.org/wiki/Transcriptor | A transcriptor is a transistor-like device composed of DNA and RNA rather than a semiconducting material such as silicon. Prior to its invention in 2013, the transcriptor was considered an important component to build biological computers.
Background
To function, a modern computer needs three different capabilities: It must be able to store information, transmit information between components, and possess a basic system of logic. Prior to March 2013, scientists had successfully demonstrated the ability to store and transmit data using biological components made of proteins and DNA. Simple two-terminal logic gates had been demonstrated, but required multiple layers of inputs and thus were impractical due to scaling difficulties.
Invention and description
On March 28, 2013, a team of bioengineers from Stanford University led by Drew Endy announced that they had created the biological equivalent of a transistor, which they named a "transcriptor". That is, they created a three-terminal device with a logic system that can control other components. The transcriptor regulates the flow of RNA polymerase across a strand of DNA using special combinations of enzymes to control movement. According to project member Jerome Bonnet, "The choice of enzymes is important. We have been careful to select enzymes that function in bacteria, fungi, plants and animals, so that bio-computers can be engineered within a variety of organisms."
Transcriptors can replicate traditional AND, OR, NOR, NAND, XOR, and XNOR gates with equivalents, which Endy dubbed "Boolean Integrase Logic (BIL) gates", in a single-layer process (i.e., without requiring multiple instances of the simpler gates to build up more complex ones). Like a traditional transistor, a transcriptor can amplify an input signal. A group of transcriptors can do almost any type of computing, including counting and comparison.
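As a purely logical toy model (our own abstraction, not the published genetic designs), the single-layer idea can be mimicked by letting each input control one integrase-switched element along the polymerase's path:

```python
# Toy view of single-layer gates: polymerase flow is blocked by terminators
# that individual integrase inputs remove or invert.
def bil_and(a, b):
    # two terminators, each excised by its own integrase: output only if
    # both inputs acted, i.e. no terminator remains in the path
    terminators = [not a, not b]
    return not any(terminators)

def bil_xor(a, b):
    # an invertible segment flipped independently by either input:
    # flow passes iff it was flipped exactly once
    return a != b

for a in (0, 1):
    for b in (0, 1):
        print(a, b, int(bil_and(a, b)), int(bil_xor(a, b)))
```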
Impact
Stanford dedicated the BIL gate's design to the public domain, which may speed its adoption. According to Endy, other researchers were already using the gates to reprogram metabolism when the Stanford team published its research.
Computing by transcriptor is still very slow; it can take a few hours between receiving an input signal and generating an output. Endy doubted that biocomputers would ever be as fast as traditional computers, but added that is not the goal of his research. "We're building computers that will operate in a place where your cellphone isn't going to work", he said. Medical devices with built-in biological computers could monitor, or even alter, cell behavior from inside a patient's body. ExtremeTech writes:
UC Berkeley biochemical engineer Jay Keasling said the transcriptor "clearly demonstrates the power of synthetic biology and could revolutionize how we compute in the future".
References
External links
- original journal article, published in Science
Explanatory video created by Drew Endy
NPR article with series of moving pictures that explain how the transcriptor works
Public domain release of the BIL gates technology
American inventions
DNA nanotechnology
Molecular biology
Theoretical computer science | Transcriptor | [
"Chemistry",
"Materials_science",
"Mathematics",
"Biology"
] | 624 | [
"Theoretical computer science",
"Applied mathematics",
"DNA nanotechnology",
"Molecular biology",
"Biochemistry",
"Nanotechnology"
] |
43,188,546 | https://en.wikipedia.org/wiki/Satellite%20data%20unit | A satellite data unit (SDU) is an avionics device installed in an aircraft that allows air/ground communication via a satellite network. It is an integral part of an aircraft's SATCOM (satellite communication) system. The device connects with a satellite via ordinary radio frequency (RF) communication and the satellite then connects to a ground station or vice versa. All satellite communication whether audio or data is processed by the SDU.
The SDU communicates with an onboard MDDU (multi-purpose disk-drive unit), which maintains an updatable table of ground stations near the aircraft's position and the order of preference for selecting which ground station to use; this in turn guides the choice of satellite. Along with continuously analysing data sent from all ground stations (such as station status and the error rate of signals from each station), the SDU receives information on the aircraft's position and orientation from another onboard system (the ADIRU, air data inertial reference unit), which it passes to the BSU (beam-steering unit) to direct the signal beam from the aircraft to the chosen satellite.
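A hypothetical sketch of this selection logic is shown below; the field names, preference ranks and tie-breaking rule are our inventions, intended only to illustrate combining a preference table with per-station status and error-rate data.

```python
# Pick a ground station from an MDDU-style preference table plus live data.
def choose_ground_station(stations):
    usable = [s for s in stations if s["in_service"]]
    # lower preference rank wins; observed error rate breaks ties
    return min(usable, key=lambda s: (s["preference"], s["error_rate"]))

stations = [
    {"id": "POR", "preference": 1, "in_service": False, "error_rate": 0.02},
    {"id": "AUS", "preference": 2, "in_service": True,  "error_rate": 0.01},
    {"id": "IOR", "preference": 2, "in_service": True,  "error_rate": 0.04},
]
print(choose_ground_station(stations)["id"])      # AUS
```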
With the advent of cellphones and the Internet a separate or integrated SDU can be used to offer telephone and Internet services to passengers.
Logs of satellite communication have been used to inform search and rescue agencies of the locations of missing aircraft, like Malaysia Airlines Flight 370, whose position was unknown due to loss of radar contact and other communications. Automated SATCOM transmissions suggested it flew far off its designated flight path, heading approximately south-southwest rather than the intended approximately north-northeast.
References
Navigational equipment
Aircraft instruments
Avionics
Communications satellites | Satellite data unit | [
"Technology",
"Engineering"
] | 332 | [
"Avionics",
"Aircraft instruments",
"Measuring instruments"
] |
43,191,282 | https://en.wikipedia.org/wiki/Flexible%20battery | Flexible batteries are batteries, both primary and secondary, that are designed to be conformal and flexible, unlike traditional rigid ones. They can maintain their characteristic shape even against continual bending or twisting. The increasing interest in portable and flexible electronics has led to the development of flexible batteries which can be implemented in products such as smart cards, wearable electronics, novelty packaging, flexible displays and transdermal drug delivery patches. The advantages of flexible batteries are their conformability, light weight, and portability, which makes them easy to be implemented in products such as flexible and wearable electronics. Hence efforts are underway to make different flexible power sources including primary and rechargeable batteries with high energy density and good flexibility.
Basic methods and designs
In general, a battery is made of one or several galvanic cells, where each cell consists of a cathode, an anode, a separator, and in many cases current collectors. In a flexible battery all of these components need to be flexible. These batteries can be fabricated in different shapes and sizes and by different methods. One approach is to use polymer binders to fabricate composite electrodes, with conductive additives to enhance their conductivity. The electrode materials can be printed or coated onto flexible substrates. The cells are assembled into flexible packaging materials to maintain bendability. Other approaches include filtering an electrode suspension through filters to form free-standing films, or using a flexible matrix to hold the electrode materials. There are also other designs, such as cable batteries.
Flexible secondary (rechargeable) batteries
There have been many efforts in adapting conventional batteries such as zinc-carbon and lithium ion, and at the same time new materials such as those based on nanoparticle complexes are being developed for flexible battery and supercapacitor electrodes. For example, there are efforts at developing flexible lithium-ion batteries. Some studies have introduced nanocarbons into flexible lithium-ion batteries, and there are batteries with Li4Ti5O12 and LiFePO4 as anode and cathode, with graphene-based current collector. Carbon nanotube electrodes have been reported too: pristine, and combined with Li4Ti5O12, LiCoO2, or SnO2. Another development is the paper-thin flexible self-rechargeable battery that combines a thin-film organic solar cell with an extremely thin and highly flexible lithium-polymer battery. This recharges itself when exposed to light.
Flexible primary batteries
Disposable flexible primary batteries, the equivalents of AA and AAA batteries, are also of great interest, with applicability in smart cards, medical patches, greeting cards, toys, and disposable devices. Advantages of primary batteries with aqueous electrolytes over lithium-ion batteries include their eco-friendliness and ease of fabrication. A flexible zinc-carbon battery using single-walled carbon nanotubes was reported in 2010.
Alkaline batteries are more durable than conventional zinc-carbon batteries under heavy load. An alkaline battery uses MnO2 as the active cathode material along with a zinc anode, with KOH as the electrolyte. A flexible alkaline cell poses several challenges because, compared to the weakly acidic or neutral electrolytes of zinc-carbon cells, KOH is strongly basic and corrosive. In 2011, Gaikwad proposed a flexible alkaline battery using a nylon mesh.
Business and commercialization
Commercialization efforts for flexible lithium-ion and zinc-carbon systems are ongoing. LG is proposing to mass-produce a flexible cable battery. The global market for thin film batteries increased from $33.5 million in 2011 to $51.8 million in 2012, and is estimated to be valued at $87.3 million by the end of 2013.
See also
Primary battery
Rechargeable battery
Battery (electricity)
References
External links
Lithium-ion batteries
20th-century inventions
Metal-ion batteries
Battery shapes
Flexible electronics | Flexible battery | [
"Engineering"
] | 799 | [
"Electronic engineering",
"Flexible electronics"
] |
43,192,161 | https://en.wikipedia.org/wiki/Cohen%E2%80%93Hewitt%20factorization%20theorem | In mathematics, the Cohen–Hewitt factorization theorem states that if is a left module over a Banach algebra with a left approximate unit , then an element of can be factorized as a product (for some and ) whenever . The theorem was introduced by and .
References
Banach algebras
Theorems in functional analysis | Cohen–Hewitt factorization theorem | [
"Mathematics"
] | 65 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Theorems in functional analysis",
"Mathematical analysis stubs"
] |
43,193,013 | https://en.wikipedia.org/wiki/Currentology | Currentology is a science that studies the internal movements of water masses.
Description
In the study of fluid mechanics, researchers attempt to give a correct explanation of marine currents. Currents are caused by external driving forces such as wind, gravitational effects, Coriolis forces, and physical differences between water masses, the main parameter being the difference in density, which varies as a function of temperature and salinity.
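As a rough illustration of the density dependence mentioned above, the following Python sketch uses a linearized equation of state with approximate textbook coefficients; operational oceanography uses the full TEOS-10 formulation instead.

```python
# Minimal linearized equation of state for seawater, illustrating how
# density (the key driver of thermohaline currents) varies with
# temperature and salinity. Coefficients are rough approximate values.

RHO0 = 1027.0        # reference density, kg/m^3
T0, S0 = 10.0, 35.0  # reference temperature (deg C) and salinity (psu)
ALPHA = 1.7e-4       # thermal expansion coefficient, 1/deg C (approximate)
BETA = 7.6e-4        # haline contraction coefficient, 1/psu (approximate)

def density(temperature_c: float, salinity_psu: float) -> float:
    return RHO0 * (1.0 - ALPHA * (temperature_c - T0)
                   + BETA * (salinity_psu - S0))

# Warm, fresher water is lighter than cold, saltier water:
print(density(25.0, 33.0))  # warm surface water, ~1022.8 kg/m^3
print(density(2.0, 35.0))   # cold deep water,    ~1028.4 kg/m^3
```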
The study of currents, combined with other factors such as tides and waves is relevant for understanding marine hydrodynamics and linked processes such as sediment transport and climate balance.
The measurement of maritime currents
Measurements of marine currents can be made with different techniques:
current meter
drifting buoys
See also
References
Oceanography | Currentology | [
"Physics",
"Environmental_science"
] | 143 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
43,194,195 | https://en.wikipedia.org/wiki/Orbiting%20Carbon%20Observatory%202 | Orbiting Carbon Observatory-2 (OCO-2) is an American environmental science satellite which launched on 2 July 2014. A NASA mission, it is a replacement for the Orbiting Carbon Observatory which was lost in a launch failure in 2009. It is the second successful high-precision (better than 0.3%) observing satellite, after GOSAT.
Mission overview
The OCO-2 satellite was built by Orbital Sciences Corporation, based around the LEOStar-2 bus. The spacecraft is being used to study carbon dioxide concentrations and distributions in the atmosphere.
OCO-2 was ordered after the original OCO spacecraft failed to achieve orbit. During the first satellite's launch atop a Taurus-XL in February 2009, the payload fairing failed to separate from around the spacecraft and the rocket did not have sufficient power to enter orbit with its additional mass. Although a Taurus launch was initially contracted for the reflight, the launch contract was cancelled after the same malfunction occurred on the launch of the Glory satellite two years later.
United Launch Alliance launched OCO-2 using a Delta II rocket at the beginning of a 30-second launch window at 09:56 UTC (2:56 PDT) on 2 July 2014. Flying in the 7320-10C configuration, the rocket launched from Space Launch Complex 2W at Vandenberg Air Force Base. The initial launch attempt on 1 July at 09:56:44 UTC was scrubbed at 46 seconds on the countdown clock due to a faulty valve on the water suppression system, used to flow water on the launch pad to dampen the acoustic energy during launch.
OCO-2 joined the A-train satellite constellation, becoming the sixth satellite in the group. Members of the A-train fly very close together in Sun-synchronous orbit, to make nearly simultaneous measurements of Earth. A particularly short launch window of 30 seconds was necessary to achieve a proper position in the train. Its orbit is inclined at 98.2°.
The expected mission cost covers design, development, launch and operations.
Column measurements
Rather than directly measuring concentrations of carbon dioxide in the atmosphere, OCO-2 records how much of the sunlight reflected off the Earth is absorbed by molecules in an air column. OCO-2 makes measurements in three different spectral bands over four to eight footprints. About 24 soundings are collected per second while in sunlight, and over 10% of these are sufficiently cloud-free for further analysis. One spectral band is used for column measurements of oxygen (A-band, 0.765 microns), and two are used for column measurements of carbon dioxide (weak band, 1.61 microns; strong band, 2.06 microns).
In the retrieval algorithm, measurements from the three bands are combined to yield column-averaged dry-air mole fractions of carbon dioxide. Because these are dry-air mole fractions, the measurements do not change with water content or surface pressure. Because the molecular oxygen content of dry air (i.e. excluding the oxygen in water vapour) is well known to be 20.95%, oxygen is used as a measure of the total dry-air column. To ensure these measurements are traceable to World Meteorological Organization standards, OCO-2 measurements are carefully compared with measurements by the Total Carbon Column Observing Network (TCCON).
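The ratio at the heart of this retrieval can be sketched in a few lines of Python. The 20.95% oxygen fraction is the constant named in the text; the column amounts below are illustrative orders of magnitude, not actual OCO-2 retrievals, and the full retrieval algorithm is far more involved.

```python
# Sketch of the core ratio behind a column-averaged dry-air mole
# fraction (XCO2): the O2 column, retrieved from the A-band, measures
# the total dry-air column because O2 is a fixed 20.95% of dry air.

O2_DRY_AIR_FRACTION = 0.2095

def xco2_ppm(co2_column: float, o2_column: float) -> float:
    """Column-averaged dry-air CO2 mole fraction in ppm.

    co2_column, o2_column: retrieved column amounts (molecules/cm^2).
    """
    dry_air_column = o2_column / O2_DRY_AIR_FRACTION
    return 1e6 * co2_column / dry_air_column

co2 = 8.9e21  # molecules/cm^2 (illustrative)
o2 = 4.5e24   # molecules/cm^2 (illustrative)
print(round(xco2_ppm(co2, o2), 1))  # ~414.3 ppm
```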
Data products
Mission data are provided to the public by the NASA Goddard Earth Science Data and Information Services Center (GES DISC). The Level 1B data product is the least processed and contains records for all collected soundings (about 74,000 soundings per orbit). The Level 2 product contains estimates of the column-averaged dry-air mole fractions of carbon dioxide, among other parameters such as surface albedo and aerosol content. The Level 3 product consists of global maps of carbon dioxide concentrations developed by OCO-2 scientists.
See also
Space-based measurements of carbon dioxide
Orbiting Carbon Observatory 3
Greenhouse Gases Observing Satellite
TanSat
References
Bibliography
External links
Orbiting Carbon Observatory at NASA.gov
Orbiting Carbon Observatory by the Jet Propulsion Laboratory
Orbiting Carbon Observatory by the JPL Science Division
2014 in the United States
Spacecraft launched in 2014
Earth observation satellites of the United States
Spacecraft launched by Delta II rockets
Articles containing video clips
NASA satellites
Satellites monitoring GHG emissions | Orbiting Carbon Observatory 2 | [
"Chemistry",
"Environmental_science"
] | 884 | [
"Greenhouse gases",
"Environmental chemistry"
] |
43,194,879 | https://en.wikipedia.org/wiki/Geometric%20transformation | In mathematics, a geometric transformation is any bijection of a set to itself (or to another such set) with some salient geometrical underpinning, such as preserving distances, angles, or ratios (scale). More specifically, it is a function whose domain and range are sets of points — most often both or both — such that the function is bijective so that its inverse exists. The study of geometry may be approached by the study of these transformations, such as in transformation geometry.
Classifications
Geometric transformations can be classified by the dimension of their operand sets (thus distinguishing between, say, planar transformations and spatial transformations). They can also be classified according to the properties they preserve:
Displacements preserve distances and oriented angles (e.g., translations);
Isometries preserve angles and distances (e.g., Euclidean transformations);
Similarities preserve angles and ratios between distances (e.g., resizing);
Affine transformations preserve parallelism (e.g., scaling, shear);
Projective transformations preserve collinearity;
Each of these classes contains the previous one.
Möbius transformations using complex coordinates on the plane (as well as circle inversion) preserve the set of all lines and circles, but may interchange lines and circles.
Conformal transformations preserve angles, and are, to first order, similarities.
Equiareal transformations preserve areas in the planar case or volumes in the three-dimensional case, and are, to first order, affine transformations of determinant 1.
Homeomorphisms (bicontinuous transformations) preserve the neighborhoods of points.
Diffeomorphisms (bidifferentiable transformations) are the transformations that are affine to first order; they contain the preceding ones as special cases, and can be further refined.
Transformations of the same type form groups that may be sub-groups of other transformation groups.
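To make the hierarchy concrete, here is a short NumPy sketch (an illustration added here, not part of the original article) representing planar transformations from several of the listed classes as homogeneous 3x3 matrices, so that composing maps is just matrix multiplication.

```python
import numpy as np

translation = np.array([[1.0, 0.0, 2.0],
                        [0.0, 1.0, -1.0],
                        [0.0, 0.0, 1.0]])          # displacement

theta = np.pi / 4
rotation = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0, 0.0, 1.0]])             # isometry

scaling = 2.0 * np.eye(2)                          # uniform scaling block
similarity = np.block([[scaling, np.zeros((2, 1))],
                       [np.zeros((1, 2)), np.ones((1, 1))]])  # similarity

shear = np.array([[1.0, 1.5, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])                # affine, not a similarity

composed = translation @ rotation @ shear          # composition stays affine
point = np.array([1.0, 1.0, 1.0])                  # homogeneous coordinates
print(composed @ point)
```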
Opposite group actions
Many geometric transformations are expressed with linear algebra. The bijective linear transformations are elements of a general linear group. The linear transformation A is non-singular. For a row vector v, the matrix product vA gives another row vector w = vA.
The transpose of a row vector v is a column vector $v^\mathsf{T}$, and the transpose of the above equality is $w^\mathsf{T} = (vA)^\mathsf{T} = A^\mathsf{T}v^\mathsf{T}$. Here $A^\mathsf{T}$ provides a left action on column vectors.
In transformation geometry there are compositions AB. Starting with a row vector v, the right action of the composed transformation is w = vAB. After transposition,
$$w^\mathsf{T} = (vAB)^\mathsf{T} = B^\mathsf{T}A^\mathsf{T}v^\mathsf{T}.$$
Thus for AB the associated left group action is $B^\mathsf{T}A^\mathsf{T}$. In the study of opposite groups, the distinction is made between opposite group actions because commutative groups are the only groups for which these opposites are equal.
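A quick numerical check of the transpose relation above, using NumPy with arbitrary matrices:

```python
# Acting on the right with the composition AB is the same as acting on
# the left with B^T A^T after transposing.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))
v = rng.standard_normal((1, 2))        # row vector

w_right = v @ A @ B                    # right action on a row vector
w_left = (B.T @ A.T @ v.T).T           # left action on the column vector

print(np.allclose(w_right, w_left))    # True
```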
Active and passive transformations
See also
Coordinate transformation
Erlangen program
Symmetry (geometry)
Motion
Reflection
Rigid transformation
Rotation
Topology
Transformation matrix
References
Further reading
Dienes, Z. P.; Golding, E. W. (1967) . Geometry Through Transformations (3 vols.): Geometry of Distortion, Geometry of Congruence, and Groups and Coordinates. New York: Herder and Herder.
David Gans – Transformations and geometries.
John McCleary (2013) Geometry from a Differentiable Viewpoint, Cambridge University Press
Modenov, P. S.; Parkhomenko, A. S. (1965) . Geometric Transformations (2 vols.): Euclidean and Affine Transformations, and Projective Transformations. New York: Academic Press.
A. N. Pressley – Elementary Differential Geometry.
Yaglom, I. M. (1962, 1968, 1973, 2009) . Geometric Transformations (4 vols.). Random House (I, II & III), MAA (I, II, III & IV).
Geometry
Functions and mappings
Symmetry
Transformation (function) | Geometric transformation | [
"Physics",
"Mathematics"
] | 769 | [
"Functions and mappings",
"Mathematical analysis",
"Transformation (function)",
"Mathematical objects",
"Mathematical relations",
"Geometry",
"Symmetry"
] |
43,196,367 | https://en.wikipedia.org/wiki/Qanats%20of%20Ghasabeh | The Qanats of Ghasabeh (), also called Kariz e Kay Khosrow, is one of the world's oldest and largest networks of qanats (underground aqueducts). Built between 700 and 500 BCE by the Achaemenid Empire in what is now Gonabad, Razavi Khorasan Province, Iran, the complex contains 427 water wells with a total length of . The site was first added to UNESCO's list of tentative World Heritage Sites in 2007, then officially inscribed in 2016, collectively with several other qanats, as "The Persian Qanat".
History
Qanat Ghasabe is linked to the legendary Iranian king Kay Khosrow. Many historical and geographical sources mention two main wars in the Gonabad region during the time of Kay Khosrow; the Davazdah Rokh and Froad wars are mentioned in the Shahnameh. According to Nasir Khusraw, the qanat of Gonabad was built by the order of Kay Khosrow.
Persian period
The use of water clocks in Iran, especially in the Qanats of Gonabad and Kariz Zibad, dates back to 500 BCE. Later they were also used to determine the exact holy days of pre-Islamic religions, such as Nowruz, Chelah, or Yaldā (the shortest, longest, and equal-length days and nights of the year). The water clock, or fenjaan, was the most accurate and most commonly used timekeeping device for calculating the amount of time a farmer could take water from the Qanats of Gonabad, until it was replaced by more accurate modern clocks.
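The bookkeeping behind fenjaan-based water allocation reduces to simple arithmetic, sketched below in Python; the sinking interval and share size are illustrative assumptions, not historical records.

```python
# A fenjaan is a bowl with a small hole that sinks in a repeatable
# interval; a farmer's water right is counted in sinkings ("fenjaans").

SINK_MINUTES = 7.5  # assumed time for one bowl to fill and sink

def share_minutes(sinkings: int) -> float:
    """Irrigation time corresponding to a share counted in bowl sinkings."""
    return sinkings * SINK_MINUTES

print(share_minutes(16))  # a 16-fenjaan share -> 120.0 minutes
```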
It was an Achaemenid ruling that in case someone succeeded in constructing a qanat and bringing groundwater to the surface in order to cultivate land, or in renovating an abandoned qanat, the tax he was supposed to pay the government would be waived not only for him but also for his successors for up to five generations. Following Darius' order, Silaks, the naval commander of the Persian army, and Khenombiz, the royal architect, managed to construct a qanat in the oasis of Kharagha in Egypt. Beadnell believes that qanat construction dates back to two distinct periods: they were first constructed by the Persians, and later the Romans dug other qanats during their reign in Egypt from 30 BCE to 395 AD. The magnificent temple built in this area during Darius' reign shows that there was a considerable population depending on the water of qanats; Ragerz has estimated this population to be 10,000 people. The most reliable document confirming the existence of qanats at this time was written by Polybius, who states that: "the streams are running down from everywhere at the base of Alborz mountain, and people have transferred too much water from a long distance through some subterranean canals by spending much cost and labor".
Islamic period
In Iran, the advent of Islam, which coincided with the overthrow of the Sassanid dynasty, brought about a profound change in religious, political, social and cultural structures. But the qanats stayed intact, because this economic infrastructure was of great importance to the Arabs. For instance, M. Lombard reports that Muslim clerics of the Abbasid period, such as Abooyoosef Ya'qoob (died 798 AD), stipulated that whoever brought water to idle land in order to cultivate it would have his tax waived and would be entitled to the land cultivated. Therefore, this policy did not differ from that of the Achaemenids in not taxing people who revived abandoned land.
Apart from the Book of Alghani, which is considered a law booklet focusing on qanat-related rulings based on Islamic principles, there is another book about groundwater written by Karaji in the year 1010. This book, entitled Extraction of Hidden Waters, examines just the technical issues associated with the qanat and tries to answer common questions such as how to construct and repair a qanat, how to find a groundwater supply, how to do leveling, etc. Some of the hydrogeological innovations described in this book were first introduced there. There are some records dating back to that time, signifying their concern about the legal vicinity of qanats. For example, Mohammad bin Hasan quotes Aboo-Hanifeh that in case someone constructs a qanat in abandoned land, someone else can dig another qanat in the same land on the condition that the second qanat is 500 zera' (375 meters) away from the first one.
Pahlavi period
During the Pahlavi period, the process of qanat construction and maintenance continued. A council responsible for the qanats was set up by the government. At that time most of the qanats belonged to landlords. In fact, feudalism was the prevailing system in the rural regions. The peasants were not entitled to the lands they worked on, but were considered only as the users of the lands. They had to pay rent for land and water to the landlords, who could afford to finance all the proceedings required to maintain the qanats, for they were relatively wealthy. According to the report of Safi Asfiya, who was in charge of supervising the qanats of Iran under the former regime, in the year 1942 Iran had 40,000 qanats with a total discharge of 600,000 liters per second, or 18.2 billion cubic meters per year. In 1961, another report was published revealing that in Iran there were 30,000 qanats, of which just 20,000 were still in use, with a total output of 560,000 lit/sec, or 17.3 billion cubic meters per year. In 1959 a reform program named the White Revolution was declared by the former Shah. One of the articles of this program addressed the land reform that let peasants take ownership of part of the landlords' lands. In fact, the land reform meant that the landlords lost their motivation for investing more money in constructing or repairing the qanats which were subject to the Land Reform Law. On the other hand, the peasants could not come up with the money to maintain the qanats, so many qanats were gradually abandoned.
In 1963, the Ministry of Water and Electricity was established in order to provide the rural and urban areas of the country with sufficient water and electricity. Later, this Ministry was renamed the Ministry of Energy.
In the year 2000, holding the International Conference on Qanats in Yazd drew a lot of attention to the qanats. In 2005 the Iranian government and UNESCO signed an agreement to set up the International Center on Qanats and Historic Hydraulic Structures (ICQHS) under the auspices of UNESCO. The main mission of this center is the recognition, transfer of knowledge and experiences, promotion of information and capacities with regard to all aspects of qanat technology and related historic hydraulic structures. This mission aims to fulfill sustainable development of water resources and the application of the outcome of the activities in order to preserve historical and cultural values as well as the promotion of the public welfare within the communities whose existence depends on the rational exploitation of the resources and preservation of such historical structures.
Ghasabe Qanat of Gonabad
The documentary film of the Ghasabe Qanat of Gonabad (a 70-minute film) illustrates the engineering potential of Iranian diggers to dig aqueducts throughout history and explains its importance. The story of the Ghasabe aqueduct in water supply under desert conditions is told in this film. By documenting the Ghasabe Qanat, the filmmakers attempted to illustrate the potential of the Gonabad excavation and its continued importance throughout history. The film is based on the memoirs of the French scholar Henri Goblot, who travelled and resided in Iran in the mid-twentieth century and visited Iranian aqueducts. The documentary was produced by the radio and television service of Khorasan-e-Razavi Province in collaboration with the Khorasan-e-Razavi Province Cultural Heritage and Tourism administration, commissioned by UNESCO. It was made by Seyedsaeed Aboozarian (known as Payman Ashegh) as production manager, French translator, and narrator, with Saeed Tavakkolifar as director.
References
Sources
Iranian newspaper farheekhtegan 21/May 2016 No 1949
Iranian newspaper Mardom salari18Feb 2016 Qantas or kariz in Gonabad
Qanats and its tools waterclock
Qanat Gonabad or Kariz Kai Khosrow
pars documentary
Persian developed underground aqueducts
Water wells
Infrastructure in Iran
World Heritage Sites in Iran
Buildings and structures in Razavi Khorasan province
Tourist attractions in Razavi Khorasan province | Qanats of Ghasabeh | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,822 | [
"Hydrology",
"Water wells",
"Environmental engineering"
] |
35,001,781 | https://en.wikipedia.org/wiki/Beta%20adrenergic%20receptor%20kinase%20carboxyl-terminus | Beta adrenergic receptor kinase carboxyl-terminus (also βARKct) is a peptide composed of the last 194 amino acid residues of the carboxyl-terminus of beta adrenergic receptor kinase 1 (βARK1). It binds the βγ subunits of G proteins located in the plasma membrane of cells. It is currently an experimental gene therapy for the treatment of heart failure.
Heart Failure
During heart failure, the heart is not able to pump enough blood to the rest of the body and will begin to undergo processes in order to compensate for its decreased function. These processes will attempt to increase the heart’s output; however, the heart may become overstressed and eventually dysfunctional as a result. The sympathetic nervous system increases norepinephrine release to stimulate β-adrenergic receptors (βARs) located on heart cell (cardiomyocyte) membranes to increase the heart’s rate and force of contraction. If the heart is already stressed or damaged, this will cause the heart to work above its capacity. Continuous stimulation of the βARs leads to the activation of βARK1 which phosphorylates βARs to decrease their response to norepinephrine and other catecholamines. βARs are downregulated as a result, decreasing the control over the heart’s rate and force of contraction. A cycle begins as more norepinephrine is produced in an attempt to stimulate the heart to contract.
Functions
The βARKct peptide acts by binding to Gβγ proteins, competing with βARK1 for the same binding site. βARK1 requires binding to Gβγ subunits to be recruited to the membrane and activated. By inhibiting βARK1, βARKct allows βARs to be upregulated back to a normal range. With βAR function restored in a failing heart, the force of contraction increases and the levels of catecholamines and growth factors return to normal.
Additionally, when βARs are activated, βARKct will bind Gβγ proteins to prevent their interaction with and inhibition of the L-type calcium channels (LCC) present on cardiomyocyte plasma membranes. This increases the flow of calcium ions through the LCCs during depolarization of the cardiomyocyte, increasing calcium levels for contraction to occur. This mechanism has been demonstrated under in vitro conditions and may work with the inhibition of βARK1 to restore βAR function.
Gene Therapy
The main approach to treatment using βARKct is to insert the gene coding for it into a virus and then infect cardiomyocytes with it. The virus, containing the βARKct gene, may be injected directly into the left coronary artery or the left ventricular walls following surgical opening of the thorax. A less invasive method of transfer is to use a catheter to inject the virus directly into the left coronary artery without opening the chest cavity.
Experimental Animal Models
The use of βARKct gene therapy in humans is still under investigation, with no trials currently being carried out. The effectiveness of this therapy has been shown in small animal models including mice, rats, and rabbits. Larger animal models, such as the pig heart, more closely resemble the human heart and have also demonstrated the benefits of this therapy and its potential use in humans.
References
Peptides | Beta adrenergic receptor kinase carboxyl-terminus | [
"Chemistry"
] | 680 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
35,002,326 | https://en.wikipedia.org/wiki/Flashspun%20fabric | Flashspun fabric is a nonwoven fabric formed from fine fibrillation of a film by the rapid evaporation of solvent and subsequent bonding during extrusion.
A pressurised solution of, for example, high-density polyethylene (HDPE) or polypropylene in a solvent such as fluoroform is heated, pressurised and pumped through a hole into a chamber. When the solution is allowed to expand rapidly through the hole the solvent evaporates to leave a highly oriented non-woven network of filaments.
See also
Tyvek
Melt blowing
References
Nonwoven fabrics
Plastics
Synthetic paper
"Physics",
"Chemistry"
] | 133 | [
"Synthetic materials",
"Unsolved problems in physics",
"Amorphous solids",
"Synthetic paper",
"Plastics"
] |
35,006,277 | https://en.wikipedia.org/wiki/Turbidimetry | Turbidimetry (the name being derived from turbidity) is the process of measuring the loss of intensity of transmitted light due to the scattering effect of particles suspended in it. Light is passed through a filter creating a light of known wavelength which is then passed through a cuvette containing a solution. A photoelectric cell collects the light which passes through the cuvette. A measurement is then given for the amount of absorbed light.
Turbidimetry can be used in biology to find the number of cells in a suspension.
Turbidity is an expression of the optical appearance of a suspension, caused by the scattering and absorption of incident radiation. The scattering of light is elastic, so both the incident and scattered radiation have the same wavelength.
A turbidimeter measures the amount of radiation that passes through a fluid in the forward direction, analogous to absorption spectrophotometry.
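Under a Beer-Lambert-like attenuation model, the turbidimeter reading can be converted to an attenuation coefficient, as in this Python sketch; the model and the values are illustrative simplifications.

```python
# Turbidimetry sketch: like absorption spectrophotometry, the measured
# quantity is the attenuated transmitted intensity. Under a
# Beer-Lambert-like model, turbidity tau is the attenuation coefficient
# over the path length L.

import math

def turbidity(i_transmitted: float, i_incident: float,
              path_length_cm: float) -> float:
    """Attenuation coefficient (1/cm) from transmitted intensity."""
    return -math.log(i_transmitted / i_incident) / path_length_cm

print(round(turbidity(80.0, 100.0, 1.0), 4))  # ~0.2231 per cm
```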
A standard for turbidimetry is prepared by dissolving 5 g of hydrazinium(2+) sulfate (N2H4·H2SO4) and 50 g of hexamethylenetetramine in 1 litre of distilled water; the resulting suspension is defined as 4000 nephelometric turbidity units (NTU).
Application
Turbidity determination of water
Clarity of pharma products and drinks
Immunoassay in lab
Turbidimetry offers little advantage over nephelometry in terms of sensitivity for low-level antigen-antibody immunoassays.
Antigen excess and matrix effects are limitations encountered.
Immunoturbidimetry
Immunoturbidimetry is an important tool in the broad diagnostic field of clinical chemistry. It is used to determine serum proteins not detectable with classical clinical chemistry methods. Immunoturbidimetry uses the classical antigen-antibody reaction. The antigen-antibody complexes aggregate to form particles that can be optically detected by a photometer.
See also
Colorimetry
Turbidity
References
Measurement
Chemical tests
Immunologic tests
Scattering | Turbidimetry | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Biology"
] | 402 | [
"Physical quantities",
"Quantity",
"Scattering stubs",
"Immunologic tests",
"Measurement",
"Size",
"Chemical tests",
"Scattering",
"Condensed matter physics",
"Particle physics",
"Nuclear physics",
"Physical chemistry stubs"
] |
35,006,837 | https://en.wikipedia.org/wiki/Setralit | Setralit is a technical natural fiber based on plant fibers whose property profile has been modified selectively in order to meet different industrial requirements. It was first manufactured in 1989 by Jean-Léon Spehner, an Alsatian engineer, and further developed by the German company ECCO Gleittechnik GmbH. The name “Setralit“ is derived from the French company Setral S.à.r.l. which is a subsidiary company of ECCO, where Spehner was employed at that time. Setralit was officially described first in 1990.
History
In the late eighties and early nineties asbestos in friction pads was banned, at first in Germany and subsequently in the European Union (EU). Consequently, the friction lining industry was looking for a substitute that was suitable as a reinforcing as well as a processing fiber. At the same time the EU established and subsidized a mandatory set-aside to restrict grain production. Only plants for industrial use could be grown on the set-aside land without affecting subsidies. Both the EU and the Federal Republic of Germany supplied money to boost the development of new materials and new manufacturing processes for such "renewable resources", first of all for bast fiber plants like flax and, since 1996, hemp with low THC content.
Against this background, ECCO took part in a joint project on the utilization of flax fibers in brake and clutch linings, funded by the German Federal Department for Research and Technology (BMFT). During this project several Setralit fiber types were used for the first time. They had been generated by chemical, thermal and/or mechanical treatment of flax tow, a by-product of the textile industry. The German popular press praised this approach as a "sensational invention".
However, the varying properties of the base material of first-generation Setralit turned out to be a serious disadvantage, because these variations affected the performance characteristics of the final product in an unforeseeable way. The differences are mainly caused by growth and harvest conditions and as such are influenced by the climate as well as by short-term weather fluctuations in the growing area. These effects are particularly critical during dew retting.
To avoid this problem, ECCO developed an ultrasonic decomposition process (named "ultrasonic break-down") at the end of the 1990s. Thanks to this controllable, physico-chemical extraction, most of the material associated with the plant fibers (lignin, pectin, waxes, natural adhesives, fragrances and dyestuffs, as well as dust, bacteria, and fungal spores) is removed or destroyed. These second-generation Setralit fibers show a much narrower range of property variations than those of the first generation, which makes them more attractive for industrial use.
Following this, ECCO developed a series of Setralit types for various industrial end applications in cooperation with several industry partners in the construction, plastics and paper industries. In 2005, a fibrillated Setralit fiber achieved an industrial breakthrough. This type is mainly used as a substitute for aramid pulp (Kevlar, Twaron et al.), for example in friction pads.
Omnipresent political discussions about sustainability, protection of natural resources, and reduction of greenhouse gases have pushed the Setralit fiber into the focus of new industrial users.
According to the Nova Institute, Hürth, Germany, in the future there is no alternative to the increased substantial use of agricultural raw materials. Bio-based substances such as biodegradable and durable bio plastics, natural fiber reinforced (bio) plastics (bio composites) and wood-plastic composites (WPC) form an interesting new class of materials thereby.
Production (Setralit process)
Setralit production is two-staged. During the first step the raw material is subjected to an aqueous ultrasonic procedure followed by washing and drying. During the second step, conditioning, the cleaned Setralit fiber is specifically treated depending on its end use. In general this step is merely mechanical (cutting, grinding, fibrillating, etc.), but it can also be combined with a thermal or chemical treatment. The term Setralit process means the combination of two (or more) of these manufacturing steps in a row. Decomposition of the plant fiber bundles leaves the basic physical properties of the elementary fibers unaffected; therefore Setralit is still classified as a natural fiber.
The chemical fiber extraction by ultrasound is controllable, as are the following conditioning processes. Thus the properties can be fitted to the requirements of the end products – within the range that nature of fiber allows.
In principle, any plant fibers can be considered as raw material. However, bast fibers of annuals (flax, hemp, jute, kenaf et al.) are preferred. Also suitable are stem fibers of perennial plants (nettle, ramie), leaf fibers (sisal, abaca, cabuja, curaua), plus seed and fruit fibers (cotton, kapok, coir). In contrast, the application of the Setralit technique to herbage (bamboo, miscanthus, bagasse, cereal, rice and corn straw) and to wood has only been explored rudimentarily.
The Setralit techniques are applied either on crushed dry straw of bast fiber plants (mechanical fiber extraction) or on decorticated fibers (long fibers, flax tow). In the first case the fiber has to be separated mechanically, after the ultrasonic break-down, from its non-fibrous components (shives). As a benefit one gets clean shives as a byproduct which can serve as a raw material for high quality fiber powder. With the help of modified Setralit techniques the shives may also be upgraded separately.
Characteristics
Concerning their technical characteristics, Setralit fibers differ remarkably from the raw fibers from which they were extracted. The distinctive attribute of Setralit compared to a conventionally obtained fiber is the reproducibility of its technical properties, which are produced by standardized treatment processes. Whereas conventional natural fibers largely reflect the quality variations of the raw material, in Setralit these variations are evened out by the ultrasonic extraction.
Other differing characteristics are:
High cleanliness
Brighter color
Higher temperature resistance
Customized and constant quality
Rapid, high, and uniform water absorption.
The different, mostly mechanical conditioning actions of step two lead to a range of well defined Setralit-types that differ in appearance as well as in technical characteristics. These properties are suited for the possible application of a certain Setralit-type. They are stated in the technical datasheet. The specification describes the permitted variation of such parameters.
Specification of a fibrillated natural fiber SETRALIT® NFU/31-2
Natural fiber-pulp, cleaned, fibrillated
(*1) 10,000 cm2/g BD (Blaine-Dyckerhoff) is equivalent to ~6 m2/g BET (Brunauer, Emmett, Teller)
(*2) In this context “fibril” means a part of a fiber whose diameter is smaller than one third of the original fiber (mother or stem fiber). This line is drawn arbitrarily.
The mechanical strength values of Setralit-fibers reflect those of the primary material (e.g. flax, hemp, ramie fiber). The ultrasonic procedure does not lead to a damage of the fiber. Any losses in strength properties are due only to the mechanical wear during the second processing step.
Comparison fibrillated nature fiber – aramid pulp
(Application: friction linings) The characteristics of a fibrillated Setralit-fiber compared to a synthetic high-performance fiber (aramid pulp):
Reference:
[1] Own measurements of ECCO Gleittechnik GmbH.
[2] Sotton 1998: Flax a natural fibre with outstanding properties - Techtextil, Frankfurt. f
[3] Akzo: Twaron® - The power of aramid - Informational brochure.
[4] Final Report to Project: Ermittlung werkstoffkundlicher Merkmale von Flachsfasern “[Determination of flax fiber properties] - Institut für Kunststoffverarbeitung Aachen. f
Remarks
a Measured on fiber bundles
b Measured on elementary fibers
c Determined by thermal gravimetric analysis: here defined as temperature at 5% weight loss of dry fiber (heating-up rate: 5 °C/min).
d Here defined as mass of fibrils divided by total pulp mass.
e Determined by another method (Blaine-Dyckerhoff: flow resistance of N2) than used by Akzo (Brunauer, Emmett, Teller: absorption of N2); equivalent to ~ 6 m2/g BET.
f The mechanical properties of elementary flax and hemp fibers are comparable.
Applications
Setralit® is a raw material for industrial manufacturing and allows multifunctional utilization. It can substitute for expensive synthetic fibers (e.g. aramid) in demanding technical applications.
It can either be processed to form a fiber reinforced composite (semi-finished product, compound) or be used directly in final products (fiber-reinforced building materials, gaskets, fiber mats etc.).
Example of a possible future use: a glove compartment made of hemp-fiber-reinforced plastic with a polypropylene (PP) matrix, produced by NF injection molding.
Areas of application for Setralit® fibers
Friction linings: today these represent the main application area of Setralit. Its use is important in mechanical engineering because, in contrast to the materials used previously, the abrasion is not harmful to the health of operating personnel.
Brake linings for vehicles, which consist of many very different components including fillers and temperature-resistant resins, may contain Setralit fibers. The most important markets for brake linings are Europe, Japan and the United States, closely followed by the emerging Asian economies of India and China. Although the use of asbestos in friction linings has been prohibited in the EU since 1989, elevated asbestos levels are still detected in areas with heavy braking, such as junctions, motorway exits, landing strips, and railroad stations.
Building sector: plaster, dry mortar, fiber cement, concrete, aerated concrete, hard plaster, floor pavement, lime-sand brick, gypsum cardboard, insulating materials, insulating boards, dispersion paints.
Plastics: semi-finished composites, prepregs, SMC, BMC, injection molding, formed parts, fiber reinforced polymers, especially biopolymers.
Textiles: clothes, home textiles, industrial textiles, geotextiles, filters, spunlace mats, medical and sanitary articles.
Chemical industry: friction linings, sealants, filtering agents, filler materials, thixotropic agents, bitumen, rubber, polishing agents, putties, adhesives.
Paper: technical papers, cardboard boxes, specialty papers.
Other applications (especially for shives and other by-products): animal bedding, bulk solids, animal food (pectin et al.), biogas, energy generation.
The fiber length of the technical Setralit® fiber is linked to general application fields as follows:
Long fiber, > 100 mm: textile applications
Yarn, cloth, fiber mats (e.g. heat insulation mats)
Short fiber, 0.5 – 10 mm: reinforcement fiber, textile short fiber
Material reinforcement (e.g. plastic injection molding, aerated concrete ), spunlace nonwoven
Process fiber < 1 mm
Improvement of manufacturing processes (e.g. friction pads)
Brand name Setralit
“SETRALIT®“ is a worldwide protected brand name of ECCO Gleittechnik GmbH and a registered trademark.
Original Setralit fibers produced by ECCO also enter the market under other labels.
The company
Karl-Heinz Hensel launched the enterprise ECCO Gleittechnik GmbH in 1982, and its German subsidiary Setral in 1984. The French company Sétral S.A. (today S.à.r.l.) has belonged to the group since 1985. Whereas ECCO predominantly works in the fields of renewable fibers and alternative solid lubricants, mainly in research and development, its affiliates Setral and Sétral S.à.r.l. develop, produce and sell high-performance special lubricants and maintenance products all over the world.
See also
Fiber crop
Injection molding
Retting
References
External links
ECCO-Setralit site concerning fiber activities
Use of flax in friction linings
Information referring to Setralit in „lesfibresvegetales.info“
”New Process and Reinforcing Fibers for Friction Materials Based on Renewable Raw Materials” – Presentation by Volker von Drach 2001 in “papers.sae.org “, SAE international technical papers
Fibers
Composite materials | Setralit | [
"Physics"
] | 2,691 | [
"Materials",
"Composite materials",
"Matter"
] |
35,007,107 | https://en.wikipedia.org/wiki/C7H11N3O2 | {{DISPLAYTITLE:C7H11N3O2}}
The molecular formula C7H11N3O2 (molar mass: 169.18 g/mol, exact mass: 169.0851 u) may refer to:
Histidine methyl ester (HME)
Ipronidazole
3-Methylhistidine (3-MH)
Molecular formulas | C7H11N3O2 | [
"Physics",
"Chemistry"
] | 84 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
40,246,634 | https://en.wikipedia.org/wiki/The%20Great%20Pheromone%20Myth | The Great Pheromone Myth is a book on pheromones and their application to chemosensation in mammals by Richard L. Doty, director of the University of Pennsylvania's Smell and Taste Center in Philadelphia. Doty argues that the concept of pheromone introduced by Karlson and Lüscher is too simple for mammalian chemonsensory systems, failing to take into account learning and the context-dependence of chemosensation. In this book, he is especially critical of human pheromones, arguing that not only are there no definitive studies finding human pheromones, but that humans lack a functional vomeronasal organ to detect pheromones. Its publication received coverage in the news media, especially concerning its arguments that human pheromones do not exist.
The pheromone concept
Karlson and Lüscher defined pheromones "..as substances which are secreted to the outside by an individual and received by a second individual of the same species in which they release a specific reaction, for example, a definite behaviour or a developmental process" (p. 55). To distinguish pheromones from other substances that can stimulate behaviors such as the scents of flowers or food odors, they emphasize that pheromones are "special messenger substances" such as sexual attractants in butterflies and moths. In their definition, pheromones are analogous to hormones such as testosterone or oxytocin, which are specific compounds. However, while hormones serve as chemical messengers within an individual organism, pheromones carry messages between individuals.
While the pheromone concept applies reasonably well to insects, Doty argues that there are serious problems with its application to mammals. The functions of pheromones, as specific types of compounds, are to produce unlearned, reflexive, and innate responses in recipients. Doty, however, argues that the chemical stimuli that mammals respond to are typically combinations of many compounds, that are sensed in complex social situations, and experience and learning are important for how mammals respond to chemical stimuli. For example, the Vandenbergh effect in mice occurs when puberty-accelerating pheromones are released in the urine of male mice. Doty shows that the effect depends on experience, that multiple compounds found in male mouse urine are involved in the effect, and the effect is not species specific since urine from male rats will also cause puberty acceleration in young female mice.
Human pheromones
He argues that there is little or no support for human pheromones in the scientific literature. The vomeronasal organ, the sensory organ that takes in pheromone compounds in mammals such as mice and rats, is a vestigial organ in humans. Menstrual synchrony has long been viewed as a physiological phenomenon in humans that could only be explained by the exchange of pheromones among women. He argues, however, that methodological critiques of menstrual cycle research and recent research indicate that menstrual synchrony does not occur among women. Finally, he reviews the literature on human pheromones and argues that there are serious methodological issues in all studies reporting human pheromones and that no human pheromone has ever been definitively identified.
His conclusion is that human pheromones are a myth that is driven in part by economics. What he calls the "junk-science industry of pheromone-perfumes, pheromone-soaps, and pheromone cosmetics" arose from misunderstood research with mammals. For example, androstenedione is a steroid hormone that is found in human sweat and is the main ingredient in commercially sold human pheromone products, but scientific research provides little evidence that it functions as a pheromone. Doty cites a study in which women sniffed sweaty T-shirts of men. The women preferred T-shirts worn by men whose immune system genes were most different from them, indicating that it was the mixture of genes a man had that was the important factor in which sweaty T-shirt a woman preferred and not the androstenedione secreted in his sweat. Androstenedione is produced by pigs in abundance. Doty quips: "Are women, in fact, attracted to the odors of male pigs or more willing to have sex in the presence of such odors? Are birth rates or other indices of sexual behavior higher in states or counties with pig farms?"
Critical response
The book has generally been received well by the scientific community. According to Doty, those critical of the book range from people who refuse to read it to those who have semantic issues with the pheromone concept and its applicability to mammals. Peter Brennan argues that Doty does not consider some of the more recent scientific research that conflicts with his views. He cites a 2010 study in mice that reports the discovery of a urinary protein that attracts female mice. Brennan concludes: "I suspect that the majority of researchers will continue to use the term [pheromone], despite all of its shortcomings. But after reading this book, I will certainly be more circumspect when referring to pheromones in future."
References
Pheromones
Endocrinology | The Great Pheromone Myth | [
"Chemistry"
] | 1,099 | [
"Pheromones",
"Chemical ecology"
] |
40,251,156 | https://en.wikipedia.org/wiki/Long%20Harbour%20Nickel%20Processing%20Plant | The Long Harbour Nickel Processing Plant is a Canadian nickel concentrate processing facility located in Long Harbour, Newfoundland and Labrador.
Operated by Vale Limited, construction on the plant started in April 2009 and operations began in 2014. Construction costs were in excess of CAD $4.25 billion. Construction involved over 3,200 workers generating approximately 3,000 person-years of employment. Operation of the plant will require approximately 475 workers.
Production began in July 2014, as reported that November. Vale's nickel processing plant in Long Harbour received its first major shipment from its Labrador mine in Voisey's Bay in May 2015. As of that date, a small proportion of the plant's raw materials came from Voisey's Bay but the majority were imported from Indonesia. A spokesman for Vale said 100 per cent of the Long Harbour facility's production materials will come from Voisey's Bay by early 2016.
Using the metal-processing technology of hydrometallurgy, the plant is designed to produce finished nickel product, together with associated cobalt and copper products. The hydrometallurgy technology was piloted at a demonstration plant that opened in Argentia, Newfoundland and Labrador in 2004. This demonstration plant operated until 2008 and was instrumental in helping Vale decide to use the hydrometallurgical process for the Long Harbour processing plant.
A processing plant on Newfoundland was one of the requirements established by the Government of Newfoundland and Labrador in order to exploit the nickel deposit at the Voisey's Bay Mine in Labrador. The bulk carrier MV Umiak I was one of several ice-strengthened bulk carriers built to transport nickel concentrate from Voisey's Bay to the Long Harbour Nickel Processing Plant.
The Long Harbour Nickel Processing Plant was built on a partially brownfield site near the port of Long Harbour. The facility consists of a wharf for offloading nickel ore concentrate from bulk carriers, crushing and grinding facilities, a main processing plant approximately south of the port, a pipeline to supply process water, an effluent discharge pipe and diffuser, and a residue pipeline to a nearby disposal area. The hydrometallurgical process in the plant will pressure-leach the nickel ore concentrate in acidic solutions to separate iron, sulfur and other impurities from nickel, copper and cobalt.
On June 11, 2018, Premier Dwight Ball announced Vale is moving forward with its underground mine at Voisey's Bay. Ball stated that the move will extend the mine's operating life by at least 15 years. First ore is expected by April 2021 with processing to take place in Long Harbour.
References
External links
Vale Ltd: Voiseys Bay Development - Processing
Buildings and structures in Newfoundland and Labrador
Metallurgical facilities
Vale S.A. | Long Harbour Nickel Processing Plant | [
"Chemistry",
"Materials_science"
] | 554 | [
"Metallurgy",
"Metallurgical facilities"
] |
40,254,095 | https://en.wikipedia.org/wiki/Chrysophanol | Chrysophanol, also known as chrysophanic acid, is a fungal isolate and a natural anthraquinone. It is a C-3 methyl substituted chrysazin of the trihydroxyanthraquinone family.
Chrysophanol (other names: 1,8-dihydroxy-3-methyl-anthraquinone and chrysophanic acid) is a naturally occurring anthraquinone commonly found in traditional Chinese medicine. Studies of the benefits of chrysophanol have found that it may aid in preventing cancer, diabetes, asthma, osteoporosis, retinal degeneration, Alzheimer's disease, osteoarthritis, and atherosclerosis.
Its most common effects are of chemotherapeutic and neuroprotective properties.
History
Chrysophanol was first noted in Rheum rhabarbarum, a plant belonging to the Polygonaceae family. It has since been found in various families such as Liliaceae, Meliaceae, Asphodelaceae and Fabaceae, among others. As of 2019, it had been observed in 65 species from 14 genera, not just in plants but in animals and microbes as well.
Uses
Chrysophanol has been shown to exhibit a variety of effects. In 2015 it was shown to lower cholesterol and triglyceride levels in zebrafish, as well as to increase the frequency of peristalsis; it could therefore be used for lipid metabolic disorders in a clinical setting. Chrysophanol was likewise shown to lower lipids in rats in 2013.
It also has the potential to stimulate osteoblast differentiation, as well as to alleviate diabetic nephropathy. Furthermore, it can protect bronchial cells from apoptosis induced by cigarette smoke extract. Chrysophanol can also improve the condition of renal interstitial fibrosis.
Chrysophanol has also been used to inhibit T-Cell activation and protect mice from dextran sulphate sodium induced inflammatory bowel disease. It was shown to have attenuated the pro-inflammatory cytokines that were present in the colon tissue due to sulphate sodium induced inflammatory bowel disease.
Mechanism of action
Chrysophanol can alleviate diabetic nephropathy by inactivating TGF-β/EMT signalling. It also has the potential to protect bronchial cells from cigarette smoke extract by repressing CYP1A expression, which is usually induced by excessive reactive oxygen species. Chrysophanol can increase osteoblast differentiation by inducing AMP-activated protein kinase as well as Smad1/5/9. It acts to improve renal interstitial fibrosis by downregulating TGF-β1 and phospho-Smad3 and by upregulating Smad7.
Chrysophanol can also aid in treatment for inflammatory bowel disease by inhibiting inflammation by targeting pro-inflammatory cytokines that are in tumour necrosis factor α. It has also been shown that it inhibits the mitogen-activated protein kinase pathway.
Chrysophanol blocks the proliferation of colon cancer cells in vitro. It induces the necrosis of cells via a reduction in ATP levels. Chrysophanol attenuates the effects of lead exposure in mice by reducing hippocampal neuronal cytoplasmic edema, enhancing mitochondrial crista fusion, significantly increasing memory and learning abilities, reducing lead content in blood, heart, brain, spleen, kidney and liver, promoting superoxide dismutase and glutathione peroxidase activities and reducing malondialdehyde level in the brain, kidney and liver.
Potential therapeutic uses
Chrysophanol can act as an antineoplastic drug, as has been shown in multiple organisms. It has been reported that chrysophanol causes necrosis-like cell death in renal cancer cells. It has also shown the potential to be classified as an ATC code A10 (antidiabetic) drug, due to its effect on diabetic nephropathy as well as its ability to lower lipid absorption.
Production
Chrysophanol is naturally made by a variety of plant species. Most human intake comes from consumption of rhubarb.
Drug interactions
Chrysophanol can be co-administered with atorvastatin to lower cholesterol levels. This is due to their different mechanisms: chrysophanol is thought to bind in the stomach to disturb lipid absorption, while atorvastatin decreases cholesterol production in the liver.
Toxicity
Anthraquinones, chrysophanol derivatives among them, have been shown to be hepatotoxic; they can cause apoptosis in normal human liver cells. Chrysophanol derivatives such as chrysophanol-8-O-glucoside have also been shown to possess anti-coagulant and anti-platelet properties. The derivatives can additionally cause abnormal oxidative phosphorylation, which can result in decreased mitochondrial membrane potential as well as an increased abundance of reactive oxygen species, ultimately leading to mitochondrial damage and apoptosis.
There is also evidence that chrysophanol could cause damage to DNA, as has been demonstrated in two strains of Salmonella (TA 2637 and TA 1537). It is also important to note that in treating liver cancer cells, it acts by inducing necrosis-like cell death. Necrosis damages the cellular environment, meaning that while the compound may treat potential issues, it can also damage the surrounding tissue.
References
Further reading
Dihydroxyanthraquinones
Plant toxins | Chrysophanol | [
"Chemistry"
] | 1,228 | [
"Chemical ecology",
"Plant toxins"
] |