Dataset fields: id (int64, 580 to 79M); url (string, 31 to 175 characters); text (string, 9 to 245k characters); source (string, 1 to 109 characters); categories (string, 160 classes); token_count (int64, 3 to 51.8k).
23,431,822
https://en.wikipedia.org/wiki/C25H54ClN
The molecular formula C25H54ClN (molar mass: 404.16 g/mol) may refer to: Aliquat 336, or Behentrimonium chloride, also known as docosyltrimethylammonium chloride or BTAC-228. Molecular formulas
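The quoted molar mass can be checked with a short sum of standard atomic weights. A minimal sketch (the atomic-weight values are conventional IUPAC figures, not taken from the article):

```python
# Rough molar-mass check for C25H54ClN using standard atomic weights (g/mol).
atomic_weight = {"C": 12.011, "H": 1.008, "Cl": 35.45, "N": 14.007}
formula = {"C": 25, "H": 54, "Cl": 1, "N": 1}

molar_mass = sum(atomic_weight[el] * n for el, n in formula.items())
print(round(molar_mass, 2))  # ~404.16, matching the value quoted above
```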
C25H54ClN
Physics,Chemistry
71
26,816,105
https://en.wikipedia.org/wiki/RMF%20RNA%20motif
The rmf RNA motif is a conserved RNA structure that was originally detected using bioinformatics. rmf RNAs are consistently found within species classified into the genus Pseudomonas, and are potentially located in the 5′ untranslated regions (5′ UTRs) of rmf genes. These genes encode the ribosome modulation factor protein, which affects the translation of genes by modifying ribosome structure in response to stress such as starvation. This ribosome modulation is a part of the stringent response in bacteria. The biological role of rmf RNAs remains ambiguous. Since the RNA could be in the 5′ UTRs of protein-coding genes, it was hypothesized that it functions as a cis-regulatory element. This hypothesis is bolstered by the observation that ribosome modulation factor binds ribosomal RNA, and many cis-regulatory RNAs called ribosomal protein leaders participate in a feedback regulation mechanism by binding to proteins that normally bind to ribosomal RNA. However, since rmf RNAs are not very close to the rmf genes, they might instead function as non-coding RNAs. References External links Cis-regulatory RNA elements Non-coding RNA
RMF RNA motif
Chemistry
247
60,567,436
https://en.wikipedia.org/wiki/Perfluorotriethylcarbinol
Perfluorotriethylcarbinol is a perfluorinated alcohol, the fully fluorinated analogue of triethylcarbinol (3-ethyl-3-pentanol). It is a powerful uncoupling agent and is toxic by inhalation. See also Perfluorinated compound Uncoupling agent References Uncouplers Perfluorinated alcohols Tertiary alcohols
Perfluorotriethylcarbinol
Chemistry
78
31,796,406
https://en.wikipedia.org/wiki/Chebyshev%27s%20bias
In number theory, Chebyshev's bias is the phenomenon that most of the time, there are more primes of the form 4k + 3 than of the form 4k + 1, up to the same limit. This phenomenon was first observed by the Russian mathematician Pafnuty Chebyshev in 1853.
Description
Let π(x; n, m) denote the number of primes of the form nk + m up to x. By the prime number theorem (extended to arithmetic progressions), π(x; 4, 1) ∼ π(x; 4, 3) ∼ x/(2 ln x). That is, half of the primes are of the form 4k + 1, and half of the form 4k + 3. A reasonable guess would be that π(x; 4, 1) > π(x; 4, 3) and π(x; 4, 1) < π(x; 4, 3) each also occur 50% of the time. This, however, is not supported by numerical evidence; in fact, π(x; 4, 3) > π(x; 4, 1) occurs much more frequently. For example, this inequality holds for all primes x < 26833 except 5, 17, 41 and 461, for which π(x; 4, 1) = π(x; 4, 3). The first x such that π(x; 4, 1) > π(x; 4, 3) is 26861; that is, π(x; 4, 3) ≥ π(x; 4, 1) for all x < 26861. In general, if 0 < a, b < n are integers, gcd(a, n) = gcd(b, n) = 1, a is a quadratic residue mod n, and b is a quadratic nonresidue mod n, then π(x; n, b) > π(x; n, a) occurs more often than not. This has been proved only by assuming strong forms of the Riemann hypothesis. The stronger conjecture of Knapowski and Turán, that the density of the numbers x for which π(x; 4, 3) > π(x; 4, 1) holds is 1 (that is, that it holds for almost all x), turned out to be false. Such x do, however, have a logarithmic density, which is approximately 0.9959....
Generalizations
The race above is the case k = −4 of the following problem: for a given nonzero integer k, find the smallest prime p such that Σq≤p (k/q) > 0, where the sum runs over primes q and (k/q) is the Kronecker symbol. By the prime number theorem, for every nonzero integer k, there are infinitely many primes p satisfying this condition. For positive integers k = 1, 2, 3, ..., the smallest primes p are 2, 11100143, 61981, 3, 2082927221, 5, 2, 11100143, 2, 3, 577, 61463, 2083, 11, 2, 3, 2, 11100121, 5, 2082927199, 1217, 3, 2, 5, 2, 17, 61981, 3, 719, 7, 2, 11100143, 2, 3, 23, 5, 11, 31, 2, 3, 2, 13, 17, 7, 2082927199, 3, 2, 61463, 2, 11100121, 7, 3, 17, 5, 2, 11, 2, 3, 31, 7, 5, 41, 2, 3, ... ( is a subsequence, for k = 1, 5, 8, 12, 13, 17, 21, 24, 28, 29, 33, 37, 40, 41, 44, 53, 56, 57, 60, 61, ... ) For negative integers k = −1, −2, −3, ..., the smallest primes p are 2, 3, 608981813029, 26861, 7, 5, 2, 3, 2, 11, 5, 608981813017, 19, 3, 2, 26861, 2, 643, 11, 3, 11, 31, 2, 5, 2, 3, 608981813029, 48731, 5, 13, 2, 3, 2, 7, 11, 5, 199, 3, 2, 11, 2, 29, 53, 3, 109, 41, 2, 608981813017, 2, 3, 13, 17, 23, 5, 2, 3, 2, 1019, 5, 263, 11, 3, 2, 26861, ... ( is a subsequence, for k = −3, −4, −7, −8, −11, −15, −19, −20, −23, −24, −31, −35, −39, −40, −43, −47, −51, −52, −55, −56, −59, ... ) For every (positive or negative) nonsquare integer k, there are more primes p with (k/p) = −1 than with (k/p) = +1 (up to the same limit) more often than not.
Extension to higher power residue
Let m and n be integers such that m ≥ 0, n > 0, gcd(m, n) = 1, and define a function f(m, n), where φ is Euler's totient function. For example, f(1, 5) = f(4, 5) = 1/2, f(2, 5) = f(3, 5) = 0, f(1, 6) = 1/2, f(5, 6) = 0, f(1, 7) = 5/6, f(2, 7) = f(4, 7) = 1/2, f(3, 7) = f(5, 7) = 0, f(6, 7) = 1/3, f(1, 8) = 1/2, f(3, 8) = f(5, 8) = f(7, 8) = 0, f(1, 9) = 5/6, f(2, 9) = f(5, 9) = 0, f(4, 9) = f(7, 9) = 1/2, f(8, 9) = 1/3. It is conjectured that if 0 < a, b < n are integers, gcd(a, n) = gcd(b, n) = 1, and f(a, n) > f(b, n), then π(x; n, b) > π(x; n, a) occurs more often than not.
References
P. L. Chebyshev: Lettre de M. le Professeur Tchébychev à M. Fuss sur un nouveaux théorème relatif aux nombres premiers contenus dans les formes 4n + 1 et 4n + 3, Bull. Classe Phys. Acad. Imp. Sci. St. Petersburg, 11 (1853), 208. J. Kaczorowski: On the distribution of primes (mod 4), Analysis, 15 (1995), 159–171. S. Knapowski, P. Turán: Comparative prime number theory, I, Acta Math. Acad. Sci. Hung., 13 (1962), 299–314.
External links
(where prime race 4n+1 versus 4n+3 changes leader) (where prime race 3n+1 versus 3n+2 changes leader) Theorems in analytic number theory Prime numbers
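The mod 4 race described above is straightforward to reproduce numerically. The following minimal sketch (plain Python with a naive trial-division primality test; not part of the article) counts primes in each residue class and reports the first x at which π(x; 4, 1) overtakes π(x; 4, 3):

```python
def is_prime(n):
    # Simple trial division; adequate for a small range like this one.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

count_1mod4 = 0  # pi(x; 4, 1)
count_3mod4 = 0  # pi(x; 4, 3)
for x in range(2, 30000):
    if is_prime(x):
        if x % 4 == 1:
            count_1mod4 += 1
        elif x % 4 == 3:
            count_3mod4 += 1
    if count_1mod4 > count_3mod4:
        print("First lead change at x =", x)  # expected: 26861
        break
```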
Chebyshev's bias
Mathematics
1,623
1,631,931
https://en.wikipedia.org/wiki/Seifert%20fiber%20space
A Seifert fiber space is a 3-manifold together with a decomposition as a disjoint union of circles. In other words, it is an S1-bundle (circle bundle) over a 2-dimensional orbifold. Many 3-manifolds are Seifert fiber spaces, and they account for all compact oriented manifolds in 6 of the 8 Thurston geometries of the geometrization conjecture.
Definition
A Seifert manifold is a closed 3-manifold together with a decomposition into a disjoint union of circles (called fibers) such that each fiber has a tubular neighborhood that forms a standard fibered torus. A standard fibered torus corresponding to a pair of coprime integers (a, b) with a > 0 is the surface bundle of the automorphism of a disk given by rotation by an angle of 2πb/a (with the natural fibering by circles). If a = 1 the middle fiber is called ordinary, while if a > 1 the middle fiber is called exceptional. A compact Seifert fiber space has only a finite number of exceptional fibers. The set of fibers forms a 2-dimensional orbifold, denoted by B and called the base (also called the orbit surface) of the fibration. It has an underlying 2-dimensional surface B0, but may have some special orbifold points corresponding to the exceptional fibers. The definition of Seifert fibration can be generalized in several ways. The Seifert manifold is often allowed to have a boundary (also fibered by circles, so it is a union of tori). When studying non-orientable manifolds, it is sometimes useful to allow fibers to have neighborhoods that look like the surface bundle of a reflection (rather than a rotation) of a disk, so that some fibers have neighborhoods looking like fibered Klein bottles, in which case there may be one-parameter families of exceptional curves. In both of these cases, the base B of the fibration usually has a non-empty boundary.
Classification
Herbert Seifert classified all closed Seifert fibrations in terms of the following invariants. Seifert manifolds are denoted by symbols {b; (ε, g); (a1, b1), ..., (ar, br)}, where ε is one of the 6 symbols o1, o2, n1, n2, n3, n4 (or Oo, No, NnI, On, NnII, NnIII in Seifert's original notation), meaning: o1 if B is orientable and M is orientable; o2 if B is orientable and M is not orientable; n1 if B is not orientable and M is not orientable and all generators of π1(B) preserve orientation of the fiber; n2 if B is not orientable and M is orientable, so all generators of π1(B) reverse orientation of the fiber; n3 if B is not orientable and M is not orientable and exactly one generator of π1(B) preserves orientation of the fiber; n4 if B is not orientable and M is not orientable and exactly two generators of π1(B) preserve orientation of the fiber. Here g is the genus of the underlying 2-manifold of the orbit surface. b is an integer, normalized to be 0 or 1 if M is not orientable and normalized to be 0 if in addition some ai is 2. (a1, b1), ..., (ar, br) are the pairs of numbers determining the type of each of the r exceptional orbits. They are normalized so that 0 < bi < ai when M is orientable, and 0 < bi ≤ ai/2 when M is not orientable. The Seifert fibration with this symbol can be constructed from that of the symbol {0; (ε, g);} by using surgery to add fibers of types b and bi/ai. If we drop the normalization conditions then the symbol can be changed as follows: Changing the sign of both ai and bi has no effect. Adding 1 to b and subtracting ai from bi has no effect. (In other words, we can add integers to each of the rational numbers bi/ai provided that their sum remains constant.) If the manifold is not orientable, changing the sign of any bi has no effect. Adding a fiber of type (1,0) has no effect. Every symbol is equivalent under these operations to a unique normalized symbol.
When working with unnormalized symbols, the integer b can be set to zero by adding a fiber of type (1, b). Two closed Seifert oriented or non-orientable fibrations are isomorphic as oriented or non-orientable fibrations if and only if they have the same normalized symbol. However, it is sometimes possible for two Seifert manifolds to be homeomorphic even if they have different normalized symbols, because a few manifolds (such as lens spaces) can have more than one sort of Seifert fibration. Also, an oriented fibration under a change of orientation becomes the Seifert fibration whose symbol has the signs of all the bs changed, which after normalization gives it the symbol {−b−r; (ε, g); (a1, a1−b1), ..., (ar, ar−br)}, and it is homeomorphic to this as an unoriented manifold. The sum b + Σbi/ai is an invariant of oriented fibrations, which is zero if and only if the fibration becomes trivial after taking a finite cover of B. The orbifold Euler characteristic of the orbifold B is given by χ(B) = χ(B0) − Σ(1 − 1/ai), where χ(B0) is the usual Euler characteristic of the underlying topological surface B0 of the orbifold B. The behavior of M depends largely on the sign of the orbifold Euler characteristic of B.
Fundamental group
The fundamental group of M fits into the exact sequence π1(S1) → π1(M) → π1(B) → 1, where π1(B) is the orbifold fundamental group of B (which is not the same as the fundamental group of the underlying topological manifold). The image of the group π1(S1) is cyclic, normal, and generated by the element h represented by any regular fiber, but the map from π1(S1) to π1(M) is not always injective. The fundamental group of M has a presentation by generators and relations. For B orientable, a sign ε appears in the relations, equal to 1 for type o1 and to −1 for type o2. For B non-orientable, signs εi appear, each 1 or −1 depending on whether the corresponding generator vi preserves or reverses orientation of the fiber. (So the εi are all 1 for type n1, all −1 for type n2, just the first one is 1 for type n3, and just the first two are 1 for type n4.)
Positive orbifold Euler characteristic
The normalized symbols of Seifert fibrations with positive orbifold Euler characteristic are given in the list below. These Seifert manifolds often have many different Seifert fibrations. They have a spherical Thurston geometry if the fundamental group is finite, and an S2×R Thurston geometry if the fundamental group is infinite. Equivalently, the geometry is S2×R if the manifold is non-orientable or if b + Σbi/ai = 0, and spherical geometry otherwise. {b; (o1, 0);} (b integral) is S2×S1 for b=0, otherwise a lens space L(b,1). In particular, {1; (o1, 0);} = L(1,1) is the 3-sphere. {b; (o1, 0);(a1, b1)} (b integral) is the lens space L(ba1+b1, a1). {b; (o1, 0);(a1, b1), (a2, b2)} (b integral) is S2×S1 if ba1a2+a1b2+a2b1 = 0, otherwise the lens space L(ba1a2+a1b2+a2b1, ma2+nb2) where ma1 − n(ba1+b1) = 1. {b; (o1, 0);(2, 1), (2, 1), (a3, b3)} (b integral) This is the prism manifold with fundamental group of order 4a3|(b+1)a3+b3| and first homology group of order 4|(b+1)a3+b3|. {b; (o1, 0);(2, 1), (3, b2), (3, b3)} (b integral) The fundamental group is a central extension of the tetrahedral group of order 12 by a cyclic group. {b; (o1, 0);(2, 1), (3, b2), (4, b3)} (b integral) The fundamental group is the product of a cyclic group of order |12b+6+4b2+3b3| and a double cover of order 48 of the octahedral group of order 24. {b; (o1, 0);(2, 1), (3, b2), (5, b3)} (b integral) The fundamental group is the product of a cyclic group of order m=|30b+15+10b2+6b3| and the order 120 perfect double cover of the icosahedral group.
The manifolds are quotients of the Poincaré homology sphere by cyclic groups of order m. In particular, {−1; (o1, 0);(2, 1), (3, 1), (5, 1)} is the Poincaré sphere. {b; (n1, 1);} (b is 0 or 1.) These are the non-orientable 3-manifolds with S2×R geometry. If b is even this is homeomorphic to the projective plane times the circle, otherwise it is homeomorphic to a surface bundle associated to an orientation reversing automorphism of the 2-sphere. {b; (n1, 1);(a1, b1)} (b is 0 or 1.) These are the non-orientable 3-manifolds with S2×R geometry. If ba1+b1 is even this is homeomorphic to the projective plane times the circle, otherwise it is homeomorphic to a surface bundle associated to an orientation reversing automorphism of the 2-sphere. {b; (n2, 1);} (b integral.) This is the prism manifold with fundamental group of order 4|b| and first homology group of order 4, except for b=0 when it is a connected sum of two copies of 3-dimensional real projective space, and |b|=1 when it is the lens space with fundamental group of order 4. {b; (n2, 1);(a1, b1)} (b integral.) This is the (unique) prism manifold with fundamental group of order 4a1|ba1 + b1| and first homology group of order 4a1.
Zero orbifold Euler characteristic
The normalized symbols of Seifert fibrations with zero orbifold Euler characteristic are given in the list below. The manifolds have Euclidean Thurston geometry if they are non-orientable or if b + Σbi/ai = 0, and nil geometry otherwise. Equivalently, the manifold has Euclidean geometry if and only if its fundamental group has an abelian group of finite index. There are 10 Euclidean manifolds, but four of them have two different Seifert fibrations. All surface bundles associated to automorphisms of the 2-torus of trace 2, 1, 0, −1, or −2 are Seifert fibrations with zero orbifold Euler characteristic (the ones for other (Anosov) automorphisms are not Seifert fiber spaces, but have sol geometry). The manifolds with nil geometry all have a unique Seifert fibration, and are characterized by their fundamental groups. The total spaces are all aspherical. {b; (o1, 0); (3, b1), (3, b2), (3, b3)}    (b integral, bi is 1 or 2) For b + Σbi/ai = 0 this is an oriented Euclidean 2-torus bundle over the circle, and is the surface bundle associated to an order 3 (trace −1) rotation of the 2-torus. {b; (o1, 0); (2,1), (4, b2), (4, b3)}    (b integral, bi is 1 or 3) For b + Σbi/ai = 0 this is an oriented Euclidean 2-torus bundle over the circle, and is the surface bundle associated to an order 4 (trace 0) rotation of the 2-torus. {b; (o1, 0); (2, 1), (3, b2), (6, b3)}    (b integral, b2 is 1 or 2, b3 is 1 or 5) For b + Σbi/ai = 0 this is an oriented Euclidean 2-torus bundle over the circle, and is the surface bundle associated to an order 6 (trace 1) rotation of the 2-torus. {b; (o1, 0); (2, 1), (2, 1), (2, 1), (2, 1)}    (b integral) These are oriented 2-torus bundles for trace −2 automorphisms of the 2-torus. For b=−2 this is an oriented Euclidean 2-torus bundle over the circle (the surface bundle associated to an order 2 rotation of the 2-torus) and is homeomorphic to {0; (n2, 2);}. {b; (o1, 1); }   (b integral) This is an oriented 2-torus bundle over the circle, given as the surface bundle associated to a trace 2 automorphism of the 2-torus. For b=0 this is Euclidean, and is the 3-torus (the surface bundle associated to the identity map of the 2-torus). {b; (o2, 1); }   (b is 0 or 1) Two non-orientable Euclidean Klein bottle bundles over the circle. The first homology is Z+Z+Z/2Z if b=0, and Z+Z if b=1.
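As a concrete check of the invariants defined in the classification section, the following short sketch (plain Python, not from the article) evaluates the orbifold Euler characteristic χ(B) = χ(B0) − Σ(1 − 1/ai) and the sum b + Σbi/ai for the Poincaré sphere symbol {−1; (o1, 0); (2, 1), (3, 1), (5, 1)} listed in the positive-characteristic list above:

```python
from fractions import Fraction

# Symbol {b; (o1, g); (a1, b1), ...} for the Poincare homology sphere.
b = -1
genus = 0                      # orientable base of genus 0, so chi(B0) = 2
pairs = [(2, 1), (3, 1), (5, 1)]

chi_base = 2 - 2 * genus
chi_orb = chi_base - sum(1 - Fraction(1, a) for a, _ in pairs)
euler_sum = b + sum(Fraction(bi, a) for a, bi in pairs)

print(chi_orb)    # 1/30  > 0, so it belongs in the positive-characteristic list
print(euler_sum)  # 1/30 != 0, consistent with spherical (not S2xR) geometry
```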
The first is the Klein bottle times S1 and the other is the surface bundle associated to a Dehn twist of the Klein bottle. They are homeomorphic to the torus bundles {b; (n1, 2);}. {0; (n1, 1); (2, 1), (2, 1)}   Homeomorphic to the non-orientable Euclidean Klein bottle bundle {1; (n3, 2);}, with first homology Z + Z/4Z. {b; (n1, 2); }   (b is 0 or 1) These are the non-orientable Euclidean surface bundles associated with orientation reversing order 2 automorphisms of a 2-torus with no fixed points. The first homology is Z+Z+Z/2Z if b=0, and Z+Z if b=1. They are homeomorphic to the Klein bottle bundles {b; (o2, 1);}. {b; (n2, 1); (2, 1), (2, 1)}   (b integral) For b=−1 this is oriented Euclidean. {b; (n2, 2); }   (b integral) For b=0 this is an oriented Euclidean manifold, homeomorphic to the 2-torus bundle {−2; (o1, 0); (2, 1), (2, 1), (2, 1), (2, 1)} over the circle associated to an order 2 rotation of the 2-torus. {b; (n3, 2); }   (b is 0 or 1) The other two non-orientable Euclidean Klein bottle bundles. The one with b = 1 is homeomorphic to {0; (n1, 1); (2, 1), (2, 1)}. The first homology is Z+Z/2Z+Z/2Z if b=0, and Z+Z/4Z if b=1. These two Klein bottle bundles are surface bundles associated to the y-homeomorphism and the product of this and the twist.
Negative orbifold Euler characteristic
This is the general case. All such Seifert fibrations are determined up to isomorphism by their fundamental group. The total spaces are aspherical (in other words, all higher homotopy groups vanish). They have Thurston geometries of type the universal cover of SL2(R), unless some finite cover splits as a product, in which case they have Thurston geometries of type H2×R. This happens if the manifold is non-orientable or if b + Σbi/ai = 0.
References
Herbert Seifert, Topologie dreidimensionaler gefaserter Räume, Acta Mathematica 60 (1933) 147–238. (There is a translation by W. Heil, published by Florida State University in 1976 and found in: Herbert Seifert, William Threlfall, Seifert and Threlfall: A Textbook of Topology, Pure and Applied Mathematics, Academic Press Inc (1980), vol. 89.) Peter Orlik, Seifert manifolds, Lecture Notes in Mathematics 291, Springer (1972). Frank Raymond, Classification of the actions of the circle on 3-manifolds, Transactions of the American Mathematical Society 31, (1968) 51–87. William H. Jaco, Lectures on 3-manifold topology. William H. Jaco, Peter B. Shalen, Seifert Fibered Spaces in Three Manifolds: Memoirs Series No. 220 (Memoirs of the American Mathematical Society; v. 21, no. 220). John Hempel, 3-manifolds, American Mathematical Society. Peter Scott, The geometries of 3-manifolds (errata), Bull. London Math. Soc. 15 (1983), no. 5, 401–487. Fiber bundles 3-manifolds Geometric topology
Seifert fiber space
Mathematics
3,750
5,079,960
https://en.wikipedia.org/wiki/Yttrium%28III%29%20bromide
Yttrium(III) bromide is an inorganic compound with the chemical formula YBr3. It is a white solid. Anhydrous yttrium(III) bromide can be produced by reacting yttrium oxide or yttrium(III) bromide hydrate with ammonium bromide. The reaction proceeds via the intermediate (NH4)3YBr6. Another method is to react yttrium carbide (YC2) with elemental bromine. Yttrium(III) bromide can be reduced by yttrium metal to YBr or Y2Br3. It can react with osmium to produce Y4Br4Os. References Bromides Metal halides Yttrium compounds
Yttrium(III) bromide
Chemistry
152
62,279,536
https://en.wikipedia.org/wiki/Electrostatic%20septum
An electrostatic septum is a dipolar electric field device used in particle accelerators to inject or extract a particle beam into or from a synchrotron. In an electrostatic septum (essentially an electric-field septum), two separate areas can be identified: a region with an electric field and a field-free region. The two areas are separated by a physical wall that is called the septum. An important feature of septa is to have a homogeneous field in the gap and no field in the region of the circulating beam.
The basic principle
Electrostatic septa provide an electric field in the direction of extraction, by applying a voltage between the septum foil and an electrode. The septum foil is very thin so that it has the least interaction with the beam when it is slowly extracted; slowly here means over millions of turns of the particles in the synchrotron. The orbiting beam generally passes through the hollow support of the septum foil, which ensures a field-free region, so as not to affect the circulating beam. The field-free region is achieved by using the hollow support of the septum and the septum foil itself as a Faraday cage. The extracted beam passes just on the other side of the septum, where the electric field changes the direction of the beam to be extracted. The septum separates the gap field between the electrode and the foil from the field-free region for the circulating beam. Electrostatic septa always sit in a vacuum tank to allow high electric fields, since the vacuum works as an insulator between the septum and the high voltage electrode. To allow precise matching of the septum position with the circulating beam trajectory, the septum is often fitted with a displacement system, which allows parallel and angular displacement with respect to the circulating beam. Great difficulty lies in the choice of materials and the manufacturing techniques of the different components. In the figure a typical cross section of an electrostatic septum is shown. The septum foil and its support are marked in blue, while the electrode is marked in red. In the lower part of the figure the electric field E is shown as it could be measured on the axis indicated as a dotted line in the cross section. The field-free region is inside the support of the septum foil. The electric field E in the gap between the septum foil and the electrode is homogeneous on the axis and is equal to E = V/d, where V is the voltage applied to the electrode and d is the distance between the septum foil and the electrode.
Typical technical specifications
Typical device specifications are listed below.
Electrode length: 500–3000 mm
Gap width: variable between 10 and 35 mm
Septum thickness: 0.1 mm
Vacuum: 10−9 to 10−12 mbar range
Electric field strength: up to 15 MV/m
Voltage: up to 300 kV
Septum materials: molybdenum foil, tungsten-rhenium alloy wires, tungsten-rhenium alloy ribbons
Electrode materials: stainless steel, anodised aluminium, or titanium for extreme low vacuum applications
Bakeable up to 200 °C for low vacuum applications
Power supplied by a high voltage Cockcroft–Walton generator
References Electrostatic Septum Accelerator physics
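As a quick numerical illustration of the gap-field relation E = V/d (a minimal sketch; the particular voltage and gap values are chosen from within the typical ranges quoted above, not from any specific device):

```python
# Field in the septum gap: E = V / d.
V = 300e3        # electrode voltage in volts (upper end of the quoted range)
d = 20e-3        # septum-to-electrode gap in metres (within the 10-35 mm range)

E = V / d
print(E / 1e6, "MV/m")  # 15.0 MV/m, consistent with the quoted maximum field strength
```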
Electrostatic septum
Physics
639
1,875,715
https://en.wikipedia.org/wiki/Elliptic%20filter
An elliptic filter (also known as a Cauer filter, named after Wilhelm Cauer, or as a Zolotarev filter, after Yegor Zolotarev) is a signal processing filter with equalized ripple (equiripple) behavior in both the passband and the stopband. The amount of ripple in each band is independently adjustable, and no other filter of equal order can have a faster transition in gain between the passband and the stopband, for the given values of ripple (whether the ripple is equalized or not). Alternatively, one may give up the ability to adjust independently the passband and stopband ripple, and instead design a filter which is maximally insensitive to component variations. As the ripple in the stopband approaches zero, the filter becomes a type I Chebyshev filter. As the ripple in the passband approaches zero, the filter becomes a type II Chebyshev filter and finally, as both ripple values approach zero, the filter becomes a Butterworth filter. The gain of a lowpass elliptic filter as a function of angular frequency ω is given by Gn(ω) = 1/√(1 + ε² Rn²(ξ, ω/ω0)), where Rn is the nth-order elliptic rational function (sometimes known as a Chebyshev rational function), ω0 is the cutoff frequency, ε is the ripple factor, and ξ is the selectivity factor. The value of the ripple factor specifies the passband ripple, while the combination of the ripple factor and the selectivity factor specify the stopband ripple.
Properties
In the passband, the elliptic rational function varies between zero and unity. The gain of the passband therefore will vary between 1 and 1/√(1 + ε²). In the stopband, the elliptic rational function varies between infinity and the discrimination factor Ln, which is defined as Ln = Rn(ξ, ξ). The gain of the stopband therefore will vary between 0 and 1/√(1 + ε²Ln²). In the limit of ξ → ∞ the elliptic rational function becomes a Chebyshev polynomial, and therefore the filter becomes a Chebyshev type I filter, with ripple factor ε. Since the Butterworth filter is a limiting form of the Chebyshev filter, it follows that in the joint limit of ξ → ∞, ω0 → 0 and ε → 0, taken such that ε Rn(ξ, 1/ω0) = 1, the filter becomes a Butterworth filter. In another joint limit of ξ, ω0 and ε, the filter becomes a Chebyshev type II filter.
Poles and zeroes
The zeroes of the gain of an elliptic filter will coincide with the poles of the elliptic rational function, which are derived in the article on elliptic rational functions. The poles of the gain of an elliptic filter may be derived in a manner very similar to the derivation of the poles of the gain of a type I Chebyshev filter. For simplicity, assume that the cutoff frequency is equal to unity. The poles of the gain of the elliptic filter will be the zeroes of the denominator of the gain. Using the complex frequency s = σ + jω, this means that 1 + ε²Rn²(ξ, −js) = 0. Defining −js = cd(w, 1/ξ), where cd() is the Jacobi elliptic cosine function, and using the definition of the elliptic rational functions yields an equation that can be solved for w, where the multiple values of the inverse cd() function are made explicit using an integer index m. The poles of the elliptic gain function are then obtained by mapping the solutions w back to s. As is the case for the Chebyshev polynomials, the poles may be expressed in explicitly complex form in terms of the zeroes of the elliptic rational function; the required auxiliary quantity is expressible for all n in terms of Jacobi elliptic functions, or algebraically for some orders, especially orders 1, 2, and 3. The algebraic expression for order 3 is rather involved. The nesting property of the elliptic rational functions can be used to build up higher order expressions.
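For readers who prefer the poles and zeroes numerically rather than in closed form, standard signal-processing libraries expose them directly. A minimal sketch using SciPy (the order, ripple and cutoff values here are illustrative choices, not prescribed by the article):

```python
from scipy import signal

# 5th-order analog low-pass elliptic filter: 1 dB passband ripple,
# 40 dB stopband attenuation, passband edge at 1 rad/s.
z, p, k = signal.ellip(5, 1, 40, 1.0, btype='low', analog=True, output='zpk')

print("zeros:", z)  # purely imaginary pairs: the transmission zeros in the stopband
print("poles:", p)  # left-half-plane poles of the gain function
```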
Minimum order
To design an Elliptic filter using the minimum required number of elements, the minimum order of the Elliptic filter may be calculated with elliptic integrals as n = ceil[ K(k)·K(√(1 − k1²)) / ( K(√(1 − k²))·K(k1) ) ], where:
K is the complete elliptic integral of the first kind, k = ωp/ωs, and k1 = √[(10^(0.1·αp) − 1)/(10^(0.1·αs) − 1)]
ωp and αp are the pass band ripple frequency and maximum ripple attenuation in dB
ωs and αs are the stop band frequency and minimum stop band attenuation in dB
n is the minimum number of poles, the order of the filter
ceil[] is a round up to next integer function.
The equations account only for standard low pass Elliptic filters; even order modifications will introduce error that the equations do not account for. Closed-form approximations that avoid evaluating the elliptic integrals also exist.
Minimum Q-factor elliptic filters
See . Elliptic filters are generally specified by requiring a particular value for the passband ripple, stopband ripple and the sharpness of the cutoff. This will generally specify a minimum value of the filter order which must be used. Another design consideration is the sensitivity of the gain function to the values of the electronic components used to build the filter. This sensitivity is inversely proportional to the quality factor (Q-factor) of the poles of the transfer function of the filter. The Q-factor of a pole s is defined as Q = −|s| / (2 Re(s)) and is a measure of the influence of the pole on the gain function. For an elliptic filter, it happens that, for a given order, there exists a relationship between the ripple factor and selectivity factor which simultaneously minimizes the Q-factor of all poles in the transfer function. This results in a filter which is maximally insensitive to component variations, but the ability to independently specify the passband and stopband ripples will be lost. For such filters, as the order increases, the ripple in both bands will decrease and the rate of cutoff will increase. If one decides to use a minimum-Q elliptic filter in order to achieve a particular minimum ripple in the filter bands along with a particular rate of cutoff, the order needed will generally be greater than the order one would otherwise need without the minimum-Q restriction. An image of the absolute value of the gain will look very much like the image in the previous section, except that the poles are arranged in a circle rather than an ellipse. They will not be evenly spaced and there will be zeroes on the ω axis, unlike the Butterworth filter, whose poles are arranged in an evenly spaced circle with no zeroes.
Comparison with other linear filters
Here is an image showing the elliptic filter next to other common kinds of filters obtained with the same number of coefficients: As is clear from the image, elliptic filters are sharper than all the others, but they show ripples on the whole bandwidth.
Construction from Chebyshev transmission zeros
Elliptic filter stop bands are essentially Chebyshev filters with transmission zeros where the transmission zeros are arranged in a manner that yields an equi-ripple stop band.
Given this, it is possible to convert a Chebyshev filter characteristic equation, containing the Chebyshev reflection zeros in the numerator and no transmission zeros in the denominator, to an Elliptic filter containing the Elliptic reflection zeros in the numerator and Elliptic transmission zeros in the denominator, by iteratively creating transmission zeros from the scaled inverse of the Chebyshev reflection zeros, and then reestablishing an equi-ripple Chebyshev pass band from the transmission zeros, and repeating until the iterations produce no further changes of significance to K(s). The scaling factor used, ωs/ωp, is the stop band to pass band cutoff frequency ratio and is also known as the inverse of the "selectivity factor". Since Elliptic designs are generally specified from the stop band attenuation requirements, the ratio may be derived by working the minimum order, n, problem above backwards from n. The characteristic polynomials K(s), computed from the ratio and the attenuation requirements, may then be translated to the transfer function polynomials with the classic translation H(s)H(−s) = 1/(1 + ε²K(s)K(−s)), where ε is determined by the pass band ripple.
Simple example
Design an Elliptic filter with a pass band ripple of 1 dB from 0 to 1 rad/sec and a stop band ripple of 40 dB from at least 1.25 rad/sec to ∞. Applying the calculations above for the value of n prior to applying the ceil() function, n is found to be 4.83721900, rounded up to the next integer, 5, by applying the ceil() function, which means a 5 pole Elliptic filter is required to meet the specified design requirements. Applying the calculations above for the stop band edge ωs needed to produce exactly 40 dB of attenuation, ωs is found to be 1.2186824. The scaled inversion of the polynomial may be performed by translating each root s to ωs/s, which may be easily accomplished by reversing the order of the polynomial coefficients and rescaling. The Elliptic design steps are then as follows:
1. Design a Chebyshev filter with 1 dB pass band ripple.
2. Invert all the reflection zeros about √ωs to create transmission zeros.
3. Create an equi-ripple pass band from the transmission zeros using the process outlined in Chebyshev transmission zeros.
4. Repeat steps 2 and 3 until both the pass band and stop band no longer change by any appreciable amount. Typically, 15 to 25 iterations produce coefficient differences on the order of 1e-15.
To illustrate the steps, the below K(s) equations begin with a standard Chebyshev K(s), then iterate through the process. Visible differences are seen in the first three iterations. By the time 18 iterations have been reached, the differences in K(s) become negligible. Iterations may be discontinued when the change in K(s) coefficients becomes sufficiently small so as to meet design accuracy requirements. The K(s) iterations have all been normalized; however, this normalization may be postponed until the last iteration, if desired. To find the transfer function, do the following. Factor the numerator and denominator of H(s)H(−s) to obtain the roots using a root finding algorithm. Discard all roots from the right half plane of the denominator, and half the repeated roots in the numerator, and rebuild H(s) with the remaining roots. Generally, H(s) is normalized to 1 at s = 0.
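The minimum-order computation in the simple example can be reproduced numerically. Below is a sketch using SciPy; the elliptic-integral form of the order formula is the standard textbook one given in the Minimum order section, and the function names are SciPy's, not the article's:

```python
import numpy as np
from scipy import signal, special

# Specification from the simple example: 1 dB passband ripple to 1 rad/s,
# at least 40 dB stopband attenuation from 1.25 rad/s.
wp, ws, Ap, As = 1.0, 1.25, 1.0, 40.0

# Elliptic-integral form of the minimum-order formula.
k = wp / ws                                            # selectivity
k1 = np.sqrt((10**(0.1 * Ap) - 1) / (10**(0.1 * As) - 1))
K = lambda x: special.ellipk(x**2)                     # ellipk takes m = x^2
n_exact = (K(k) * K(np.sqrt(1 - k1**2))) / (K(np.sqrt(1 - k**2)) * K(k1))
print(n_exact)          # about 4.837, as quoted in the example

# SciPy's built-in order estimate agrees after rounding up.
n, wn = signal.ellipord(wp, ws, Ap, As, analog=True)
print(n)                # 5
```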
To confirm that the example is correct, the gain along the jω axis is plotted below, showing a pass band ripple of 1 dB, a cut off frequency of 1 rad/sec, and a stop band attenuation of 40 dB beginning at 1.21868 rad/sec.
Even order modifications
Even order Elliptic filters implemented with passive elements, typically inductors, capacitors, and transmission lines, with terminations of equal value on each side cannot be implemented with the traditional Elliptic transfer function without the use of coupled coils, which may not be desirable or feasible. This is due to the physical inability to accommodate the even order Chebyshev reflection zeros and transmission zeros, which produce scattering matrix S12 values that exceed the S12 value at ω = 0, and finite S12 values at ω = ∞. If it is not feasible to design the filter with one of the terminations increased or decreased to accommodate the pass band S12, then the Elliptic transfer function must be modified so as to move the lowest even order reflection zero to ω = 0 and the highest even order transmission zero to ω = ∞ while maintaining the equi-ripple response of the pass band and stop band. The needed modification involves mapping each pole and zero of the Elliptic transfer function in a manner that maps the lowest frequency reflection zero to zero, the highest frequency transmission zero to infinity, and the remaining poles and zeros as needed to maintain the equi-ripple pass band and stop band. The lowest frequency reflection zero may be found by factoring the numerator, and the highest frequency transmission zero may be found by factoring the denominator. To translate the reflection zeros, a mapping is applied to all poles and zeros of K(s). While in theory the translation operations may be performed on either K(s) or H(s), the reflection zeros must be extracted from K(s), so it is generally more efficient to perform the translation operations on K(s). In this mapping, each original Elliptic function zero or pole is sent to the corresponding zero or pole of the modified even order transfer function, with the lowest frequency reflection zero in the pass band as the mapping parameter; the sign of the imaginary component of each mapped value is determined by the sign of the original. To translate the transmission zeros, an analogous mapping is applied to all poles and zeros of H(s). While in theory the translation operations may be performed on either K(s) or H(s), if the reflection zeros must be extracted from K(s), it may be more efficient to perform the translation operations on H(s). Here the mapping parameter is the highest frequency transmission zero in the stop band, and again the sign of the imaginary component of each mapped value is determined by the sign of the original. If operating on H(s), the sign of the real component of each mapped pole must be negative to conform to the left half plane requirement. It is important to note that not all applications require both pass and stop band translations. Passive network diplexers, for example, only require even order stop band translations, and perform more efficiently with untranslated even order pass bands. When the translation is completed, an equi-ripple transfer function is created with scattering matrix values for S12 of 1 at ω = 0 and 0 at ω = ∞, which may be implemented with passive equally terminated networks.
The illustration below shows an 8th order Elliptic filter modified to support even order equally terminated passive networks by relocating the lowest frequency reflection zero from a finite frequency to 0 and the highest frequency transmission zero to ∞, while maintaining an equi-ripple pass band and stop band frequency response. The ωs and order computations in the Elliptic construction paragraph above are for unmodified Elliptic filters only. Although even order modifications have no effect on the pass band or stop band attenuation, small errors are to be expected in the order and ωs computations. Therefore, it is important to apply even order modifications after all iterations are complete if it is desired to preserve the pass and stop band attenuations. If the even order modified Elliptic function is created from an ωs requirement, the actual ωs will be slightly larger than the design ωs. Likewise, an order, n, computation may result in a smaller value than the actual required order.
Hourglass implementation
An Hourglass filter is a special case of filter where the reflection zeros are the reciprocals of the transmission zeros about a 3.01 dB normalized cut-off attenuation frequency of 1 rad/sec, resulting in all poles of the filter residing on the unit circle. The Elliptic Hourglass implementation has an advantage over an Inverse Chebyshev filter in that the pass band is flatter, and has an advantage over traditional Elliptic filters in that the group delay has a less sharp peak at the cut-off frequency.
Synthesis process
The most straightforward way to synthesize an Hourglass filter is to design an Elliptic filter with a specified design stop band attenuation, As, and a calculated pass band attenuation that meets the lossless two-port network requirement on the scattering parameters, |S11|² + |S12|² = 1. Together with the well known magnitude-dB to arithmetic translation, |S|² = 10^(−A/10), algebraic manipulation yields the calculated pass band attenuation requirement Ap = −10·log10(1 − 10^(−As/10)). The Ap defined above will produce reciprocal reflection and transmission zeros about a yet unknown 3.01 dB cut-off frequency. To design an Elliptic filter with a pass band frequency of 1 rad/sec, the 3.01 dB attenuation frequency needs to be determined and then used to inversely scale the Elliptic design polynomials. The result will be polynomials with an attenuation of 3.01 dB at a normalized frequency of 1 rad/sec. Newton's method or solving the equations directly with a root finding algorithm may be used to determine the 3.01 dB attenuation frequency.
Frequency scaling with Newton's method
If H(s) is the Hourglass transfer function, the steps below may be used to find its 3.01 dB frequency. If H(s)H(−s) is not already available, multiply H(s) by H(−s) to obtain it. Negate all terms of H(s)H(−s) whose power of s is divisible by 2 but not by 4, that is, the s², s⁶, s¹⁰ terms, and so on. This modification allows the use of real numbers instead of complex numbers when evaluating the polynomial and its derivative: the real ω can now be used in place of the complex s. Convert the desired attenuation in dB, A, to a squared arithmetic gain value G² by using G² = 10^(−A/10). For example, 3.010 dB converts to 0.5, 1 dB converts to 0.79432823, and so on. Calculate the modified polynomial in Newton's method using the real value ω; always take the absolute value. Calculate the derivative of the modified polynomial with respect to the real value ω; do not take the absolute value of the derivative.
When steps 1) through 4) are complete, the Newton's method update may be evaluated using a real value for ω, with no complex arithmetic needed. The movement of ω should be limited to prevent it from going negative early in the iterations, for increased reliability. When convergence is complete, the converged value can be used as the 3.01 dB frequency with which to scale the original transfer function denominator. The attenuation of the scaled transfer function will then be virtually the exact desired value at 1 rad/sec. If performed properly, only a handful of iterations are needed to set the attenuation through a wide range of desired attenuation values for both small and very large order filters.
Frequency scaling with root finding
Since the attenuation alone carries no phase information, directly factoring the transfer function will not produce usable results. However, the transfer function may be modified by multiplying it with H(−s) to eliminate all odd powers of s, which in turn forces it to be real at all frequencies, and then finding the frequency that results in the square of the desired attenuation. If H(s)H(−s) is not already available, multiply H(s) by H(−s) to obtain it. Convert the desired attenuation in dB, A, to a squared arithmetic gain value G² by using G² = 10^(−A/10). For example, 3.010 dB converts to 0.5, 1 dB converts to 0.79432823, and so on. Form the polynomial P(S) whose roots correspond to the squared gain equalling the desired value, and find the roots of P(S) using a root finding algorithm. Of the set of roots from above, select the positive imaginary root for all order filters, and the positive real root for even order filters.
Scaling the transfer function
When the 3.01 dB frequency has been determined, the Hourglass transfer function polynomials may be scaled by it.
Even order modifications
Even order Hourglass filters have the same limitations regarding equally terminated passive networks as other Elliptic filters. The same even order modifications that resolve the problem with Elliptic filters also resolve the problem with Hourglass filters. References Linear filters Network synthesis filters Electronic design
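As an illustration of the frequency-scaling step, here is a small numerical sketch. It uses a bracketing scalar root finder on |H(jω)|² rather than the polynomial manipulation described above, and the example transfer function is an arbitrary SciPy elliptic design standing in for an Hourglass filter:

```python
import numpy as np
from scipy import signal, optimize

# Stand-in transfer function: a 5th-order analog elliptic low-pass filter.
b, a = signal.ellip(5, 1, 40, 1.0, analog=True, output='ba')

def gain_sq(w):
    # |H(jw)|^2 evaluated directly on the jw axis.
    jw = 1j * w
    return abs(np.polyval(b, jw) / np.polyval(a, jw)) ** 2

# 3.01 dB corresponds to a squared gain of about 0.5 (10 ** (-3.01 / 10)).
w_3db = optimize.brentq(lambda w: gain_sq(w) - 0.5, 1.0, 1.5)
print(w_3db)

# Dividing every frequency by w_3db rescales the design so that the
# 3.01 dB attenuation point lands at 1 rad/s, as described above.
```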
Elliptic filter
Engineering
3,772
70,302,410
https://en.wikipedia.org/wiki/4D%20scanning%20transmission%20electron%20microscopy
4D scanning transmission electron microscopy (4D STEM) is a subset of scanning transmission electron microscopy (STEM) which utilizes a pixelated electron detector to capture a convergent beam electron diffraction (CBED) pattern at each scan location. This technique captures a two-dimensional reciprocal space image associated with each scan point as the beam rasters across a two-dimensional region in real space, hence the name 4D STEM. Its development was enabled by the evolution of STEM detectors and improvements in computational power. The technique has applications in virtual diffraction imaging, phase orientation and strain mapping, and phase contrast analysis, among others. The name 4D STEM is common in literature; however, it is known by other names: 4D STEM EELS, ND STEM (N, since the number of dimensions could be higher than 4), position resolved diffraction (PRD), spatial resolved diffractometry, momentum-resolved STEM, "nanobeam precision electron diffraction", scanning electron nano diffraction (SEND), nanobeam electron diffraction (NBED), or pixelated STEM.
History
The use of diffraction patterns as a function of position dates back to the earliest days of STEM, for instance the early review of John M. Cowley and John C. H. Spence in 1978 or the analysis in 1983 by Laurence D. Marks and David J. Smith of the orientation of different crystalline segments in nanoparticles. Later work includes the analysis of diffraction patterns as a function of probe position in 1995, where Peter Nellist, B.C. McCallum and John Rodenburg attempted electron ptychography analysis of crystalline silicon. There is also the fluctuation electron microscopy (FEM) technique, proposed in 1996 by Treacy and Gibson, which also included quantitative analysis of the differences in images or diffraction patterns taken at different locations on a given sample. The field of 4D STEM remained underdeveloped due to the limited capabilities of detectors available at the time. The earliest work used either Grigson coils to scan the diffraction pattern, or an optical camera pickup from a phosphor screen. Later on, CCD detectors became available, but while these are commonly used in transmission electron microscopy (TEM) they had limited data acquisition rates, could not distinguish where on the detector an electron strikes with high accuracy, and had low dynamic range, which made them undesirable for use in 4D STEM. In the late 2010s, the development of hybrid pixel array detectors (PAD) with single electron sensitivity, high dynamic range, and fast readout speeds allowed for practical 4D STEM experiments.
Operating Principle
While the process of data collection in 4D STEM is identical to that of standard STEM, each technique utilizes different detectors and collects different data. In 4D STEM there is a pixelated electron detector located at the back focal plane which collects the CBED pattern at each scan location. An image of the sample can be constructed from the CBED patterns by selecting an area in reciprocal space and assigning the average intensity of that area in each CBED pattern to the real space pixel the pattern corresponds to. It is also possible for an ADF or HAADF image to be taken concurrently with the CBED pattern collection, depending on where the detector is located on the microscope. An annular dark-field image taken this way may be complementary to a bright-field image constructed from the captured CBED images.
The use of a hollow detector with a hole in the middle can allow transmitted electrons to be passed to an EELS detector while scanning. This allows for the simultaneous collection of chemical spectral information and structural information.
Detectors
In traditional TEM, imaging detectors use phosphorescent scintillators paired with a charge coupled device (CCD) to detect electrons. While these devices have good electron sensitivity, they lack the readout speed and dynamic range necessary for 4D STEM. Additionally, the use of a scintillator can worsen the point spread function (PSF) of the detector, because the electron's interaction with the scintillator results in a broadening of the signal. In contrast, traditional annular STEM detectors have the necessary readout speed, but instead of collecting a full CBED pattern the detector integrates the collected intensity over a range of angles into a single data point. The development of pixelated detectors in the 2010s with single electron sensitivity, fast readout speeds, and high dynamic range has enabled 4D STEM as a viable experimental method. 4D STEM detectors are typically built as either a monolithic active pixel sensor (MAPS) or as a hybrid pixel array detector (PAD).
Monolithic active pixel sensor (MAPS)
A MAPS detector consists of a complementary metal–oxide–semiconductor (CMOS) chip paired with a doped epitaxial surface layer which converts high energy electrons into many lower energy electrons that travel down to the detector. MAPS detectors must be radiation hardened, as their direct exposure to high energy electrons makes radiation damage a key concern. Due to its monolithic nature and straightforward design, a MAPS detector can attain high pixel densities on the order of 4000 x 4000. This high pixel density, when paired with low electron doses, can enable single electron counting for high efficiency imaging. Additionally, MAPS detectors tend to have high electron sensitivities and fast readout speeds, but suffer from limited dynamic range.
Pixel array detector (PAD)
PAD detectors consist of a photodiode bump bonded to an integrated circuit, where each solder bump represents a single pixel on the detector. These detectors typically have lower pixel densities on the order of 128 x 128 but can achieve much higher dynamic range on the order of 32 bits. These detectors can achieve relatively high readout speeds on the order of 1 ms/pixel but are still lacking compared to their annular detector counterparts in STEM, which can achieve readout speeds on the order of 10 μs/pixel. Detector noise performance is often measured by the detective quantum efficiency (DQE), defined as DQE = SNRout² / SNRin², where SNRout² is the output signal to noise ratio squared and SNRin² is the input signal to noise ratio squared. Ideally the DQE of a sensor is 1, indicating the sensor generates zero noise. The DQE of MAPS, APS and other direct electron detectors tends to be higher than that of their CCD camera counterparts.
Computational Methods
A major issue in 4D STEM is the large quantity of data collected by the technique. With upwards of 100s of TB of data produced over the course of an hour of scanning, finding pertinent information is challenging and requires advanced computation. Analysis of such large datasets can be quite complex, and computational methods to process this data are being developed. Many code repositories for analysis of 4D STEM are currently in development, including HyperSpy, LiberTEM, Pycroscopy, and others. AI-driven analysis is possible.
However, some methods require databases of information to train on which currently do not exist. Additionally, the lack of metrics for data quality, limited scalability due to poor cross-platform support across different manufacturers, and the lack of standardization in analysis and experimental methods raise questions of comparability across different datasets as well as reproducibility.
Selected Applications
4D STEM has been utilized in a wide array of applications; the most common uses include virtual diffraction imaging, orientation and strain mapping, and phase contrast analysis, which are covered below. The technique has also been applied in: medium range order measurement, Higher order Laue zone (HOLZ) channeling contrast imaging, Position averaged CBED, fluctuation electron microscopy, biomaterials characterization, and medical fields (microstructure of pharmaceutical materials and orientation mapping of peptide crystals). This list is in no way exhaustive, and as the field is still relatively young, more applications are actively being developed.
Virtual Diffraction (Dark Field / Bright Field) Imaging
Virtual diffraction imaging is a method developed to generate real space images from diffraction patterns. This technique has been used in characterizing material structures since the 90s but more recently has been applied in 4D STEM applications. This technique often works best with scanning electron nano diffraction (SEND), where the probe convergence angle is relatively low to give separated diffraction disks (thus also giving a resolution measured in nm, not Å). A "virtual detector" is not a detector at all but rather a method of data processing which integrates a subset of pixels in the diffraction patterns at each raster position to create a bright-field or dark-field image. A region of interest is selected on some representative diffraction pattern, and only those pixels within the aperture are summed to form the image. This virtual aperture can be any size/shape desired and can be created using the 4D dataset gathered from a single scan. This ability to apply different apertures to the same dataset is possible because the whole diffraction pattern is present in the 4D STEM dataset. This eliminates a typical weakness of conventional STEM operation, as STEM bright-field and dark-field detectors are placed at fixed angles and cannot be changed during imaging. With a 4D dataset, bright- and dark-field images can be obtained by integrating diffraction intensities from the transmitted and diffracted beams respectively. Creating images from these patterns can give nanometer or atomic resolution information (depending on the pixel step size and the range of diffracted angles used to form the image) and is typically used to characterize the structure of nanomaterials. Additionally, these diffraction patterns can be indexed and analyzed using other 4D STEM techniques, such as orientation and phase mapping, or strain mapping. A key advantage of performing virtual diffraction imaging in 4D STEM is the flexibility. Any shape of aperture could be used: a circle (cognate with traditional TEM bright/dark field imaging), a rectangle, an annulus (cognate with STEM ADF/ABF imaging), or any combination of apertures in a more complex pattern.
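Conceptually, the virtual-detector step is just a masked sum over the reciprocal-space axes of the 4D array. A minimal NumPy sketch (the array name, dimensions and circular-aperture parameters are illustrative assumptions, not taken from any particular package):

```python
import numpy as np

# data4d: (scan_y, scan_x, k_y, k_x) stack of CBED patterns from one 4D STEM scan.
data4d = np.random.poisson(1.0, size=(64, 64, 128, 128)).astype(np.float32)

# Build a circular virtual aperture in the diffraction plane.
ky, kx = np.indices(data4d.shape[2:])
center, radius = (64, 64), 10
r2 = (ky - center[0]) ** 2 + (kx - center[1]) ** 2
bf_mask = r2 <= radius ** 2

# Virtual bright-field image: sum the masked pixels of each CBED pattern.
virtual_bf = data4d[:, :, bf_mask].sum(axis=-1)

# An annular (dark-field-like) aperture is just a different mask on the same dataset.
adf_mask = (r2 > 30 ** 2) & (r2 <= 60 ** 2)
virtual_adf = data4d[:, :, adf_mask].sum(axis=-1)
```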
The use of regular grids of apertures is particularly powerful at imaging a crystal with high signal to noise while minimising the effects of bending, and has been used by McCartan et al.; this also allowed the imaging of an array of superlattice spots associated with a particular crystal ordering in part of the crystal as a result of chemical segregation. Virtual diffraction imaging has been used to map interfaces, select intensity from selected areas of the diffraction plane to form enhanced dark field images, map positions of nanoscale precipitates, create phase maps of beam sensitive battery cathode materials, and measure the degree of crystallinity in metal-organic frameworks (MOFs). Recent work has further extended the possibilities of virtual diffraction imaging by applying a more digital approach adapted from one developed for orientation and phase mapping, or strain mapping. In these methods, the diffraction spot positions in a 4D dataset are determined for each diffraction pattern and turned into a list, and operations are performed on the list, not on the whole images. For dark field imaging, the centroid positions for the list of diffraction spots can be simply compared against a list of centroid positions for where spots are expected, and intensity is only added where diffraction spot centroids agree with the selected positions. This gives far more selectivity than simply integrating all intensity in an aperture (particularly because it ignores diffuse intensity that does not fall in spots) and, consequently, much higher contrast in the resulting images; this approach has recently been submitted to arXiv.
Phase Orientation Mapping
Phase orientation mapping is typically done with electron backscatter diffraction in SEM, which can give 2D maps of grain orientation in polycrystalline materials. The technique can also be done in TEM using Kikuchi lines, which is more applicable for thicker samples since the formation of Kikuchi lines relies on diffuse scattering being present. Alternatively, in TEM one can utilize precession electron diffraction (PED) to record a large number of diffraction patterns and, through comparison to known patterns, determine the relative orientation of grains in the sample. 4D STEM can also be used to map orientations, in a technique called Bragg spot imaging. The use of traditional TEM techniques typically results in better resolution than the 4D STEM approach but can fail in regions with high strain as the DPs become too distorted. In Bragg spot imaging, a correlation analysis is first performed to group the diffraction patterns (DPs), using a correlation measure between 0 (no correlation) and 1 (exact match); the DPs are then grouped by their correlation using a correlation threshold. A correlation image can then be obtained from each group. These are summed and averaged to obtain an overall representative diffraction template for each grouping. Different orientations can be assigned colors, which helps visualize individual grain orientations. With proper tilting and utilizing precession electron diffraction (PED), it is even possible to make 3D tomographic renderings of grain orientation and distribution. Since the technique is computationally intensive, recent efforts have been focused on a machine learning approach to the analysis of diffraction patterns.
Strain Mapping
TEM can measure local strains and is often used to map strain in samples using convergent beam electron diffraction (CBED).
The basis of this technique is to compare the diffraction pattern from an unstrained region of the sample with that from a strained region to see the changes in the lattice parameter. With STEM, the positions of the discs diffracted from an area of a specimen can provide spatial strain information. The use of this technique with 4D STEM datasets requires fairly involved calculations. Utilizing SEND, bright and dark field images can be obtained from diffraction patterns by integration of the direct and diffracted beams respectively, as discussed previously. During 4D STEM operation the ADF detector can be used to visualize a particular region of interest, through the collection of electrons scattered to large angles, so that probe location can be correlated with diffraction during measurements. There is a tradeoff between resolution and strain information, since larger probes average strain measurements over a large volume, while smaller probe sizes give higher real space resolution. There are ways to combat this issue, such as spacing probes further apart than the resolution limit to increase the field of view. This strain mapping technique has been applied in many crystalline materials and has been extended to semi-crystalline and amorphous materials (such as metallic glasses), since they too exhibit deviations from the mean atomic spacing in regions of high strain.
Phase Contrast Analysis
Differential phase contrast
The differential phase contrast (DPC) imaging technique can be used in STEM to characterise magnetic and electric fields inside a thin specimen. The electric or magnetic field in the sample is estimated by measuring the deflection of the electron beam caused by the field at each scan point. This differs from the more traditional annular dark field (ADF) measurements by the placement of the detector in the bright field area, such that the center of mass of the (mostly) unscattered electron beam may be measured. Additionally, segmented or pixelated detectors are used in order to gain the necessary radial resolution. ADF detectors are typically monolithic (single-segment) and are placed in the dark field region, such that they collect the electrons that have been scattered by the sample. Using DPC to image the local electric fields surrounding single atoms or atomic columns is possible. The use of a pixelated detector in 4D STEM and a computer to track the movement of the "center of mass" of the CBED patterns was found to provide comparable results to those found using segmented detectors. 4D STEM allows phase changes along all directions to be measured without the need to rotate the segmented detector to align with the specimen orientation. The ability to measure local polarization in parallel with the local electric field has also been demonstrated with 4D STEM. DPC imaging with 4D STEM is up to 2 orders of magnitude slower than DPC with segmented detectors and requires advanced analysis of large four-dimensional datasets.
Ptychography
The overlapping CBED measurements present in a 4D STEM dataset allow for the reconstruction of the complex electron probe and the complex sample potential using the ptychography technique. Ptychographic reconstructions with 4D STEM data were shown to provide higher contrast than ADF, BF, ABF, and segmented DPC imaging in STEM.
The high signal-to-noise ratio of this technique under 4D STEM makes it attractive for imaging radiation-sensitive specimens such as biological samples. The use of a pixelated detector with a hole in the middle, which allows the unscattered electron beam to pass through to a spectrometer, has been shown to allow ptychographic analysis in conjunction with chemical analysis in 4D STEM. MIDI STEM This technique, MIDI-STEM (matched illumination and detector interferometry STEM), is less common but can be used with ptychography to create higher-contrast phase images. The placement of a phase plate with zones of 0 and π/2 phase shift in the probe-forming aperture creates a series of concentric rings in the resulting CBED pattern. The difference in counts between the 0 and π/2 regions allows for direct measurement of the local sample phase. The counts in the different regions can be measured either with correspondingly complex standard detector geometries or with a pixelated detector in 4D STEM. Pixelated detectors have been used to apply this technique at atomic resolution. (MIDI)-STEM produces image contrast information with less high-pass filtering than DPC or ptychography but is less efficient at high spatial frequencies than those techniques. (MIDI)-STEM used in conjunction with ptychography has been shown to be more efficient in providing contrast information than either technique individually. See also Electron diffraction Detectors for transmission electron microscopy Energy filtered transmission electron microscopy (EFTEM) High-resolution transmission electron microscopy (HRTEM) Scanning confocal electron microscopy (SCEM) Scanning electron microscope (SEM) Scanning Transmission Electron Microscopy (STEM) Transmission electron microscopy (TEM) References Electron beam Electron microscopy techniques
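As a rough illustration of the virtual-imaging and centre-of-mass (DPC) ideas discussed above, the following minimal NumPy sketch forms a virtual bright-field image, a virtual annular dark-field image, and per-probe centre-of-mass shifts from a 4D dataset. The array shapes, aperture radii and random test data are placeholders chosen for illustration, not any particular detector's format or any published implementation.

import numpy as np

# Minimal sketch: virtual imaging from a 4D STEM dataset.
# Assumed layout (illustrative only): data has shape (Rx, Ry, Kx, Ky),
# i.e. one (Kx, Ky) diffraction pattern per (Rx, Ry) probe position.
rng = np.random.default_rng(0)
Rx, Ry, Kx, Ky = 8, 8, 64, 64
data = rng.poisson(1.0, size=(Rx, Ry, Kx, Ky)).astype(float)

# Detector-plane coordinates, centred on the unscattered beam.
ky, kx = np.meshgrid(np.arange(Ky) - Ky / 2, np.arange(Kx) - Kx / 2)
k_r = np.sqrt(kx**2 + ky**2)

# Virtual bright-field image: integrate intensity inside a small central aperture.
bf_mask = k_r < 6
virtual_bf = data[:, :, bf_mask].sum(axis=-1)

# Virtual annular dark-field image: integrate intensity over an annulus.
adf_mask = (k_r >= 12) & (k_r < 30)
virtual_adf = data[:, :, adf_mask].sum(axis=-1)

# DPC-style centre of mass of each pattern (its shift tracks local fields).
total = data.sum(axis=(-2, -1))
com_x = (data * kx).sum(axis=(-2, -1)) / total
com_y = (data * ky).sum(axis=(-2, -1)) / total

print(virtual_bf.shape, virtual_adf.shape, com_x.shape, com_y.shape)

In a centroid-based variant of dark-field imaging, the masks above would instead be small windows around the expected Bragg spot positions, so that diffuse intensity between spots is excluded.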
4D scanning transmission electron microscopy
Chemistry
3,706
33,168,609
https://en.wikipedia.org/wiki/WISEPA%20J022623.98-021142.8
WISE 0226−0211 (also known as WISEPA J022623.98−021142.8) is a brown dwarf binary with a combined spectral type of T7. The spectral types of its individual components remain somewhat uncertain, at T8-T8.5 for the primary and T9.5-Y0 for the secondary. The object was first discovered in 2011 with the Wide-field Infrared Survey Explorer, and follow-up observations with Keck revealed a spectral type of T7. In 2019 the same team showed, from Spitzer images and one H-band Keck image (acquired by David Ciardi), that the object is a common proper motion binary with a separation of 2.1 arcseconds. The H-ch2 colors suggest a spectral type of about T8-T8.5 for the primary and about Y0 for the secondary. The absolute magnitudes, on the other hand, suggest spectral types of about T8-T8.5 for the primary and about T9.5-Y0 for the secondary. It is suspected that the combined spectral type of T7 is in error. See also List of Y-dwarfs other late T to Y dwarf binaries: WISE 1217+1626 T9+Y0 WISE J0336−0143 Y+Y CFBDSIR J1458+1013 T9+Y0 WISE 0146+4234 T9+Y0 WISE J1711+3500 T8+T9.5 References T-type brown dwarfs Binary stars Y-type brown dwarfs Cetus Astronomical objects discovered in 2011
WISEPA J022623.98-021142.8
Astronomy
336
2,828,024
https://en.wikipedia.org/wiki/EDN%20%28magazine%29
EDN is an electronics industry website and formerly a magazine owned by AspenCore Media, an Arrow Electronics company. The editor-in-chief is Majeed Ahmad. EDN was published monthly until April 2013, when it announced that the print edition would cease publication after the June 2013 issue. History The first issue of Electrical Design News, the original name, was published in May 1956 by Rogers Corporation of Englewood, Colorado. In January 1961, Cahners Publishing Company, Inc., of Boston, acquired Rogers Publishing Company. In February 1966, Cahners sold 40% of its company to International Publishing Company in London. In 1970, the Reed Group merged with International Publishing Corporation and changed its name to Reed International Limited. Acquisition of EEE magazine Cahners Publishing Company acquired Electronic Equipment Engineering, a monthly magazine, in March 1971 and discontinued it. In doing so, Cahners folded EEE's best features into EDN, and renamed the magazine EDN/EEE. At the time, George Harold Rostky (1926–2003) was editor-in-chief of EEE. Rostky joined EDN and eventually became editor-in-chief before leaving to join Electronic Engineering Times as editor-in-chief. Taking EDN worldwide Roy Forsberg later became editor-in-chief of EDN magazine. He was subsequently promoted to publisher, and Jon Titus, PhD, was named editor-in-chief. Forsberg and Titus established EDN Europe, EDN Asia and EDN China, creating one of the largest global circulations for a design engineering magazine. EDN's 25th anniversary issue was a 425-page folio. Reed Limited acquires remaining interest in Cahners In 1977, Reed acquired the remaining interest in Cahners, then known as Cahners Publications. In 1982, Reed International Limited changed its name to Reed International PLC. In 1992, Reed International merged with Elsevier NV, becoming Reed Elsevier PLC on January 1, 1993. Reed Business Media then removed the Cahners Business Publishing name to rebrand itself as Reed Business Information. Reed sells EDN to Canon Communications LLC, Canon acquired by United Business Media, UBM sells EDN to AspenCore Media Reed Business Information, part of Reed Elsevier, sold the magazine to Canon Communications LLC in February 2010. United Business Media, now UBM LLC, acquired Canon Communications LLC in October 2010. On June 3, 2016, UBM announced that EE Times, along with the rest of the electronics media portfolio (EDN, Embedded.com, TechOnline and Datasheets.com), was being sold to AspenCore Media, a company owned by Arrow Electronics, for $23.5 million. The acquisition was completed on August 1, 2016. On April 9, 2013, UBM announced that EDN's print edition would cease publication after the June 2013 issue and that the online EDN.com community would continue. Michael Dunn led EDN through mid-2018. Santo succeeded him shortly thereafter and Majeed Ahmad became Editor-in-Chief in August 2020. International editions EDN is also published in China and Taiwan and in Japan by ITmedia, Inc., which licenses content from AspenCore Media. Publishing Segment The website, EDN Network, caters to the needs of the working electrical engineer and covers new technologies and electronic component products at an engineering level. Columns discuss everything from managing engineers and engineering projects to technical issues faced in the design of electronic components, systems and developing technologies. 
Design ideas The "Design Ideas" section features several user-submitted designs that are innovative or novel solutions to constrained design problems. Every issue features a column called "Prying Eyes" which disassembles a popular or intriguing consumer product and investigates the technologies that enable it. ASBPE Awards In May 2006, EDN won three awards from the American Society of Business Publication Editors. The Best Regular Department of the Year award went to "Prying Eyes". Executives and journalists William M. Platt, appointed publisher of EDN in December 1967 by Cahners Publishing Robert H. Cushman (1924–1996), editor for EDN from 1962 to the late-1980s covering, among other things, the early development of microprocessing References External links EDN website EDN Asia EDN China EDN Japan EDN Taiwan Defunct magazines published in the United States Engineering magazines Magazines established in 1956 Magazines disestablished in 2013 Magazines published in California Online magazines with defunct print editions Science and technology magazines published in the United States Magazines published in Colorado Professional and trade magazines Electrical and electronic engineering magazines
EDN (magazine)
Engineering
930
18,762,036
https://en.wikipedia.org/wiki/Pool%20chlorine%20hypothesis
The pool chlorine hypothesis is the hypothesis that long-term attendance at indoor chlorinated swimming pools by children up to the age of about 6–7 years is a major factor in the rise of asthma in rich countries since the late twentieth century. A narrower version of the hypothesis, i.e. that asthma may be induced by chlorine-related compounds from swimming pools, has been stated based on a small number of cases at least as early as 1995. An empirically motivated statement of the wider form of the hypothesis is first known to have been published on the basis of tests of the effects of nitrogen trichloride above chlorinated water on the lung as well as epidemiological evidence by a group of medical researchers led by Alfred Bernard of the Department of Public Health at the Catholic University of Louvain in Brussels, Belgium, in 2003. In the epidemiological studies, the association between chlorinated swimming pools and asthma was found to be more significant than factors such as age, sex, ethnic origin, socioeconomic status, exposure to domestic animals and passive smoking (in a study in Brussels), and independent of altitude, climate, and GDP per capita (in a Europe-wide study of 21 countries). Effects of nitrogen trichloride (trichloramine) on the human lung Nitrogen trichloride has been directly linked as a factor causing asthma in two lifeguards and a swimming teacher. A study of 624 swimming pool workers found a significant correlation between upper respiratory symptoms and their total exposure to nitrogen trichloride. The study also found an excess risk in the workers for the specific symptoms indicative of asthma. In a study by Alfred Bernard's group, two hours of exposure to an average concentration of 0.490 mg/m3 of nitrogen trichloride above a swimming pool was found in both children and adults to significantly increase the levels of the alveolar surfactant-associated proteins A and B, which indicate hyperpermeability of lung epithelium. In other words, exposure to nitrogen trichloride was found to weaken the protective nature of the surface of the lungs. Epidemiological studies In a study of 341 schoolchildren, Bernard and his colleagues found that long-term attendance at indoor chlorinated swimming pools by the children up to the age of about 6–7 years was a strong predictor of airway inflammation (measured by exhaled nitric oxide) independently of other factors, while for those children susceptible to allergic problems, as defined by having a blood serum level of immunoglobulin E greater than 100 kIU/L, their total time spent at indoor chlorinated swimming pools was a strong predictor of the probability that they would have asthma. Relations to demographic and environmental variables In the Bernard group's study of 226 children in Brussels and the Ardenne region in 2003, asthma and exercise-induced bronchoconstriction (a test related to potential breathing difficulties) were not found to have any statistically significant correlation with the demographic and environmental factors of age, sex, ethnic origin, socioeconomic status or exposure to pets or passive smoking alone. However, when the time spent at chlorinated swimming pools (modified for pool height as a statistical way to indicate likely concentrations of chlorine-related gases) was adjusted for exposure to pets and passive smoking, the significance of the correlations with asthma increased further. 
The authors describe this by saying that a "very strong argument in [favour] of causality [between pool attendance and asthma] comes from the synergistic action of exposure to pets and [passive smoking], two well documented risk factors for asthma, which together considerably increase the strength of the associations, to levels largely above those usually observed in asthma epidemiology." In a later study by the Bernard group of 190,000 children in 21 countries in Europe, it was found that 13- to 14-year-old children were 2% to 3.5% more likely to have or have had asthma for every additional indoor chlorinated pool per 100,000 inhabitants in their place of residence. Other atopic diseases such as hay fever or atopic dermatitis were found not to be associated with the presence of the pools. The association of asthma with the number of indoor chlorinated swimming pools per 100,000 inhabitants was found by the authors to be independent of altitude, climate, and GDP per capita. Scientific debate on the epidemiological studies After the publication of Bernard's group's 2003 study, B. Armstrong and D. Strachan described the study as "generally well conducted", but stated that some aspects of the statistical analysis and interpretation were "misleading", to the extent that "the epidemiological association of asthma with swimming pool use [was] not as strong as claimed by the authors". Following publication of Bernard's group's 2006 study, some concerns by P. A. Eggleston and a response by Bernard's group were published. For example, Eggleston argued that if "chlorinated compounds at indoor swimming pools could cause asthma", then "frequent and longer exposures at home" should be even stronger causes of asthma, in contradiction to the available evidence from a single group of children. Bernard's group's response was that while children at an indoor chlorinated pool "actively inhale [the chlorination products] as gases, aerosols, or even water", they are not usually involved in household cleaning tasks, so they could benefit from the hygienic effects of the chlorine-based cleaning products while avoiding any significant contact with the related gases. Members of Bernard's group declared that they had no potentially conflicting financial interests, while Eggleston declared that he had received money from the United States-based group called the Chlorine Chemistry Council. In a "Faculty Disclosure" statement in an asthma-related publication, it was declared that Eggleston is "a consultant for Chlorine Chemistry Council, Church and Dwight, Merck Sharp & Dohme, and Procter & Gamble, and is on the speakers' bureau for AstraZeneca, GlaxoSmithKline, and Merck." Hypothesised mechanistic explanation Alfred Bernard and colleagues argue that what is common to the pool chlorine hypothesis and epidemiological studies associating chlorine-based irritants with atopy may be that frequent, long-term disruption of the epithelium of the lung, which normally provides a protective barrier against various pathogens, allows allergens to cross this barrier. This process would also cause certain proteins from the lung epithelium to have increased blood serum concentrations. See also Hygiene hypothesis References Allergology Epidemiology
Pool chlorine hypothesis
Environmental_science
1,391
43,386,258
https://en.wikipedia.org/wiki/Gelfand%E2%80%93Raikov%20theorem
The Gel'fand–Raikov (Гельфанд–Райков) theorem is a theorem in the mathematics of locally compact topological groups. It states that a locally compact group is completely determined by its (possibly infinite-dimensional) unitary representations. The theorem was first published in 1943. A unitary representation of a locally compact group on a Hilbert space defines for each pair of vectors a continuous function on , the matrix coefficient, by . The set of all matrix coefficients for all unitary representations is closed under scalar multiplication (because we can replace ), addition (because of direct sum representations), multiplication (because of tensor representations) and complex conjugation (because of the complex conjugate representations). The Gel'fand–Raikov theorem now states that the points of are separated by its irreducible unitary representations, i.e. for any two group elements there exist a Hilbert space and an irreducible unitary representation such that . The matrix elements thus separate points, and it then follows from the Stone–Weierstrass theorem that on every compact subset of the group, the matrix elements are dense in the space of continuous functions, which determine the group completely. See also Gelfand–Naimark theorem Representation theory References Representation theory of groups
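In standard notation (the symbols here are chosen for illustration and are not quoted from a particular source), the matrix coefficients and the separation property can be written as follows.

% A unitary representation \pi of a locally compact group G on a Hilbert space H
% defines, for each pair of vectors \xi, \eta in H, the matrix coefficient
\[
  \varphi_{\xi,\eta}(g) \;=\; \langle \pi(g)\,\xi,\ \eta \rangle ,
  \qquad g \in G .
\]
% Gel'fand–Raikov theorem: the irreducible unitary representations separate points,
% i.e. any two distinct group elements are told apart by some representation.
\[
  g_1 \neq g_2 \ \Longrightarrow\ \exists\,(\pi, H)\ \text{irreducible unitary},\ \exists\, \xi,\eta \in H :\quad
  \langle \pi(g_1)\,\xi,\ \eta \rangle \;\neq\; \langle \pi(g_2)\,\xi,\ \eta \rangle .
\]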
Gelfand–Raikov theorem
Mathematics
264
70,750,663
https://en.wikipedia.org/wiki/Podospora%20macrodecipiens
Podospora macrodecipiens is a species of coprophilous fungus in the family Podosporaceae. It was discovered in Antiparos in Greece, where it was found growing on sheep dung. References External links Fungi described in 2008 Fungi of Greece Sordariales Fungus species
Podospora macrodecipiens
Biology
63
9,025,255
https://en.wikipedia.org/wiki/List%20of%20UN%20numbers%202801%20to%202900
UN numbers from UN2801 to UN2900 as assigned by the United Nations Committee of Experts on the Transport of Dangerous Goods are as follows: UN 2801 to UN 2900 See also Lists of UN numbers References External links ADR Dangerous Goods, cited on 7 May 2015. UN Dangerous Goods List from 2015, cited on 7 May 2015. UN Dangerous Goods List from 2013, cited on 7 May 2015. Lists of UN numbers
List of UN numbers 2801 to 2900
Chemistry,Technology
88
27,875,390
https://en.wikipedia.org/wiki/Symbols%20for%20zero
The modern numerical digit 0 is usually written as a circle, an ellipse or a rounded square or rectangle. Glyphs In most modern typefaces, the height of the 0 character is the same as the other digits. However, in typefaces with text figures, the character is often shorter (x-height). Traditionally, many print typefaces made the capital letter O more rounded than the narrower, elliptical digit 0. Typewriters originally made no distinction in shape between O and 0; some models did not even have a separate key for the digit 0. The distinction came into prominence on modern character displays. The digit 0 with a dot in the centre seems to have originated as an option on IBM 3270 displays. Its appearance has continued with Taligent's command line typeface Andalé Mono. One variation used a short vertical bar instead of the dot. This could be confused with the Greek letter Theta on a badly focused display, but in practice there was no confusion because theta was not (then) a displayable character and very little used anyway. An alternative, the slashed zero (looking similar to the letter O except for the slash), was primarily used in hand-written coding sheets before transcription to punched cards or tape, and is also used in old-style ASCII graphic sets descended from the default typewheel on the Teletype Model 33 ASR. This form is similar to the symbol , or "∅" (Unicode character U+2205), representing the empty set, as well as to the letter Ø used in several Scandinavian languages. Some Burroughs/Unisys equipment displays a digit 0 with a reversed slash. The opposing convention that has the letter O with a slash and the digit 0 without was advocated by SHARE, a prominent IBM user group, and recommended by IBM for writing FORTRAN programs, and by a few other early mainframe makers; this is even more problematic for Scandinavians because it means two of their letters collide. Others advocated the opposite convention, including IBM for writing Algol programs. Another convention used on some early line printers left digit 0 unornamented but added a tail or hook to the capital O so that it resembled an inverted Q (like U+213A ℺) or cursive capital letter-O (). Some fonts designed for use with computers made one of the capital-O–digit-0 pair more rounded and the other more angular (closer to a rectangle). The TI-99/4A computer has a more angular capital O and a more rounded digit 0, whereas others made the choice the other way around. The typeface used on most European vehicle registration plates distinguishes the two symbols partially in this manner (having a more rectangular or wider shape for the capital O than the digit 0), but in several countries a further distinction is made by slitting open the digit 0 on the upper right side (as in German plates using the fälschungserschwerende Schrift, "forgery-impeding typeface"). Sometimes the digit 0 is used either exclusively, or not at all, to avoid confusion altogether. For example, confirmation numbers used by Southwest Airlines use only the capital letters O and I instead of the digits 0 and 1, while Canadian postal codes use only the digits 1 and 0 and never the capital letters O and I, although letters and numbers always alternate. Other On the seven-segment displays of calculators, watches, and household appliances, 0 is usually written with six line segments, though on some historical calculator models it was written with four line segments. The international maritime signal flag has five plus signs in an X arrangement. 
Zero symbols in Unicode See also Ø (disambiguation) References 0 (number) Mathematical symbols
Symbols for zero
Mathematics
774
11,175,098
https://en.wikipedia.org/wiki/Astrolinguistics
Astrolinguistics is a field of linguistics connected with the search for extraterrestrial intelligence (SETI). Early Soviet experiments Arguably the first attempt to construct a language for interplanetary communication was the AO language, created by the anarchist philosopher Wolf Gordin (brother of Abba Gordin) in his books Grammar of the Language of the Mankind AO (1920) and Grammar of the Language AO (1924) and presented as such at the First International Exhibition of Interplanetary Machines and Mechanisms (dedicated to the 10th anniversary of the Russian Revolution and the 70th anniversary of the birth of Tsiolkovsky) in Moscow, 1927. The declared goal of Gordin was to construct a language which would be non-"fetishizing", non-"sociomorphic", non-gender-based and non-classist. The design of the language was inspired by Russian Futurist poetry, the Gordin brothers' pan-anarchist philosophy, and Tsiolkovsky's early remarks on possible cosmic messaging (which were in accord with Hans Freudenthal's later insights). However, Sergei N. Kuznetsov notes that "Gordin nowhere defines his language as intended for space use," and that "in none of his works does he deal with problems of space communication, only mentioning 'Interplanetary Communication' in passing among other technical areas." Freudenthal's LINCOS An integral part of the SETI project in general is research in the field of the construction of messages for extraterrestrial intelligence, possibly to be transmitted into space from Earth. Insofar as such messages are based on linguistic principles, the research can be considered to belong to astrolinguistics. The first proposal in this field was put forward by the mathematician Hans Freudenthal at the University of Utrecht in the Netherlands, in 1960 – around the time of the first SETI effort at Greenbank in the US. Freudenthal conceived a complete Lingua Cosmica. His book LINCOS: Design of a Language for Cosmic Intercourse seems at first sight non-linguistic, because mathematical concepts are the core of the language. The concepts are, however, introduced in conversations between persons (Homo sapiens), de facto by linguistic means. This is witnessed by the innovative examples presented. The book set a landmark in astrolinguistics, as Bruno Bassi's review years later made clear. Bassi noted: "LINCOS is there. In spite of its somewhat ephemeral 'cosmic intercourse' purpose it remains a fascinating linguistic and educational construction, deserving existence as another Toy of Man's Designing". Freudenthal eventually lost interest in developing the work further because of difficulties in applying LINCOS "for [anything] other than mathematical contents due to the potential different sociological aspects of alien receivers". Ollongren's LINCOS The term astrolinguistics was coined as such in scientific research, also with a view towards message construction for ETI, in 2013 in the monograph Astrolinguistics: Design of a Linguistic System for Interstellar Communication Based on Logic, written by the astronomer and computer scientist Alexander Ollongren from the University of Leiden (the Netherlands). This book presents a new Lingua Cosmica totally different from Freudenthal's design. It describes the way the logic of situations in human societies can be formulated in the lingua, also named LINCOS. 
This astrolinguistic system, also designed for use in interstellar communication, is based on modern constructive logic – which assures that all expressions are verifiable. At a deeper, more fundamental level, however, astrolinguistics is concerned with the question whether linguistic universalia can be identified which are potentially useful in communication across interstellar distances between intelligence species. In the view of the new LINCOS these might be certain logic descriptions of specific situations and relations (possibly in an Aristotelian sense). Kadri Tinn's (Astronomy for Humans) review of Ollongren's book recognised that aspect – she wrote: See also Alien language Alien language in science fiction Wow! signal References Further reading LINCOS: Design of a Language for Cosmic Intercourse, Part I, by Hans Freudenthal, Professor of Mathematics and Logic, University of Utrecht, The Netherlands, 224 pp. North Holland Publishing Company, Amsterdam 1960. Astrolinguistics: Design of a Linguistic System for Interstellar Communication Based on Logic, by Alexander Ollongren, Professor of Computer Science and Dynamical Astronomy, University of Leiden, The Netherlands, 248 pp. Springer Publishers, New York 2013 Extraterrestrial life Alien language
Astrolinguistics
Astronomy,Biology
949
75,225,735
https://en.wikipedia.org/wiki/Ivarmacitinib
Ivarmacitinib (SHR0302) is a small molecule drug and selective janus kinase 1 (JAK1) inhibitor. It is being developed for ulcerative colitis, eczema, alopecia areata, and graft-versus-host disease. References Janus kinase inhibitors Pyrrolopyrimidines Thiadiazoles Ureas
Ivarmacitinib
Chemistry
84
8,756,738
https://en.wikipedia.org/wiki/Resource%20leveling
In project management, resource leveling is defined by A Guide to the Project Management Body of Knowledge (PMBOK Guide) as "A technique in which start and finish dates are adjusted based on resource limitation with the goal of balancing demand for resources with the available supply." The resource leveling problem can be formulated as an optimization problem, which can be solved with exact algorithms or with meta-heuristic methods. When performing project planning activities, the manager will attempt to schedule certain tasks simultaneously. When more resources such as machines or people are needed than are available, or perhaps a specific person is needed for both tasks, the tasks will have to be rescheduled, concurrently or even sequentially, to manage the constraint. Project planning resource leveling is the process of resolving these conflicts. It can also be used to balance the workload of primary resources over the course of the project[s], usually at the expense of one of the traditional triple constraints (time, cost, scope). When using specially designed project software, leveling typically means resolving conflicts or overallocations in the project plan by allowing the software to calculate delays and update tasks automatically. Project management software leveling requires delaying tasks until resources are available. In more complex environments, resources could be allocated across multiple, concurrent projects, thus requiring the process of resource leveling to be performed at the company level. In either definition, leveling can result in a later project finish date if the tasks affected are on the critical path. Resource leveling is also useful in the world of maintenance management. Many organizations have maintenance backlogs. These backlogs consist of work orders. In a "planned state" these work orders have estimates such as 2 electricians for 8 hours. These work orders have other attributes such as report date, priority, asset operational requirements, and safety concerns. These same organizations have a need to create weekly schedules. Resource-leveling can take the "work demand" and balance it against the resource pool availability for the given week. The goal is to create this weekly schedule in advance of performing the work. Without resource-leveling, the organization (planner, scheduler, supervisor) is most likely performing subjective selection. For the most part, when it comes to maintenance scheduling, there is less, if any, task interdependence, and therefore less need to calculate critical path and total float. See also Resource allocation References External links Project Management for Construction, by Chris Hendrickson Resource-Constrained Project Scheduling: Past Work and New Directions, by Bibo Yang, Joseph Geunes, William J. O'Brien Petri Nets for Project Management and Resource Levelling, by V. A. Jeetendra, O. V. Krishnaiah Chetty, J. Prashanth Reddy Schedule (project management)
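The leveling behaviour described above (delaying a task until enough resource capacity is free for its whole duration) can be sketched with a small greedy routine. The task data, the capacity value and every name below are invented for illustration; this is a toy sketch, not the PMBOK-prescribed procedure or any specific tool's algorithm.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: int   # working days
    demand: int     # resource units needed on each of those days (e.g. electricians)

CAPACITY = 3        # resource units available per day (assumed for the example)

def level(tasks):
    """Greedy leveling: start each task on the earliest day where its per-day
    demand fits within the remaining capacity for its entire duration."""
    usage = {}                          # day -> units already committed
    schedule = {}
    for task in tasks:                  # tasks assumed already priority-ordered
        start = 0
        while any(usage.get(d, 0) + task.demand > CAPACITY
                  for d in range(start, start + task.duration)):
            start += 1                  # delay the task by one day and retry
        for d in range(start, start + task.duration):
            usage[d] = usage.get(d, 0) + task.demand
        schedule[task.name] = (start, start + task.duration)
    return schedule

if __name__ == "__main__":
    work_orders = [Task("rewire pump", 2, 2),
                   Task("inspect motor", 3, 2),
                   Task("replace breaker", 1, 3)]
    print(level(work_orders))   # later tasks are pushed out until capacity frees up

As in the text, delaying tasks this way can push out the finish date whenever the delayed tasks lie on the critical path.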
Resource leveling
Physics
569
30,224,834
https://en.wikipedia.org/wiki/List%20of%20ice%20cream%20varieties%20by%20country
There are many ice cream varieties around the world. Argentina While industrial ice cream exists in Argentina and can be found in supermarkets, restaurants or kiosks, and ice cream pops are sold on some streets and at the beaches, the most traditional Argentine helado (ice cream) is very similar to Italian gelato, rather than US-style ice cream, and it has become one of the most popular desserts in the country. Among the most famous manufacturers are "Freddo," "Persicco," "Chungo", "Cremolatti" and "Munchi's," all of them located in Buenos Aires. Each city has its own heladerías (ice cream parlors) which offer different varieties of creamy and water-based ice creams, including both standard and regional flavors. There are hundreds of flavors, but Argentina's most traditional and popular one is dulce de leche, which has become popular abroad, especially in the US. There are two kinds of heladerías in Argentina: the cheaper ones which sell ice cream with artificial ingredients (like Helarte, Pirulo, Sei Tu and the largest one, Grido), and the ones that sell helado artesanal, made with natural ingredients and usually distinguished by a logo featuring an ice cream cone and the letters HA. There are no regulations in Argentina regarding the amount of milk an ice cream can have. In fact, all ice cream parlors serve both cream-based and water-based ice cream (helado a la crema and helado al agua respectively). Instead, the distinctions are made according to the quality of the ingredients. A standard Argentine cone or cup contains two different flavors of ice cream. In addition to these, most heladerías offer ice-cream-based desserts like Bombón Suizo (Swiss Bonbon: chocolate-covered chantilly ice cream filled with dulce de leche and sprinkled with nuts), Bombón Escocés (Scottish Bonbon: same as the Swiss Bonbon, only with chocolate ice-cream and white chocolate topping), Cassata (strawberry, vanilla and chocolate ice cream) and Almendrado (almond ice cream sprinkled with almond praline). Australia and New Zealand Per capita, Australians and New Zealanders are among the leading ice cream consumers in the world, eating 18 litres and 20 litres each per year respectively, behind the United States where people eat 23 litres each per year. Brands include Tip Top, Streets, Peters, Sara Lee, New Zealand Natural, Cadbury, Baskin-Robbins and Bulla Dairy Foods. Hokey pokey, which consists of vanilla ice cream with chunks of honeycomb, is popular in New Zealand. The flavor is also popular in Australia and Japan. Another New Zealand export is real fruit ice cream, which uses a special machine to blend vanilla ice cream and frozen fruit. The style has recently caught on in the United States, where it is sometimes made with more indulgent ingredients. Goody Goody Gum Drops is a New Zealand flavour of ice cream. It is green, bubble gum flavoured, and laced with gum drops. It is considered to be a polarising flavour, with New Zealanders either loving or hating it. China, Hong Kong, Macao Besides the popular flavors such as vanilla, chocolate, coffee, mango and strawberry, many Chinese ice-cream manufacturers have also introduced other traditional flavors such as black sesame and red bean. In recent years, Hong Kong and Macao dessert houses have also served ice-cream moon cakes during the Mid-Autumn festival (moon festival). Fried ice cream is served at street food stalls in Beijing. 
Finland The first ice cream manufacturer in Finland was the Italian Magi family, who opened the Helsingin jäätelötehdas in 1922 and Suomen Eskimo Oy. Other manufacturers soon followed, such as Pietarsaaren jäätelötehdas (1928–2002). Finland's first ice cream bar opened at the Lasipalatsi in 1936, and at the same time another manufacturer, Maanviljelijäin Maitokeskus, started production. Today, the two largest ice cream manufacturers are Ingman and Nestlé (which bought Valiojäätelö). Finland is also the leading consumer of ice cream in Europe, with 13.7 litres per person in 2003. France In 1651, François Procope opened an ice cream café in Paris, and the product became so popular that during the next 50 years another 250 cafés opened in Paris. Some people eat heart- or log-shaped cakes made of ice cream on New Year's Eve or New Year's Day. Places that make and sell ice cream in France are called glaciers. They sell ice creams, called glaces in French, in many flavors, some of which are typically French. One of the most traditional is the glace plombières, invented in 1815 and still very popular at family events such as weddings. The glace à la Chantilly, made with chantilly cream, is also very common and was created during the 17th century. Another traditional ice cream is the fontainebleau, created in the 18th century near Paris. Nowadays French ice creams with a base of fromage blanc are found especially in the countryside, where farmers make artisanal fromage blanc. Here are some traditional French recipes found in glaciers: Café liégeois: sweetened coffee, coffee-flavored ice cream and chantilly cream. Peach Melba: peaches and raspberry sauce with vanilla ice cream. Bombe glacée: ice cream dessert frozen in a spherical mould. Dame blanche: vanilla ice cream with whipped cream, and warm molten chocolate. Poire belle Hélène: pears poached in sugar syrup served with vanilla ice cream and chocolate syrup. Colonel: lemon sorbet with vodka. Plombières: almond extract, kirsch, and candied fruit. Vacherin glacé: a layer or two of meringue, topped with vanilla ice cream and raspberry sorbet and finished off with a Chantilly cream. Omelette norvégienne: sponge cake, ice cream and meringue, hot on the outside and iced on the inside. Germany One of the first well-known Italian ice cream parlors (Eisdiele) was founded in Munich in 1879 and run by the Sarcletti family. This traditional family business has been handed down from generation to generation ever since. Since the 1920s, when many Italians immigrated and set up business, the traditional ice cream parlors have become very popular. A popular German ice cream dish is Spaghettieis, created by Dario Fontanella in the 1960s and made to look like a plate of spaghetti. About 80% of the ice cream sold in Germany is produced industrially, with the leading manufacturer being Unilever. About 17% is produced commercially and the remaining 3% is produced for the soft serve sector. In 2013, Germany had the largest market for ice cream in Europe, with $2.7 billion in revenue. Ghana In 1962, the Ghanaian treat FanIce was created by the Fan Milk Limited Company. FanIce comes in strawberry, chocolate, and vanilla. FanMilk also makes additional products, though FanIce is the closest to Western ice cream. Pouches of FanIce and other FanMilk products can be bought from men on bikes equipped with chill boxes in any moderately sized town, and in cities large enough for grocery stores. FanMilk can also be bought in tubs for eating at home. 
In 2006, FanMilk was voted best ice-cream in the world. Greece Ice cream in its modern form, or pagotó (), was introduced in Greece alongside its development elsewhere in Europe at the beginning of the 20th century. Ice treats, however, have been enjoyed in the country since ancient times. During the 5th century BC, ancient Greeks ate snow mixed with honey and fruit in the markets of Athens. The father of modern medicine, Hippocrates, encouraged his Ancient Greek patients to eat ice "as it livens the lifejuices and increases the well-being." In the 4th century BC, it was well known that a favorite treat of Alexander the Great was snow ice mixed with honey and nectar. In the modern day, Greek ice cream has been heavily influenced by the Turkish ice cream dondurma; it used to be called Dudurmas, but because of Greco-Turkish relations most Turkish-related foods have been given more Greek names. Greek ice cream recipes have some unique flavors such as Pagoto Kaimaki (), made from mastic resin, which gives it an almost chewy texture, and salepi, used as a thickening agent to increase resistance to melting, both giving a unique taste to the ice cream; Pagoto Elaeolado me syko (), made of olive oil and figs; Pagoto Kataifi cocoa (), made from the shredded filo dough pastry that resembles angel's hair pasta, similar to vermicelli but much thinner; and Pagoto Mavrodaphne (), made from a Greek dessert wine. Fruity Greek spoon sweets are usually served as toppings with Greek-inspired ice cream flavors. India India is one of the largest producers of ice cream in the world, but most of its ice cream is consumed domestically. India also has an ice cream dish known as "Kulfi" or "Matka Kulfi", which is famous in small towns and villages. Major brands include Amul, Havmor, Kwality Wall's, Vadilal, Mother Dairy and Sadguru Dadarwale. In recent times, many domestic ice cream companies have opened exclusive outlets in the metros, and several international ice-cream companies have also started operations in India. Indonesia In Indonesia there is a type of traditional ice cream called "Es Puter" or "stirred ice cream". It is made from coconut milk, pandan leaves and sugar, with flavorings such as avocado, jackfruit, durian, palm sugar, chocolate, red bean, mung bean and various other flavors. Iran Fālūde () or Pālūde () is a Persian sorbet made of thin vermicelli noodles frozen with corn starch, rose water, lime juice, and often ground pistachios. It is a traditional dessert in Iran and Afghanistan. It was brought to the Indian subcontinent during the Mughal period. The faloodeh of Shiraz is famous. Faloodeh is one of the earliest forms of frozen desserts, having existed as early as 400 BC. Ice was brought down from high mountains and stored in tall refrigerated buildings called yakhchals, which were kept cool by windcatchers. There is also a drink called faloodeh, but it is made using other ingredients. Bastani Sorbet Italy Italian ice cream, or gelato as it is known, is a traditional and popular dessert in Italy. Much of the production is still hand-made and flavored by each individual shop in "produzione propria" gelaterias. Gelato is made from whole milk, sugar, sometimes eggs, and natural flavorings. Gelato typically contains 7–8% fat. 
Before the cone became popular for serving ice cream, in English speaking countries, Italian street vendors would serve the ice cream in a small glass dish referred to as a "penny lick" or wrapped in waxed paper and known as a hokey-pokey (possibly a corruption of the Italian – "here is a little"). Some of the most known artisanal gelato machine makers are Italian companies Carpigiani, Crm-Telme, Corema-Telme, Technogel, Cattabriga and high capacity industrial plants made by Catta 27 and Cogil and Teknoice. Japan Ice cream is a popular dessert in Japan, with almost two in five adults eating some at least once a week. From 1999 through 2006, the most popular flavors in Japan have been vanilla, chocolate, matcha (powdered green tea), and strawberry. Other notable popular flavors are milk, caramel, and azuki (red bean). Azuki is particularly favored by people in their 50s and older. Kakigori Laos A typical variety is Laotian vanilla ice cream made from pandan ("Laotian vanilla"). Pakistan Pakistan's most popular ice cream brands include OMORÉ, the products of which are exported to Europe and America, Igloo and the British company Wall's. Beside these, other large ice cream brands from Pakistan include Eat More Ice Cream and Yummy. These companies receive most of their profit from within the country. Pakistani Peshawari ice cream is very famous in South Asia and Middle East. The most popular flavors in Pakistan are pista, qulfi (spelt also as "qulfa", and in Punjabi with a "K-"), vanilla and chocolate. Philippines Sorbetes is a Philippine version for common ice cream usually peddled from carts that roam streets in the Philippines. This should not be confused with the known sorbet. It is also commonly called 'dirty ice cream' because it is sold along the streets exposing it to pollution and that the factory where it comes from is usually unknown; though it is not really "dirty" as the name implies. It is usually served with small wafer or sugar cones and recently, bread buns. Popular ice cream flavors in the Philippines include ube ice cream made from ube (purple yam) and queso ice cream made from cheese. South Korea Potato gelato, also known as Yangyang ice cream, is a type of ice cream made from potato and pepper. It originated in Gangwon Province, South Korea. It is made out of potatoes grown in the Gangwon Province and is a speciality there. Potato gelato was developed with potatoes grown in Yangyang, or Lato Layo. It was created to promote the taste of Gangwon-do, using the specialities of the province as ingredients. Potato gelato is made by churning potatoes that are a speciality of Gangwon-do itself. The gelato is made by using natural ingredients from the Gangwon-do province. Pepper is sprinkled over the potato ice cream, thus earning the name of potato-pepper gelato. Spain Ice cream, in the style of Italian gelato, can be found in many cafes or specialty ice cream stores throughout Spain. Usually the flavors reflect local tastes such as nata, viola, crema catalana, or tiramisu. There are also industrial producers such as Frigo (owned by Unilever), Camy (later merged into Nestlé), Avidesa, Menorquina, many of them are part of transnational groups. The industrial producers also serve ice cream sandwiches and polos, ice cream on a stick, such as Magnum, sometimes with whimsical shapes like the foot-shaped Frigopié. Ice cream is consumed mostly in summer. Hence, some ice cream stores become hot-chocolate cafés in winter. 
Syria Booza ( in Arabic: "milk ice cream", also called "Arabic ice cream") is a mastic-based ice cream. It is elastic, sticky, and resistant to melting in the hotter climates of the Arab world, where it is most commonly found. The ice cream is usually and traditionally made with an ingredient called sahlab () or salep, which provides it with the ability to resist melting. Salep is also a primary ingredient in the Turkish version of this style of ice cream called dondurma. Turkey Dondurma (in , meaning "the ice cream of the city of Maraş", also called , meaning "battered ice cream") is a Turkish mastic ice cream. It is similar to the Syrian dessert booza. Dondurma typically includes the ingredients cream, whipped cream, salep, mastic, and sugar. It is believed to originate from the city and region of Maraş and hence also known as Maraş ice cream. United States In the United States, ice cream made with just cream, sugar, and a flavoring (usually fruit) is sometimes referred to as "Philadelphia style" ice cream. Ice creams made with eggs, usually in the form of frozen custards, are sometimes called "French" ice creams or traditional ice cream. American federal labeling standards require ice cream to contain a minimum of 10% milk fat (about 7 grams (g) of fat per 1/2 cup [120 mL] serving), 20% total milk solids by weight, to weigh no less than 4.5 pounds per gallon (in order to put a limit on replacing ingredients with air), and to contain less than 1.4% egg yolk solids. Federal government regulations pertaining to the process of making ice cream, allowable ingredients, and standards, may be found in Part 135 of Title 21 of the Code of Federal Regulations. Americans consume about 23 liters of ice cream per person per year—the most in the world. As a foodstuff it is deeply ingrained into the American psyche and has been available in America since its founding in 1776: there are records of Thomas Jefferson serving it as a then-expensive treat to guests at his home in Monticello. In American supermarkets it is not uncommon for ice cream and related products to take up a wall full of freezers. All different kinds of ice cream fill the walls with strawberry, chocolate, vanilla, among other flavors. Although chocolate, vanilla, and strawberry are the traditional favorite flavors of ice cream, and once enjoyed roughly equal popularity, vanilla has grown to be the most popular, most likely because of its use as a topping for fruit based pies and its use as the key ingredient for milkshakes. According to the International Ice Cream Association (1994), supermarket sales of ice cream break down as follows: vanilla, 28%; fruit flavors, 15%; nut flavors, 13.5%; candy mix-in flavors, 12.5%; chocolate, 8%; cake and cookie flavors, 7.5%; Neapolitan, 7%; and coffee/mocha, 3%. Other flavors combine to make 5.5%. Sales in ice cream parlors are more variable, as new flavors come and go, but about three times as many people call vanilla their favorite compared to chocolate, the runner-up. References External links The History of Ice Cream in Thailand Ice cream Human geography
List of ice cream varieties by country
Environmental_science
3,829
6,620
https://en.wikipedia.org/wiki/Cotangent%20space
In differential geometry, the cotangent space is a vector space associated with a point on a smooth (or differentiable) manifold ; one can define a cotangent space for every point on a smooth manifold. Typically, the cotangent space, is defined as the dual space of the tangent space at , , although there are more direct definitions (see below). The elements of the cotangent space are called cotangent vectors or tangent covectors. Properties All cotangent spaces at points on a connected manifold have the same dimension, equal to the dimension of the manifold. All the cotangent spaces of a manifold can be "glued together" (i.e. unioned and endowed with a topology) to form a new differentiable manifold of twice the dimension, the cotangent bundle of the manifold. The tangent space and the cotangent space at a point are both real vector spaces of the same dimension and therefore isomorphic to each other via many possible isomorphisms. The introduction of a Riemannian metric or a symplectic form gives rise to a natural isomorphism between the tangent space and the cotangent space at a point, associating to any tangent covector a canonical tangent vector. Formal definitions Definition as linear functionals Let be a smooth manifold and let be a point in . Let be the tangent space at . Then the cotangent space at x is defined as the dual space of Concretely, elements of the cotangent space are linear functionals on . That is, every element is a linear map where is the underlying field of the vector space being considered, for example, the field of real numbers. The elements of are called cotangent vectors. Alternative definition In some cases, one might like to have a direct definition of the cotangent space without reference to the tangent space. Such a definition can be formulated in terms of equivalence classes of smooth functions on . Informally, we will say that two smooth functions f and g are equivalent at a point if they have the same first-order behavior near , analogous to their linear Taylor polynomials; two functions f and g have the same first order behavior near if and only if the derivative of the function f − g vanishes at . The cotangent space will then consist of all the possible first-order behaviors of a function near . Let be a smooth manifold and let x be a point in . Let be the ideal of all functions in vanishing at , and let be the set of functions of the form , where . Then and are both real vector spaces and the cotangent space can be defined as the quotient space by showing that the two spaces are isomorphic to each other. This formulation is analogous to the construction of the cotangent space to define the Zariski tangent space in algebraic geometry. The construction also generalizes to locally ringed spaces. The differential of a function Let be a smooth manifold and let be a smooth function. The differential of at a point is the map where is a tangent vector at , thought of as a derivation. That is is the Lie derivative of in the direction , and one has . Equivalently, we can think of tangent vectors as tangents to curves, and write In either case, is a linear map on and hence it is a tangent covector at . We can then define the differential map at a point as the map which sends to . Properties of the differential map include: is a linear map: for constants and , The differential map provides the link between the two alternate definitions of the cotangent space given above. Since for all there exist such that , we have, i.e. 
All functions in have differential zero; it follows that for every two functions , , we have . We can now construct an isomorphism between and by sending linear maps to the corresponding cosets . Since there is a unique linear map for a given kernel and slope, this is an isomorphism, establishing the equivalence of the two definitions. The pullback of a smooth map Just as every differentiable map between manifolds induces a linear map (called the pushforward or derivative) between the tangent spaces, every such map induces a linear map (called the pullback) between the cotangent spaces, only this time in the reverse direction: The pullback is naturally defined as the dual (or transpose) of the pushforward. Unraveling the definition, this means the following: where and . Note carefully where everything lives. If we define tangent covectors in terms of equivalence classes of smooth maps vanishing at a point, then the definition of the pullback is even more straightforward. Let be a smooth function on vanishing at . Then the pullback of the covector determined by (denoted ) is given by That is, it is the equivalence class of functions on vanishing at determined by . Exterior powers The -th exterior power of the cotangent space, denoted , is another important object in differential and algebraic geometry. Vectors in the -th exterior power, or more precisely sections of the -th exterior power of the cotangent bundle, are called differential -forms. They can be thought of as alternating, multilinear maps on tangent vectors. For this reason, tangent covectors are frequently called one-forms. References Differential topology Tensors
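For concreteness, the usual coordinate expressions for the differential and the pullback can be written out as follows (standard notation, chosen here for illustration rather than quoted from the text above).

% In local coordinates (x^1, ..., x^n) around a point x of M, with basis covectors dx^i,
% the differential of a smooth function f is the cotangent vector
\[
  \mathrm{d}f_x \;=\; \sum_{i=1}^{n} \frac{\partial f}{\partial x^i}(x)\, \mathrm{d}x^i ,
  \qquad
  \mathrm{d}f_x(X_x) \;=\; X_x(f) \ \ \text{for every tangent vector } X_x \in T_xM .
\]
% For a smooth map \varphi : M \to N, the pullback of a covector at \varphi(x)
% is defined as the dual of the pushforward d\varphi_x:
\[
  \bigl(\mathrm{d}\varphi_x^{*}\,\alpha\bigr)(X_x) \;=\; \alpha\bigl(\mathrm{d}\varphi_x(X_x)\bigr),
  \qquad \alpha \in T^{*}_{\varphi(x)}N,\ \ X_x \in T_x M .
\]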
Cotangent space
Mathematics,Engineering
1,077
22,694,038
https://en.wikipedia.org/wiki/Null%20semigroup
In mathematics, a null semigroup (also called a zero semigroup) is a semigroup with an absorbing element, called zero, in which the product of any two elements is zero. If every element of a semigroup is a left zero then the semigroup is called a left zero semigroup; a right zero semigroup is defined analogously. According to A. H. Clifford and G. B. Preston, "In spite of their triviality, these semigroups arise naturally in a number of investigations." Null semigroup Let S be a semigroup with zero element 0. Then S is called a null semigroup if xy = 0 for all x and y in S. Cayley table for a null semigroup Let S = {0, a, b, c} be (the underlying set of) a null semigroup. Then the Cayley table for S is as given below: Left zero semigroup A semigroup in which every element is a left zero element is called a left zero semigroup. Thus a semigroup S is a left zero semigroup if xy = x for all x and y in S. Cayley table for a left zero semigroup Let S = {a, b, c} be a left zero semigroup. Then the Cayley table for S is as given below: Right zero semigroup A semigroup in which every element is a right zero element is called a right zero semigroup. Thus a semigroup S is a right zero semigroup if xy = y for all x and y in S. Cayley table for a right zero semigroup Let S = {a, b, c} be a right zero semigroup. Then the Cayley table for S is as given below: Properties A non-trivial null (left/right zero) semigroup does not contain an identity element. It follows that the only null (left/right zero) monoid is the trivial monoid. The class of null semigroups is: closed under taking subsemigroups closed under taking quotient of subsemigroup closed under arbitrary direct products. It follows that the class of null (left/right zero) semigroups is a variety of universal algebra, and thus a variety of finite semigroups. The variety of finite null semigroups is defined by the identity ab = cd. See also Right group References Semigroup theory
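As a concrete illustration, the Cayley tables referred to above are fully determined by the definitions (every product is 0 in the null semigroup, xy = x in the left zero semigroup, and xy = y in the right zero semigroup); written out as LaTeX arrays they are:

% Null semigroup S = {0, a, b, c}: every product equals 0.
\[
\begin{array}{c|cccc}
\cdot & 0 & a & b & c\\ \hline
0 & 0 & 0 & 0 & 0\\
a & 0 & 0 & 0 & 0\\
b & 0 & 0 & 0 & 0\\
c & 0 & 0 & 0 & 0
\end{array}
\qquad
% Left zero semigroup S = {a, b, c}: xy = x, so each row is constant.
\begin{array}{c|ccc}
\cdot & a & b & c\\ \hline
a & a & a & a\\
b & b & b & b\\
c & c & c & c
\end{array}
\qquad
% Right zero semigroup S = {a, b, c}: xy = y, so each column is constant.
\begin{array}{c|ccc}
\cdot & a & b & c\\ \hline
a & a & b & c\\
b & a & b & c\\
c & a & b & c
\end{array}
\]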
Null semigroup
Mathematics
498
4,707,019
https://en.wikipedia.org/wiki/Throttle
A throttle is a mechanism by which fluid flow is managed by constriction or obstruction. An engine's power can be increased or decreased by the restriction of inlet gases (by the use of a throttle), but usually decreased. The term throttle has come to refer, informally, to any mechanism by which the power or speed of an engine is regulated, such as a car's accelerator pedal. What is often termed a throttle (in an aviation context) is also called a thrust lever, particularly for jet engine powered aircraft. For a steam locomotive, the valve which controls the steam is known as the regulator. Internal combustion engines In an internal combustion engine, the throttle is a means of controlling an engine's power by regulating the amount of fuel or air entering the engine. In a motor vehicle, the control used by the driver to regulate power is sometimes called the throttle, accelerator, or gas pedal. For a gasoline engine, the throttle most commonly regulates the amount of air and fuel allowed to enter the engine. However, in a gasoline direct injection engine, the throttle regulates only the amount of air allowed to enter the engine. Historically, the throttle pedal or lever acted via a direct mechanical linkage. The butterfly valve of the throttle is operated by means of an arm piece, loaded by a spring. This arm is usually directly linked to the accelerator cable, and operates in accordance with the driver's action. The further the pedal is pushed, the wider the throttle valve opens so that more air flows, and the carburetor then responds by creating more fuel flow. Modern engines of both types (gas and diesel) are commonly drive-by-wire systems where sensors monitor the driver controls and, in response, a computerized system controls the flow of fuel and air. This means that the operator does not have direct control over the flow of fuel and air; the Engine Control Unit (ECU) can achieve better control in order to reduce emissions, maximize performance and adjust the engine idle, for example to make a cold engine warm up faster or to account for additional engine loads such as running air-conditioning compressors, in order to avoid engine stalls. The throttle on a gasoline engine is typically a butterfly valve. In a fuel-injected engine, the throttle valve is placed at the entrance of the intake manifold, or housed in the throttle body. In a carbureted engine, it is found in the carburetor. When a throttle is wide open, the intake manifold is usually close to ambient atmospheric pressure. When the throttle is partially closed, the manifold pressure drops further below ambient pressure, producing a manifold vacuum. The power output of a diesel engine is controlled by regulating the quantity of fuel that is injected into the cylinder. Because diesel engines do not need to control air volumes, they usually lack a butterfly valve in the intake tract. An exception to this generalization is newer diesel engines meeting stricter emissions standards, where such a valve is used to generate intake manifold vacuum, allowing the introduction of exhaust gas (see EGR) to lower combustion temperatures and thereby minimize NOx production. In a reciprocating engine aircraft, the throttle control is usually a hand-operated lever or knob. It controls the engine power output, which may or may not be reflected in a change of RPM, depending on the propeller installation (fixed-pitch or constant speed). 
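As a toy illustration of the drive-by-wire arrangement described above (a pedal-position sensor feeding a controller that chooses the throttle opening, with the idle raised for a cold engine or extra accessory load), the following Python sketch may be helpful. The mapping, thresholds and names are invented and greatly simplified; they do not represent any real ECU's logic.

# Toy sketch of drive-by-wire throttle control: the controller reads the
# accelerator-pedal sensor and other inputs, then commands a throttle opening.
# All values and names below are invented for illustration only.

def throttle_command(pedal_position: float,
                     coolant_temp_c: float,
                     ac_compressor_on: bool) -> float:
    """Return a throttle opening between 0.0 (closed) and 1.0 (wide open)."""
    pedal = min(max(pedal_position, 0.0), 1.0)

    # Base pedal map: slightly progressive, so small pedal inputs give fine control.
    opening = pedal ** 1.5

    # Raise the idle opening for a cold engine or extra accessory load,
    # mimicking the idle adjustments mentioned in the text.
    idle = 0.05
    if coolant_temp_c < 60.0:
        idle += 0.03
    if ac_compressor_on:
        idle += 0.02

    return min(max(opening, idle), 1.0)

if __name__ == "__main__":
    print(throttle_command(0.0, 20.0, True))    # cold idle with A/C load
    print(throttle_command(0.5, 90.0, False))   # part throttle, warm engine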
Some modern internal combustion engines do not use a traditional throttle, instead relying on their variable intake valve timing system to regulate the airflow into the cylinders, although the result is the same, albeit with fewer pumping losses. Throttle body In fuel-injected engines, the throttle body is the part of the air intake system that controls the amount of air flowing into the engine, mainly in response to the driver's accelerator pedal input. The throttle body is usually located between the air filter box and the intake manifold, and it is usually attached to, or near, the mass airflow sensor. Often, an engine coolant line also runs through it so that the engine draws intake air at a certain temperature (the engine's current coolant temperature, which the ECU senses through the relevant sensor) and therefore with a known density. The largest piece inside the throttle body is the throttle plate, which is a butterfly valve that regulates the airflow. On many cars, the accelerator pedal motion is communicated via the throttle cable, which is mechanically connected to the throttle linkages, which, in turn, rotate the throttle plate. In cars with electronic throttle control (also known as "drive-by-wire"), an electric actuator controls the throttle linkages and the accelerator pedal connects not to the throttle body, but to a sensor, which outputs a signal proportional to the current pedal position and sends it to the ECU. The ECU then determines the throttle opening based on the accelerator pedal's position and inputs from other engine sensors such as the engine coolant temperature sensor. When the driver presses on the accelerator pedal, the throttle plate rotates within the throttle body, opening the throttle passage to allow more air into the intake manifold, where it is immediately drawn in by the manifold vacuum. Usually a mass airflow sensor measures this change and communicates it to the ECU. The ECU then increases the amount of fuel injected by the injectors in order to obtain the required air-fuel ratio. Often a throttle position sensor (TPS) is connected to the shaft of the throttle plate to provide the ECU with information on whether the throttle is in the idle position, wide-open throttle (WOT) position, or somewhere in between these extremes. Throttle bodies may also contain valves and adjustments to control the minimum airflow during idle. Even in those units that are not "drive-by-wire", there will often be a small solenoid-driven valve, the Idle Air Control Valve (IACV), that the ECU uses to control the amount of air that can bypass the main throttle opening to allow the engine to idle when the throttle is closed. The most basic carbureted engines, such as older small single-cylinder Briggs & Stratton lawn-mower engines, feature a single throttle plate in a basic carburetor with a single venturi. The throttle opening can be varied, and there is always a small hole or other bypass that allows a small amount of air to flow through so that the engine can idle when the throttle is fully closed or nearly so. Some of these engines draw fuel directly up from a small fuel tank which must have some venting (such as in the fuel cap) to allow ambient air pressure to reach the surface of the fuel so that a differential pressure is available. Since air velocity is crucial to the functioning of a carburetor, to keep average air velocity up, larger engines require more complex carburetors with multiple small venturis, typically two or four (these venturis are commonly called "barrels"). 
A typical "2-barrel" carburetor uses a single oval or rectangular throttle plate, and works similarly to a single venturi carburetor, but with two small openings instead of one. A 4-venturi carburetor has two pairs of venturis, each pair regulated by a single oval or rectangular throttle plate. Under normal operation, only one throttle plate (the "primary") opens when the accelerator pedal is pressed, allowing more air into the engine, but keeping overall airflow velocity through the carburetor high (thus improving efficiency). The "secondary" throttle is operated either mechanically when the primary plate is opened past a certain amount, or via engine vacuum, influenced by the position of the accelerator pedal and engine load, allowing for greater air flow into the engine at high RPM and load and better efficiency at low RPM. Multiple 2-venturi or 4-venturi carburetors can be used simultaneously in situations where maximum engine power is of priority. A throttle body is somewhat analogous to the carburetor in a non-injected engine, although it is important to remember that a throttle body is not the same thing as a throttle, and that carbureted engines have throttles as well. A throttle body simply supplies a convenient place to mount a throttle in the absence of a carburetor venturi. Carburetors are an older technology, which mechanically modulate the amount of air flow (with an internal throttle plate) and combine air and fuel together (venturi). Cars with fuel injection don't need a mechanical device to meter the fuel flow, since that duty is taken over by injectors in the intake pathways (for multipoint fuel injection systems) or cylinders (for direct injection systems) coupled with electronic sensors and computers which precisely calculate how long should a certain injector stay open and therefore how much fuel should be injected by each injection pulse. However, they do still need a throttle to control the airflow into the engine, together with a sensor that detects its current opening angle, so that the correct air/fuel ratio can be met at any RPM and engine load combination. The simplest way to do this is to simply remove the carburetor unit, and bolt a simple unit containing a throttle body and fuel injectors on instead. This is known as single-port injection, also known by different marketing names (such as "throttle-body injection" by General Motors and "central fuel injection" by Ford, among others), and it allows an older engine design to be converted from carburetor to fuel injection without significantly altering the intake manifold design. More complex later designs use intake manifolds, and even cylinder heads, specially designed for the inclusion of injectors. Multiple throttle bodies Most fuel injected cars have a single throttle, contained in a throttle body. Vehicles can sometimes employ more than one throttle body, connected by linkages to operate simultaneously, which improves throttle response and allows a straighter path for the airflow to the cylinder head, as well as for equal-distance intake runners of short length, difficult to achieve when all the runners have to travel to certain location to connect to a single throttle body, at the cost of greater complexity and packaging issues. At the extreme, higher-performance cars like the E92 BMW M3 and Ferraris, and high-performance motorcycles like the Yamaha R6, can use a separate throttle body for each cylinder, often called "individual throttle bodies" or ITBs. 
Although rare in production vehicles, these are common equipment on many racing cars and modified street vehicles. This practice harks back to the days when many high performance cars were given one, small, single-venturi carburettor for each cylinder or pair of cylinders (i.e. Weber, SU carburettors), each one with their own small throttle plate inside. In a carburettor, the smaller throttle opening also allowed for more precise and fast carburettor response, as well as better atomization of the fuel when running at low engine speeds. Other engines Steam locomotives normally have the throttle (North American English) or regulator (British English) in a characteristic steam dome at the top of the boiler (although not all boilers feature these). The additional height afforded by the dome helps to avoid any liquid (e.g. from bubbles on the surface of the boiler water) being drawn into the throttle valve, which could damage it, or lead to priming. The throttle is basically a poppet valve, or series of poppet valves which open in sequence to regulate the amount of steam admitted to the steam chests over the pistons. It is used in conjunction with the reversing lever to start, stop and to control the locomotive's power although, during steady-state running of most locomotives, it is preferable to leave the throttle wide open and to control the power by varying the steam cut-off point (which is done with the reversing lever), as this is more efficient. A steam locomotive throttle valve poses a difficult design challenge as it must be opened and closed using hand effort against the considerable pressure (typically ) of boiler steam. One of the primary reasons for later multiple-sequential valves: it is far easier to open a small poppet valve against the pressure differential, and open the others once pressure begins to equalize than to open a single large valve, especially as steam pressures eventually exceeded or even . Examples include the balanced "double beat" type used on Gresley A3 Pacifics. Throttling of a rocket engine means varying the thrust level in-flight. This is not always a requirement; in fact, the thrust of a solid-fuel rocket is not controllable after ignition, and is instead pre-planned by varying the shape of the void down the center of the booster when the fuel is molded. However, liquid-propellant rockets can be throttled by means of valves which regulate the flow of fuel and oxidizer to the combustion chamber. Hybrid rocket engines, such as the one used in Space Ship One, use solid fuel with a liquid oxidizer, and therefore can be throttled. Throttling tends to be required more for powered landings, and launch into space using a single main stage (such as the Space Shuttle), than for launch with multistage rockets. They are also useful in situations where the airspeed of the vehicle must be limited due to aerodynamic stress in the denser atmosphere at lower levels (e.g. the Space Shuttle). Rockets characteristically become lighter the longer they burn, with the changing ratio of thrust:weight resulting in increasing acceleration, so engines are often throttled (or switched off) to limit acceleration forces towards the end of a stage's burn time if it is carrying sensitive cargo (e.g. humans). In a jet engine, thrust is controlled by changing the amount of fuel flowing into the combustion chamber, similar to a diesel engine. Lifespan of the throttle in cars The lifespan of the throttle is not set since it highly depends on the driving style and specific vehicle. 
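To put rough numbers on the steam-locomotive throttle problem described above: the force needed to lift a poppet valve off its seat is approximately the pressure differential times the valve area. The boiler pressure and valve diameters below are illustrative assumptions, since the original figures are not given in the text.

```python
import math

def valve_opening_force(pressure_pa: float, valve_diameter_m: float) -> float:
    """Force (N) needed to lift a poppet valve off its seat against a pressure
    differential, ignoring friction and any balancing arrangement."""
    area = math.pi * (valve_diameter_m / 2.0) ** 2
    return pressure_pa * area

BOILER_PRESSURE = 1.7e6   # Pa, an assumed boiler pressure (roughly 250 psi)

# An assumed large single valve versus a small pilot valve.
for name, dia in [("main valve, 150 mm", 0.150), ("pilot valve, 40 mm", 0.040)]:
    print(f"{name}: ~{valve_opening_force(BOILER_PRESSURE, dia)/1000:.1f} kN")
# -> roughly 30 kN for the large valve versus about 2 kN for the pilot valve
```

Under these assumed figures, the single large valve would take more than an order of magnitude more force to crack open than the small pilot valve, which is the motivation for the multiple-sequential poppet valve arrangement described above.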
The throttle body tends to accumulate deposits after roughly 100,000-150,000 kilometers and then needs to be cleaned. A throttle malfunction may be indicated by an illuminated EPC warning light, which is usually the case with modern Volkswagen Group vehicles; vehicles not equipped with an EPC warning light typically indicate throttle issues with an illuminated check engine symbol. Symptoms of a throttle malfunction include poor idle, decreased engine power, poor fuel economy, and poor acceleration. The most effective way to extend the throttle's lifespan is regular maintenance and cleaning. See also Adapted automobile References External links Engine technology Engine fuel system technology Engine components
Throttle
Technology
2,976
28,343,460
https://en.wikipedia.org/wiki/Polyporus%20umbellatus
Polyporus umbellatus is an edible species of mushroom, found growing on the roots of old beeches or oaks. It is also called the umbrella polypore. Description The fruit body is composed of numerous (sometimes several hundred) caps. They are 1–4 cm in diameter, deeply umbilicate, light brown, and form the extremities of a strong, many-branched stalk. The compound fungus can be up to 40 cm in diameter. The pores are narrow and white. The stalk is whitish grey and originates from a strong, tuber-like underground nodule. The flesh is white and rather soft when young, although it hardens with age. Edibility and cooking Choice edible. Bioactive compounds Polyporus umbellatus may contain bioactive compounds with immunostimulating, anticancer, anti-inflammatory, and hepatoprotective properties. References umbellatus Fungi of Europe Fungi described in 1821 Fungus species
Polyporus umbellatus
Biology
203
47,357,235
https://en.wikipedia.org/wiki/Neural%20efficiency%20hypothesis
The neural efficiency hypothesis proposes that while performing a cognitive task, individuals with higher intelligence levels exhibit lower brain activation in comparison to individuals with lower intelligence levels. This hypothesis suggests that individual differences in cognitive abilities are due to differences in the efficiency of neural processing. Essentially, individuals with higher cognitive abilities utilize fewer neural resources to perform a given task than those with lower cognitive abilities. History Since the late 19th century, there has been a growing interest among psychologists to understand the influence of individual differences in intelligence and the underlying neural mechanisms of intelligence. The Neural efficiency hypothesis was first introduced by Haier et al. in 1988 through a Position Emission Tomography (PET) study aimed at investigating the relationship between intelligence and brain activation. PET is a type of nuclear medicine procedure that measures the metabolic activity of the cells of body tissues. During the study, participants underwent PET of the head while completing different cognitive tasks such as Raven's Advanced Progressive Matrices (RAPM) and Continuous Performance Tests (CPT). The PET Scans showed that task performance activated specific regions of the participant's brain. Also, a negative correlation was found between brain glucose metabolism levels and intelligence test scores. The results of the study indicated that individuals with higher intelligence levels exhibited lower levels of brain glucose metabolism while solving cognitive tasks. A few years later, Haier confirmed the results of the study by replicating it while considering learning as a factor. Research The early studies mainly focused on certain cognitive tasks such as intelligence tests to test the hypothesis, potentially confounding efficiency during the intelligence-test performance with neural efficiency in general. To overcome this limitation recent studies have refined and expanded the hypothesis by applying and testing it in various domains. In one study, researchers used a personal decision-making task to test the NEH which included questions about preferences like, “which profession do you prefer?”. Subjective preferences were used to force participants to make decisions, and preference ratings were used to manipulate the level of decisional conflict. The study found that individuals with higher intelligence test scores displayed less brain activity during simple tasks and greater brain activity during complex tasks, compared to individuals with lower intelligence test scores. This suggested that smarter people can use their brains more effectively by turning on only the areas that are required for the activity at hand. Also, more intelligent people displayed quicker reaction times during challenging tasks. These findings offered fresh evidence in support of the NEH and indicated that the neural efficiency of highly intelligent people can be applied to tasks that are different from typical intelligence tests. Another study focused on understanding the effect of long-term specialized training on an athlete's neural efficiency, using functional neuroimaging while performing a sport-specific task. The results of this study showed that athletes with prolonged experience or “experts” in their domains performed better than novices in terms of speed, accuracy, and efficiency, with lower activity levels in the sensory and motor cortex and less energy expenditure. 
These findings supported the Neural Efficiency Hypothesis (NEH) and proved that individuals who are highly skilled and experienced have more efficient brain functioning. Limitations Recent studies on the Neural Efficiency Hypothesis have identified several limitations in the former research. They have also found several moderating variables, such as task complexity, sex and task type. Task complexity The difficulty level of the task is one of the key moderating variables that influence the neural efficiency hypothesis. In a study, it was found that the hypothesis only holds for easy tasks. For difficult tasks, intelligent individuals may show increased brain activation. The study revealed that participants with high IQ showed weaker activation during easy tasks but had a significant increase from easy to difficult tasks. This pattern was not observed in the average IQ group. The study suggests that the relationship between intelligence and brain activation depends on the difficulty of the task. Sex and task type Former studies have primarily used uniform tasks and have mainly focused on male participants. One study found that neural efficiency was influenced by sex and task content. The study tried to examine possible sex differences in human brain functioning. It aimed at investigating the relationship between intelligence and cortical activation during the cognitive performance in various versions of a task, using brain imaging techniques. The results of the study suggested that, In the verbal task, the females were more likely to produce cortical activation patterns consistent with the NEH. Whereas, in the figural task, the expected neural activation was primarily in the males in comparison to the female participants. This suggested the role of sex and task type as moderating variables. References Intelligence Cognitive tests Biological hypotheses
Neural efficiency hypothesis
Biology
909
1,291,336
https://en.wikipedia.org/wiki/Reinventing%20the%20wheel
To reinvent the wheel is to attempt to duplicate—most likely with inferior results—a basic method that has already previously been created or optimized by others. The inspiration for this idiomatic metaphor is that the wheel is an ancient archetype of human ingenuity (one so profound that it continues to underlie much of modern technology). As it has already been invented and is not considered to have any inherent flaws, an attempt to reinvent it would add no value to it and be a waste of time, diverting the investigator's resources from possibly more worthy goals. Usage The phrase is sometimes used without derision when a person's activities might be perceived as merely reinventing the wheel when they actually possess additional value. For example, "reinventing the wheel" is an important tool in the instruction of complex ideas. Rather than providing students simply with a list of known facts and techniques and expecting them to incorporate these ideas perfectly and rapidly, the instructor instead will build up the material anew, leaving the student to work out those key steps which embody the reasoning characteristic of the field. "Reinventing the wheel" may be an ironic cliche – it is not clear when the wheel itself was actually invented. The modern "invention" of the wheel might actually be a "re-invention" of an age-old invention. Additionally, many different wheels featuring enhancements on existing wheels (such as the many types of available tires) are regularly developed and marketed. The metaphor emphasizes understanding existing solutions, but not necessarily settling for them. In software development In software development, reinventing the wheel is often necessary in order to work around software licensing incompatibilities or around technical and policy limitations present in parts or modules provided by third parties. An example would be to implement a quicksort for a script written in JavaScript and destined to be embedded in a web page. The quicksort algorithm is well known and readily available from libraries for software developers writing general-purpose applications in C++ or Java, but some JavaScript implementations do not provide this specific algorithm. Hence, if a developer wants to reliably use quicksort on their web page, they must "reinvent the wheel" by reimplementing the algorithm. They could conceivably copy it from another web page, but then they could run into copyright and software licensing issues. Reinventing the wheel in this case provides the missing functionality and also avoids copyright issues. Additionally, those new to a language (and especially those new to programming) will often attempt to manually write many functions for which a more robust and optimized equivalent already exists in the standard library or other easily available libraries. While this can be useful as a learning exercise, when done unknowingly the result is often less readable, less reliable, less tested and less optimized software which takes longer to write, test, maintain, and debug. Software projects that are reinvented wheels FreeDOS, a replica of MS-DOS FreeWin95, a replica of Windows 95 ReactOS, a replica of Windows NT Apache Harmony, a replica of Java SE 5 and Java SE 6 ruffle, a replica of Flash Player Related phrases Reinventing the square wheel is the practice of unnecessarily engineering artifacts that provide functionality already provided by existing standard artifacts (reinventing the wheel) and ending up with a worse result than the standard (a square wheel). 
This is an anti-pattern which occurs when the engineer is unaware or contemptuous of the standard solution or does not understand the problem or the standard solution sufficiently to avoid problems overcome by the standard. It is mostly an affliction of inexperienced engineers, or the second-system effect. Many problems contain subtleties that were resolved long ago in mainstream engineering (such as the importance of a wheel's rim being smooth). Anyone starting from scratch, ignoring the prior art, will naturally face these problems afresh, and to produce a satisfactory result they will have to spend time developing solutions for them (most likely the same solutions that are already well known). However, when reinventing the wheel is undertaken as a subtask of a bigger engineering project, rather than as a project in its own right hoping to produce a better wheel, the engineer often does not anticipate spending much time on it. The result is that an underdeveloped, poorly performing version of the wheel is used, when using a standard wheel would have been quicker and easier, and would have given better results. Preinventing the wheel involves delaying a task if it is expected to be undertaken later. An example would be, "We don't want to preinvent the wheel" when discussing a solution to a problem when it is known that the solution is being developed elsewhere. It is not necessarily pejorative. Redefining the wheel is the practice of coming up with new and often abstruse ways of describing things when the existing way of describing them was perfectly adequate. See also Anti-pattern Best practice Design around: an alternative invention that is created in order to avoid patent infringement Not invented here Patent thicket Standing on the shoulders of giants, an expression referring to the re-use of existing ideas Stovepipe system Tragedy of the anticommons References English-language idioms Software engineering folklore Pejorative terms related to technology
Reinventing the wheel
Engineering
1,089
77,313,933
https://en.wikipedia.org/wiki/Elmo%20Motion%20Control
Elmo Motion Control is an engineering company specializing in developing, producing, and selling innovative hardware and software solutions in motion control. The company was founded in 1988 and is based in Petah Tikva, Israel. On September 4, 2022, Elmo was fully acquired by Bosch Rexroth. History Elmo Motion Control was established in 1988 by Haim Monhait. Four years later, in 1992, the company expanded its operations by opening its first subsidiary in the United States. In 2008, Elmo acquired and merged with Control Solutions (Pitronot Bakara), further solidifying its position in the market. In 2015, the company opened an additional production facility in Warsaw, Poland, to meet the growing demand. Over the years, Elmo has steadily expanded its global presence by establishing eight additional subsidiaries worldwide. These include operations in China, Europe, and the APAC region. The most recent subsidiary was opened in Singapore in 2019. Operations Elmo employs over 400 personnel and has its headquarters and manufacturing facilities in Petah Tikva, Israel. The company also has worldwide sales and technical support offices and additional manufacturing facilities. Products and markets Elmo offers complete motion control solutions, ranging from design to delivery, including cutting-edge servo drives, network-based multi-axis motion controllers, power supplies, and integrated servo motors. These solutions can be customized, configured, and simulated using Elmo's proprietary software tools, which are designed to be advanced and easy to use. Elmo's products cater to various industries, such as semiconductors, lasers, robots, drones, life sciences, industrial automation, and extreme environments. Product lines Elmo Motion Control provides various servo drives suitable for various motion requirements, from industrial applications that require high precision and power density to extreme applications designed for critical missions in harsh environments. Since its establishment, Elmo has developed three generations of products, each offering servo drives and motion controllers for both industrial and harsh environments. Platinum's latest product line is known for its EtherCAT networking precision and fully certified functional safety in all its products. Elmo's servo-drive product lines comply with global industry standards. Acquisition by Bosch Rexroth In September 2022, Elmo Motion Control was fully acquired by Bosch Rexroth, a leading global supplier of drive and control technologies. References External links Official website Companies based in Petah Tikva Israeli companies established in 1988 Motion control 2022 mergers and acquisitions
Elmo Motion Control
Physics,Engineering
501
1,110,875
https://en.wikipedia.org/wiki/High-temperature%20electrolysis
High-temperature electrolysis (also HTE, steam electrolysis, or HTSE) is a technology for producing hydrogen from water, or other products such as iron or carbon nanomaterials, at high temperatures: the added thermal energy lowers the electricity needed to split the molecules and opens up new, potentially better electrolytes such as molten salts or hydroxides. Unlike electrolysis at room temperature, HTE operates at elevated temperature ranges depending on the thermal capacity of the materials. Because of the detrimental effects of burning fossil fuels on humans and the environment, HTE has become an attractive and efficient alternative method by which hydrogen can be prepared on a large scale and used as fuel. The vision of HTE is to move towards decarbonization in all economic sectors. The material requirements for this process are: the heat source, the electrodes, the electrolyte, the electrolyzer membrane, and the source of electricity. Principle The process utilizes energy (in the form of heat) from external sources to convert water into steam, which is then passed into an electrolytic system (made up of two electrodes connected to the source of current, an electrolyte, and a membrane). At high temperatures (over 650 °C in most topologies), the materials used to construct the cells become conductive. Therefore, electrochemical reactions begin to occur, and the cell begins to function once it has reached the proper temperature and electricity is supplied while it is being fed with steam. The steam is split into hydrogen (at the cathode) and oxygen (at the anode) according to the equations below: Overall: 2H2O -> 2H2 + O2 Cathode: 2H2O + 2e^{-} -> H2 + 2OH^{-} Anode: 2OH^{-} -> H2O + (1/2)O2 + 2e^{-} Efficiency High-temperature electrolysis is more efficient economically than traditional room-temperature electrolysis because some of the energy is supplied as heat, which is cheaper than electricity, and also because the electrolysis reaction is more efficient at higher temperatures. In fact, at 2500 °C no electrical input is needed at all, because water breaks down into hydrogen and oxygen through thermolysis. Such temperatures are impractical; proposed HTE systems operate between 100 °C and 850 °C. If one assumes that the electricity used comes from a heat engine, it takes 141.86 megajoules (MJ) of heat energy to produce one kg of hydrogen, counting both the HTE process itself and the electricity required. At 100 °C, 350 MJ of thermal energy are required (41% efficient). At 850 °C, 225 MJ are required (64% efficient). Above 850 °C, one begins to exceed the capacity of standard chromium steels to resist corrosion, and it is already no easy matter to design and implement an industrial-scale chemical process to operate at such a high temperature. Materials Solid oxide electrolysis cells (SOECs) are electrochemical devices that function at high temperatures and are used for high-temperature electrolysis. The cell materials must remain both physically and electrochemically stable at these temperatures, so the selection of materials for the electrodes and electrolyte in a solid oxide electrolyser cell is essential. One option being investigated for the process uses yttria-stabilized zirconia (YSZ) electrolytes, nickel (Ni)-cermet steam/hydrogen electrodes, and oxygen electrodes based on oxides of lanthanum (La2O3), strontium and cobalt. Economic potential Even with HTE, electrolysis is a fairly inefficient way to store energy.
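As a numerical cross-check of the efficiency figures quoted in the Efficiency section above, the percentages are simply the hydrogen energy figure (141.86 MJ/kg) divided by the total thermal energy required. The short sketch below only restates that arithmetic (the second value comes out at about 63%, which the text rounds to 64%); it is not an independent thermodynamic model.

```python
# Reproduce the efficiency percentages quoted above:
# energy content of 1 kg of hydrogen divided by the heat required to make it.

H2_ENERGY_MJ_PER_KG = 141.86   # figure for 1 kg of hydrogen, as quoted above

def hte_efficiency(thermal_input_mj: float) -> float:
    """Overall heat-to-hydrogen efficiency for a given thermal input per kg of H2."""
    return H2_ENERGY_MJ_PER_KG / thermal_input_mj

for temp_c, required_mj in [(100, 350.0), (850, 225.0)]:
    print(f"{temp_c} C: {hte_efficiency(required_mj):.0%}")
# -> 100 C: 41%   850 C: 63% (rounded to 64% in the text above)
```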
Significant conversion losses of energy occur both in the electrolysis process, and in the conversion of the resulting hydrogen back into power. At current hydrocarbon prices, HTE can not compete with pyrolysis of hydrocarbons as an economical source of hydrogen, which produces carbon dioxide as a by-product. HTE is of interest as a more efficient route to the production "green" hydrogen, to be used as a carbon neutral fuel and general energy storage. It may become economical if cheap non-fossil fuel sources of heat (concentrating solar, nuclear, geothermal, waste heat) can be used in conjunction with non-fossil fuel sources of electricity (such as solar, wind, ocean, nuclear). Possible supplies of cheap high-temperature heat for HTE are all nonchemical, including nuclear reactors, concentrating solar thermal collectors, and geothermal sources. HTE has been demonstrated in a laboratory at 108 kilojoules (electric) per gram of hydrogen produced, but not at a commercial scale. Advantages and Challenges Obviously, the most notable advantage of HTE is that it provides an opportunity for which green hydrogen is prepared on a large scale, because it has the potential for zero emissions. The process provides an improved reaction kinetics for the splitting of water molecule. Part of the electricity requirement is replaced with heat, which makes it a bit cheaper because electricity is more expensive than heat. However, HTE technology suffered limitations due to: Above 100 °C, the electrolysis of liquid water requires pressurization, and is therefore limited by the working pressures that can be reasonably attained. creating materials that are both chemically and physically stable in conditions of intense oxidation and reduction, as well as high working temperatures. chemical and physical stability at low electrical conductivities, high working temperatures, and/or ionic concentrations. Alternatives There are hundreds of thermochemical cycles known to use heat to extract hydrogen from water. For instance, the thermochemical sulfur-iodine cycle. Since the electricity generation step has a fairly low efficiency and is eliminated, thermochemical production might reach higher efficiencies than HTE. However, large-scale thermochemical production will require significant advances in materials that can withstand high-temperature, high-pressure, highly corrosive environments. United States Department of Energy The DOE Office of Nuclear Energy has demonstration projects to test 3 nuclear facilities with high-temperature electrolysis in the United States at: Nine Mile Point Nuclear Generating Station in Oswego, NY Davis–Besse Nuclear Power Station in Oak Harbor, Ohio Prairie Island Nuclear Power Plant in Red Wing, Minnesota Mars ISRU High temperature electrolysis with solid oxide electrolyser cells was used to produce 5.37 grams of oxygen per hour on Mars from atmospheric carbon dioxide for the Mars Oxygen ISRU Experiment in the NASA Mars 2020 Perseverance rover, using zirconia electrolysis devices. See also Office of Nuclear Energy High-pressure electrolysis References U.S. DOE high-temperature electrolysis Footnotes Electrolysis Hydrogen production
High-temperature electrolysis
Chemistry
1,373
23,970,576
https://en.wikipedia.org/wiki/TopoFlight
TopoFlight is a three-dimensional flight planning software for photogrammetric flights. Originally conceived by a team of experts in the mapping industry, it has been in use since 2003. The program is used to facilitate the planning of flight lines with the help of a Digital Terrain Model (DTM), to document the flight plan and transfer it into the flight management system of the camera (for instance SoftNav, TrackAir, ASCOT or CCNS4), to calculate the costs of photogrammetric flight and subsequent photogrammetric products with the aid of Microsoft Excel as well as flight parameters, and to complete the post checking of a flight (flying height, length overlap, and side lap). Coordinates that have been calculated can be exported to be used during flight. TopoFlight is able to work with frame, line, and LIDAR sensors. the software is at version 10.5.3. History from 2003–2015 Released in April 2015 The TopoFlight Mission Planner has been upgraded to version 9.5. TopoFlight does use the edge of the image to calculate side overlap and length overlap. It actually calculates over the whole covered area by photos. It is easier now to plan the flights with cameras, sensors and LiDAR systems. Maps can be directly downloaded from Google into TopoFlight with user selectable resolution. Due to a change in Google’s API, the Google Maps Tool in the TopoFlight Program had to be adapted. Improvements, like enhanced calculation of side lap, importing from TIFF DTM, importing of XYZOPK files for quality control and many other features have been completed. Large flight plans in fairly flat areas can be computed much faster now by switching OFF the ‘Precise Calculation' option. Version 7 Released in May 2009 In 2008 at the International Congress on Geomatic and Surveying Engineering in Valencia, Spain, Professor Jorge Delgado (specializing in cartographic engineering, geodesy and photogrammetry) from the University of Jaen in Spain as well as Klaus Budmiger (of Flotron AG) et al. presented an oral presentation concerning the TopoFlight flight planning system (see contribution 22 at the International Congress on Geomatic and Surveying Engineering) Version 6 Version 6 moved from a 32 bit to a 64 bit system. Version 6 Beta was released for testing in January 2008. It was available to existing users for testing in February. By April of that year, it was still not fully operational. There were issues with the latitude and longitude with respect to measurements in feet that had to be corrected. Eventually v.6 would allow for constant latitude calculations as well as: Google Earth export, more data formats, new projection management, faster calculation of overlap between strips, use of .prj files, LIDAR flight planning, automatic checking over the internet if a new version is available, and downloading it, licensing with USB hardlocks (Aladdin), and coordinate grid display. By May 2008 the full version was operational and being distributed. Version 5 2007 saw the advent of Version 5 of TopoFlight. It allowed for projections to be made using the Universal Transverse Mercator coordinate system (UTM). DTM importing functions began to be addressed with this version as it was not possible to import DEM data without reprojecting it as DTM first. Version 5 could only plan in projected coordinates but could output the flight lines and image centers to lat/long. Input required a text file with projected coordinates (like UTM). These issues were going to be changed for v.6. Another issue that was encountered in v. 
5 that needed to be addressed was the impossibility to perform flight planning by filling a "horseshoe shaped" area of interest. The break line tool had to be used line per line to modify the new lines to the desired area of interest manually. After this, an enumeration could be performed. Version 4 Version 4 was the first version available for sale to the public. Version 1 In 2003 the first version of TopoFlight was created mainly for internal use by specialized, technical professionals. File formats TopoFlight works with multiple layers. The generated flight plans are stored in the widely used "shape format" (ESRI/ARC VIEW). Additional maps can be attached as reference files. These maps include: Topographic maps Project area Existing control points Flight navigation maps The reference files can be in the following formats: SHAPE from ESRI DXF from AutoCAD DGN from Mirostation TIFF with tfw- header for all raster files Features Best fit flying height – Calculation of the best flying height to achieve the desired image scale as well as minimum and maximum image scales for given flying height. Coordinate transformations – Transformation of the coordinates from the local grid to another system (such as WGS84). Calculation of image centers – calculation of coordinates of each image with image scales and overlap. Calculation of the effective covered area by the images of each strip Area of side lap – Calculation of the side lap between two neighboring flight lines. Calculation of costs – Calculation of costs of flight and photogrammetric products which can be transferred into Excel. Custom forms can be later defined. List of coordinates – Transfer of the flight parameters to Excel. Custom forms can later be defined. Ground control points – The coordinates of existing ground control points can be imported and annotated. They can also be placed with a mouse click to show the surveyor where to paint and measure a new ground control point. Exporting the flight plan – The plot can be exported either through SHAPE files, DXF format, or in TIFF format with a TFW header file. Transfer to flight management system – coordinates can be exported to ASCOT, CCNS, or TrackAir. Check overlap for aerial triangulation – Check if the minimal overlap is achieved over the whole strip area. Create image indexes – The coordinates of image centers, stored in a text file can be read by TopoFlight. Users TopoFlight is currently in use in 19 countries including the United States, Germany, Brazil, Mexico, Canada, Austria, Italy and others. See also Aerial photography Aerial survey Orthophoto Photogrammetry Photomapping Remote Sensing Topography References TopoFlight Web Page International Congress on Geomatic and Surveying Engineering (Contribution 22) Budmiger, K, Delgado J, and Perez J. Planificacion y Control de la Calidad de Vuelos Fotogrametricos. "El Sistema TopoFlight." Spain, 2006. GIM International. "TopoFlight Included in Filanda Flight Planning Tool." 2007. Photogrammetry Aerial photography Flight planning Software features 3D graphics software Lidar
TopoFlight
Technology
1,366
51,075,791
https://en.wikipedia.org/wiki/Kepler-1229
Kepler-1229 is a red dwarf star located about away from the Earth in the constellation of Cygnus. It is known to host a super-Earth exoplanet within its habitable zone, Kepler-1229b, which was discovered in 2016. Nomenclature and history Prior to Kepler observation, Kepler-1229 had the 2MASS catalogue number 2MASS J19495680+4659481. In the Kepler Input Catalog it has the designation of KIC 10027247, and when it was found to have a transiting planet candidate it was given the Kepler object of interest number of KOI-2418. Planetary candidates were detected around the star by NASA's Kepler Mission, a mission tasked with discovering planets in transit around their stars. The transit method that Kepler uses involves detecting dips in brightness in stars. These dips in brightness can be interpreted as planets whose orbits pass in front of their stars from the perspective of Earth, although other phenomenon can also be responsible which is why the term planetary candidate is used. Following the acceptance of the discovery paper, the Kepler team provided an additional moniker for the system of "Kepler-1229". The discoverers referred to the star as Kepler-1229, which is the normal procedure for naming the exoplanets discovered by the spacecraft. Hence, this is the name used by the public to refer to the star and its planet. Candidate planets that are associated with stars studied by the Kepler Mission are assigned the designations ".01" etc. after the star's name, in the order of discovery. If planet candidates are detected simultaneously, then the ordering follows the order of orbital periods from shortest to longest. Following these rules, there was only one candidate planet were detected, with an orbital period of 86.829 days. The designation b, derives from the order of discovery. The designation of b is given to the first planet orbiting a given star, followed by the other lowercase letters of the alphabet. In the case of Kepler-1229, there was only one planet, so only the letter b is used. The name Kepler-1229 derives directly from the fact that the star is the catalogued 1,229th star discovered by Kepler to have confirmed planets. Stellar characteristics Kepler-1229 is a red dwarf star that is approximately 54% the mass of and 51% the radius of the Sun. It has a temperature of 3784 K and is roughly 3.72 billion years old. In comparison, the Sun is about 4.6 billion years old and has a temperature of 5778 K. The star is slightly poor in metals, with a metallicity ([Fe/H]) of about −0.06, or about 87% of the amount of iron and other heavier metals found in the Sun. The star's luminosity is somewhat normal-low for a star like Kepler-1229, with a luminosity of around 4.8% of that of the solar luminosity. The star's apparent magnitude, or how bright it appears from Earth's perspective, is 15.474. Therefore, it is too dim to be seen with the naked eye. Planetary system The only known planet transits the star; this means that the planet's orbit appear to cross in front of their star as viewed from the Earth's perspective. Its inclination relative to Earth's line of sight, or how far above or below the plane of sight it is, vary by less than one degree. This allows direct measurements of the planet's periods and relative diameters (compared to the host star) by monitoring the planet's transit of the star. Kepler-1229b is a super-Earth, likely rocky, with a radius of 1.4 , and it orbits well within the habitable zone. 
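The stellar and planetary parameters quoted above allow some rough back-of-the-envelope checks: the transit depth follows from the radius ratio, the orbital distance from Kepler's third law, and the stellar flux at the planet from the star's luminosity. The sketch below is only an order-of-magnitude illustration using the rounded values in the text, not the published analysis.

```python
import math

# Rounded values taken from the text above.
R_STAR_SUN = 0.51      # stellar radius, in solar radii
M_STAR_SUN = 0.54      # stellar mass, in solar masses
L_STAR_SUN = 0.048     # stellar luminosity, in solar luminosities
R_PLANET_EARTH = 1.4   # planet radius, in Earth radii
PERIOD_DAYS = 86.829   # orbital period

R_SUN_KM, R_EARTH_KM = 695_700.0, 6_371.0

# Transit depth: fraction of starlight blocked = (R_planet / R_star)^2.
depth = (R_PLANET_EARTH * R_EARTH_KM / (R_STAR_SUN * R_SUN_KM)) ** 2
print(f"transit depth ~ {depth * 1e6:.0f} ppm")

# Semi-major axis from Kepler's third law (a^3 = M * P^2, in AU, M_sun, years).
a_au = (M_STAR_SUN * (PERIOD_DAYS / 365.25) ** 2) ** (1 / 3)
print(f"orbital distance ~ {a_au:.2f} AU")

# Stellar flux at the planet relative to Earth's insolation.
flux = L_STAR_SUN / a_au ** 2
print(f"incident flux ~ {flux:.2f} times Earth's")
```

These rough numbers (a transit depth of a few hundred parts per million, an orbit of roughly a third of an astronomical unit, and about half of Earth's insolation) are consistent with the statement that the planet orbits within the habitable zone.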
In terms of stellar flux, radius, and equilibrium temperature, Kepler-1229b is broadly similar to, and in some respects an analog of, the potentially habitable exoplanet Kepler-62f. References Planetary systems with one confirmed planet M-type main-sequence stars Planetary transit variables Cygnus (constellation) 2418
Kepler-1229
Astronomy
852
39,379,024
https://en.wikipedia.org/wiki/NGC%204845
NGC 4845 (also known as NGC 4910) is a spiral galaxy located in the constellation Virgo, around 65 million light years away. The galaxy was originally discovered by William Herschel in 1786. It is a member of the NGC 4753 Group of galaxies, which is a member of the Virgo II Groups, a series of galaxies and galaxy clusters strung out from the southern edge of the Virgo Supercluster. The galaxy has a supermassive black hole, called IGR J12580+0134, at its center, with a mass of around 300,000 solar masses. In 2013, the European Space Agency (ESA) observed the black hole absorbing matter from a nearby low-mass object, possibly a brown dwarf. The resulting X-ray flare was detected by ESA's INTEGRAL telescope. Gallery References External links Virgo (constellation) Unbarred spiral galaxies 4845 8087 044392
NGC 4845
Astronomy
185
15,368,504
https://en.wikipedia.org/wiki/Computational%20gene
A computational gene is a molecular automaton consisting of a structural part and a functional part; and its design is such that it might work in a cellular environment. The structural part is a naturally occurring gene, which is used as a skeleton to encode the input and the transitions of the automaton (Fig. 1A). The conserved features of a structural gene (e.g., DNA polymerase binding site, start and stop codons, and splicing sites) serve as constants of the computational gene, while the coding regions, the number of exons and introns, the position of start and stop codon, and the automata theoretical variables (symbols, states, and transitions) are the design parameters of the computational gene. The constants and the design parameters are linked by several logical and biochemical constraints (e.g., encoded automata theoretic variables must not be recognized as splicing junctions). The input of the automaton are molecular markers given by single stranded DNA (ssDNA) molecules. These markers are signalling aberrant (e.g., carcinogenic) molecular phenotype and turn on the self-assembly of the functional gene. If the input is accepted, the output encodes a double stranded DNA (dsDNA) molecule, a functional gene which should be successfully integrated into the cellular transcription and translation machinery producing a wild type protein or an anti-drug (Fig. 1B). Otherwise, a rejected input will assemble into a partially dsDNA molecule which cannot be translated. A potential application: in situ diagnostics and therapy of cancer Computational genes might be used in the future to correct aberrant mutations in a gene or group of genes that can trigger disease phenotypes. One of the most prominent examples is the tumor suppressor p53 gene, which is present in every cell, and acts as a guard to control growth. Mutations in this gene can abolish its function, allowing uncontrolled growth that can lead to cancer. For instance, a mutation at codon 249 in the p53 protein is characteristic for hepatocellular cancer. This disease could be treated by the CDB3 peptide which binds to the p53 core domain and stabilises its fold. A single disease-related mutation can be then diagnosed and treated by the following diagnostic rule: Such a rule might be implemented by a molecular automaton consisting of two partially dsDNA molecules and one ssDNA molecule, which corresponds to the disease-related mutation and provides a molecular switch for the linear self-assembly of the functional gene (Fig. 2). The gene structure is completed by a cellular ligase present in both eukaryotic and prokaryotic cells. The transcription and translation machinery of the cell is then in charge of therapy and administers either a wild-type protein or an anti-drug (Fig. 3). The rule (1) may even be generalised to involve mutations from different proteins allowing a combined diagnosis and therapy. In this way, computational genes might allow implementation in situ of a therapy as soon as the cell starts developing defective material. Computational genes combine the techniques of gene therapy which allows to replace in the genome an aberrant gene by its healthy counterpart, as well as to silence the gene expression (similar to antisense technology). Challenges Although mechanistically simple and quite robust on molecular level, several issues need to be addressed before an in vivo implementation of computational genes can be considered. First, the DNA material must be internalised into the cell, specifically into the nucleus. 
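The diagnostic rule sketched above ("if the disease-related marker is present, assemble the functional output gene; otherwise reject") is, at its core, a small finite automaton. The following sketch mimics that accept/reject logic in software; the marker sequence, gene fragments, and string-matching check are invented placeholders used purely for illustration, not actual p53 sequences or the published molecular design.

```python
# Toy model of a computational gene's accept/reject logic.
# All sequences below are hypothetical placeholders, not real p53 sequences.

MUTATION_MARKER = "TTACGGA"   # ssDNA input signalling the disease-related mutation
FRAGMENT_A = "ATGGCT"         # 5' part of the therapeutic gene (placeholder)
FRAGMENT_B = "GGTTAA"         # 3' part of the therapeutic gene (placeholder)

def computational_gene(cell_ssdna: list[str]) -> str | None:
    """Return the assembled functional gene if the disease marker is detected,
    otherwise None (standing in for the partial, untranslatable assembly)."""
    if MUTATION_MARKER in cell_ssdna:                       # input accepted
        return FRAGMENT_A + MUTATION_MARKER + FRAGMENT_B    # linear self-assembly
    return None                                             # input rejected

print(computational_gene(["GATTACA", "TTACGGA"]))  # marker present -> gene string
print(computational_gene(["GATTACA"]))             # marker absent  -> None
```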
In fact, the transfer of DNA or RNA through biological membranes is a key step in the drug delivery. Some results show that nuclear localisation signals can be irreversibly linked to one end of the oligonucleotides, forming an oligonucleotide-peptide conjugate that allows effective internalisation of DNA into the nucleus. In addition, the DNA complexes should have low immunogenicity to guarantee their integrity in the cell and their resistance to cellular nucleases. Current strategies to eliminate nuclease sensitivity include modifications of the oligonucleotide backbone such as methylphosphonate and phosphorothioate (S-ODN) oligodeoxynucleotides, but along with their increased stability, modified oligonucleotides often have altered pharmacologic properties. Finally, similar to any other drug, DNA complexes could cause nonspecific and toxic side effects. In vivo applications of antisense oligonucleotides showed that toxicity is largely due to impurities in the oligonucleotide preparation and lack of specificity of the particular sequence used. Undoubtedly, progress on antisense biotechnology will also result in a direct benefit to the model of computational genes. See also Biocomputers Cyclic enzyme system DNA computing Finite-state machine Molecular electronics Nanobiotechnology Nanomedicine References Nanotechnology
Computational gene
Materials_science,Engineering
1,019
2,074,009
https://en.wikipedia.org/wiki/Trisil
Trisil is a trade name for a thyristor surge protection device, an electronic component designed to protect electronic circuits against overvoltage. Unlike a transient-voltage-suppression diode such as the Transil, a Trisil acts as a crowbar device, switching on when the voltage across it exceeds its breakover voltage. Overview A Trisil is bidirectional, behaving the same way in both directions. It is essentially a voltage-controlled triac without a gate. The behavior of a Trisil is similar to that of a SIDAC, but unlike the SIDAC, Trisil devices are commonly used to protect circuits from overvoltage; they act faster and can handle more current. In 1982, the only manufacturer was Thomson SA; a successor company, ST Microelectronics, continues to make the devices. This type of crowbar protector is widely used for protecting telecom equipment from lightning-induced transients and induced currents from power lines. Other manufacturers of this type of device include Bourns (TISP) and Littelfuse (SIDACtor). Rather than relying on the natural breakdown voltage of the device, an extra region is fabricated within the device to form a Zener diode, which allows much tighter control of the breakdown voltage. It is also possible to make gated versions of this type of protector. In this case, the gate is connected to the telecom circuit's power supply (via a diode or transistor) so that the device will crowbar if the transient exceeds the power supply voltage. The main advantage of this configuration is that the protection voltage tracks the power supply, eliminating the problem of selecting a particular breakdown voltage for the protection circuit. See also Transil Zener diode References External links Overvoltage protection Trisil/Transil Comparison, ST Application Note (PDF) Solid state switches Voltage stability
Trisil
Physics
377
61,618,427
https://en.wikipedia.org/wiki/Mediterranean%20Biogeographic%20Region
The Mediterranean Biogeographic Region is the biogeographic region around and including the Mediterranean Sea. The term is defined by the European Environment Agency as applying to the land areas of Europe that border on the Mediterranean Sea, and the corresponding territorial waters. The region is rich in biodiversity and has many endemic species. The term may also be used in the broader sense of all the lands of the Mediterranean Basin, or in the narrow sense of just the Mediterranean Sea. Extent The European Commission defines the Mediterranean Biogeographic Region as consisting of the Mediterranean Sea, Greece, Malta, Cyprus, large parts of Portugal, Spain and Italy, and a smaller part of France. The region includes 20.6% of European Union territory. Climate The region has cool humid winters and hot dry summers. Wladimir Köppen divided his "Cs" mediterranean climate classification into "Csa" with a highest mean monthly temperature over and "Csb" where the mean monthly temperature was always lower than . The region may also be subdivided into dry zones such as Alicante in Spain, and humid zones such as Cinque Terre in Italy. Terrain The region has generally hilly terrain and includes islands, high mountains, semi-arid steppes and thick Mediterranean forests, woodlands, and scrub with many aromatic plants. There are rocky shorelines and sandy beaches. The region has been greatly affected by human activity such as livestock grazing, cultivation, forest clearance and forest fires. In recent years tourism has put greater pressure on the shoreline environment. Biodiversity The Mediterranean Biogeographic Region is rich in biodiversity and has many endemic species. The region has more plants species than all the other biogeographical regions of Europe combined. The wildlife and vegetation are adapted to the unpredictable weather, with sudden downpours or strong winds. Coastal wetlands are home to endemic species of insects, amphibians and fish, which provide food for large flocks of waders and dabbling ducks. The sea is also rich in marine life, including many endemic species. The shallow coastal waters hold huge Posidonia beds, underwater meadows that harbor rare crustaceans, sponges and Ascidiacea (sea squirts). As of 2009 the region was not sufficiently covered in the EuMon database. Recruiting volunteers to monitor species may help address the issue. The Iberian Peninsula is particularly rich in species, including rare and endemic species, due to its complex climate and terrain, and because it provided refugia during the glacial period of the Pleistocene. A 2011 study of spiders in the coastal dunes of Portugal showed that the primary factor in beta diversity was a broad-scale gradient of mediterraneity. Diversity was lower in the northern dunes, which are in the Eurosiberian biogeographic region, and higher in the center and south in the Mediterranean biogeographic region Related concepts Mediterranean Basin Arco Aguilar and Rodríguez Delgado state that three large floristic regions originated in the Mesogean region after the Pleistocene glaciation, the Mediterranean, Saharo-North-Arabian and Iranian-Turanian. Academics such as Ana Isabel Queiroz and Simon Pooley consider that the Mediterranean biogeographic region includes all of the Mediterranean Sea and all the lands surrounding it that have a Mediterranean-type climate (MTC). 
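The Köppen subdivision described in the Climate section above (Csa versus Csb) turns on the mean temperature of the warmest month. The conventional Köppen threshold is 22 °C; since the figure is missing from the text above, it is used here as an assumption. The sketch below classifies a site from its twelve monthly mean temperatures; the example data are invented.

```python
# Distinguish Koppen "Csa" from "Csb" Mediterranean climates from monthly means.
# The 22 C warmest-month threshold is the conventional Koppen value (an
# assumption here, since the figure is missing from the text above).

WARM_MONTH_THRESHOLD_C = 22.0

def mediterranean_subtype(monthly_mean_temps_c: list[float]) -> str:
    """Return 'Csa' if the warmest month averages at or above the threshold,
    else 'Csb'. Assumes the site already satisfies the other Cs criteria
    (mild wet winters, dry summers), which are not checked here."""
    warmest = max(monthly_mean_temps_c)
    return "Csa" if warmest >= WARM_MONTH_THRESHOLD_C else "Csb"

# Illustrative (invented) monthly means for a hot-summer and a warm-summer site.
hot_summer_site = [11, 12, 14, 16, 19, 23, 26, 26, 24, 20, 15, 12]
warm_summer_site = [8, 9, 11, 14, 17, 20, 21.5, 21.5, 19, 15, 11, 9]
print(mediterranean_subtype(hot_summer_site))   # -> Csa
print(mediterranean_subtype(warm_summer_site))  # -> Csb
```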
The Mediterranean Basin is about long, from Lebanon in the east to Portugal in the west, and about wide, from Morocco and Libya in the south to Italy in the north. The region contains about 1.6% of the world's dry land but has about 10% of the known vascular plant species, with over 25,000 identified to date. More than half of them are endemic. The biogeographic origins of the non-indigenous plants of the region include northern and central Eurasia, southwest and central Asia, North Africa, Arabia and the tropics of Africa. For example, the Mediterranean species of the Androcymbium genus migrated northward from tropical Africa via the Eastern African mountain ranges to reach the Mediterranean in the Middle Miocene, at a time when the climate was quite different from today. Molecular phylogeography is starting to give new insights into the origins and evolution of Mediterranean species. Mediterranean Sea An analysis of literature has found about 17,000 marine species recorded as occurring in the Mediterranean Sea. This estimate is probably low, with microbes significantly under-reported, and with large gaps in knowledge of the deep sea areas and the southern and eastern part of the sea. Biodiversity is generally greater in the coastal and shallow regions, lower in deeper areas. The ecology is threatened by habitat loss or degradation from fishing, pollution, climate change, eutrophication and alien species. Notes Citations Sources Mediterranean Sea Biogeography
Mediterranean Biogeographic Region
Biology
952
75,415,258
https://en.wikipedia.org/wiki/Ketenyl%20anion
A ketenyl anion contains a C=C=O allene-like functional group, similar to ketene, with a negative charge on either terminal carbon or oxygen atom, forming resonance structures by moving a lone pair of electrons on C-C-O bond. Ketenes have been sources for many organic compounds with its reactivity despite a challenge to isolate them as crystal. Precedent method to obtain this product has been at gas phase or at reactive intermediate, and synthesis of ketene is used be done in extreme conditions (i.e., high temperature, low pressure). Recently found stabilized ketenyl anions become easier to prepare compared to precedent synthetic procedure. A major feature about stabilized ketene is that it can be prepared from carbon monoxide (CO) reacting with main-group starting materials such as ylides, silylene, and phosphinidene to synthesize and isolate for further steps. As CO becomes a more common carbon source for various type of synthesis, this recent finding about stabilizing ketene with main-group elements opens a variety of synthetic routes to target desired products. Synthesis Gessner et al. first revealed a synthetic route for stabilized ketenyl anion using metalated ylides in 2022. In their paper, upon introducing CO, metalated ylide with posassium cation exchange CO with phosphine group R, also known for carbonylation of ylide. Their isolated ketenyl anion [K(PPh2(=S)CCO] is stable solid for a week under inert atmosphere, and its crystal structure was characterized. An alternate synthetic pathway for synthesizing ketenyl anion from ylide, shown in Figure 2, includes sulfuration on diphenylphosphine group, deprotonation on carbon center, and CO substitution in exchange of triphenylphosphine leaving. This synthesis resulted in 88% isolation of the product. Later in their studies, the ketenyl anion product upon carbonylation can be selective by changing electron-withdrawing ability on a certain leaving group and Lewis acidity of coordinated alkali metal cation. In their example with ylide containing phosphine group and tosyl group (Ts), Gessner et al. was able to produce the ketenyl anion product more selective by modifying those parameters, shown in Figure 2. As R group is more electron-withdrawing group, it becomes more likely to leave than tosyl group. For example, changing R group from cyclohexyl group (Cy) to phenyl group (Ph) favored the ketenyl anion product with R1 group leaving by 76%. This is because phenyl group is less electron rich and less nucleophilic compared to cyclohexyl group, resulting in more stable by itself. For alkali metal cation trend, when triphenylphosphine group is present, changing from M = Li to M = K favored in phosphine group leaving by 9%. Although it is a small effect compared to leaving group effect, this is due to Lewis acidity on metal cations because a stronger Lewis acidic metal cation (Li > K in Lewis acidity) attracts tosyl group to interact, resulting in increasing leaving group ability. Inoue et al. presented synthetic route of stabilizing ketene via silica-carbonyl anion, silicon analogue of ketene. They motivated this goals from recent reactivity study of silylene and disilane activating CO and isolating intermediate, hypothesizing that silica-ketenyl anion is also capable to stabilize ketene. While Gessner et al. uses ylides to accept CO, Inoue et al. uses silylene anion with another silyl group substituted to afford insertion of CO or carbonylation at room temperature in exchange of silyl group. Liu et al. 
had another approach to stabilize and isolate ketene by using carbene coordinated by phosphinidene. Carbene coordinated by 2,6-diisopropylphenyl(Dipp)-substituted phosphinidene and dinitrogen (N2) perform N2/CO ligand exchange. The starting material is similar to N-heterocyclic carbene with bulky substituents, invented by Bertrand. In their studies, this reaction is concerted and thermodynamically favorable (-47.4 kcal/mol relative to N2-coordinated carbene) on coordinating CO ligand to NHC. This product is stable at room temperature inert atmosphere for a month, and no decomposition while heating in THF at 80 °C for 12 hours was observed. Structure As shown in Figure 5, ketenyl anion has two major resonance structures: ketenyl form and ynolate form. Due to the resonance structures, alkali metal cations can be coordinated to either at central carbon atom or terminal oxygen atom depending on its electronic structure. A series of structural analysis revealed both ketene and ynolate structures evenly contribute to the overall electronic structure of ketenyl anion. From an example in Gessner's paper, the crystal structure of the ketenyl anion K[PPh2(=S)CCO] had the bond length of C-C bond (1.245 Å) and C-O bond (1.215 Å). By comparing these bond length with Pyykkő's analysis on bond, C-C bond is in between double bond and triple bond whereas C-O bond is in between single bond and double bond. In natural bond orbital (NBO) analysis, Wiberg bond index is found to be 2.06 and 1.72 for C-C bond and C-O bond, respectively. These values also suggests that both double and triple bond character for C-C bond (range of 1.20 - 1.34 Å) and both single bond and double bond character for C-O bond (range of 1.24 - 1.38 Å). The characteristic of allene-like (C=C=C) structure is also applied other ketenyl anion compounds so far. Inoue's silica-ketenyl anion product, shown in Figure 3, had Wiberg bond index of 1.68 and 1.76 for Si-C bond and C-O bond, respectively. Their bond indices demonstrate that both Si-C and C-O bonds have part of double bond character that contributes of Si=C=O structure. This ketenyl anion can dimerize in solid state as oxygen atoms interacts with alkali metal cation. This dimer can be broken up by adding M(18-crown-6) (where M = alkali metal cation), resulting in isolation of single ketenyl anion structure. Intrinsic bond orbitals (IBO) of the molecule [K(PPh2(=S)CCO] reveal molecular orbital describing π-orbital of C-C and C-O and delocalized orbital on oxygen atom. The stability of ketenyl anion is come from the decrease of charge on ketene carbon from parent ketene to ketenyl anion. In Gessner's study, parent ketenyl anion [H-C=C=O]− has smaller positive charge (+4.0 e) on C compared to parent ketene [H2C=C=O] (+7.0 e on C). This drops of charge makes the ketene less amphiphilic, leading to a more stable compound. Reactivity The advantage of using ketenyl anion molecule is to synthesize desired compound selectively without concerning dimerization before synthesizing a target product. In ylide-ketenyl anion, electrophile can be substituted in exchange of metal to functionalize the ketene moiety at high yield. Since the central carbon is negatively charged, this nucleophilicity enable substitution with a series of electrophilic compounds such as triphenylmethyl group. Some ketenyl anion can further react with other compounds to form a new functional group. 
For example, after electrophilic substitution of the ketenyl anion with a triphenylmethyl group, treatment with water converts the C=O moiety into a carboxylic acid. The compounds reported by Gessner et al. were isolated as solids in more than 90% yield. Functionalization by electrophilic substitution is not limited to the central carbon, where a cation can coordinate; the other carbon atom and the terminal oxygen atom can also be functionalized. This reactivity allows activation of chemical bonds such as S-S and C=O bonds and the formation of new C-S and C=C bonds. These products require CO and the substrates of interest, which highlights new synthetic pathways to organic compounds at room temperature instead of extreme conditions such as pyrolysis. A stabilized ketenyl anion also undergoes dimerization with a disubstituted phosphine compound to form a heterocyclic product. In this reaction, the proposed pathway involves electrophilic substitution by the disubstituted phosphine compound followed by dimerization. For a different ketenyl anion, cleavage of Csp-H, C=N, and I-I bonds at room temperature was also reported with the phosphinidene-stabilized ketene. For the I2 cleavage reaction, the proposed mechanism involves cleavage of the bond at the central carbon and migration of an iodine atom to the phosphorus atom. References Wikipedia Student Program Anions Functional groups
Ketenyl anion
Physics,Chemistry
2,023
39,830,246
https://en.wikipedia.org/wiki/Evolving%20digital%20ecological%20network
Evolving digital ecological networks are webs of interacting, self-replicating, and evolving computer programs (i.e., digital organisms) that experience the same major ecological interactions as biological organisms (e.g., competition, predation, parasitism, and mutualism). Despite being computational, these programs evolve quickly in an open-ended way, and starting from only one or two ancestral organisms, the formation of ecological networks can be observed in real-time by tracking interactions between the constantly evolving organism phenotypes. These phenotypes may be defined by combinations of logical computations (hereafter tasks) that digital organisms perform and by expressed behaviors that have evolved. The types and outcomes of interactions between phenotypes are determined by task overlap for logic-defined phenotypes and by responses to encounters in the case of behavioral phenotypes. Biologists use these evolving networks to study active and fundamental topics within evolutionary ecology (e.g., the extent to which the architecture of multispecies networks shape coevolutionary outcomes, and the processes involved). Overview In nature, species do not evolve in isolation but in large networks of interacting species. One of the main goals in evolutionary ecology is to disentangle the evolutionary mechanisms that shape and are shaped by patterns of interaction between species. A particularly important question concerns how coevolution, the reciprocal evolutionary change in local populations of interacting species driven by natural selection, is shaped by the architecture of food webs, plant-animal mutualistic networks, and host-parasite communities. The concept of diffuse coevolution, where adaptation is in response to a suite of biotic interactions, was the first step towards a framework unifying relevant theories in community ecology and coevolution. Understanding how individual interactions within networks influence coevolution, and conversely how coevolution influences the overall structure of networks, requires an appreciation for how pair-wise interactions change due to their broader community contexts as well as how this community context shapes selective pressures. Accordingly, research is now focusing on how reciprocal selection influences and is embedded within the structure of multispecies interactive webs, not only on particular species in isolation. Coevolution in a community context can be addressed theoretically via mathematical modeling and simulation, by looking at ancient footprints of evolutionary history via ecological patterns that persist and are observable today, and by performing laboratory experiments with microorganisms. In spite of the long time scales involved and the substantial effort that is necessary to isolate and quantify samples, the latter approach of testing biological evolution in the lab has been successful over the last two decades. However, studying the evolution of interspecific interactions, which involves dealing with more complex webs of multiple interacting species, has proven to be a much more difficult challenge. A meta-analysis of host-phage interaction networks, carried out by Weitz and his team, found a striking statistical structure to the patterns of infection and resistance across a wide variety of environments and methods from which the hosts and phage were obtained. However, the ecological mechanisms and evolutionary processes responsible have yet to be unraveled. 
Digital ecological networks enable the direct, comprehensive, and real time observation of evolving ecological interactions between antagonistic and/or mutualistic digital organisms that are difficult to study in nature. Research using self-replicating computer programs can help us understand how coevolution shapes the emergence and diversification of coevolving species interaction networks and, in turn, how changes in the overall structure of the web (e.g., through extinction of taxa or the introduction of invasive species) affect the evolution of a given species. Studying the evolution of species interaction networks in these artificial evolving systems also contributes to the development of the field, while overcoming limitations evolutionary biologists may face. For example, laboratory studies have shown that historical contingency can enable or impede the outcome of the interactions between bacteria and phage, depending on the order in which mutations occur: the phage often, but not always, evolve the ability to infect a novel host. Therefore, in order to obtain statistical power for predicting such outcomes of the coevolutionary process, experiments require a high level of replication. This stochastic nature of the evolutionary process was exemplified by Stephen Jay Gould's inquiry ("What would happen if the tape of the history of life were rewound and replayed?") Because of their ease in scalability and replication, evolving digital ecological networks open the door to experiments that incorporate this approach of replaying the tape of life. Such experiments allow researchers to quantify the role of historical contingency and repeatability in network evolution, enabling predictions about the architecture and dynamics of large networks of interacting species. The inclusion of ecological interactions in digital systems enables new research avenues: investigations using self-replicating computer programs complement laboratory efforts by broadening the breadth of viable experiments focused on the emergence and diversification of coevolving interactions in complex communities. This cross-disciplinary research program provides fertile grounds for new collaborations between computer scientists and evolutionary biologists. History Coreworld The field of digital life was inspired by the rampant computer viruses of the 1980s. These viruses were self-replicating computer programs that spread from one computer to another, but they did not evolve. Steen Rasmussen was the first to include the possibility of mutation in self-replicating computer programs by extending the once-popular Core War game, where programs competed in a digital battle ground for the computer's resources. Although Rasmussen observed some interesting evolution, mutations in this early genetic programming language produced many unstable organisms, thus prohibiting scientific experiments. Just one year later, Thomas S. Ray developed an alternative system, Tierra, and performed the first successful experiments with evolving populations of self-replicating computer programs. Tierra Thomas S. Ray created a genetic language similar to earlier digital systems, but added several key features that made it more suitable for evolution in his artificial life system, Tierra. Primarily, he prevented instructions from writing beyond the privately allocated memory space, thus limiting the potential for organisms writing over others. The only selective pressure in Tierra was for rapid self-replication. 
Over the course of evolution, this pressure led to shorter and shorter genomes, reducing the time spent copying instructions during replication. Some individuals even started executing the replication code in other organisms, allowing those "cheaters", which were originally referred to as parasites in Ray's work, to further shrink their genetic programs. This form of cheating was the first evolved ecological interaction between organisms in artificial life software. Ray's cheaters pre-dated the formal study of evolving ecological interactions using Tierra-like digital evolution platforms by 20 years. Avida In 1993, Christoph Adami, Charles Ofria, and C. Titus Brown created the artificial life platform Avida at the California Institute of Technology. They added the ability for digital organisms to obtain bonus CPU cycles for performing computational tasks, like adding two numbers together. In Avida, researchers can define the available tasks and set the consequences for organisms upon successful calculation. When organisms are rewarded with additional CPU cycles, their replication rate increases. Since Avida was designed specifically as a scientific tool, it allows users to collect a comprehensive suite of data about evolving populations. Due to its flexibility and data tracking abilities, Avida has become the most widely used digital system for studying evolution. The Devolab at the BEACON Center currently continues development of Avida. Implementation Digital organisms Digital organisms in Avida are self-replicating computer programs with a genome composed of assembly-like instructions. The genetic programming language in Avida contains instructions for manipulating values in registers and stacks as well as for control flow and mathematical operations. Each digital organism contains virtual hardware on which its genome is executed. To reproduce, digital organisms must copy their genome instruction by instruction into a new region of memory through a potentially noisy channel that may lead to errors (i.e., mutations). While most mutations are detrimental, mutants will occasionally have higher fitness than their parents, thereby providing the basis for natural selection with all of the necessary components for Darwinian evolution. Digital organisms can acquire random binary numbers from the environment and are able to manipulate them using their genetic instructions, including the logic instruction NAND. With only this instruction, digital organisms can compute any other task by stringing together various operations because NAND is a universal logic function. If the output of processing random numbers from the environment corresponds to the result of a particular logic task, then that task is incorporated into the set of tasks the organism performs, which, in turn, defines part of its phenotype. Digital interactions Interactions between digital organisms occur through phenotypic matching, which, in the case of task-based phenotypes, results from the performance of overlapping logic functions. Different mechanisms for mapping phenotypic matching to interactions can be implemented, depending on the antagonistic or mutualistic nature of the interaction. Host-parasite interactions In host-parasite interactions, the parasite organism benefits at the expense of the host organism. Parasites in Avida are implemented just like other self-replicating digital organisms, but they live inside hosts and execute parasitic threads using CPU cycles stolen from their hosts.
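The task-based phenotype matching described above (under "Digital interactions") can be illustrated with a brief sketch. Python is used here purely for illustration; Avida itself uses its own assembly-like genetic language, and the task names, inputs, and matching rule below are simplified assumptions rather than Avida's actual implementation.

```python
# Purely illustrative sketch (not Avida code): logic tasks built from NAND,
# a task-based phenotype, and an overlap test between two phenotypes.

MASK = 0xFFFFFFFF  # work with 32-bit values

def nand(x, y):
    """Bitwise NAND, the universal primitive mentioned above."""
    return ~(x & y) & MASK

# A few logic tasks expressed only through NAND.
TASKS = {
    "NOT": lambda a, b: nand(a, a),
    "AND": lambda a, b: nand(nand(a, b), nand(a, b)),
    "OR":  lambda a, b: nand(nand(a, a), nand(b, b)),
}

def phenotype(outputs_produced, a, b):
    """Names of the tasks whose result appears among an organism's outputs."""
    return {name for name, task in TASKS.items() if task(a, b) in outputs_produced}

def share_a_task(phenotype_1, phenotype_2):
    """Phenotypic matching: True if the two organisms overlap in any task."""
    return bool(phenotype_1 & phenotype_2)

# Toy usage with two "random" inputs from the environment.
a, b = 0b1010, 0b0110
host = phenotype({nand(a, a), a & b}, a, b)          # performs NOT and AND
parasite = phenotype({a & b}, a, b)                  # performs AND only
print(host, parasite, share_a_task(host, parasite))  # overlap -> True
```

The overlap test is the essential ingredient: in the host-parasite setting described next, a shared task is what permits infection.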
Because parasites impose a cost (lost CPU cycles) on hosts, there is selection for resistance, and when resistance starts to spread in a population, there is selective pressure for parasites to infect those new resistant hosts. Infection occurs when both the parasite and host perform at least one overlapping task. Thus a host is resistant to a particular parasite if they do not share any tasks. This mechanism of infection mimics the inverse-gene-for-gene model, in which infection only occurs if a host susceptibility gene (the presence of a logic task) is matched by a parasite virulence gene (a parasite performing the same task). Additional infection mechanisms, such as the matching allele and gene-for-gene models, can also be implemented. In traditional infection genetic models, host resistance and pathogen infectivity have associated costs. These costs are an important part of theory about why defense genes do not always fix rapidly within populations. Costs are also present in digital host-parasite interactions: performing more or more complex tasks implies larger genomes and hence slower reproduction. Tasks may also allow organisms access to resources present in the abiotic environment, and the environment can be carefully manipulated to control the relative costs or benefits of resistance. By keeping track of task-based phenotypes as well as tracking information about successful infections in the community, researchers are able to perfectly reconstruct the interaction networks of digital coevolving hosts and parasites. The structure of these networks is a result of the interplay between ecological processes, mainly host abundance, and coevolutionary dynamics, which lead to changes in host specificity. Mutualistic interactions Interactions in which both species obtain mutual benefit, such as those between flowering plants and pollinators, and birds and fleshy fruits, can be implemented in evolving digital experiments by following the same task matching approach used for host-parasite interactions, but using free-living organisms instead of parasitic threads. For example, one way to set up a plant-pollinator type of interaction is to use an environment containing two mutually exclusive resources: one designated for "plant" organisms and one for "pollinator" organisms. Similar to parasites attempting infection, if tasks overlap between a pollinator and a plant it visits, pollination is successful and both organisms obtain extra CPU cycles. Thus, these digital organisms obtain mutual benefit when they perform at least one common task, and more common tasks lead to larger mutual benefits. While this is one specific way to enable mutualistic interactions, many others are possible in Avida. Interactions that begin as parasitic may even evolve to be mutualistic under the right conditions. In most cases, coevolution will result in concurrent interactions between multiple phenotypes. Thus, observed networks of mutualistic interactions can inform our understanding about the outcomes and processes of coevolution in complex communities. Predator-prey interactions While host-parasite and mutualistic interactions are determined by task-based phenotypes, predator-prey interactions are determined by behavior. Predators are digital organisms that have evolved from ancestral prey phenotypes to locate, attack, and consume organisms. When a predator executes an attack instruction (acquired through mutation), it kills a neighboring organism. 
When predators kill prey, they gain resources required for reproduction (e.g., CPU cycles) proportional to the level accumulated by the consumed prey. Selection favors behavioral strategies in prey that enable them to avoid being eaten. At the same time, selection favors predators with behavioral strategies that improve their food-finding and prey-attacking abilities. The resulting diversity in the continuously evolving behavioral phenotypes creates dynamic predator-prey interaction networks in which selective forces are constantly changing as a consequence of the emergence of new, and loss of old, behaviors. Because predators and prey move around in and use information about their environment, these experiments are typically carried out using spatially structured populations. On the other hand, host-parasite and mutualistic coevolution experiments are often done in well-mixed environments, though the choice of the environment is at the discretion of the experimenter. Applications Studies of digital organisms allow a depth of research into evolutionary processes that is not possible with natural ecosystems. They therefore provide a complementary approach to traditional studies of natural or experimental evolution. A key caveat in translating predictions of evolving digital networks to biological ones is that mechanistic details may differ substantially between interacting digital organisms and interacting biological organisms. Nevertheless, the general operational processes (Darwinian evolution, mutualism, parasitism, etc.) are equivalent, and so studies utilizing digital networks can uncover rules shaping the web of interactions among species and their coevolutionary processes. The evolution of digital communities and their ecological networks also allows for a perfect 'fossil record' of how the number and patterns of links among interacting phenotypes evolved. The selection pressures and parameters can also be controlled to an extent that is impossible in experimental evolution of living organisms. For example, the stability-diversity debate is a long-standing debate about whether more diverse ecological networks are also more stable. Mathematical models were able to show that a mixture of antagonistic and mutualistic interactions can stabilize population dynamics and that the loss of one interaction type may critically destabilize ecosystems. These techniques also enable detailed analysis of coevolutionary dynamics in multispecies networks, their historical contingency, and their genetic constraints. References Artificial life Evolutionary biology Ecology Articles containing video clips
Evolving digital ecological network
Biology
2,971
78,427,812
https://en.wikipedia.org/wiki/Ziritaxestat
Ziritaxestat is a small-molecule, selective autotaxin inhibitor that was investigated as a potential treatment for idiopathic pulmonary fibrosis (IPF). Initially showing promise in early-phase studies, ziritaxestat underwent evaluation in two large-scale phase 3 clinical trials, ISABELA 1 and ISABELA 2. These trials aimed to assess the efficacy and safety of ziritaxestat in patients with IPF, including those receiving standard of care treatment with pirfenidone or nintedanib. However, both trials were prematurely terminated due to a lack of efficacy, as ziritaxestat failed to demonstrate significant improvement in lung function or other clinical outcomes compared to placebo. References Azetidines 4-Fluorophenyl compounds Imidazopyridines Piperazines Thiazoles Nitriles
Ziritaxestat
Chemistry
178
36,035,103
https://en.wikipedia.org/wiki/Zinc%20finger%20transcription%20factor
Zinc finger transcription factors, or ZF-TFs, are transcription factors composed of a zinc finger DNA-binding domain and any of a variety of transcription-factor effector domains that exert their modulatory effect in the vicinity of any sequence to which the protein domain binds. Zinc finger protein transcription factors can be encoded by genes small enough that a number of such genes fit into a single vector, allowing medical intervention through the control of expression of multiple genes and the initiation of an elaborate cascade of events. In this respect, it is also possible to target a sequence that is common to multiple (usually functionally related) genes to control the transcription of all these genes with a single transcription factor. Also, it is possible to target a family of related genes by targeting and modulating the expression of the endogenous transcription factor(s) that control(s) them. They also have the advantage that the targeted sequence need not be symmetrical, unlike most other DNA-binding motifs based on natural transcription factors, which bind as dimers. Applications By targeting the ZF-TF toward a specific DNA sequence and attaching the necessary effector domain, it is possible to downregulate or upregulate the expression of the gene(s) in question while using the same DNA-binding domain. The expression of a gene can also be downregulated by blocking elongation by RNA polymerase in the coding region (without the need for an effector domain), or the RNA itself can be targeted. Besides the obvious development of tools for the research of gene function, engineered ZF-TFs have therapeutic potential, including correction of abnormal gene expression profiles (e.g., erbB-2 overexpression in human adenocarcinomas) and anti-retroviral applications (e.g., against HIV-1). See also Artificial transcription factor, of which the ZF-TF is a type Gene therapy Zinc finger proteins Zinc finger chimera Zinc finger nuclease References Transcription factors Zinc proteins
Zinc finger transcription factor
Chemistry,Biology
410
3,224,696
https://en.wikipedia.org/wiki/Photodegradation
Photodegradation is the alteration of materials by light. Commonly, the term is used loosely to refer to the combined action of sunlight and air, which cause oxidation and hydrolysis. Often photodegradation is intentionally avoided, since it destroys paintings and other artifacts. It is, however, partly responsible for remineralization of biomass and is used intentionally in some disinfection technologies. Photodegradation does not apply to how materials may be aged or degraded via infrared light or heat, but does include degradation in all of the ultraviolet light wavebands. Applications Foodstuffs The protection of food from photodegradation is very important. Some nutrients, for example, are affected by degradation when exposed to sunlight. In the case of beer, UV radiation causes a process that entails the degradation of hop bitter compounds to 3-methyl-2-buten-1-thiol and therefore changes the taste. As amber-colored glass has the ability to absorb UV radiation, beer bottles are often made from such glass to prevent this process. Paints, inks, and dyes Paints, inks, and dyes that are organic are more susceptible to photodegradation than those that are not. Ceramics are almost universally colored with materials of non-organic origin, which allows them to resist photodegradation and maintain their color even under the most relentless conditions. Pesticides and herbicides The photodegradation of pesticides is of great interest because of the scale of agriculture and the intensive use of chemicals. Pesticides are, however, selected in part not to photodegrade readily in sunlight, in order to allow them to exert their biocidal activity. Thus, additional measures are implemented to enhance their photodegradation, including the use of photosensitizers, photocatalysts (e.g., titanium dioxide), and the addition of reagents such as hydrogen peroxide that generate hydroxyl radicals, which attack the pesticides. Pharmaceuticals The photodegradation of pharmaceuticals is of interest because they are found in many water supplies. They have deleterious effects on aquatic organisms, including toxicity, endocrine disruption, and genetic damage. Photodegradation of pharmaceuticals also has to be prevented within the primary packaging material; for this, amber glasses like Fiolax amber and Corning 51-L are commonly used to protect the pharmaceutical from UV radiation. Iodine (in the form of Lugol's solution) and colloidal silver are universally packaged in containers that let through very little UV light so as to avoid degradation. Polymers Common synthetic polymers that can be attacked include polypropylene and LDPE, where tertiary carbon bonds in their chain structures are the centres of attack. Ultraviolet rays interact with these bonds to form free radicals, which then react further with oxygen in the atmosphere, producing carbonyl groups in the main chain. The exposed surfaces of products may then discolour and crack, and in extreme cases, complete product disintegration can occur. In fibre products like rope used in outdoor applications, product life will be low because the outer fibres are attacked first and are then easily damaged, for example by abrasion. Discolouration of the rope may also occur, thus giving an early warning of the problem. Polymers which possess UV-absorbing groups such as aromatic rings may also be sensitive to UV degradation. Aramid fibres like Kevlar, for example, are highly UV-sensitive and must be protected from the deleterious effects of sunlight.
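As a rough numerical illustration of why ultraviolet light, rather than visible light, drives the bond-breaking chemistry described above, one can compare the energy carried by a photon with typical covalent bond strengths. The short sketch below (Python, illustrative only) uses standard physical constants; the bond-energy figures in the comments are approximate textbook values and are not taken from this article.

```python
# Photon energy E = h*c/wavelength, expressed per mole of photons and compared
# with approximate covalent bond energies (rounded textbook values).
h = 6.626e-34      # Planck constant, J*s
c = 2.998e8        # speed of light, m/s
N_A = 6.022e23     # Avogadro constant, 1/mol

def photon_energy_kj_per_mol(wavelength_nm):
    single_photon_joules = h * c / (wavelength_nm * 1e-9)
    return single_photon_joules * N_A / 1000.0

for wavelength in (300, 400, 700):   # near-UV, violet, red
    print(f"{wavelength} nm  ->  {photon_energy_kj_per_mol(wavelength):.0f} kJ/mol")

# Output is roughly 399, 299 and 171 kJ/mol. Typical C-C (~350 kJ/mol) and
# C-H (~410 kJ/mol) bond energies are therefore within reach of near-UV
# photons but well above the energy of red light, which is why the UV
# wavebands dominate photodegradation.
```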
Mechanism Many organic chemicals are thermodynamically unstable in the presence of oxygen; however, their rate of spontaneous oxidation is slow at room temperature. In the language of physical chemistry, such reactions are kinetically limited. This kinetic stability allows complex structures to accumulate in the environment. Upon the absorption of light, triplet oxygen converts to singlet oxygen, a highly reactive form of the gas, which effects spin-allowed oxidations. In the atmosphere, the organic compounds are degraded by hydroxyl radicals, which are produced from water and ozone. Photochemical reactions are initiated by the absorption of a photon, typically in the wavelength range 290–700 nm (at the surface of the Earth). The energy of an absorbed photon is transferred to electrons in the molecule and briefly changes their configuration (i.e., promotes the molecule from a ground state to an excited state). The excited state represents what is essentially a new molecule. Often excited-state molecules are not kinetically stable in the presence of O2 or H2O and can spontaneously decompose (oxidize or hydrolyze). Sometimes molecules decompose to produce high-energy, unstable fragments that can react with other molecules around them. The two processes are referred to as direct photolysis and indirect photolysis, respectively, and both mechanisms contribute to the removal of pollutants. The United States federal standard for testing plastic for photodegradation is 40 CFR Ch. I (7–1–03 Edition) PART 238. Protection against photodegradation Photodegradation of plastics and other materials can be inhibited with polymer stabilizers, which are widely used. These additives include antioxidants, which interrupt degradation processes. Typical antioxidants are derivatives of aniline. Another type of additive is UV-absorbers. These agents capture the photon and convert it to heat. Typical UV-absorbers are hydroxy-substituted benzophenones, related to the chemicals used in sunscreen. Restoration of the yellowed plastic of old toys is nicknamed retrobright. See also Plastic bag Polymer degradation UV degradation References Sources Boltres, Bettine, "When glass meets pharma", ECV Editio Cantor, 2015, Chemical reactions Plastics and the environment Biodegradable waste management Photochemistry Light Molecular biology Environmental chemistry Articles containing video clips
Photodegradation
Physics,Chemistry,Biology,Environmental_science
1,199
13,222,892
https://en.wikipedia.org/wiki/IC%204703
IC 4703 is the diffuse emission nebula or HII region associated with Messier 16, which is actually a cluster of stars. It is the nebulous region surrounding Messier 16. These two objects make up the Eagle Nebula. They are relatively bright and are located in the constellation Serpens Cauda. This region contains the picturesque Pillars of Creation. This is an active star forming region 7,000 light years away. It is approximately magnitude 8. The cluster was discovered by Jean-Philippe Loys de Cheseaux, but Charles Messier later rediscovered it and remarked on its apparent nebulous appearance. The cluster is estimated to be 5.5 million years old, and the nebula would be a bit older. The nebula is about 55 x 70 light years. The Eagle Nebula lies in the Sagittarius Arm of the Milky Way. References The Belt of Venus. M16 and IC 4703 - The Eagle Nebula. 9/12/07. The Belt of Venus See also Messier 16 Eagle Nebula 4703 H II regions Carina–Sagittarius Arm Serpens
IC 4703
Astronomy
224
48,718,678
https://en.wikipedia.org/wiki/Doppler%20tracking
Doppler tracking. The Doppler effect allows the measurement of the distance between a transmitter in space and a receiver on the ground by observing how the frequency received from the transmitter changes as the transmitter approaches the receiver, passes overhead, and moves away. When the transmitter is approaching, the frequency of the transmission appears to be higher, and as the transmitter moves away, the frequency appears to be lower. When it is overhead, the transmitted frequency and the received frequency are the same. References Doppler effects
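A minimal worked example of this frequency behaviour is sketched below using the classical one-way Doppler approximation f_obs ≈ f_tx × (1 − v_radial/c); the transmitter frequency and velocities are made-up illustrative values, not figures from this article.

```python
# Classical one-way Doppler shift for a transmitter moving relative to a
# fixed ground receiver: f_obs = f_tx * (1 - v_radial / c),
# where v_radial > 0 means the transmitter is receding.
c = 2.998e8  # speed of light, m/s

def observed_frequency(f_tx_hz, v_radial_ms):
    return f_tx_hz * (1.0 - v_radial_ms / c)

f_tx = 437.0e6                            # illustrative UHF downlink frequency, Hz
for v_radial in (-7000.0, 0.0, 7000.0):   # approaching, overhead, receding (m/s)
    f_obs = observed_frequency(f_tx, v_radial)
    print(f"v_radial = {v_radial:+7.0f} m/s  ->  {f_obs / 1e6:.4f} MHz")

# Approaching (-7000 m/s): the received frequency is about 10 kHz higher;
# overhead (0 m/s): transmitted and received frequencies match;
# receding (+7000 m/s): about 10 kHz lower, as described in the article.
```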
Doppler tracking
Physics,Astronomy
97
31,465,766
https://en.wikipedia.org/wiki/DE-9IM
The Dimensionally Extended 9-Intersection Model (DE-9IM) is a topological model and a standard used to describe the spatial relations of two regions (two geometries in two dimensions, R2), in geometry, point-set topology, geospatial topology, and fields related to computer spatial analysis. The spatial relations expressed by the model are invariant to rotation, translation and scaling transformations. The matrix provides an approach for classifying geometry relations. Roughly speaking, with a true/false matrix domain, there are 512 possible 2D topological relations that can be grouped into binary classification schemes. The English language contains about 10 schemes (relations), such as "intersects", "touches" and "equals". When testing two geometries against a scheme, the result is a spatial predicate named by the scheme. The model was developed by Clementini and others based on the seminal works of Egenhofer and others. It has been used as a basis for standards of queries and assertions in geographic information systems (GIS) and spatial databases. Matrix model The DE-9IM model is based on a 3×3 intersection matrix with the form DE9IM(a, b) = [ dim(I(a) ∩ I(b)), dim(I(a) ∩ B(b)), dim(I(a) ∩ E(b)) ; dim(B(a) ∩ I(b)), dim(B(a) ∩ B(b)), dim(B(a) ∩ E(b)) ; dim(E(a) ∩ I(b)), dim(E(a) ∩ B(b)), dim(E(a) ∩ E(b)) ], where dim is the dimension of the intersection (∩) of the interior (I), boundary (B), and exterior (E) of geometries a and b. The terms interior and boundary in this article are used in the sense used in algebraic topology and manifold theory, not in the sense used in general topology: for example, the interior of a line segment is the line segment without its endpoints, and its boundary is just the two endpoints (in general topology, the interior of a line segment in the plane is empty and the line segment is its own boundary). In the notation of topological space operators, the matrix elements can also be expressed using the interior and closure operators applied to a and b. The dimension of an empty set (∅) is denoted as −1 or F (false). The dimension of a non-empty set (¬∅) is denoted by the maximum dimension of the intersection, specifically 0 for points, 1 for lines, and 2 for areas, so the domain of the model is {−1, 0, 1, 2}. A simplified version of the values is obtained by mapping the non-empty values {0, 1, 2} to T (true), giving the boolean domain {T, F}; in this boolean form each matrix cell simply records whether the corresponding intersection is empty or not. The elements of the matrix can be named as shown below: [ II, IB, IE ; BI, BB, BE ; EI, EB, EE ]. Both matrix forms, with dimensional and boolean domains, can be serialized as "DE-9IM string codes", which represent them in a single-line string pattern. Since 1999 the string codes have had a standard format. For output checking or pattern analysis, a matrix value (or a string code) can be checked by a "mask": a desired output value with optional asterisk symbols as wildcards, that is, "*" indicating output positions that the designer does not care about (free values or "don't-care positions"). The domain of the mask elements is {T, F, *, 0, 1, 2}, or {T, F, *} for the boolean form. The simpler models 4-Intersection and 9-Intersection were proposed before DE-9IM for expressing spatial relations (and originated the terms 4IM and 9IM). They can be used instead of the DE-9IM to optimize computation when input conditions satisfy specific constraints. Illustration Visually, for two overlapping polygonal geometries, the result of the function DE_9IM(a,b) is the matrix [ 2, 1, 2 ; 1, 0, 1 ; 2, 1, 2 ]. This matrix can be serialized. Reading from left to right and top to bottom, the result is the sequence 2, 1, 2, 1, 0, 1, 2, 1, 2, so the compact representation as a string code is '212101212'. Spatial predicates Any topological property based on a DE-9IM binary spatial relation is a spatial predicate.
For ease of use, "named spatial predicates" have been defined for some common relations, which later became standard predicates. The spatial predicate functions that can be derived from DE-9IM include: predicates defined with masks of domain {T, F, *}; predicates that can be obtained from the above by logical negation or parameter inversion (matrix transposition); and predicates that utilize the input dimensions and are defined with masks of domain {T, F, *, 0, 1, 2}. Notice that: The topologically equal definition does not imply that they have the same points or even that they are of the same class. The output of DE_9IM(a,b) has the information contained in a list of all interpretable predicates about geometries a and b. All predicates are computed by masks. Only Crosses and Overlaps have additional conditions involving the dimensions of the inputs. All mask string codes end with *. This is because EE is trivially true, and thus provides no useful information. The Equals mask, T*F**FFF*, is the "merge" of Contains (T*****FF*) and Within (T*F**F***). The mask T*****FF* occurs in the definition of both Contains and Covers. Covers is a more inclusive relation. In particular, unlike Contains it does not distinguish between points in the boundary and in the interior of geometries. For most situations, Covers should be used in preference to Contains. Similarly, the mask T*F**F*** occurs in the definition of both Within and CoveredBy. For most situations, CoveredBy should be used in preference to Within. Historically, other terms and other formal approaches have been used to express spatial predicates; for example, region connection calculus was introduced in 1992 by Randell, Cui and Cohn. Properties The spatial predicates have the following properties of binary relations: Reflexive: Equals, Contains, Covers, CoveredBy, Intersects, Within. Anti-reflexive: Disjoint. Symmetric: Equals, Intersects, Crosses, Touches, Overlaps. Transitive: Equals, Contains, Covers, CoveredBy, Within. Interpretation The choice of terminology and semantics for the spatial predicates is based on reasonable conventions and the tradition of topological studies. Relationships such as Intersects, Disjoint, Touches, Within, Equals (between two geometries a and b) have an obvious semantic: Equals: a = b, that is, (a ∩ b = a) ∧ (a ∩ b = b); Within: a ∩ b = a; Intersects: a ∩ b ≠ ∅; Touches: (a ∩ b ≠ ∅) ∧ (a° ∩ b° = ∅). The predicates Contains and Within have subtle aspects to their definition which are contrary to intuition. For example, a line L which is completely contained in the boundary of a polygon P is not considered to be contained in P. This quirk can be expressed as "Polygons do not contain their boundary". This issue is caused by the final clause of the Contains definition above: "at least one point of the interior of B lies in the interior of A". For this case, the predicate Covers has more intuitive semantics (see definition), avoiding boundary considerations. For better understanding, the dimensionality of inputs can be used as justification for a gradual introduction of semantic complexity: for point/point relations, the appropriate predicates are Equals and Disjoint (other valid predicates collapse into Equals); point/line relations add Intersects (a refinement of Equals: "some equal point at the line"); line/line relations add Touches, Crosses, and further predicates (Touches is a refinement of Intersects, about "boundaries" only, while Crosses is about "only one point").
Coverage on possible matrix results The number of possible results in a boolean 9IM matrix is 2⁹ = 512, and in a DE-9IM matrix it is 3⁹ = 6561. The percentage of these results that satisfies a specific predicate depends on the predicate. In usual applications the geometries intersect a priori, and the other relations are then checked. The composite predicates "Intersects OR Disjoint" and "Equals OR Different" have the sum 100% (always true predicates), but "Covers OR CoveredBy" has 41%, which is not the sum of its parts, because the two are neither logical complements nor independent relations; similarly, "Contains OR Within" has 21%. The sum 25% + 12.5% = 37.5% is obtained when ignoring overlapping lines in "Crosses OR Overlaps", because the valid input sets are disjoint. Queries and assertions The DE-9IM offers a full descriptive assertion about the two input geometries. It is a mathematical function that represents a complete set of all possible relations about two entities, like a Truth table, the Three-way comparison, a Karnaugh map or a Venn diagram. Each output value is like a line of a truth table, representing the relations of specific inputs. As illustrated above, the output '212101212' returned by DE_9IM(a,b) is a complete description of all topological relations between the specific geometries a and b. On the other hand, if we check only predicates like Intersects(a,b) or Touches(a,b) for the same example, we obtain an incomplete description of "all topologic relations". Predicates also do not say anything about the dimensionality of the geometries (it doesn't matter if a and b are lines, areas or points). This independence from geometry type and this lack of completeness make predicates useful for general queries about two geometries: with the interior/boundary/exterior semantic, an assertion is more descriptive ("a and b have a particular intersection matrix") and a query is more restrictive ("show all pairs of geometries where the matrix has a given value"); with the usual predicate semantic, an assertion is less descriptive ("a Touches b") and a query is more general ("show all pairs of geometries where Touches(a,b)"). For usual applications, the use of spatial predicates is also justified by being more human-readable than DE-9IM descriptions: a typical user has better intuition about predicates than about a set of interior/boundary/exterior intersections. Predicates have useful semantics in usual applications, so it is useful to translate a DE-9IM description into the list of all associated predicates, which is like a casting process between the two different semantic types. Examples: several distinct string codes share the semantics of "Intersects & Crosses & Overlaps"; another code has the semantics of "Equals"; and a number of different codes share the semantics of "Intersects & Touches". Standards The Open Geospatial Consortium (OGC) has standardized the typical spatial predicates (Contains, Crosses, Intersects, Touches, etc.) as boolean functions, and the DE-9IM model as a function that returns a string (the DE-9IM code), with domain {0, 1, 2, F}, meaning 0 = point, 1 = line, 2 = area, and F = "empty set". This DE-9IM string code is a standardized format for data interchange.
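A mask check of the kind discussed above is straightforward to implement. The following sketch (Python, purely illustrative and not part of any standard or library) tests a DE-9IM string code against a pattern built from the T, F, *, 0, 1, 2 symbols; the named masks used in the example are the Equals and Contains masks quoted earlier, plus one ad-hoc pattern.

```python
# Illustrative DE-9IM mask check. Symbols: 'T' = any non-empty intersection
# (dimension 0, 1 or 2), 'F' = empty, '*' = don't care, '0'/'1'/'2' = exact
# dimension. Standalone sketch, not code from any standard.

def matches(de9im_code: str, mask: str) -> bool:
    assert len(de9im_code) == 9 and len(mask) == 9
    for value, pattern in zip(de9im_code, mask):
        if pattern == "*":
            continue                      # free position
        if pattern == "T" and value in "012":
            continue                      # any non-empty intersection
        if pattern == "F" and value == "F":
            continue                      # empty intersection required
        if pattern in "012" and value == pattern:
            continue                      # exact dimension required
        return False
    return True

code = "212101212"                    # the overlapping-polygons example above
print(matches(code, "T*F**FFF*"))     # Equals mask (quoted earlier)   -> False
print(matches(code, "T*****FF*"))     # Contains mask (quoted earlier) -> False
print(matches(code, "T*T***T**"))     # an ad-hoc illustrative pattern -> True
```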
The Simple Feature Access (ISO 19125) standard, in chapter 7.2.8, "SQL routines on type Geometry", recommends as supported routines the SQL/MM Spatial (ISO 13249-3 Part 3: Spatial) ST_Dimension, ST_GeometryType, ST_IsEmpty, ST_IsSimple, ST_Boundary for all Geometry Types. The same standard, consistent with the definitions of relations in "Part 1, Clause 6.1.2.3" of the SQL/MM, recommends (shall be supported) the function labels: ST_Equals, ST_Disjoint, ST_Intersects, ST_Touches, ST_Crosses, ST_Within, ST_Contains, ST_Overlaps and ST_Relate. The DE-9IM in the OGC standards uses specific definitions of Interior and Boundary for the main OGC standard geometry types. Implementation and practical use Most spatial databases, such as PostGIS, implement the DE-9IM model through the standard functions ST_Relate, ST_Equals, ST_Intersects, etc. The function ST_Relate(a,b) outputs the standard OGC DE-9IM string code. Examples: for two geometries a and b that intersect and touch at a point, ST_Relate(a,b) can be 'FF1F0F1F2' or 'FF10F0102'; such a pair also satisfies ST_Intersects(a,b) = true and ST_Touches(a,b) = true. When ST_Relate(a,b) = '0FFFFF212', the returned DE-9IM code has the semantics of "Intersects(a,b) & Crosses(a,b) & Within(a,b) & CoveredBy(a,b)", that is, it returns true on the boolean expression ST_Intersects(a,b) AND ST_Crosses(a,b) AND ST_Within(a,b) AND ST_CoveredBy(a,b). The use of ST_Relate is faster than directly computing the corresponding set of predicates. There are cases where using ST_Relate is the only way to compute a complex predicate; see the example of the code 0FFFFF0F2, of a point that does not "cross" a multipoint (an object that is a set of points), although the predicate Crosses (when defined by a mask) returns true. It is usual to overload ST_Relate by adding a mask parameter, or to use the returned string in the function ST_RelateMatch. When the mask form is used, the function returns a boolean. Examples: ST_Relate(a,b,'*FF*FF212') returns true when ST_Relate(a,b) is 0FFFFF212 or 01FFFF212, and returns false when it is 01FFFF122 or 0FF1FFFFF. ST_RelateMatch('0FFFFF212','*FF*FF212') and ST_RelateMatch('01FFFF212','TTF*FF212') are true, while ST_RelateMatch('01FFFF122','*FF*FF212') is false. Synonyms "Egenhofer-Matrix" is a synonym for the 9IM 3x3 matrix with the boolean domain. "Clementini-Matrix" is a synonym for the DE-9IM 3x3 matrix with the dimensional domain. "Egenhofer operators" and "Clementini operators" sometimes refer to the matrix elements, such as II, IE, etc., which can be used in boolean expressions; for example, the predicate "G1 contains G2" can be expressed in terms of these operators and translated to the mask syntax T*****FF*. Predicates: "meets" is a synonym for touches; "inside" is a synonym for within. Oracle's "ANYINTERACT" is a synonym for intersects and "OVERLAPBDYINTERSECT" is a synonym for overlaps. Its "OVERLAPBDYDISJOINT" does not have a corresponding named predicate. In region connection calculus, operators offer some synonyms for predicates: disjoint is DC (disconnected), touches is EC (externally connected), equals is EQ. Others, like Overlaps as PO (partially overlapping), need context analysis or composition. See also References External links Point Set Theory and the DE-9IM Matrix Illustrated Tutorial for DE-9IM Matrices Geometric topology Geographic data and information Binary operations Geometric intersection
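Outside a spatial database, the same relate-and-test workflow can be reproduced with a geometry library. The sketch below assumes the Python package Shapely (which wraps the GEOS engine also used by PostGIS); the geometries are arbitrary illustrative shapes rather than examples from the text.

```python
# Sketch of the relate-and-test workflow outside a database, using Shapely.
from shapely.geometry import Polygon, LineString

a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
b = Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])   # overlaps a
line = LineString([(0, 5), (6, 5)])             # passes through b, misses a

print(a.relate(b))       # DE-9IM string code, '212101212' for these squares
print(a.intersects(b))   # True  -- the named predicate, like ST_Intersects
print(a.touches(b))      # False -- interiors overlap, so not merely touching
print(line.crosses(b))   # True  -- the line enters and leaves the polygon
print(line.relate(a))    # 'FF1FF0212': the F entries at II, IB, BI and BB
                         # mean the line and polygon a are Disjoint
```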
DE-9IM
Mathematics,Technology
3,403
7,382,910
https://en.wikipedia.org/wiki/Astrological%20symbols
Historically, astrological and astronomical symbols have overlapped. Frequently used symbols include signs of the zodiac and classical planets. These originate from medieval Byzantine codices. Their current form is a product of the European Renaissance. Other symbols for astrological aspects are used in various astrological traditions. History and origin Symbols for the classical planets, zodiac signs, aspects, lots, and the lunar nodes appear in the medieval Byzantine codices in which many ancient horoscopes were preserved. In the original papyri of these Greek horoscopes, there was a circle with the glyph representing shine () for the Sun; and a crescent for the Moon. Classical planets The written symbols for Mercury, Venus, Jupiter, and Saturn have been traced to forms found in late Classical Greek papyri. The symbols for Jupiter and Saturn are monograms of the initial letters of the corresponding Greek names, and the symbol for Mercury is a stylized caduceus. A.S.D. Maunder finds antecedents of the planetary symbols in earlier sources, used to represent the gods associated with the classical planets. Bianchini's planisphere, produced in the 2nd century, shows Greek personifications of planetary gods charged with early versions of the planetary symbols: Mercury has a caduceus; Venus has, attached to her necklace, a cord connected to another necklace; Mars, a spear; Jupiter, a staff; Saturn, a scythe; the Sun, a circlet with rays radiating from it; and the Moon, a headdress with a crescent attached. A diagram in Johannes Kamateros' 12th-century Compendium of Astrology shows the Sun represented by the circle with a ray, Jupiter by the letter zeta (the initial of Zeus, Jupiter's counterpart in Greek mythology), Mars by a shield crossed by a spear, and the remaining classical planets by symbols resembling the modern ones, without the cross-mark seen in modern versions of the symbols. The modern sun symbol, pictured as a circle with a dot (), first appeared in the Renaissance. (The conventional symbols for the signs of the zodiac also develop in the Renaissance period as simplifications of the classical pictorial representations of the signs.) The modern sun symbol resembles the Egyptian hieroglyph for "sun" – a circle that sometimes had a dot in the center, (). Similar in appearance were several variants of the ancestral form of the modern Chinese logograph for "sun", which in the oracle bone script and bronze script were . It is not known if the Egyptian and Chinese logographs have any connection to the European astrological symbol. Major planets discovered in the modern era Symbols for Uranus, Neptune and Pluto were created shortly after their discovery. For Uranus, two variant symbols are seen. One symbol, , invented by J. G. Köhler and refined by Bode, was intended to represent the newly discovered metal platinum; since platinum, sometimes described as white gold was found by chemists mixed with iron, the symbol for platinum combines the alchemical symbols for iron, ♂, and gold, ☉. An inverted version of that same symbol, was in use in the early 20th century. Another symbol, , was suggested by Lalande in 1784. In a letter to Herschel, Lalande described it as "un globe surmonté par la première lettre de votre nom" ("a globe surmounted by the first letter of your name"). After Neptune was discovered, the Bureau des Longitudes proposed the name Neptune and the familiar trident for the planet's symbol, though at bottom may be either a cross or an orb . 
Pluto, like Uranus, has multiple symbols in use. One symbol, ♇, a monogram of the letters PL (which can be interpreted to stand for Pluto or for astronomer Percival Lowell), was announced with the name of the new planet by the discoverers on May 1, 1930. Another symbol, popularized in Paul Clancy's American Astrology magazine, is based on Pluto's bident: . Asteroids The astrological symbols for the first four objects discovered at the beginning of the 19th century — Ceres, Pallas, Juno and Vesta — were created shortly after their discoveries. They were initially listed as planets, and half a century later came to be called asteroids, though such "minor planets" continued to be considered planets for perhaps another century. Shortly after Giuseppe Piazzi's discovery of Ceres, a group of astronomers ratified the name, proposed by the discoverer, and chose the sickle as a symbol of the planet. The symbol for Pallas, the spear of Pallas Athena, was invented by Baron Franz Xaver von Zach, and introduced in his Monatliche Correspondenz zur Beförderung der Erd- und Himmels-Kunde. Karl Ludwig Harding, who discovered and named Juno, assigned to it the symbol of a scepter topped with a star. The modern astrological form of the symbol for Vesta, ⚶, was created by Eleanor Bach, who is credited with pioneering the use of the big four asteroids with the publication of her Ephemerides of the Asteroids in the early 1970s. The original form of the symbol for Vesta, , was created by German mathematician Carl Friedrich Gauss. Olbers, having previously discovered and named one new planet (as the asteroids were then classified), gave Gauss the honor of naming his newest discovery. Gauss decided to name the planet for the goddess Vesta, and also specified that the symbol should be the altar of the goddess with the sacred fire burning on it. Bach's variant is a simplification of 19th-century elaborations of Gauss's altar symbol. Centaurs The symbol for the centaur Chiron, ⚷, which is both a key and a monogram of the letters O and K (for 'Object Kowal', a provisional name of the object, after discoverer Charles T. Kowal), was proposed by astrologer Al Morrison, who presented the symbol as "an inspiration shared amongst Al H. Morrison, Joelle K.D. Mahoney, and Marlene Bassoff." A widely used convention for other centaurs, proposed by Robert von Heeren in the 1990s, is to replace the K of the Chiron key glyph with the initial letter of the object: e.g. P or φ for Pholus and N for Nessus (, ). Other trans-Neptunian objects Symbols for other large trans-Neptunian objects have mostly been proposed on the Internet; some created by Denis Moskowitz have been used by NASA and are used by the popular open-source astrological software Astrolog, as well as being used less consistently by commercial programs. Miscellaneous orbital stations The symbol for retrograde motion is , a capital 'R' with a tail stroke. An 'R' with a tail stroke was used to abbreviate many words beginning with the letter 'R'; in medical prescriptions, it abbreviated the word recipe (from the Latin imperative of recipere "to take"), and in missals, an R with a tail stroke marked the responses. Meanings of the symbols Signs of the zodiac Planets Asteroids and other celestial bodies Since the 1970s, some astrologers have used asteroids and other celestial bodies in their horoscopes. The symbol for the first-recognised centaur, 2060 Chiron, was devised by Al H.
Morrison soon after it had been discovered by Charles Kowal, and has become standard amongst astrologers. In the late 1990s, German astrologer Robert von Heeren created symbols for other centaurs based on the Chiron model, though only those for 5145 Pholus and 7066 Nessus are included in Unicode, and only that for Pholus in Astrolog. The following list is by no means exhaustive, but for bodies outside this list, there is often very little to no independent usage beyond the symbols' creators. The Hamburg School of Astrology, also called Uranian Astrology, is a sub-variety of western astrology. It adds eight fictitious trans-Neptunian planets to the normal ones used by western astrologers: Aspects In astrology, an aspect is an angle the planets make to each other in the horoscope, also to the ascendant, midheaven, descendant, lower midheaven, and other points of astrological interest. The following symbols are used to note aspect: Russian aspects In addition to the aspect symbols above, some Russian astrologers use additional or unique aspect symbols: Miscellaneous symbols See also Alchemical symbols Aztec calendar Behenian fixed star Classical elements Earthly Branches Gender symbols Heavenly Stems Maya calendar Monas Hieroglyphica Planet symbol Nakshatra Navagraha Sexagenary cycle Sri Rama Chakra Vedic astrology Notes References External links Glyphs and keywords for asteroids (often different from the astronomical ones) Symbols Astronomical symbols Religious symbols Symbols Western astrological signs Heraldic charges Unicode
Astrological symbols
Astronomy,Mathematics
1,855
14,669,190
https://en.wikipedia.org/wiki/Spectrin%20repeat
Spectrin repeats are found in several proteins involved in cytoskeletal structure. These include spectrin, alpha-actinin, dystrophin and more recently the plakin family. The spectrin repeat forms a three-helix bundle. These conform to the rules of the heptad repeat. Spectrin repeats give rise to linear proteins. This, however, may be due to sample bias, in which linear and rigid structures are more amenable to crystallization. There are hints, however, that some proteins harbouring spectrin repeats may also be flexible. This is most likely due to specifically evolved functional purposes. Human proteins containing this domain ACTN1; ACTN2; ACTN3; ACTN4; AKAP6; SYNE3; CATX-15; DMD; DRP2; DST; KALRN; MACF1; MCF2L; SPTA1; SPTAN1; SPTB; SPTBN1; SPTBN2; SPTBN4; SPTBN5; SYNE1; SYNE2; TRIO; UTRN; References Further reading Peripheral membrane proteins Protein domains Protein superfamilies
Spectrin repeat
Biology
247
75,038,808
https://en.wikipedia.org/wiki/Socially%20assistive%20robot
A socially assistive robot (SAR) aids users through social engagement and support rather than through physical tasks and interactions. Background The field of socially assistive robotics emerged in the early 2000s, following the emergence of the field of social robots. In contrast to social robots, SARs aid users with specific goals related to behavior change rather than serving as purely social entities. The term "Socially assistive robot" was initially defined by Maja Matarić and David Feil-Seifer in 2005. Since its inception, the field has gained substantial recognition, featuring numerous research projects, a wealth of global research publications, startup companies, and a growing array of products on the consumer market. The COVID-19 pandemic has underscored the immense potential of socially assistive robots, particularly in addressing the needs of large user populations, including children engaged in remote learning, elderly individuals grappling with loneliness, and those affected by social isolation and its associated negative consequences. Characteristics of interaction SARs rely on artificial intelligence (AI) to generate real-time, responsive, natural, and meaningful robot behaviors during interactions with humans. The robots employ various forms of communication, such as facial expressions, gestures, body movements, and speech. In contrast to robots intended for physical tasks, SARs are designed to support and motivate users to perform their own tasks. The tasks a user engages in can be physical (e.g., rehabilitation exercises for post-stroke users), cognitive (e.g., dementia screening for elderly users), or social (e.g., turn-taking for users with autism spectrum disorders). This complex interaction involves detecting and interpreting the user's movement, behavior, intent, goals, speech, and preferences. Machine learning and robot learning techniques are frequently employed to enhance the robot's understanding of the user, predict user preferences, and provide effective assistance. The effectiveness of socially assistive robots is assessed based on objective measurements of user performance and improvement resulting from the robot's assistance and support. Unlike other branches of robotics, where effectiveness depends on the robot's physical task completion, SAR measures the success of the robot based on the user's progress and achievements. This evaluation is carried out using quantitative objective metrics, such as time spent on tasks, accuracy, retention, and verbalization, as well as quantitative subjective metrics, such as user survey tools. SAR is based on the large body of evidence showing that users tend to respond more positively to interactions with physical robots compared to interactions with screens. Interaction with physical robots also encourages users to learn and retain more information than screen-based interactions. This fundamental insight underlines why physical robots in SAR applications are more effective, as opposed to interactions solely involving screens, tablets, or computers. Uses and applications SARs have been developed and validated in a wide array of applications, including healthcare, elder care, education, and training. For example, SARs have been developed to support children on the autism spectrum in acquiring and practicing social and cognitive skills, to motivate and coach stroke patients throughout their rehabilitation exercises, to monitor individuals' health (e.g.,
fall detection), and to encourage elderly users to be more physically and socially active. There is a concern that technophobia and lack of trust in robots will pose a barrier to the effectiveness of SARs in older adults. References Robotics Social work Machine learning
Socially assistive robot
Engineering
683
1,394,395
https://en.wikipedia.org/wiki/Blackberry%20winter
Blackberry winter is a colloquial expression used in the southern and midwestern parts of North America, as well as in Europe and the Sinosphere (Vietnam and East Asia), referring to a cold snap that often occurs in late spring when the blackberries are in bloom. Other colloquial names for spring cold snaps include "dogwood winter," "whippoorwill winter," "locust winter," and "redbud winter." The different names are based on what is blooming in particular regions during the typical spring cold snaps. Another colloquialism for these spring cold snaps is "linsey-woolsey britches winter," referring to a type of winter long underwear which could be put away after the last cold snap. The blackberry winter term may have arisen to describe the belief that a spring cold snap helps the blackberry canes to start growing. In East Asia and Vietnam, the blackberry winter is known as Miss Ban's Winter (, , ), as it is associated with an ancient folk tale of Miss Ban, a young daughter of the Jade Emperor who is hard-working but clumsy. She marries a husband, who is also a god, with the hope that she could improve her housework skills. In the winter, she dedicates herself to tailoring clothes for her husband, but her clumsiness results in her being unable to finish the job until the end of winter. When she finishes, March has already gone by; thus she misses the winter. She falls down crying, and the Jade Emperor, touched by her will, decides to return the cold for a week to allow her husband to wear the clothes of Miss Ban. Thus, this is known as Miss Ban's Winter. In rural England, the equivalent term is "blackthorn winter", so-called because the blackthorn in hedgerows blossoms in early April, preceding the leaves, and presents an intense white spray against the black branches of the bush. In Finland, where the phenomenon is incredibly common – even in the month of May – the expression used to describe it, "takatalvi" (lit. 'back-winter'; most likely from the word takaisin, 'come back', as in "returning winter"), is part of common parlance. In media "Blackberry Winter" is the name of a frequently anthologized short story from 1946 by Robert Penn Warren. It is also the name of a song written by Edith Lindeman and Carl Stutz. This became a back-door million-seller as the B-side of Mitch Miller's recording of The Yellow Rose of Texas, a number 1 hit in the U.S. in 1955. It is also the name of a well-reviewed (if not major) classical/symphonic work by composer Conni Ellisor, and a well-reviewed ballet based on this composition. It is also the name of a song by Alec Wilder and Loonis McGlohon. Blackberry Winter is also the name of a 2006 short film directed by Brent Stewart about a cannibal clown in the antebellum South. Blackberry Winter is also the title of the autobiography (1972) of anthropologist Margaret Mead. See also Indian summer Strawberry Spring Footnotes Sources Millichap, Joseph R. 1992. Robert Penn Warren: A Study of the Short Fiction. Twayne Publishers, New York. Gordon Weaver, editor. Warren, Robert Penn. 1983. The Circus in the Attic and Other Stories. A Harvest Book, Harcourt Brace & Company, New York. (paperback). Weather lore Colloquial terms
Blackberry winter
Physics
727
61,423,208
https://en.wikipedia.org/wiki/C25H40N7O17P3S
{{DISPLAYTITLE:C25H40N7O17P3S}} The molecular formula C25H40N7O17P3S (molar mass: 835.609 g/mol) may refer to: Crotonyl-CoA Methacrylyl-CoA Molecular formulas
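As a quick check on the molar mass quoted above, the figure can be reproduced by summing standard atomic weights over the formula. The short Python sketch below is an illustrative calculation; the rounded atomic weights are an assumption of this example rather than data from the article, and with these particular values the sum comes out at the stated 835.609 g/mol (other rounding conventions would shift the last digit slightly).

```python
# Recompute the molar mass of C25H40N7O17P3S from rounded standard atomic weights.
# Illustrative check only; the exact last digit depends on the rounding used.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "P": 30.974, "S": 32.06}
FORMULA = {"C": 25, "H": 40, "N": 7, "O": 17, "P": 3, "S": 1}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
print(f"{molar_mass:.3f} g/mol")  # 835.609 g/mol with these weights
```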
C25H40N7O17P3S
Physics,Chemistry
68
1,435,538
https://en.wikipedia.org/wiki/Internet%20History%20Sourcebooks%20Project
The Internet History Sourcebooks Project is located at the Fordham University History Department and Center for Medieval Studies. It is a web site with modern, medieval and ancient primary source documents, maps, secondary sources, bibliographies, images and music. Paul Halsall is the editor, with Jerome S. Arkenberg as the contributing editor. It was first created in 1996, and is used extensively by teachers as an alternative to textbooks. Internet Medieval Sourcebook The Internet Medieval Sourcebook or IMS is a web site with Medieval source documents, maps, secondary sources, bibliographies, images and music. It is located at the Fordham University Center for Medieval Studies. A large number of the documents on IMS are older copyright-expired translations from the 19th and early 20th century. However, IMS also has a section of "recently translated texts" which have been translated just for IMS. In fact, IMS claims it "contains more newly-translated texts than any available published collection of medieval sources." Internet Ancient Sourcebook Internet Modern Sourcebook The Internet Modern History Sourcebook is intended to serve the needs of teachers and students in college survey courses in modern European history and American history, as well as in modern Western Civilization and World Cultures. Other Sourcebooks In addition to the large collections in the Medieval, Ancient, and Modern Sourcebooks, the Internet History Sourcebooks Project also includes Sourcebooks on African, East Asian, Global, Indian, Islamic, Jewish, Lesbian and Gay, Science, and Women's History. References External links Internet History Sourcebooks Project. Internet Ancient Sourcebook. Internet Medieval Sourcebook. Internet Modern Sourcebook. Fordham University Medieval studies literature Discipline-oriented digital libraries Computing in classical studies American digital libraries Digital humanities projects
Internet History Sourcebooks Project
Technology
355
495,185
https://en.wikipedia.org/wiki/Western%20esotericism
Western esotericism, also known as the Western mystery tradition, is a term scholars use to classify a wide range of loosely related ideas and movements that developed within Western society. These ideas and currents are united by the fact that they are largely distinct both from orthodox Judeo-Christian religion and from Age of Enlightenment rationalism. It has influenced, or contributed to, various forms of Western philosophy, mysticism, religion, science, pseudoscience, art, literature, and music. The idea of grouping a wide range of Western traditions and philosophies together under the term esotericism developed in 17th-century Europe. Various academics have debated numerous definitions of Western esotericism. One view adopts a definition from certain esotericist schools of thought themselves, treating "esotericism" as a perennial hidden inner tradition. A second perspective sees esotericism as a category of movements that embrace an "enchanted" worldview in the face of increasing disenchantment. A third views Western esotericism as encompassing all of Western culture's "rejected knowledge" that is accepted neither by the scientific establishment nor by orthodox religious authorities. The earliest traditions of Western esotericism emerged in the Eastern Mediterranean during Late Antiquity, where Hermeticism, Gnosticism and Neoplatonism developed as schools of thought distinct from what became mainstream Christianity. Renaissance Europe saw increasing interest in many of these older ideas, with various intellectuals combining pagan philosophies with the Kabbalah and Christian philosophy, resulting in the emergence of esoteric movements like Christian Kabbalah and Christian theosophy. The 17th century saw the development of initiatory societies professing esoteric knowledge such as Rosicrucianism and Freemasonry, while the Age of Enlightenment of the 18th century led to the development of new forms of esoteric thought. The 19th century saw the emergence of new trends of esoteric thought now known as occultism. Significant groups in this century included the Societas Rosicruciana in Anglia, the Theosophical Society and the Hermetic Order of the Golden Dawn. Also important in this connection is Martinus Thomsen's "spiritual science". Modern paganism developed within occultism and includes religious movements such as Wicca. Esoteric ideas permeated the counterculture of the 1960s and later cultural tendencies, which led to the New Age phenomenon in the 1970s. The idea that these disparate movements could be classified as "Western esotericism" developed in the late 18th century, but these esoteric currents were largely ignored as a subject of academic enquiry. The academic study of Western esotericism only emerged in the late 20th century, pioneered by scholars like Frances Yates and Antoine Faivre. Etymology The concept of the "esoteric" originated in the 2nd century with the coining of the Ancient Greek adjective esōterikós ("belonging to an inner circle"); the earliest known example of the word appeared in a satire authored by Lucian of Samosata (c. 125 – after 180). In the 15th and 16th centuries, differentiations in Latin between exotericus and esotericus (along with internus and externus) were common in scholarly discourse on ancient philosophy. The categories of doctrina vulgaris and doctrina arcana are found among the Cambridge Platonists. Perhaps for the first time in English, Thomas Stanley, between 1655 and 1660, would refer to the Pythagorean exoterick and esoterick.
John Toland in 1720 would state that what is nowadays called the "esoteric distinction" was a universal phenomenon, present in both the West and the East. As for the noun "esotericism", probably the first mention in German of Esoterismus appeared in a 1779 work by Johann Georg Hamann, and the use of Esoterik in 1790 by Johann Gottfried Eichhorn. But the word esoterisch had already existed at least since 1731–1736, as found in the works of Johann Jakob Brucker; this author rejected everything that is characterized today as an "esoteric corpus". In this 18th-century context, these terms referred to Pythagoreanism or Neoplatonic theurgy, but the concept was consolidated in particular by two streams of discourse: speculations about the influences of the Egyptians on ancient philosophy and religion, and their associations with Masonic discourses and other secret societies, which claimed to have kept such ancient secrets until the Enlightenment; and the emergence of orientalist academic studies, which since the 17th century identified the presence of mysteries, secrets or esoteric "ancient wisdom" in Persian, Arab, Indian and Far Eastern texts and practices (see also Early Western reception of Eastern esotericism). The noun "esotericism", in its French form "ésotérisme", first appeared in 1828 in a three-volume work by the Protestant historian of gnosticism Jacques Matter (1791–1864). The term "esotericism" thus came into use in the wake of the Age of Enlightenment and of its critique of institutionalised religion, during which alternative religious groups such as the Rosicrucians began to disassociate themselves from the dominant Christianity in Western Europe. During the 19th and 20th centuries, scholars increasingly saw the term "esotericism" as meaning something distinct from Christianity—as a subculture at odds with the Christian mainstream from at least the time of the Renaissance. After the term had been introduced in French by Jacques Matter, the occultist and ceremonial magician Eliphas Lévi (1810–1875) popularized it in the 1850s. Lévi also introduced the term occultisme, a notion that he developed against the background of contemporary socialist and Catholic discourses. "Esotericism" and "occultism" were often employed as synonyms until later scholars distinguished the concepts. Philosophical usage In the context of Ancient Greek philosophy, the terms "esoteric" and "exoteric" were sometimes used by scholars not to denote that there was secrecy, but to distinguish two procedures of research and education: the first reserved for teachings that were developed "within the walls" of the philosophical school, among a circle of thinkers ("eso-" indicating what is unseen, as in the classes internal to the institution), and the second referring to those whose works were disseminated to the public in speeches and published ("exo-": outside). The initial meaning of this last word is implied when Aristotle coined the term "exoteric speeches" (exōterikoi logoi), perhaps to refer to the speeches he gave outside his school. However, Aristotle never employed the term "esoteric" and there is no evidence that he dealt with specialized secrets; there is a dubious report by Aulus Gellius, according to which Aristotle disclosed the exoteric subjects of politics, rhetoric and ethics to the general public in the afternoon, while he reserved the morning for "akroatika" (acroamatics), referring to natural philosophy and logic, taught during a walk with his students.
Furthermore, the term "exoteric" for Aristotle could have another meaning, hypothetically referring to an extracosmic reality, ta exo, superior to and beyond Heaven, requiring abstraction and logic. This reality stood in contrast to what he called enkyklioi logoi, knowledge "from within the circle", involving the intracosmic physics that surrounds everyday life. There is a report by Strabo and Plutarch, however, which states that the Lyceum's school texts were circulated internally, their publication was more controlled than the exoteric ones, and that these "esoteric" texts were rediscovered and compiled only with the efforts of Andronicus of Rhodes. Plato would have orally transmitted intramural teachings to his disciples, the supposed "esoteric" content of which regarding the First Principles is particularly highlighted by the Tübingen School as distinct from the apparent written teachings conveyed in his books or public lectures. Hegel commented on the analysis of this distinction in the modern hermeneutics of Plato and Aristotle: To express an external object not much is required, but to communicate an idea a capacity must be present, and this always remains something esoteric, so that there has never been anything purely exoteric about what philosophers say. In any case, drawing from the tradition of discourses that supposedly revealed a vision of the absolute and truth present in mythology and initiatory rites of mystery religions, Plato and his philosophy began the Western perception of esotericism, to the point that Kocku von Stuckrad stated "esoteric ontology and anthropology would hardly exist without Platonic philosophy." In his dialogues, he uses expressions that refer to cultic secrecy (for example, , , one of the Ancient Greek expressions referring to the prohibition of revealing a secret, in the context of mysteries). In Theaetetus 152c, there is an example of this concealment strategy: Can it be, then, that Protagoras was a very ingenious person who threw out this obscure utterance for the unwashed like us but reserved the truth as a secret doctrine (ἐν ἀπορρήτῳ τὴν ἀλήθειαν) to be revealed to his disciples? The Neoplatonists intensified the search for a "hidden truth" under the surface of teachings, myths and texts, developing the hermeneutics and allegorical exegesis of Plato, Homer, Orpheus and others. Plutarch, for example, developed the justification of a theological esotericism, and Numenius wrote "On the Secrets of Plato" (Peri tôn para Platoni aporrhèta). Probably based on the "exôtikos/esôtikos" dichotomy, the Hellenic world developed the classical distinction between exoteric/esoteric, stimulated by criticism from various currents such as the Patristics. According to examples in Lucian, Galen and Clement of Alexandria, at that time it was a common practice among philosophers to keep secret writings and teachings. A parallel secrecy and reserved elite was also found in the contemporary environment of Gnosticism. Later, Iamblichus would present his definition (close to the modern one), as he classified the ancient Pythagoreans as either "exoteric" mathematicians or "esoteric" acousmatics, the latter being those who disseminated enigmatic teachings and hidden allegorical meanings. Conceptual development The concept of "Western esotericism" represents a modern scholarly construct rather than a pre-existing, self-defined tradition of thought. 
In the late 17th century, several European Christian thinkers presented the argument that one could categorise certain traditions of Western philosophy and thought together, thus establishing the category now labelled "Western esotericism". The first to do so, Ehregott Daniel Colberg (1659–1698), a German Lutheran theologian, wrote Platonisch-Hermetisches Christentum (1690–91). A hostile critic of various currents of Western thought that had emerged since the Renaissance—among them Paracelsianism, Weigelianism, and Christian theosophy—Colberg labelled all of these traditions in his book under the category of "Platonic–Hermetic Christianity", portraying them as heretical to what he saw as "true" Christianity. Despite his hostile attitude toward these traditions of thought, Colberg became the first to connect these disparate philosophies and to study them under one rubric, also recognising that these ideas linked back to earlier philosophies from late antiquity. In 18th-century Europe, during the Age of Enlightenment, these esoteric traditions came to be regularly categorised under the labels of "superstition", "magic", and "the occult"—terms often used interchangeably. The modern academy, then in the process of developing, consistently rejected and ignored topics coming under "the occult", thus leaving research into them largely to enthusiasts outside of academia. Indeed, according to the historian of esotericism Wouter J. Hanegraaff (born 1961), rejection of "occult" topics was seen as a "crucial identity marker" for any intellectuals seeking to affiliate themselves with the academy. Scholars established this category in the late 18th century after identifying "structural similarities" between "the ideas and world views of a wide variety of thinkers and movements" that, previously, had not been placed in the same analytical grouping. According to Hanegraaff, the term provided a "useful generic label" for "a large and complicated group of historical phenomena that had long been perceived as sharing an air de famille." Various academics have emphasised that esotericism is a phenomenon unique to the Western world. As Faivre stated, an "empirical perspective" would hold that "esotericism is a Western notion." As scholars such as Faivre and Hanegraaff have pointed out, there is no comparable category of "Eastern" or "Oriental" esotericism. The emphasis on Western esotericism was nevertheless primarily devised to distinguish the field from a universal esotericism. Hanegraaff has characterised these currents as "recognisable world views and approaches to knowledge that have played an important though always controversial role in the history of Western culture". The historian of religion Henrik Bogdan asserted that Western esotericism constituted "a third pillar of Western culture" alongside "doctrinal faith and rationality", being deemed heretical by the former and irrational by the latter. Scholars nevertheless recognise that various non-Western traditions have exerted "a profound influence" over Western esotericism, citing the example of the Theosophical Society's incorporation of Hindu and Buddhist concepts like reincarnation into its doctrines. Given these influences and the imprecise nature of the term "Western", the scholar of esotericism Kennet Granholm has argued that academics should cease referring to "Western esotericism" altogether, instead simply favouring "esotericism" as a descriptor of this phenomenon. Egil Asprem has endorsed this approach.
Definition The historian of esotericism Antoine Faivre noted that "never a precise term, [esotericism] has begun to overflow its boundaries on all sides", with both Faivre and Karen-Claire Voss stating that Western esotericism consists of "a vast spectrum of authors, trends, works of philosophy, religion, art, literature, and music". Scholars broadly agree on which currents of thought fall within a category of esotericism—ranging from ancient Gnosticism and Hermeticism through to Rosicrucianism and the Kabbalah and on to more recent phenomena such as the New Age movement. Nevertheless, esotericism itself remains a controversial term, with scholars specialising in the subject disagreeing as to how best to define it. As a universal secret inner tradition Some scholars have used Western esotericism to refer to "inner traditions" concerned with a "universal spiritual dimension of reality, as opposed to the merely external ('exoteric') religious institutions and dogmatic systems of established religions." This approach views Western esotericism as just one variant of a worldwide esotericism at the heart of all world religions and cultures, reflecting a hidden esoteric reality. This use is closest to the original meaning of the word in late antiquity, where it applied to secret spiritual teachings that were reserved for a specific elite and hidden from the masses. This definition was popularised in the published work of 19th-century esotericists like A.E. Waite, who sought to combine their own mystical beliefs with a historical interpretation of esotericism. It subsequently became a popular approach within several esoteric movements, most notably Martinism and Traditionalism. This definition, originally developed by esotericists themselves, became popular among French academics during the 1980s, exerting a strong influence over scholars such as Mircea Eliade and Henry Corbin, and over the early work of Faivre. Within the academic field of religious studies, those who study different religions in search of an inner universal dimension to them all are termed "religionists". Such religionist ideas also exerted an influence on more recent scholars like Nicholas Goodrick-Clarke and Arthur Versluis. Versluis for instance defined "Western esotericism" as "inner or hidden spiritual knowledge transmitted through Western European historical currents that in turn feed into North American and other non-European settings". He added that these Western esoteric currents all shared a core characteristic, "a claim to gnosis, or direct spiritual insight into cosmology or spiritual insight into metaphysics", and accordingly he suggested that these currents could be referred to as "Western gnostic" just as much as "Western esoteric". There are various problems with this model for understanding Western esotericism. The most significant is that it rests upon the conviction that there really is a "universal, hidden, esoteric dimension of reality" that objectively exists. The existence of this universal inner tradition has not been discovered through scientific or scholarly enquiry; this has led some to claim that it does not exist, though Hanegraaff thought it better to adopt a view based in methodological agnosticism by stating that "we simply do not know—and cannot know" if it exists or not. He noted that, even if such a true and absolute nature of reality really existed, it would only be accessible through "esoteric" spiritual practices, and could not be discovered or measured by the "exoteric" tools of scientific and scholarly enquiry.
Hanegraaff pointed out that an approach that seeks a common inner hidden core of all esoteric currents masks that such groups often differ greatly, being rooted in their own historical and social contexts and expressing mutually exclusive ideas and agendas. A third issue was that many of those currents widely recognised as esoteric never concealed their teachings, and in the 20th century came to permeate popular culture, thus problematizing the claim that esotericism could be defined by its hidden and secretive nature. He noted that when scholars adopt this definition, it shows that they subscribe to the religious doctrines espoused by the very groups they are studying. As an enchanted world view Another approach to Western esotericism treats it as a world view that embraces "enchantment" in contrast to world views influenced by post-Cartesian, post-Newtonian, and positivist science that sought to "dis-enchant" the world. That approach understands esotericism as comprising those world views that eschew a belief in instrumental causality and instead adopt a belief that all parts of the universe are interrelated without a need for causal chains. It stands as a radical alternative to the disenchanted world views that have dominated Western culture since the scientific revolution, and must therefore always be at odds with secular culture. An early exponent of this definition was the historian of Renaissance thought Frances Yates in her discussions of a Hermetic Tradition, which she saw as an "enchanted" alternative to established religion and rationalistic science. The primary exponent of this view was Faivre, who published a series of criteria for how to define "Western esotericism" in 1992. Faivre claimed that esotericism was "identifiable by the presence of six fundamental characteristics or components", four of which were "intrinsic" and thus vital to defining something as being esoteric, while the other two were "secondary" and thus not necessarily present in every form of esotericism. He listed these characteristics as follows: "Correspondences": This is the idea that there are both real and symbolic correspondences existing between all things within the universe. As examples for this, Faivre pointed to the esoteric concept of the macrocosm and microcosm, often presented as the dictum of "as above, so below", as well as the astrological idea that the actions of the planets have a direct corresponding influence on the behaviour of human beings. "Living Nature": Faivre argued that all esotericists envision the natural universe as being imbued with its own life force, and that as such they understand it as being "complex, plural, hierarchical". "Imagination and Mediations": Faivre believed that all esotericists place great emphasis on both the human imagination, and mediations—"such as rituals, symbolic images, mandalas, intermediary spirits"—and mantras as tools that provide access to worlds and levels of reality existing between the material world and the divine. "Experience of Transmutation": Faivre's fourth intrinsic characteristic of esotericism was the emphasis that esotericists place on fundamentally transforming themselves through their practice, for instance through the spiritual transformation that is alleged to accompany the attainment of gnosis. 
"Practice of Concordance": The first of Faivre's secondary characteristics of esotericism was the belief—held by many esotericists, such as those in the Traditionalist School—that there is a fundamental unifying principle or root from which all world religions and spiritual practices emerge. The common esoteric principle is that attaining this unifying principle can bring the world's different belief systems together in unity. "Transmission": Faivre's second secondary characteristic was the emphasis on the transmission of esoteric teachings and secrets from a master to their disciple, through a process of initiation. Faivre's form of categorisation has been endorsed by scholars like Goodrick-Clarke, and by 2007 Bogdan could note that Faivre's had become "the standard definition" of Western esotericism in use among scholars. In 2013 the scholar Kennet Granholm stated only that Faivre's definition had been "the dominating paradigm for a long while" and that it "still exerts influence among scholars outside the study of Western esotericism". The advantage of Faivre's system is that it facilitates comparing varying esoteric traditions "with one another in a systematic fashion." Other scholars criticised his theory, pointing out various weaknesses. Hanegraaff claimed that Faivre's approach entailed "reasoning by prototype" in that it relied upon already having a "best example" of what Western esotericism should look like, against which other phenomena then had to be compared. The scholar of esotericism Kocku von Stuckrad (born 1966) noted that Faivre's taxonomy was based on his own areas of specialism—Renaissance Hermeticism, Christian Kabbalah, and Protestant Theosophy—and that it was thus not based on a wider understanding of esotericism as it has existed throughout history, from the ancient world to the contemporary period. Accordingly, Von Stuckrad suggested that it was a good typology for understanding "Christian esotericism in the early modern period" but lacked utility beyond that. As higher knowledge As an alternative to Faivre's framework, Kocku von Stuckrad developed his own variant, though he argued that this did not represent a "definition" but rather "a framework of analysis" for scholarly usage. He stated that "on the most general level of analysis", esotericism represented "the claim of higher knowledge", a claim to possessing "wisdom that is superior to other interpretations of cosmos and history" that serves as a "master key for answering all questions of humankind." Accordingly, he believed that esoteric groups placed a great emphasis on secrecy, not because they were inherently rooted in elite groups but because the idea of concealed secrets that can be revealed was central to their discourse. Examining the means of accessing higher knowledge, he highlighted two themes that he believed could be found within esotericism, that of mediation through contact with non-human entities, and individual experience. Accordingly, for Von Stuckrad, esotericism could be best understood as "a structural element of Western culture" rather than as a selection of different schools of thought. As rejected knowledge Hanegraaff proposed an additional definition that "Western esotericism" is a category that represents "the academy's dustbin of rejected knowledge." In this respect, it contains all of the theories and world views rejected by the mainstream intellectual community because they do not accord with "normative conceptions of religion, rationality and science." 
His approach is rooted within the field of the history of ideas, and stresses the role of change and transformation over time. Goodrick-Clarke was critical of this approach, believing that it relegated Western esotericism to the position of "a casualty of positivist and materialist perspectives in the nineteenth-century" and thus reinforces the idea that Western esoteric traditions were of little historical importance. Bogdan similarly expressed concern regarding Hanegraaff's definition, believing that it made the category of Western esotericism "all inclusive" and thus analytically useless. History Late Antiquity The origins of Western esotericism are in the Hellenistic Eastern Mediterranean, then part of the Roman Empire, during Late Antiquity. This was a milieu that mixed religious and intellectual traditions from Greece, Egypt, the Levant, Babylon, and Persia—in which globalisation, urbanisation, and multiculturalism were bringing about socio-cultural change. One component of this was Hermeticism, an Egyptian Hellenistic school of thought that takes its name from the legendary Egyptian wise man, Hermes Trismegistus. In the 2nd and 3rd centuries, a number of texts attributed to Hermes Trismegistus appeared, including the Corpus Hermeticum, Asclepius, and The Discourse on the Eighth and Ninth. Some still debate whether Hermeticism was a purely literary phenomenon or had communities of practitioners who acted on these ideas, but it has been established that these texts discuss the true nature of God, emphasising that humans must transcend rational thought and worldly desires to find salvation and be reborn into a spiritual body of immaterial light, thereby achieving spiritual unity with divinity. Another tradition of esoteric thought in Late Antiquity was Gnosticism. Various Gnostic sects existed, and they broadly believed that the divine light had been imprisoned within the material world by a malevolent entity known as the Demiurge, who was served by demonic helpers, the Archons. It was the Gnostic belief that people, who were imbued with the divine light, should seek to attain gnosis and thus escape from the world of matter and rejoin the divine source. A third form of esotericism in Late Antiquity was Neoplatonism, a school of thought influenced by the ideas of the philosopher Plato. Advocated by such figures as Plotinus, Porphyry, Iamblichus, and Proclus, Neoplatonism held that the human soul had fallen from its divine origins into the material world, but that it could progress, through a number of hierarchical spheres of being, to return to its divine origins once more. The later Neoplatonists performed theurgy, a ritual practice attested in such sources as the Chaldean Oracles. Scholars are still unsure of precisely what theurgy involved, but know it involved a practice designed to make gods appear, who could then raise the theurgist's mind to the reality of the divine. Middle Ages After the fall of Rome, alchemy and philosophy and other aspects of the tradition were largely preserved in the Arab and Near Eastern world and reintroduced into Western Europe by Jews and by the cultural contact between Christians and Muslims in Sicily and southern Italy. The 12th century saw the development of the Kabbalah in southern Italy and medieval Spain. The medieval period also saw the publication of grimoires, which offered often elaborate formulas for theurgy and thaumaturgy. Many of the grimoires seem to have kabbalistic influence. 
Figures in alchemy from this period seem to also have authored or used grimoires. Medieval sects deemed heretical such as the Waldensians were thought to have utilized esoteric concepts. Renaissance and Early Modern period During the Renaissance, a number of European thinkers began to synthesize "pagan" (that is, not Christian) philosophies, which were then being made available through Arabic translations, with Christian thought and the Jewish kabbalah. The earliest of these individuals was the Byzantine philosopher Plethon (1355/60–1452?), who argued that the Chaldean Oracles represented an example of a superior religion of ancient humanity that had been passed down by the Platonists. Plethon's ideas interested the ruler of Florence, Cosimo de' Medici, who employed Florentine thinker Marsilio Ficino (1433–1499) to translate Plato's works into Latin. Ficino went on to translate and publish the works of various Platonic figures, arguing that their philosophies were compatible with Christianity, and allowing for the emergence of a wider movement in Renaissance Platonism, or Platonic Orientalism. Ficino also translated part of the Corpus Hermeticum, though the rest was translated by his contemporary, Lodovico Lazzarelli (1447–1500). Another core figure in this intellectual milieu was Giovanni Pico della Mirandola (1463–1494), who achieved notability in 1486 by inviting scholars from across Europe to come and debate with him 900 theses that he had written. Pico della Mirandola argued that all of these philosophies reflected a grand universal wisdom. Pope Innocent VIII condemned these ideas, criticising him for attempting to mix pagan and Jewish ideas with Christianity. Pico della Mirandola's increased interest in Jewish kabbalah led to his development of a distinct form of Christian Kabbalah. His work was built on by the German Johannes Reuchlin (1455–1522) who authored an influential text on the subject, De Arte Cabalistica. Christian Kabbalah was expanded in the work of the German Heinrich Cornelius Agrippa (1486–1535/36), who used it as a framework to explore the philosophical and scientific traditions of Antiquity in his work De occulta philosophia libri tres. The work of Agrippa and other esoteric philosophers had been based in a pre-Copernican worldview, but following the arguments of Copernicus, a more accurate understanding of the cosmos was established. Copernicus' theories were adopted into esoteric strains of thought by Giordano Bruno (1548–1600), whose ideas were deemed heresy by the Roman Catholic Church, which eventually publicly executed him. A distinct strain of esoteric thought developed in Germany, where it became known as Naturphilosophie. Though influenced by traditions from Late Antiquity and medieval Kabbalah, it only acknowledged two main sources of authority: Biblical scripture and the natural world. The primary exponent of this approach was Paracelsus (1493/94–1541), who took inspiration from alchemy and folk magic to argue against the mainstream medical establishment of his time—which, as in Antiquity, still based its approach on the ideas of the second-century physician and philosopher, Galen, a Greek in the Roman Empire. Instead, Paracelsus urged doctors to learn medicine through an observation of the natural world, though in later work he also began to focus on overtly religious questions. His work gained significant support in both areas over the following centuries. 
One of those influenced by Paracelsus was the German cobbler Jakob Böhme (1575–1624), who sparked the Christian theosophy movement through his attempts to solve the problem of evil. Böhme argued that God had been created out of an unfathomable mystery, the Ungrund, and that God himself was composed of a wrathful core, surrounded by the forces of light and love. Though condemned by Germany's Lutheran authorities, Böhme's ideas spread and formed the basis for a number of small religious communities, such as Johann Georg Gichtel's Angelic Brethren in Amsterdam, and John Pordage and Jane Leade's Philadelphian Society in England. From 1614 to 1616, the three Rosicrucian Manifestos were published in Germany. These texts purported to represent a secret, initiatory brotherhood founded centuries before by a German adept named Christian Rosenkreutz. There is no evidence that Rosenkreutz was a genuine historical figure, nor that a Rosicrucian Order had ever existed before then. Instead, the manifestos are likely literary creations of Lutheran theologian Johann Valentin Andreae (1586–1654). They interested the public, so several people described themselves as "Rosicrucian", claiming access to secret esoteric knowledge. A real initiatory brotherhood was established in late 16th-century Scotland through the transformation of Medieval stonemason guilds to include non-craftsmen: Freemasonry. Soon spreading into other parts of Europe, in England it largely rejected its esoteric character and embraced humanism and rationalism, while in France it embraced new esoteric concepts, particularly those from Christian theosophy. 18th, 19th and early 20th centuries The Age of Enlightenment witnessed a process of increasing secularisation of European governments and an embrace of modern science and rationality within intellectual circles. In turn, a "modernist occult" emerged that reflected varied ways esoteric thinkers came to terms with these developments. One of the esotericists of this period was the Swedish naturalist Emanuel Swedenborg (1688–1772), who attempted to reconcile science and religion after experiencing a vision of Jesus Christ. His writings focused on his visionary travels to heaven and hell and his communications with angels, claiming that the visible, materialist world parallels an invisible spiritual world, with correspondences between the two that do not reflect causal relations. Following his death, followers founded the Swedenborgian New Church—though his writings influenced a wider array of esoteric philosophies. Another major figure within the esoteric movement of this period was the German physician Franz Anton Mesmer (1734–1814), who developed the theory of Animal Magnetism, which later became known more commonly as Mesmerism. Mesmer claimed that a universal life force permeated everything, including the human body, and that illnesses were caused by a disturbance or block in this force's flow; he developed techniques he claimed cleansed such blockages and restored the patient to full health. One of Mesmer's followers, the Marquis de Puységur, discovered that mesmeric treatment could induce a state of somnumbulic trance in which they claimed to enter visionary states and communicate with spirit beings. These somnambulic trance-states heavily influenced the esoteric religion of Spiritualism, which emerged in the United States in the 1840s and spread throughout North America and Europe. 
Spiritualism was based on the concept that individuals could communicate with spirits of the deceased during séances. Most forms of Spiritualism had little theoretical depth, being largely practical affairs—but full theological worldviews based on the movement were articulated by Andrew Jackson Davis (1826–1910) and Allan Kardec (1804–1869). Scientific interest in the claims of Spiritualism resulted in the development of the field of psychical research. Somnambulism also exerted a strong influence on the early disciplines of psychology and psychiatry; esoteric ideas pervade the work of many early figures in this field, most notably Carl Gustav Jung—though with the rise of psychoanalysis and behaviourism in the 20th century, these disciplines distanced themselves from esotericism. Also influenced by artificial somnambulism was the religion of New Thought, founded by the American mesmerist Phineas P. Quimby (1802–1866). It revolved around the concept of "mind over matter"—believing that illness and other negative conditions could be cured through the power of belief. In Europe, a movement usually termed occultism emerged as various figures attempted to find a "third way" between Christianity and positivist science while building on the ancient, medieval, and Renaissance traditions of esoteric thought. In France, following the social upheaval of the 1789 Revolution, various figures emerged in this occultist milieu who were heavily influenced by traditional Catholicism, the most notable of whom were Éliphas Lévi (1810–1875) and Papus (1865–1916). Also significant was René Guénon (1886–1951), whose concern with tradition led him to develop an occult viewpoint termed Traditionalism; it espoused the idea of an original, universal tradition, and thus a rejection of modernity. His Traditionalist ideas strongly influenced later esotericists like Julius Evola (1898–1974), founder of the UR Group, and Frithjof Schuon (1907–1998). In the Anglophone world, the burgeoning occult movement owed more to Enlightenment libertines, and thus was more often of an anti-Christian bent that saw wisdom as emanating from the pre-Christian pagan religions of Europe. Various Spiritualist mediums came to be disillusioned with the esoteric thought available, and sought inspiration in pre-Swedenborgian currents, including Emma Hardinge Britten (1823–1899) and Helena Blavatsky (1831–1891), the latter of whom called for the revival of the "occult science" of the ancients, which could be found in both the East and West. Authoring the influential Isis Unveiled (1877) and The Secret Doctrine (1888), she co-founded the Theosophical Society in 1875. Subsequent leaders of the Society, namely Annie Besant (1847–1933) and Charles Webster Leadbeater (1854–1934) interpreted modern theosophy as a form of ecumenical esoteric Christianity, resulting in their proclamation of Indian Jiddu Krishnamurti (1895–1986) as world messiah. In rejection of this was the breakaway Anthroposophical Society founded by Rudolf Steiner (1861–1925). According to Maria Carlson, ""Both turned out to be 'positivistic religions,' offering a seemingly logical theology based on pseudoscience." Another form of esoteric Christianity is the spiritual science of the Danish mystic Martinus (1890-1981) who is popular in Scandinavia. New esoteric understandings of magic also developed in the latter part of the 19th century. 
One of the pioneers of this was the American Paschal Beverly Randolph (1825–1875), who argued that sexual energy and psychoactive drugs could be used for magical purposes. In England, the Hermetic Order of the Golden Dawn—an initiatory order devoted to magic based on kabbalah—was founded in the latter years of the century. One of the members of that order was Aleister Crowley (1875–1947), who went on to proclaim the religion of Thelema and become a member of Ordo Templi Orientis. Some of their contemporaries developed esoteric schools of thought that did not entail magic, namely the Greco-Armenian teacher George Gurdjieff (1866–1949) and his Russian pupil P.D. Ouspensky (1878–1947). Emergent occult and esoteric systems found increasing popularity in the early 20th century, especially in Western Europe. Occult lodges and secret societies flowered among European intellectuals of this era who had largely abandoned traditional forms of Christianity. The spreading of secret teachings and magical practices found enthusiastic adherents in the chaos of Germany during the interwar years. Notable writers such as Guido von List spread neo-pagan, nationalist ideas, based on Wotanism and the Kabbalah. Many influential and wealthy Germans were drawn to secret societies such as the Thule Society. Thule Society activist Karl Harrer was one of the founders of the German Workers' Party, which later became the Nazi Party; some Nazi Party members like Alfred Rosenberg and Rudolf Hess were listed as "guests" of the Thule Society, as was Adolf Hitler's mentor Dietrich Eckart. After their rise to power, the Nazis persecuted occultists. While many Nazi Party leaders like Hitler and Joseph Goebbels were hostile to occultism, Heinrich Himmler used Karl Maria Wiligut as a clairvoyant "and was regularly consulting him for help in setting up the symbolic and ceremonial aspects of the SS", but not for important political decisions. By 1939, Wiligut was "forcibly retired from the SS" after being institutionalised for insanity. On the other hand, the German hermetic magic order Fraternitas Saturni was founded at Easter 1928 and is one of the oldest continuously running magical groups in Germany. In 1936, the Fraternitas Saturni was prohibited by the Nazi regime. The leaders of the lodge emigrated to avoid imprisonment, but in the course of the war Eugen Grosche, one of their main leaders, was arrested for a year by the Nazi government. After World War II they re-formed the Fraternitas Saturni. Later 20th century In the 1960s and 1970s, esotericism came to be increasingly associated with the growing counter-culture in the West, whose adherents understood themselves as participating in a spiritual revolution that marked the Age of Aquarius. By the 1980s, these millenarian currents had come to be widely known as the New Age movement, and it became increasingly commercialised as business entrepreneurs exploited a growth in the spiritual market. Conversely, other forms of esoteric thought retained the anti-commercial and counter-cultural sentiment of the 1960s and 1970s, namely the techno-shamanic movement promoted by figures such as Terence McKenna and Daniel Pinchbeck, which built on the work of anthropologist Carlos Castaneda. This trend was accompanied by the increased growth of modern paganism, a movement initially dominated by Wicca, the religion propagated by Gerald Gardner. Wicca was adopted by members of the second-wave feminist movement, most notably Starhawk, and developed into the Goddess movement.
Wicca also greatly influenced the development of Pagan neo-druidry and other forms of Celtic revivalism. In response to Wicca, literature and groups have also appeared whose members label themselves followers of traditional witchcraft; in opposition to the growing visibility of Wicca, these claim older roots than the system proposed by Gardner. Other trends that emerged in western occultism in the later 20th century included Satanism, as espoused by groups such as the Church of Satan and Temple of Set, as well as chaos magick through the Illuminates of Thanateros group. Additionally, since the start of the 1990s, countries formerly behind the Iron Curtain have undergone a varied and widespread religious revival, with a large number of occult and new religious movements gaining popularity. Gnostic revivalists, New Age organizations, and Scientology splinter groups have found their way into much of the former Soviet bloc since the cultural and political shift resulting from the dissolution of the USSR. In Hungary, a significant number of citizens (relative to the size of the country's population and compared to its neighbors) practice or adhere to new currents of Western esotericism. In April 1997, the Fifth Esoteric Spiritual Forum was held for two days in the country and was attended at capacity; in August of the same year, the International Shaman Expo began, broadcast on live TV and ultimately running for two months, with various neo-Shamanist, millenarian, mystic, neo-Pagan, and even UFO religion congregations and figures among the attendees. Academic study The academic study of Western esotericism was pioneered in the early 20th century by historians of the ancient world and the European Renaissance, who came to recognise that—even though previous scholarship had ignored it—the effect of pre-Christian and non-rational schools of thought on European society and culture was worthy of academic attention. One of the key centres for this was the Warburg Institute in London, where scholars like Frances Yates, Edgar Wind, Ernst Cassirer, and D. P. Walker began arguing that esoteric thought had had a greater effect on Renaissance culture than had been previously accepted. The work of Yates in particular, most notably her 1964 book Giordano Bruno and the Hermetic Tradition, has been cited as "an important starting-point for modern scholarship on esotericism", succeeding "at one fell swoop in bringing scholarship onto a new track" by bringing wider awareness of the effect that esoteric ideas had on modern science. In 1965, at the instigation of the scholar Henry Corbin, the École pratique des hautes études at the Sorbonne established the world's first academic post in the study of esotericism, with a chair in the History of Christian Esotericism. Its first holder was François Secret, a specialist in the Christian Kabbalah, though he had little interest in developing the wider study of esotericism as a field of research. In 1979 Faivre assumed Secret's chair at the Sorbonne, which was renamed the "History of Esoteric and Mystical Currents in Modern and Contemporary Europe". Faivre has since been cited as being responsible for developing the study of Western esotericism into a formalised field, with his 1992 work L'ésotérisme having been cited as marking "the beginning of the study of Western esotericism as an academic field of research". He remained in the chair until 2002, when he was succeeded by Jean-Pierre Brach. Faivre noted two significant obstacles to establishing the field.
One was an ingrained prejudice toward esotericism within academia, resulting in the widespread perception that the history of esotericism was not worthy of academic research. The other was esotericism's status as a trans-disciplinary field, the study of which did not fit clearly within any particular discipline. As Hanegraaff noted, Western esotericism had to be studied as a separate field to religion, philosophy, science, and the arts, because while it "participates in all these fields" it does not squarely fit into any of them. Elsewhere, he noted that there was "probably no other domain in the humanities that has been so seriously neglected" as Western esotericism. In 1980, the U.S.-based Hermetic Academy was founded by Robert A. McDermott as an outlet for American scholars interested in Western esotericism. From 1986 to 1990 members of the Hermetic Academy participated in panels at the annual meeting of the American Academy of Religion under the rubric of the "Esotericism and Perennialism Group". By 1994, Faivre could comment that the academic study of Western esotericism had taken off in France, Italy, England, and the United States, but he lamented that it had not done so in Germany. In 1999, the University of Amsterdam established a chair in the History of Hermetic Philosophy and Related Currents, which was occupied by Hanegraaff, while in 2005 the University of Exeter created a chair in Western Esotericism, which was taken by Goodrick-Clarke, who headed the Exeter Center for the Study of Esotericism. Thus, by 2008 there were three dedicated university chairs in the subject, with Amsterdam and Exeter also offering master's degree programs in it. Several conferences on the subject were held at the quintennial meetings of the International Association for the History of Religions, while a peer-reviewed journal, Aries: Journal for the Study of Western Esotericism began publication in 2001. 2001 also saw the foundation of the North American Association for the Study of Esotericism (ASE), with the European Society for the Study of Western Esotericism (ESSWE) being established shortly after. Within a few years, Michael Bergunder expressed the view that it had become an established field within religious studies, with Asprem and Granholm observing that scholars within other sub-disciplines of religious studies had begun to take an interest in the work of scholars of esotericism. Asprem and Granholm noted that the study of esotericism had been dominated by historians and thus lacked the perspective of social scientists examining contemporary forms of esotericism, a situation that they were attempting to correct through building links with scholars operating in Pagan studies and the study of new religious movements. On the basis that "English culture and literature have been traditional strongholds of Western esotericism", in 2011 Pia Brînzeu and György Szönyi urged that English studies also have a role in this interdisciplinary field. Emic and etic divisions Emic and etic refer to two kinds of field research done and viewpoints obtained, emic, from within the social group (from the perspective of the subject) and etic, from outside (from the perspective of the observer). Wouter Hanegraaff follows a distinction between an emic and an etic approach to religious studies. The emic approach is that of the alchemist or theosopher. The etic approach is that of the scholar as an historian, a researcher, with a critical view. 
An empirical study of esotericism needs "emic material and etic interpretation". Arthur Versluis proposes approaching esotericism through an "imaginative participation". Many scholars of esotericism have come to be regarded as respected intellectual authorities by practitioners of various esoteric traditions. Many esotericism scholars have sought to emphasise that esotericism is not a single object, but practitioners who read this scholarship have begun to regard it and think of it as a singular object, with which they affiliate themselves. Thus, Asprem and Granholm noted that the use of the term "esotericism" among scholars "significantly contributes to the reification of the category for the general audience—despite the explicated contrary intentions of most scholars in the field." In popular culture In 2013, Asprem and Granholm highlighted that "contemporary esotericism is intimately, and increasingly, connected with popular culture and new media." Granholm noted that esoteric ideas and images appear in many aspects of Western popular media, citing such examples as Buffy the Vampire Slayer, Avatar, Hellblazer, and His Dark Materials. Granholm has argued that there are problems with the field in that it draws a distinction between esotericism and non-esoteric elements of culture that draw upon esotericism. He cites extreme metal as an example, noting that it is extremely difficult to differentiate between artists who were "properly occult" and those who superficially referenced occult themes and aesthetics. Writers interested in occult themes have adopted three different strategies for dealing with the subject: those who are knowledgeable on the subject and include attractive images of the occult and occultists in their work, those who disguise occultism within "a web of intertextuality", and those who oppose it and seek to deconstruct it. See also Brotherhood of Myriam Notes References Sources Further reading Aries: Journal for the Study of Western Esotericism, Leiden: Brill, since 2001. Aries Book Series: Texts and Studies in Western Esotericism, Leiden: Brill, since 2006. Esoterica, East Lansing, Michigan State University (MSU). An online resource since 1999. I (1999); VIII (2006); IX (2007). Hanegraaff, Wouter J., "The Study of Western Esotericism: New Approaches to Christian and Secular Culture", in Peter Antes, Armin W. Geertz and Randi R. Warne, New Approaches to the Study of Religion, vol. I: Regional, Critical, and Historical Approaches, Berlin / New York: Walter de Gruyter, 2004. Kelley, James L., Anatomyzing Divinity: Studies in Science, Esotericism and Political Theology, Trine Day, 2011. Martin, Pierre, Esoterische Symbolik heute – in Alltag. Sprache und Einweihung. Basel: Edition Oriflamme, 2010, illustrated. Martin, Pierre, Le Symbolisme Esotérique Actuel – au Quotidien, dans le Langage et pour l'Auto-initiation. Basel: Edition Oriflamme, 2011, illustrated. External links An Esoteric Archive Center for History of Hermetic Philosophy and Related Currents, University of Amsterdam, the Netherlands Association for the Study of Esotericism (ASE) European Society for the Study of Western Esotericism (ESSWE) Centre for Magic and Esotericism, University of Exeter, United Kingdom Aries: Journal for the Study of Western Esotericism Esoterica academic journal The Secret History of Western Esotericism Podcast (SHWEP) Western Schools of thought
Western esotericism
Biology
10,767
10,249,946
https://en.wikipedia.org/wiki/Natural%20gasoline
Natural gasoline is a liquid hydrocarbon mixture condensed from natural gas, similar to common gasoline (petrol) derived from petroleum. The chemical composition of natural gasoline is mostly five- and six-carbon alkanes (pentanes and hexanes) with smaller amounts of longer-chain alkanes. It contains significant amounts of isopentane (methylbutane), which is rare in gasoline derived from petroleum. Its boiling point is within the standard range for gasoline, and its vapor pressure is intermediate between those of natural gas condensate (drip gas) and liquefied petroleum gas. Its typical gravity is around 80 °API. Natural gasoline is rather volatile and unstable, and has a low octane rating, but can be blended with other hydrocarbons to produce commercial gasoline. It is also used as a solvent to extract oil from oil shale. Its properties are standardized by GPA Midstream (formerly the Gas Processors Association). Uses Natural gasoline is often used as a denaturant for fuel-grade ethanol, where it is commonly added volumetrically at between 2.0% and 2.5% to make denatured fuel ethanol (DFE), or E98. This process renders the fuel-grade ethanol undrinkable. The E98 is then transferred to a blender, which adds it to conventional gasoline to make common 87-octane fuels (E10). It can also be added to ethanol in higher volumetric concentrations to produce high-level ethanol blends such as E85. Natural gasoline has a lower octane content (RON roughly equal to 70) than conventional commercially distilled gasoline, so it cannot normally be used by itself as fuel for modern automobiles. However, when mixed with higher concentrations of ethanol (RON roughly equal to 113) to produce products such as E85, the octane level of the natural gasoline and ethanol mixture is brought within the usable range for flex-fuel vehicles. Sources It may be sourced from the production of natural-gas wells (see "drip gas") or produced by extraction processes in the field, as opposed to refinery cracking of conventional gasoline. References Fuels Natural gas
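The octane figures quoted above can be illustrated with a rough volume-weighted blend calculation. The Python sketch below is only a back-of-the-envelope illustration: real octane blending is nonlinear and refiners use empirical blending values rather than neat RON numbers, and the 85%/15% split is a nominal E85 composition assumed for this example rather than a value from the article.

```python
# Back-of-the-envelope sketch: volume-weighted linear RON blending.
# Real-world octane blending is nonlinear, so this only roughly illustrates
# the point that mixing RON ~70 natural gasoline with RON ~113 ethanol
# yields a fuel in the usable range for flex-fuel vehicles.
def linear_blend_ron(fractions_and_ron):
    """fractions_and_ron: list of (volume_fraction, RON) pairs summing to 1.0."""
    total = sum(f for f, _ in fractions_and_ron)
    assert abs(total - 1.0) < 1e-9, "volume fractions must sum to 1"
    return sum(f * ron for f, ron in fractions_and_ron)

# Nominal E85: ~85% ethanol (RON ~113) and ~15% natural gasoline (RON ~70).
print(linear_blend_ron([(0.85, 113.0), (0.15, 70.0)]))  # ~106.6
```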
Natural gasoline
Chemistry
441
12,812,896
https://en.wikipedia.org/wiki/Sulfurisphaera
Sulfurisphaera is a genus of the Sulfolobaceae. Description and significance Sulfurisphaera is a facultatively anaerobic, thermophilic, Gram-negative archaeon that occurs in acidic solfataric fields. The organism grows under the temperature range of 63–92 °C with the optimum temperature at 84 °C, and under the pH range of 1.0–5.0, with an optimum of pH 2.0. It forms colonies that are smooth, roundly convex, and slightly yellow. The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI) Genome structure The genome of Sulfurisphaera is yet to be sequenced. The G + C content is estimated to be 30–33%. Cell structure and metabolism The spherical cells of Sulfurisphaera ohwakuensis are 1.2–1.5 μm in diameter. Thin sections of the organism reveal an envelope (approx. 24 nm) surrounding the cell membrane. It grows organotrophically on proteinaceous, complex substrates such as yeast extract, peptone, and tryptone. Growth was not observed on single sugars or amino acids such as D-glucose, D-galactose, D-fructose, D-xylose, lactose, maltose, sucrose, alanine, glutamate, glycine, and histidine. Ecology The strains of Sulfurisphaera ohwakuensis were isolated from multiple locations in the acidic hot springs in Ohwaku Valley, Hakone, Japan. See also List of Archaea genera References Further reading Scientific journals Scientific books External links Sulfurisphaera at BacDive - the Bacterial Diversity Metadatabase Archaea genera Thermoproteota
Sulfurisphaera
Biology
389
14,798,852
https://en.wikipedia.org/wiki/GFI1
Zinc finger protein Gfi-1 is a transcriptional repressor that in humans is encoded by the GFI1 gene. It is important for normal hematopoiesis. Gfi1 (growth factor independence 1) plays a critical role in hematopoiesis and in protecting hematopoietic cells against stress-induced apoptosis. Recent research has shown that Gfi1 upregulates the expression of the nuclear protein Hemgn, which contributes to its anti-apoptotic activity. This upregulation is mediated through a specific 16-bp promoter region and is dependent on Gfi1’s interaction with the histone demethylase LSD1. Gfi1 represses PU.1, and this repression precedes and correlates with the upregulation of Hemgn. The upregulation of Hemgn, in turn, contributes to the anti-apoptotic function of Gfi1, acting in a p53-independent manner. These findings suggest that Gfi1 promotes cell survival by upregulating Hemgn through the repression of PU.1, offering a new understanding of its role in apoptosis regulation. Interactions GFI1 has been shown to interact with PIAS3 and RUNX1T1. References Further reading External links Transcription factors
GFI1
Chemistry,Biology
282
1,474,720
https://en.wikipedia.org/wiki/Concrete%20Roman
Concrete Roman is a slab serif typeface designed by Donald Knuth using his METAFONT program. It was intended to accompany the Euler mathematical font which it partners in Knuth's book Concrete Mathematics. It has a darker appearance than its more famous sibling, Computer Modern. Some favour it for use on the computer screen because of this, as the thinner strokes of Computer Modern can make it hard to read at low resolutions. References External links Computer Modern family, for general use select .otf fonts Typefaces designed by Donald Knuth Slab serif typefaces TeX
Concrete Roman
Mathematics
120
11,522
https://en.wikipedia.org/wiki/Fly-by-wire
Fly-by-wire (FBW) is a system that replaces the conventional manual flight controls of an aircraft with an electronic interface. The movements of flight controls are converted to electronic signals, and flight control computers determine how to move the actuators at each control surface to provide the ordered response. Implementations either use mechanical flight control backup systems or else are fully electronic. Improved fully fly-by-wire systems interpret the pilot's control inputs as a desired outcome and calculate the control surface positions required to achieve that outcome; this results in various combinations of rudder, elevator, aileron, flaps and engine controls in different situations using a closed feedback loop. The pilot may not be fully aware of all the control outputs acting to affect the outcome, only that the aircraft is reacting as expected. The fly-by-wire computers act to stabilize the aircraft and adjust the flying characteristics without the pilot's involvement, and to prevent the pilot from operating outside of the aircraft's safe performance envelope. Rationale Mechanical and hydro-mechanical flight control systems are relatively heavy and require careful routing of flight control cables through the aircraft by systems of pulleys, cranks, tension cables and hydraulic pipes. Both systems often require redundant backup to deal with failures, which increases weight. Both have limited ability to compensate for changing aerodynamic conditions. Dangerous characteristics such as stalling, spinning and pilot-induced oscillation (PIO), which depend mainly on the stability and structure of the aircraft rather than the control system itself, are dependent on the pilot's actions. The term "fly-by-wire" implies a purely electrically signaled control system. It is used in the general sense of computer-configured controls, where a computer system is interposed between the operator and the final control actuators or surfaces. This modifies the manual inputs of the pilot in accordance with control parameters. Side-sticks or conventional flight control yokes can be used to fly fly-by-wire aircraft. Weight saving A fly-by-wire aircraft can be lighter than a similar design with conventional controls. This is partly due to the lower overall weight of the system components and partly because the natural stability of the aircraft can be relaxed (slightly for a transport aircraft; more for a maneuverable fighter), which means that the stability surfaces that are part of the aircraft structure can therefore be made smaller. These include the vertical and horizontal stabilizers (fin and tailplane) that are (normally) at the rear of the fuselage. If these structures can be reduced in size, airframe weight is reduced. The advantages of fly-by-wire controls were first exploited by the military and then in the commercial airline market. The Airbus series of airliners used full-authority fly-by-wire controls beginning with their A320 series, see A320 flight control (though some limited fly-by-wire functions existed on A310 aircraft). Boeing followed with their 777 and later designs. Basic operation Closed-loop feedback control A pilot commands the flight control computer to make the aircraft perform a certain action, such as pitch the aircraft up, or roll to one side, by moving the control column or sidestick. 
The flight control computer then calculates what control surface movements will cause the plane to perform that action and issues those commands to the electronic controllers for each surface. The controllers at each surface receive these commands and then move actuators attached to the control surface until it has moved to where the flight control computer commanded it to. The controllers measure the position of the flight control surface with sensors such as LVDTs. Automatic stability systems Fly-by-wire control systems allow aircraft computers to perform tasks without pilot input. Automatic stability systems operate in this way. Gyroscopes and sensors such as accelerometers are mounted in an aircraft to sense rotation on the pitch, roll and yaw axes. Any movement (from straight and level flight for example) results in signals to the computer, which can automatically move control actuators to stabilize the aircraft. Safety and redundancy While traditional mechanical or hydraulic control systems usually fail gradually, the loss of all flight control computers immediately renders the aircraft uncontrollable. For this reason, most fly-by-wire systems incorporate either redundant computers (triplex, quadruplex etc.), some kind of mechanical or hydraulic backup or a combination of both. A "mixed" control system with mechanical backup feeds any rudder movement directly back to the pilot and therefore defeats the purpose of a closed-loop (feedback) system. Aircraft systems may be quadruplexed (four independent channels) to prevent loss of signals in the case of failure of one or even two channels. High performance aircraft that have fly-by-wire controls (also called CCVs or Control-Configured Vehicles) may be deliberately designed to have low or even negative stability in some flight regimes; rapid-reacting CCV controls can electronically stabilize the lack of natural stability. Pre-flight safety checks of a fly-by-wire system are often performed using built-in test equipment (BITE). A number of control movement steps can be automatically performed, reducing the workload of the pilot or ground crew and speeding up flight-checks. Some aircraft, the Panavia Tornado for example, retain a very basic hydro-mechanical backup system for limited flight control capability on losing electrical power; in the case of the Tornado this allows rudimentary control of the stabilators only for pitch and roll axis movements. History Servo-electrically operated control surfaces were first tested in the 1930s on the Soviet Tupolev ANT-20. Long runs of mechanical and hydraulic connections were replaced with wires and electric servos. In 1934, a patent was filed for an automatic-electronic system that flared the aircraft when it was close to the ground. In 1941, Karl Otto Altvater, who was an engineer at Siemens, developed and tested the first fly-by-wire system for the Heinkel He 111, in which the aircraft was fully controlled by electronic impulses. The first non-experimental aircraft that was designed and flown (in 1958) with a fly-by-wire flight control system was the Avro Canada CF-105 Arrow, a feat not repeated with a production aircraft (though the Arrow was cancelled with five built) until Concorde in 1969, which became the first fly-by-wire airliner.
This system also included solid-state components and system redundancy, was designed to be integrated with a computerised navigation and automatic search and track radar, was flyable from ground control with data uplink and downlink, and provided artificial feel (feedback) to the pilot. The first electronic fly-by-wire testbed operated by the U.S. Air Force was a Boeing B-47E Stratojet (Ser. No. 53-2280). The first pure electronic fly-by-wire aircraft with no mechanical or hydraulic backup was the Apollo Lunar Landing Training Vehicle (LLTV), first flown in 1968. This was preceded in 1964 by the Lunar Landing Research Vehicle (LLRV) which pioneered fly-by-wire flight with no mechanical backup. Control was through a digital computer with three analog redundant channels. In the USSR, the Sukhoi T-4 also flew. At about the same time in the United Kingdom a trainer variant of the British Hawker Hunter fighter was modified at the British Royal Aircraft Establishment with fly-by-wire flight controls for the right-seat pilot. In the UK the two-seat Avro 707C was flown with a Fairey system with mechanical backup in the early to mid-60s. The program was curtailed when the airframe ran out of flight time. In 1972, the first digital fly-by-wire fixed-wing aircraft without a mechanical backup to take to the air was an F-8 Crusader, which had been modified electronically by NASA of the United States as a test aircraft; the F-8 used the Apollo guidance, navigation and control hardware. The Airbus A320 began service in 1988 as the first mass-produced airliner with digital fly-by-wire controls. As of June 2024, over 11,000 A320 family aircraft, variants included, are operational around the world, making it one of the best-selling commercial jets. Boeing chose fly-by-wire flight controls for the 777 in 1994, departing from traditional cable and pulley systems. In addition to overseeing the aircraft's flight control, the FBW offered "envelope protection", which guaranteed that the system would step in to avoid accidental mishandling, stalls, or excessive structural stress on the aircraft. The 777 used ARINC 629 buses to connect primary flight computers (PFCs) with actuator-control electronics units (ACEs). Every PFC housed three 32-bit microprocessors, including a Motorola 68040, an Intel 80486, and an AMD 29050, all programmed in the Ada programming language. Analog systems All fly-by-wire flight control systems eliminate the complexity, fragility and weight of the mechanical circuit of the hydromechanical or electromechanical flight control systems – each being replaced with electronic circuits. The control mechanisms in the cockpit now operate signal transducers, which in turn generate the appropriate commands. These are next processed by an electronic controller, either an analog one or (in more modern systems) a digital one. Aircraft and spacecraft autopilots are now part of the electronic controller. The hydraulic circuits are similar except that mechanical servo valves are replaced with electrically controlled servo valves, operated by the electronic controller. This is the simplest and earliest configuration of an analog fly-by-wire flight control system. In this configuration, the flight control systems must simulate "feel". The electronic controller controls electrical devices that provide the appropriate "feel" forces on the manual controls. This was used in Concorde, the first production fly-by-wire airliner.
Digital systems A digital fly-by-wire flight control system can be extended from its analog counterpart. Digital signal processing can receive and interpret input from multiple sensors simultaneously (such as the altimeters and the pitot tubes) and adjust the controls in real time. The computers sense position and force inputs from pilot controls and aircraft sensors. They then solve differential equations related to the aircraft's equations of motion to determine the appropriate command signals for the flight controls to execute the intentions of the pilot. The programming of the digital computers enables flight envelope protection. These protections are tailored to an aircraft's handling characteristics to stay within aerodynamic and structural limitations of the aircraft. For example, the computer in flight envelope protection mode can try to prevent the aircraft from being handled dangerously by preventing pilots from exceeding preset limits on the aircraft's flight-control envelope, such as those that prevent stalls and spins, and which limit airspeeds and g forces on the airplane. Software can also be included that stabilizes the flight-control inputs to avoid pilot-induced oscillations. Since the flight-control computers continuously respond to feedback from the aircraft and its environment, the pilot's workload can be reduced. This also enables military aircraft to be designed with relaxed stability. The primary benefit for such aircraft is more maneuverability during combat and training flights, and the so-called "carefree handling" because stalling, spinning and other undesirable behaviours are prevented automatically by the computers. Digital flight control systems (DFCS) enable inherently unstable combat aircraft, such as the Lockheed F-117 Nighthawk and the Northrop Grumman B-2 Spirit flying wing, to fly in usable and safe manners. Legislation The United States Federal Aviation Administration (FAA) has adopted the RTCA/DO-178C, titled "Software Considerations in Airborne Systems and Equipment Certification", as the certification standard for aviation software. Any safety-critical component in a digital fly-by-wire system, including the flight control laws and the computer operating systems, will need to be certified to DO-178C Level A or B, depending on the class of aircraft, which is applicable for preventing potential catastrophic failures. Nevertheless, the top concern for computerized, digital, fly-by-wire systems is reliability, even more so than for analog electronic control systems. This is because the digital computers that are running software are often the only control path between the pilot and the aircraft's flight control surfaces. If the computer software crashes for any reason, the pilot may be unable to control an aircraft. Hence virtually all fly-by-wire flight control systems are either triply or quadruply redundant in their computers and electronics. These have three or four flight-control computers operating in parallel and three or four separate data buses connecting them with each control surface. Redundancy The multiple redundant flight control computers continuously monitor each other's output. If one computer begins to give aberrant results for any reason, potentially including software or hardware failures or flawed input data, then the combined system is designed to exclude the results from that computer in deciding the appropriate actions for the flight controls.
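As a toy illustration of the exclusion logic described above (not any manufacturer's actual voting scheme), the sketch below takes the surface commands produced by redundant channels, compares each against the median, and averages only the channels that agree within a tolerance.

```python
def vote_surface_command(channel_outputs, tolerance=0.5):
    """Median-vote among redundant flight-control channel outputs.

    channel_outputs: commanded surface deflections (degrees), one per
    flight control computer. Channels differing from the median by more
    than `tolerance` are treated as aberrant and excluded. Illustrative
    only; real systems use far more elaborate monitoring and reversion logic.
    """
    ranked = sorted(channel_outputs)
    n = len(ranked)
    median = ranked[n // 2] if n % 2 else (ranked[n // 2 - 1] + ranked[n // 2]) / 2
    healthy = [c for c in channel_outputs if abs(c - median) <= tolerance]
    return sum(healthy) / len(healthy), healthy

command, used = vote_surface_command([2.1, 2.0, 11.7, 2.2])  # third channel aberrant
print(command, used)  # ~2.1, computed from the three agreeing channels
```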
Depending on specific system details there may be the potential to reboot an aberrant flight control computer, or to reincorporate its inputs if they return to agreement. Complex logic exists to deal with multiple failures, which may prompt the system to revert to simpler back-up modes. In addition, most of the early digital fly-by-wire aircraft also had an analog electrical, mechanical, or hydraulic back-up flight control system. The Space Shuttle had, in addition to its redundant set of four digital computers running its primary flight-control software, a fifth backup computer running a separately developed, reduced-function, software flight-control system – one that could be commanded to take over in the event that a fault ever affected all of the other four computers. This backup system served to reduce the risk of total flight control system failure ever happening because of a general-purpose flight software fault that had escaped notice in the other four computers. Efficiency of flight For airliners, flight-control redundancy improves their safety, but fly-by-wire control systems, which are physically lighter and have lower maintenance demands than conventional controls also improve economy, both in terms of cost of ownership and for in-flight economy. In certain designs with limited relaxed stability in the pitch axis, for example the Boeing 777, the flight control system may allow the aircraft to fly at a more aerodynamically efficient angle of attack than a conventionally stable design. Modern airliners also commonly feature computerized Full-Authority Digital Engine Control systems (FADECs) that control their engines, air inlets, fuel storage and distribution system, in a similar fashion to the way that FBW controls the flight control surfaces. This allows the engine output to be continually varied for the most efficient usage possible. The second generation Embraer E-Jet family gained a 1.5% efficiency improvement over the first generation from the fly-by-wire system, which enabled a reduction from 280 ft.² to 250 ft.² for the horizontal stabilizer on the E190/195 variants. Airbus/Boeing Airbus and Boeing differ in their approaches to implementing fly-by-wire systems in commercial aircraft. Since the Airbus A320, Airbus flight-envelope control systems always retain ultimate flight control when flying under normal law and will not permit pilots to violate aircraft performance limits unless they choose to fly under alternate law. This strategy has been continued on subsequent Airbus airliners. However, in the event of multiple failures of redundant computers, the A320 does have a mechanical back-up system for its pitch trim and its rudder, the Airbus A340 has a purely electrical (not electronic) back-up rudder control system and beginning with the A380, all flight-control systems have back-up systems that are purely electrical through the use of a "three-axis Backup Control Module" (BCM). Boeing airliners, such as the Boeing 777, allow the pilots to completely override the computerized flight control system, permitting the aircraft to be flown outside of its usual flight control envelope. Applications Concorde was the first production fly-by-wire aircraft with analog control. The General Dynamics F-16 was the first production aircraft to use digital fly-by-wire controls. The Space Shuttle orbiter had an all-digital fly-by-wire control system. 
This system was first exercised (as the only flight control system) during the glider unpowered-flight "Approach and Landing Tests" that began with the Space Shuttle Enterprise during 1977. Launched into production during 1984, the Airbus Industries Airbus A320 became the first airliner to fly with an all-digital fly-by-wire control system. With its launch in 1993 the Boeing C-17 Globemaster III became the first fly-by-wire military transport aircraft. In 2005, the Dassault Falcon 7X became the first business jet with fly-by-wire controls. A fully digital fly-by-wire without a closed feedback loop was integrated in 2002 in the first generation Embraer E-Jet family. By closing the loop (feedback), the second generation Embraer E-Jet family gained a 1.5% efficiency improvement in 2016. Engine digital control The advent of FADEC (Full Authority Digital Engine Control) engines permits operation of the flight control systems and autothrottles for the engines to be fully integrated. On modern military aircraft other systems such as autostabilization, navigation, radar and weapons system are all integrated with the flight control systems. FADEC allows maximum performance to be extracted from the aircraft without fear of engine misoperation, aircraft damage or high pilot workloads. In the civil field, the integration increases flight safety and economy. Airbus fly-by-wire aircraft are protected from dangerous situations such as low-speed stall or overstressing by flight envelope protection. As a result, in such conditions, the flight control systems commands the engines to increase thrust without pilot intervention. In economy cruise modes, the flight control systems adjust the throttles and fuel tank selections precisely. FADEC reduces rudder drag needed to compensate for sideways flight from unbalanced engine thrust. On the A330/A340 family, fuel is transferred between the main (wing and center fuselage) tanks and a fuel tank in the horizontal stabilizer, to optimize the aircraft's center of gravity during cruise flight. The fuel management controls keep the aircraft's center of gravity accurately trimmed with fuel weight, rather than drag-inducing aerodynamic trims in the elevators. Further developments Fly-by-optics Fly-by-optics is sometimes used instead of fly-by-wire because it offers a higher data transfer rate, immunity to electromagnetic interference and lighter weight. In most cases, the cables are just changed from electrical to optical fiber cables. Sometimes it is referred to as "fly-by-light" due to its use of fiber optics. The data generated by the software and interpreted by the controller remain the same. Fly-by-light has the effect of decreasing electro-magnetic disturbances to sensors in comparison to more common fly-by-wire control systems. The Kawasaki P-1 is the first production aircraft in the world to be equipped with such a flight control system. Power-by-wire Having eliminated the mechanical transmission circuits in fly-by-wire flight control systems, the next step is to eliminate the bulky and heavy hydraulic circuits. The hydraulic circuit is replaced by an electrical power circuit. The power circuits power electrical or self-contained electrohydraulic actuators that are controlled by the digital flight control computers. All benefits of digital fly-by-wire are retained since the power-by-wire components are strictly complementary to the fly-by-wire components. 
The biggest benefits are weight savings, the possibility of redundant power circuits and tighter integration between the aircraft flight control systems and its avionics systems. The absence of hydraulics greatly reduces maintenance costs. This system is used in the Lockheed Martin F-35 Lightning II and in Airbus A380 backup flight controls. The Boeing 787 and Airbus A350 also incorporate electrically powered backup flight controls which remain operational even in the event of a total loss of hydraulic power. Fly-by-wireless Wiring adds a considerable amount of weight to an aircraft; therefore, researchers are exploring implementing fly-by-wireless solutions. Fly-by-wireless systems are very similar to fly-by-wire systems, however, instead of using a wired protocol for the physical layer a wireless protocol is employed. In addition to reducing weight, implementing a wireless solution has the potential to reduce costs throughout an aircraft's life cycle. For example, many key failure points associated with wire and connectors will be eliminated thus hours spent troubleshooting wires and connectors will be reduced. Furthermore, engineering costs could potentially decrease because less time would be spent on designing wiring installations, late changes in an aircraft's design would be easier to manage, etc. Intelligent flight control system A newer flight control system, called intelligent flight control system (IFCS), is an extension of modern digital fly-by-wire flight control systems. The aim is to intelligently compensate for aircraft damage and failure during flight, such as automatically using engine thrust and other avionics to compensate for severe failures such as loss of hydraulics, loss of rudder, loss of ailerons, loss of an engine, etc. Several demonstrations were made on a flight simulator where a Cessna-trained small-aircraft pilot successfully landed a heavily damaged full-size concept jet, without prior experience with large-body jet aircraft. This development is being spearheaded by NASA Dryden Flight Research Center. It is reported that enhancements are mostly software upgrades to existing fully computerized digital fly-by-wire flight control systems. The Dassault Falcon 7X and Embraer Legacy 500 business jets have flight computers that can partially compensate for engine-out scenarios by adjusting thrust levels and control inputs, but still require pilots to respond appropriately. See also Index of aviation articles Aircraft flight control system Air France Flight 296Q Drive by wire Dual control (aviation) Flight control modes MIL-STD-1553, a standard data bus for fly-by-wire Relaxed stability Note References External links "Fly-by-wire" a 1972 Flight article archive version Aircraft controls Fault tolerance Flight control systems
Fly-by-wire
Engineering
4,554
41,617,627
https://en.wikipedia.org/wiki/56%20Cygni
56 Cygni is a single star in the northern constellation of Cygnus, located 135 light years from Earth. It is visible to the naked eye as a white-hued star with an apparent visual magnitude of 5.06. The star is moving closer to the Earth with a heliocentric radial velocity of −21.5 km/s. It has a relatively high proper motion, traversing the celestial sphere at an angular rate of /yr. According to Eggen (1998), this is a member of the Hyades Supercluster. This is an A-type main-sequence star with a stellar classification of A6 V. Cowley et al. (1969) classified it as a Delta Delphini star, which is a type of suspected Am star. The star is around 394 million years old with a projected rotational velocity of 73 km/s. It has 1.72 times the mass of the Sun and is radiating 13 times the Sun's luminosity from its photosphere at an effective temperature of 8,124 K. 56 Cygni has a visual companion: a magnitude 11.9 star at an angular separation of along a position angle of 48°, as of 2015. References A-type main-sequence stars Am stars Hyades Stream Cygnus (constellation) Durchmusterung objects Cygni, 56 198639 102843 7984
56 Cygni
Astronomy
287
1,145,733
https://en.wikipedia.org/wiki/BIBO%20stability
In signal processing, specifically control theory, bounded-input, bounded-output (BIBO) stability is a form of stability for signals and systems that take inputs. If a system is BIBO stable, then the output will be bounded for every input to the system that is bounded. A signal is bounded if there is a finite value $B > 0$ such that the signal magnitude never exceeds $B$, that is For discrete-time signals: $|y[n]| \le B$ for all $n$. For continuous-time signals: $|y(t)| \le B$ for all $t$. Time-domain condition for linear time-invariant systems Continuous-time necessary and sufficient condition For a continuous time linear time-invariant (LTI) system, the condition for BIBO stability is that the impulse response, $h(t)$, be absolutely integrable, i.e., its L1 norm exists: $\|h\|_1 = \int_{-\infty}^{\infty} |h(t)|\,\mathrm{d}t < \infty$. Discrete-time sufficient condition For a discrete time LTI system, the condition for BIBO stability is that the impulse response $h[n]$ be absolutely summable, i.e., its $\ell^1$ norm exists: $\|h\|_1 = \sum_{n=-\infty}^{\infty} |h[n]| < \infty$. Proof of sufficiency Given a discrete time LTI system with impulse response $h[n]$, the relationship between the input $x[n]$ and the output $y[n]$ is $y[n] = h[n] * x[n]$, where $*$ denotes convolution. Then it follows by the definition of convolution $y[n] = \sum_{k=-\infty}^{\infty} h[k]\,x[n-k]$. Let $\|x\|_{\infty}$ be the maximum value of $|x[n]|$, i.e., the $\ell^{\infty}$-norm. $|y[n]| = \left|\sum_{k=-\infty}^{\infty} h[k]\,x[n-k]\right| \le \sum_{k=-\infty}^{\infty} |h[k]|\,|x[n-k]| \le \|x\|_{\infty} \sum_{k=-\infty}^{\infty} |h[k]|$ (by the triangle inequality). If $h[n]$ is absolutely summable, then $\sum_{k=-\infty}^{\infty} |h[k]| = \|h\|_1 < \infty$ and $\|x\|_{\infty}\|h\|_1 < \infty$. So if $h[n]$ is absolutely summable and $|x[n]|$ is bounded, then $|y[n]|$ is bounded as well because $|y[n]| \le \|h\|_1 \|x\|_{\infty}$. The proof for continuous-time follows the same arguments. Frequency-domain condition for linear time-invariant systems Continuous-time signals For a rational and continuous-time system, the condition for stability is that the region of convergence (ROC) of the Laplace transform includes the imaginary axis. When the system is causal, the ROC is the open region to the right of a vertical line whose abscissa is the real part of the "largest pole", or the pole that has the greatest real part of any pole in the system. The real part of the largest pole defining the ROC is called the abscissa of convergence. Therefore, all poles of the system must be in the strict left half of the s-plane for BIBO stability. This stability condition can be derived from the above time-domain condition as follows: $\int_{-\infty}^{\infty} |h(t)|\,\mathrm{d}t = \int_{-\infty}^{\infty} |h(t)|\,|e^{-j\omega t}|\,\mathrm{d}t \ge \left|\int_{-\infty}^{\infty} h(t)\,e^{-j\omega t}\,\mathrm{d}t\right| = |H(j\omega)|$, where $s = \sigma + j\omega$ and $\operatorname{Re}(s) = \sigma = 0$. The region of convergence must therefore include the imaginary axis. Discrete-time signals For a rational and discrete time system, the condition for stability is that the region of convergence (ROC) of the z-transform includes the unit circle. When the system is causal, the ROC is the open region outside a circle whose radius is the magnitude of the pole with largest magnitude. Therefore, all poles of the system must be inside the unit circle in the z-plane for BIBO stability. This stability condition can be derived in a similar fashion to the continuous-time derivation: $\sum_{n=-\infty}^{\infty} |h[n]| = \sum_{n=-\infty}^{\infty} |h[n]|\,|e^{-j\omega n}| \ge \left|\sum_{n=-\infty}^{\infty} h[n]\,e^{-j\omega n}\right| = |H(e^{j\omega})|$, where $z = re^{j\omega}$ and $r = |z| = 1$. The region of convergence must therefore include the unit circle. See also LTI system theory Finite impulse response (FIR) filter Infinite impulse response (IIR) filter Nyquist plot Routh–Hurwitz stability criterion Bode plot Phase margin Root locus method Input-to-state stability Further reading Gordon E. Carlson Signal and Linear Systems Analysis with Matlab second edition, Wiley, 1998, John G. Proakis and Dimitris G. Manolakis Digital Signal Processing Principles, Algorithms and Applications third edition, Prentice Hall, 1996, D. Ronald Fannin, William H. Tranter, and Rodger E. Ziemer Signals & Systems Continuous and Discrete fourth edition, Prentice Hall, 1998, Proof of the necessary conditions for BIBO stability.
Christophe Basso Designing Control Loops for Linear and Switching Power Supplies: A Tutorial Guide first edition, Artech House, 2012, 978-1608075577 References Signal processing Digital signal processing Articles containing proofs Stability theory
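The discrete-time conditions above can be checked numerically. The following sketch (illustrative, using NumPy and SciPy with an arbitrarily chosen filter) tests BIBO stability both by locating the poles relative to the unit circle and by summing a truncated impulse response; note that a finite truncation can only suggest, not prove, absolute summability.

```python
import numpy as np
from scipy import signal

# Example causal filter H(z) = 1 / (1 - 1.5 z^-1 + 0.7 z^-2); coefficients are arbitrary
b = [1.0]
a = [1.0, -1.5, 0.7]

# Pole test: BIBO stable iff every pole lies strictly inside the unit circle
poles = np.roots(a)
print(bool(np.all(np.abs(poles) < 1)))   # True for this example

# Time-domain check: partial sum of |h[n]| over the first 500 samples
t, y = signal.dimpulse((b, a, 1), n=500)
h = np.squeeze(y[0])
print(np.sum(np.abs(h)))                 # approaches the l1 norm of h
```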
BIBO stability
Mathematics,Technology,Engineering
760
13,372
https://en.wikipedia.org/wiki/Human%20geography
Human geography or anthropogeography is the branch of geography which studies spatial relationships between human communities, cultures, economies, and their interactions with the environment, examples of which include urban sprawl and urban redevelopment. It analyzes spatial interdependencies between social interactions and the environment through qualitative and quantitative methods. This multidisciplinary approach draws from sociology, anthropology, economics, and environmental science, contributing to a comprehensive understanding of the intricate connections that shape lived spaces. History The Royal Geographical Society was founded in England in 1830. The first professor of geography in the United Kingdom was appointed in 1883, and the first major geographical intellect to emerge in the UK was Halford John Mackinder, appointed professor of geography at the London School of Economics in 1922. The National Geographic Society was founded in the United States in 1888 and began publication of the National Geographic magazine which became, and continues to be, a great popularizer of geographic information. The society has long supported geographic research and education on geographical topics. The Association of American Geographers was founded in 1904 and was renamed the American Association of Geographers in 2016 to better reflect the increasingly international character of its membership. One of the first examples of geographic methods being used for purposes other than to describe and theorize the physical properties of the earth is John Snow's map of the 1854 Broad Street cholera outbreak. Though Snow was primarily a physician and a pioneer of epidemiology rather than a geographer, his map is probably one of the earliest examples of health geography. The now fairly distinct differences between the subfields of physical and human geography developed at a later date. The connection between both physical and human properties of geography is most apparent in the theory of environmental determinism, made popular in the 19th century by Carl Ritter and others, and has close links to the field of evolutionary biology of the time. Environmental determinism is the theory that people's physical, mental and moral habits are directly due to the influence of their natural environment. However, by the mid-19th century, environmental determinism was under attack for lacking methodological rigor associated with modern science, and later as a means to justify racism and imperialism. A similar concern with both human and physical aspects is apparent during the later 19th and first half of the 20th centuries focused on regional geography. The goal of regional geography, through something known as regionalisation, was to delineate space into regions and then understand and describe the unique characteristics of each region through both human and physical aspects. With links to possibilism and cultural ecology some of the same notions of causal effect of the environment on society and culture remain with environmental determinism. By the 1960s, however, the quantitative revolution led to strong criticism of regional geography. Due to a perceived lack of scientific rigor in an overly descriptive nature of the discipline, and a continued separation of geography from its two subfields of physical and human geography and from geology, geographers in the mid-20th century began to apply statistical and mathematical models in order to solve spatial problems. 
Much of the development during the quantitative revolution is now apparent in the use of geographic information systems; the use of statistics, spatial modeling, and positivist approaches are still important to many branches of human geography. Well-known geographers from this period are Fred K. Schaefer, Waldo Tobler, William Garrison, Peter Haggett, Richard J. Chorley, William Bunge, and Torsten Hägerstrand. From the 1970s, a number of critiques of the positivism now associated with geography emerged. Known under the term 'critical geography,' these critiques signaled another turning point in the discipline. Behavioral geography emerged for some time as a means to understand how people made perceived spaces and places and made locational decisions. The more influential 'radical geography' emerged in the 1970s and 1980s. It draws heavily on Marxist theory and techniques and is associated with geographers such as David Harvey and Richard Peet. Radical geographers seek to say meaningful things about problems recognized through quantitative methods, provide explanations rather than descriptions, put forward alternatives and solutions, and be politically engaged, rather than using the detachment associated with positivists. (The detachment and objectivity of the quantitative revolution was itself critiqued by radical geographers as being a tool of capital). Radical geography and the links to Marxism and related theories remain an important part of contemporary human geography (See: Antipode). Critical geography also saw the introduction of 'humanistic geography', associated with the work of Yi-Fu Tuan, which pushed for a much more qualitative approach in methodology. The changes under critical geography have led to contemporary approaches in the discipline such as feminist geography, new cultural geography, settlement geography, and the engagement with postmodern and post-structural theories and philosophies. Fields The primary fields of study in human geography focus on the core fields of: Cultures Cultural geography is the study of cultural products and norms – their variation across spaces and places, as well as their relations. It focuses on describing and analyzing the ways language, religion, economy, government, and other cultural phenomena vary or remain constant from one place to another and on explaining how humans function spatially. Subfields include: Social geography, Animal geographies, Language geography, Sexuality and space, Children's geographies, and Religion and geography. Development Development geography is the study of the Earth's geography with reference to the standard of living and the quality of life of its human inhabitants, study of the location, distribution and spatial organization of economic activities, across the Earth. The subject matter investigated is strongly influenced by the researcher's methodological approach. Economies Economic geography examines relationships between human economic systems, states, and other factors, and the biophysical environment. Subfields include: Marketing geography and Transportation geography Emotion Food Health Medical or health geography is the application of geographical information, perspectives, and methods to the study of health, disease, and health care. Health geography deals with the spatial relations and patterns between people and the environment. This is a sub-discipline of human geography, researching how and why diseases are spread and contained. 
Histories Historical geography is the study of the human, physical, fictional, theoretical, and "real" geographies of the past. Historical geography studies a wide variety of issues and topics. A common theme is the study of the geographies of the past and how a place or region changes through time. Many historical geographers study geographical patterns through time, including how people have interacted with their environment, and created the cultural landscape. Politics Political geography is concerned with the study of both the spatially uneven outcomes of political processes and the ways in which political processes are themselves affected by spatial structures. Subfields include: Electoral geography, Geopolitics, Strategic geography and Military geography. Population Population geography is the study of ways in which spatial variations in the distribution, composition, migration, and growth of populations are related to their environment or location. Settlement Settlement geography, including urban geography, is the study of urban and rural areas with specific regards to spatial, relational and theoretical aspects of settlement. That is the study of areas which have a concentration of buildings and infrastructure. These are areas where the majority of economic activities are in the secondary sector and tertiary sectors. Urbanism Urban geography is the study of cities, towns, and other areas of relatively dense settlement. Two main interests are site (how a settlement is positioned relative to the physical environment) and situation (how a settlement is positioned relative to other settlements). Another area of interest is the internal organization of urban areas with regard to different demographic groups and the layout of infrastructure. This subdiscipline also draws on ideas from other branches of Human Geography to see their involvement in the processes and patterns evident in an urban area. Subfields include: Economic geography, Population geography, and Settlement geography. These are clearly not the only subfields that could be used to assist in the study of Urban geography, but they are some major players. Philosophical and theoretical approaches Within each of the subfields, various philosophical approaches can be used in research; therefore, an urban geographer could be a Feminist or Marxist geographer, etc. Such approaches are: Animal geographies Behavioral geography Cognitive geography Critical geography Feminist geography Marxist geography Non-representational theory Positivism Postcolonialism Poststructuralist geography Psychoanalytic geography Psychogeography Spatial analysis Time geography List of notable human geographers Journals As with all social sciences, human geographers publish research and other written work in a variety of academic journals. Whilst human geography is interdisciplinary, there are a number of journals that focus on human geography. These include: ACME: An International E-Journal for Critical Geographies Antipode Area Dialogues in Human Geography Economic geography Environment and Planning Geoforum Geografiska Annaler GeoHumanities Global Environmental Change: Human and Policy Dimensions Human Geography Migration Letters Progress in Human Geography Southeastern Geographer Social & Cultural Geography Tijdschrift voor economische en sociale geografie Transactions of the Institute of British Geographers See also References Further reading External links Worldmapper – Mapping project using social data sets Anthropology Environmental social science
Human geography
Environmental_science
1,887
30,247,317
https://en.wikipedia.org/wiki/Tr%C3%A9maux%20tree
In graph theory, a Trémaux tree of an undirected graph $G$ is a type of spanning tree, generalizing depth-first search trees. They are defined by the property that every edge of $G$ connects an ancestor–descendant pair in the tree. Trémaux trees are named after Charles Pierre Trémaux, a 19th-century French author who used a form of depth-first search as a strategy for solving mazes. They have also been called normal spanning trees, especially in the context of infinite graphs. All depth-first search trees and all Hamiltonian paths are Trémaux trees. In finite graphs, every Trémaux tree is a depth-first search tree, but although depth-first search itself is inherently sequential, Trémaux trees can be constructed by a randomized parallel algorithm in the complexity class RNC. They can be used to define the tree-depth of a graph, and as part of the left-right planarity test for testing whether a graph is a planar graph. A characterization of Trémaux trees in the monadic second-order logic of graphs allows graph properties involving orientations to be recognized efficiently for graphs of bounded treewidth using Courcelle's theorem. Not every infinite connected graph has a Trémaux tree, and not every infinite Trémaux tree is a depth-first search tree. The graphs that have Trémaux trees can be characterized by forbidden minors. An infinite Trémaux tree must have exactly one infinite path for each end of the graph, and the existence of a Trémaux tree characterizes the graphs whose topological completions, formed by adding a point at infinity for each end, are metric spaces. Definition and examples A Trémaux tree, for a graph $G$, is a spanning tree with the property that, for every edge $uv$ in $G$, one of the two endpoints $u$ and $v$ is an ancestor of the other. To be a spanning tree, it must only use edges of $G$, and include every vertex, with a unique finite path between every pair of vertices. Additionally, to define the ancestor–descendant relation in this tree, one of its vertices must be designated as its root. If a finite graph has a Hamiltonian path, then rooting that path at one of its two endpoints produces a Trémaux tree. For such a path, every pair of vertices is an ancestor–descendant pair. In the four-vertex example graph with edges 1–2, 1–3, 2–3, and 3–4, the tree with edges 1–3, 2–3, and 3–4 is a Trémaux tree when it is rooted at vertex 1 or vertex 2: every edge of the graph belongs to the tree except for the edge 1–2, which (for these choices of root) connects an ancestor-descendant pair. However, rooting the same tree at vertex 3 or vertex 4 produces a rooted tree that is not a Trémaux tree, because with this root 1 and 2 are no longer an ancestor and descendant of each other. In finite graphs Existence Every finite connected undirected graph has at least one Trémaux tree. One can construct such a tree by performing a depth-first search and connecting each vertex (other than the starting vertex of the search) to the earlier vertex from which it was discovered. The tree constructed in this way is known as a depth-first search tree. If $uv$ is an arbitrary edge in the graph, and $u$ is the earlier of the two vertices to be reached by the search, then $v$ must belong to the subtree descending from $u$ in the depth-first search tree, because the search will necessarily discover $v$ while it is exploring this subtree, either from one of the other vertices in the subtree or, failing that, from $u$ directly.
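A minimal sketch of the construction just described: run a depth-first search, record each vertex's discoverer as its tree parent, and verify the defining property that every edge of the graph joins an ancestor–descendant pair. The helper names and the dictionary representation are illustrative choices, and the example graph is the four-vertex one from the text.

```python
def dfs_tree(graph, root):
    """Build a depth-first search (Trémaux) tree as a map vertex -> parent."""
    parent = {root: None}

    def visit(u):
        for v in graph[u]:
            if v not in parent:
                parent[v] = u        # v was discovered from u
                visit(v)

    visit(root)
    return parent

def is_tremaux(graph, parent):
    """Check that every graph edge joins an ancestor-descendant pair of the tree."""
    def ancestors(v):
        seen = set()
        while v is not None:
            seen.add(v)
            v = parent[v]
        return seen

    anc = {v: ancestors(v) for v in parent}
    return all(v in anc[u] or u in anc[v] for u in graph for v in graph[u])

# Four-vertex example graph with edges 1-2, 1-3, 2-3 and 3-4
g = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
t = dfs_tree(g, root=1)
print(t, is_tremaux(g, t))  # a Trémaux tree (here the path 1-2-3-4), True
```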
Every finite Trémaux tree can be generated as a depth-first search tree: If is a Trémaux tree of a finite graph, and a depth-first search explores the children in of each vertex prior to exploring any other vertices, it will necessarily generate as its depth-first search tree. Parallel construction It is P-complete to find the Trémaux tree that would be found by a sequential depth-first search algorithm, in which the neighbors of each vertex are searched in order by their identities. Nevertheless, it is possible to find a different Trémaux tree by a randomized parallel algorithm, showing that the construction of Trémaux trees belongs to the complexity class RNC. The algorithm is based on another randomized parallel algorithm, for finding minimum-weight perfect matchings in 0-1-weighted graphs. As of 1997, it remained unknown whether Trémaux tree construction could be performed by a deterministic parallel algorithm, in the complexity class NC. If matchings can be found in NC, then so can Trémaux trees. Logical expression It is possible to express the property that a set of edges with a choice of root vertex forms a Trémaux tree, in the monadic second-order logic of graphs, and more specifically in the form of this logic called MSO2, which allows quantification over both vertex and edge sets. This property can be expressed as the conjunction of the following properties: The graph is connected by the edges in . This can be expressed logically as the statement that, for every non-empty proper subset of the graph's vertices, there exists an edge in with exactly one endpoint in the given subset. is acyclic. This can be expressed logically as the statement that there does not exist a nonempty subset of for which each vertex is incident to either zero or two edges of . Every edge not in connects an ancestor-descendant pair of vertices in . This is true when both endpoints of belong to a path in . It can be expressed logically as the statement that, for all edges , there exists a subset of such that exactly two vertices, one of them , are incident to a single edge of , and such that both endpoints of are incident to at least one edge of . Once a Trémaux tree has been identified in this way, one can describe an orientation of the given graph, also in monadic second-order logic, by specifying the set of edges whose orientation is from the ancestral endpoint to the descendant endpoint. The remaining edges outside this set must be oriented in the other direction. This technique allows graph properties involving orientations to be specified in monadic second order logic, allowing these properties to be tested efficiently on graphs of bounded treewidth using Courcelle's theorem. Related properties If a graph has a Hamiltonian path, then that path (rooted at one of its endpoints) is also a Trémaux tree. The undirected graphs for which every Trémaux tree has this form are the cycle graphs, complete graphs, and balanced complete bipartite graphs. Trémaux trees are closely related to the concept of tree-depth. The tree-depth of a graph can be defined as the smallest number for which there exist a graph , with a Trémaux tree of height , such that is a subgraph of . Bounded tree-depth, in a family of graphs, is equivalent to the existence of a path that cannot occur as a graph minor of the graphs in the family. Many hard computational problems on graphs have algorithms that are fixed-parameter tractable when parameterized by the tree-depth of their inputs. 
Trémaux trees also play a key role in the Fraysseix–Rosenstiehl planarity criterion for testing whether a given graph is planar. According to this criterion, a graph is planar if, for a given Trémaux tree of , the remaining edges can be placed in a consistent way to the left or the right of the tree, subject to constraints that prevent edges with the same placement from crossing each other. In infinite graphs Existence Not every infinite graph has a normal spanning tree. For instance, a complete graph on an uncountable set of vertices does not have one: a normal spanning tree in a complete graph can only be a path, but a path has only a countable number of vertices. However, every connected graph on a countable set of vertices does have a normal spanning tree. Even in countable graphs, a depth-first search might not succeed in eventually exploring the entire graph, and not every normal spanning tree can be generated by a depth-first search: to be a depth-first search tree, a countable normal spanning tree must have only one infinite path or one node with infinitely many children (and not both). Minors If an infinite graph has a normal spanning tree, so does every connected graph minor of . It follows from this that the graphs that have normal spanning trees have a characterization by forbidden minors. One of the two classes of forbidden minors consists of bipartite graphs in which one side of the bipartition is countable, the other side is uncountable, and every vertex has infinite degree. The other class of forbidden minors consists of certain graphs derived from Aronszajn trees. The details of this characterization depend on the choice of set-theoretic axiomatization used to formalize mathematics. In particular, in models of set theory for which Martin's axiom is true and the continuum hypothesis is false, the class of bipartite graphs in this characterization can be replaced by a single forbidden minor. However, for models in which the continuum hypothesis is true, this class contains graphs which are incomparable with each other in the minor ordering. Ends and metrizability Normal spanning trees are also closely related to the ends of an infinite graph, equivalence classes of infinite paths that, intuitively, go to infinity in the same direction. If a graph has a normal spanning tree, this tree must have exactly one infinite path for each of the graph's ends. An infinite graph can be used to form a topological space by viewing the graph itself as a simplicial complex and adding a point at infinity for each end of the graph. With this topology, a graph has a normal spanning tree if and only if its set of vertices can be decomposed into a countable union of closed sets. Additionally, this topological space can be represented by a metric space if and only if the graph has a normal spanning tree. References Graph theory objects Spanning tree Graph minor theory Infinite graphs
Trémaux tree
Mathematics
2,097
31,954,655
https://en.wikipedia.org/wiki/Jamshidian%27s%20trick
Jamshidian's trick is a technique for one-factor asset price models, which re-expresses an option on a portfolio of assets as a portfolio of options. It was developed by Farshid Jamshidian in 1989. The trick relies on the following simple, but very useful mathematical observation. Consider a sequence of monotone (increasing) functions $f_1, \dots, f_n$ of one real variable (which map $\mathbb{R}$ onto $\mathbb{R}$), a random variable $W$, and a constant $K$. Since the function $f = \sum_{i=1}^{n} f_i$ is also increasing and maps $\mathbb{R}$ onto $\mathbb{R}$, there is a unique solution $w$ to the equation $f(w) = K$. Since the functions $f_i$ are increasing: $\left(\sum_{i=1}^{n} f_i(W) - K\right)^+ = \sum_{i=1}^{n} \left(f_i(W) - f_i(w)\right)^+$. In financial applications, each of the random variables $f_i(W)$ represents an asset value, and the number $K$ is the strike of the option on the portfolio of assets. We can therefore express the payoff of an option on a portfolio of assets in terms of a portfolio of options on the individual assets $f_i(W)$ with corresponding strikes $f_i(w)$. References Jamshidian, F. (1989). "An exact bond option pricing formula," Journal of Finance, Vol 44, pp 205-209 Mathematical finance Fixed income analysis Financial models
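A small numerical sketch of the observation (the functions, numbers, and helper names below are illustrative, not from Jamshidian's paper): solve $f(w) = K$ for the aggregate increasing function by bisection, then confirm that the portfolio-option payoff splits into a sum of individual option payoffs at every sampled value of the driving variable.

```python
from math import exp

# Illustrative increasing functions of one real variable (toy "asset values")
fs = [lambda y: exp(0.5 * y), lambda y: 2.0 + y, lambda y: exp(0.3 * y) + 0.1 * y]
K = 6.0  # strike of the option on the whole portfolio

def aggregate(y):
    return sum(f(y) for f in fs)

def solve_increasing(func, target, lo=-50.0, hi=50.0, tol=1e-12):
    """Bisection for func(w) = target when func is increasing on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if func(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

w = solve_increasing(aggregate, K)
strikes = [f(w) for f in fs]  # individual strikes f_i(w)

# The payoff identity holds pathwise for any realisation of the driving variable
for Y in (-2.0, -0.5, 0.0, 0.7, 2.5):
    portfolio_option = max(aggregate(Y) - K, 0.0)
    option_portfolio = sum(max(f(Y) - k, 0.0) for f, k in zip(fs, strikes))
    assert abs(portfolio_option - option_portfolio) < 1e-9
print(w, strikes)
```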
Jamshidian's trick
Mathematics
211
1,283,725
https://en.wikipedia.org/wiki/Orthant
In geometry, an orthant or hyperoctant is the analogue in n-dimensional Euclidean space of a quadrant in the plane or an octant in three dimensions. In general an orthant in n-dimensions can be considered the intersection of n mutually orthogonal half-spaces. By independent selections of half-space signs, there are 2^n orthants in n-dimensional space. More specifically, a closed orthant in R^n is a subset defined by constraining each Cartesian coordinate to be nonnegative or nonpositive. Such a subset is defined by a system of inequalities: ε1x1 ≥ 0      ε2x2 ≥ 0     · · ·     εnxn ≥ 0, where each εi is +1 or −1. Similarly, an open orthant in R^n is a subset defined by a system of strict inequalities ε1x1 > 0      ε2x2 > 0     · · ·     εnxn > 0, where each εi is +1 or −1. By dimension: In one dimension, an orthant is a ray. In two dimensions, an orthant is a quadrant. In three dimensions, an orthant is an octant. John Conway and Neil Sloane defined the term n-orthoplex from orthant complex as a regular polytope in n-dimensions with 2^n simplex facets, one per orthant. The nonnegative orthant is the generalization of the first quadrant to n-dimensions and is important in many constrained optimization problems. See also Cross polytope (or orthoplex) – a family of regular polytopes in n-dimensions which can be constructed with one simplex facet in each orthant space. Measure polytope (or hypercube) – a family of regular polytopes in n-dimensions which can be constructed with one vertex in each orthant space. Orthotope – generalization of a rectangle in n-dimensions, with one vertex in each orthant. References Further reading The facts on file: Geometry handbook, Catherine A. Gorini, 2003, p. 113 Euclidean geometry Linear algebra
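The correspondence between orthants and sign vectors (ε1, ..., εn) is easy to make concrete. The short sketch below (function names are illustrative) labels the open orthant containing a point by its vector of coordinate signs and enumerates the 2^n possible labels.

```python
from itertools import product

def orthant_signs(point):
    """Return the sign vector (+1/-1 entries) of the open orthant containing `point`.

    Raises ValueError if a coordinate is zero, i.e. the point lies on a
    boundary between orthants rather than inside an open orthant.
    """
    signs = []
    for x in point:
        if x == 0:
            raise ValueError("point lies on an orthant boundary")
        signs.append(1 if x > 0 else -1)
    return tuple(signs)

n = 3
all_orthants = list(product((1, -1), repeat=n))
print(len(all_orthants))               # 2**n = 8 octants in three dimensions
print(orthant_signs((2.5, -1.0, 4)))   # (1, -1, 1)
```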
Orthant
Mathematics
461
36,513,711
https://en.wikipedia.org/wiki/Peziza%20varia
Peziza varia, commonly known as the spreading brown cup fungus, Palomino cup or recurved cup, is a species of fungus in the genus Peziza, family Pezizaceae. Description Peziza varia can be identified by its growth on rotted wood or wood chips, its brown upper surface (at maturity) that is usually somewhat wrinkled near the center; a whitish and minutely fuzzy under surface; a round, cuplike shape when young, and a flattened-irregular shape when mature; attachment to the wood under the center of the mushroom, rather than under the whole cup; thin, brittle flesh (rather than thick and gelatinous) and smooth, elliptical spores that lack oil droplets. The cup at first is pale brown or whitish overall, the under surface minutely fuzzy and the upper surface smoother, with a tiny stem-like structure. In maturity it is flattened-irregular or bent backwards, 2–12 cm across, the margin often splitting, upper surface brown and smooth, often "pinched" or somewhat wrinkled over the center, under surface whitish and minutely fuzzy, attached to the substrate centrally, without a stem. It has no odor. The flesh is brownish or pale, and brittle. Peziza means a sort of mushroom without a root or stalk. Microscopic features: Spores 11–16 x 6–10 μm; smooth; elliptical; without oil droplets. Asci eight-spored; up to 225 x 15 μm. Similar species Similar species include Peziza arvernensis, P. domiciliana, P. vesiculosa, and P. violacea. Peziza repanda, Peziza cerea and Peziza micropus are synonyms. Ecology Well decayed logs may sport the Palomino cup fungus, which is saprobic, usually on the wood of hardwoods. Soil rich in decayed wood and occasionally that which is covered with wood chips may support Palomino cup; growing alone, gregariously, or in clusters. This member of the cup fungi is commonly found in colder weather (spring and autumn in temperate regions), but sometimes appearing in summer. Edibility Peziza varia is nonpoisonous but inedible. Distribution Peziza varia is widely distributed throughout America and Europe. References External links Spore release Wild about Britain Pezizaceae Fungi described in 1789 Fungi of North America Fungi of Europe Inedible fungi Fungus species
Peziza varia
Biology
507
21,064,516
https://en.wikipedia.org/wiki/Fiberglass%20sheet%20laminating
Fiberglass sheet laminating is the process of taking a thin fiberglass sheet and laminating it to another material in order to provide strength and support to that material. Process characteristics Fiberglass is composed of very fine strands of glass. It has many different purposes, one of which is providing strength. The strength of fiberglass depends on the size of the glass strands, the temperature, and the humidity. Materials needed Fiberglass sheet, resin, wood or metal roller, brush or other tool to spread epoxy, material to be strengthened Process description Start by applying the epoxy to the fiberglass sheet. Continue carefully but quickly until all areas are sufficiently covered by the epoxy. Next, start at one end of the material to be strengthened and stick the epoxy-covered fiberglass to the material, being sure to smooth out any bubbles that may form between the material and fiberglass. If the epoxy hardens before you are able to stick the fiberglass to the material, recoat and apply again. After the fiberglass sheet has been applied, use a roller to press the fiberglass firmly to the other sheet to ensure complete bonding has occurred. Effect on work material Certain laminating techniques use two steps of applying the epoxy to form resin impregnated fiber glass sheets. In the first step there is a resin solvent mixture which is partially cured so it will not redissolve in a second coating of the same mixture. In the second step, the same resin mixture is applied over the fiberglass already coated with the partially cured resin. This second glaze, which covers the first, fills in the empty spaces between the fibers. The second coating is also only partially cured. This partial curing of the second layer furthers the curing of the first epoxy layer. This process also produces a thin sticky layer. The first coating acts like a sealed insulating sheet, preventing glass fiber contact with conductive planes. The second coating fills the planes and can form adhesive bonds to cores and conductive layers. Safety Be sure to keep fiberglass and epoxy away from open flames. Do not inhale excess fumes from epoxy or allow epoxy to come into contact with eyes or skin References External links A Guide for Fiberglass Operations Composite materials Glass applications
Fiberglass sheet laminating
Physics
465
27,059,655
https://en.wikipedia.org/wiki/Cloudera
Cloudera, Inc. is an American data lake software company. History Cloudera, Inc. was formed on June 27, 2008 in Burlingame, California by Christophe Bisciglia, Amr Awadallah, Jeff Hammerbacher, and chief executive Mike Olson. Prior to Cloudera, Bisciglia, Awadallah, and Hammerbacher were engineers at Google, Yahoo!, and Facebook respectively, and Olson was a database executive at Oracle after his previous company Sleepycat was acquired by Oracle in 2006. The four were joined in 2009 by Doug Cutting, a co-founder of Hadoop. Cloudera originally offered a free product based on Hadoop, earning revenue by selling support and consulting services around it. In March 2009, the company began offering a commercial distribution of Hadoop. In 2009 the company received a $5 million investment led by Accel Partners. This was followed by a $25 million funding round in October 2010 and a $40M funding round in November 2011. In June 2013, Olson transitioned from CEO to Chairman of the Board and Chief Strategy Officer. Tom Reilly, former CEO of ArcSight, was appointed CEO. In March 2014, Cloudera raised another $160 million in funding from T. Rowe Price and other investors. Intel invested $740 million in Cloudera for an 18% stake in the company (a $4.1 billion company valuation). These shares were repurchased by Cloudera in December 2020 for $314 million. On April 28, 2017, the company became a public company via an initial public offering. Over the next four years, the company's share price declined in the wake of falling sales figures and competition from public cloud services like Amazon Web Services. In October 2018, Cloudera and Hortonworks announced their merger, which the two companies completed the following January. Five months later, CEO Reilly and founder Olson left the company in June 2019. Board member Martin Cole was appointed as temporary CEO. In January 2020, former Hortonworks CEO Rob Bearden was appointed as Cloudera's CEO. In October 2021, the company went private after an acquisition by KKR and Clayton, Dubilier & Rice in an all cash transaction valued at approximately $5.3 billion. In October 2023, R2 Solutions LLC filed a civil complaint against Cloudera in the United States District Court for the Western District of Texas for patent infringement. That same month, StreamScale won a $240 million jury verdict against Cloudera for patent infringement. In June 2024, Cloudera acquired Verta, a machine learning startup. Products and services Cloudera provides the Cloudera Data Platform, a collection of products related to cloud services and data processing. Some of these services are provided through public cloud servers such as Microsoft Azure or Amazon Web Services, while others are private cloud services that require a subscription. Cloudera markets these products for purposes related to machine learning and data analysis. Cloudera has adopted the marketing term "data lakehouse," which derives from a combination of the terms "data lake" and "data warehouse." Cloudera has formed partnerships with companies such as Dell, IBM, and Oracle. In 2022, Cloudera announced support for Apache Iceberg. 
References External links American companies established in 2008 2008 establishments in California 2017 initial public offerings 2021 mergers and acquisitions Business intelligence companies Business intelligence software Business analysis Big data companies Cloud computing providers Cloud infrastructure Companies based in Palo Alto, California Companies formerly listed on the New York Stock Exchange Data companies Data and information visualization software Free software companies Hadoop Software companies based in the San Francisco Bay Area Software companies established in 2008 Software companies of the United States Kohlberg Kravis Roberts companies Private equity portfolio companies
Cloudera
Technology
749
35,583,303
https://en.wikipedia.org/wiki/Zenazocine
Zenazocine (INN; WIN-42,964) is an opioid analgesic of the benzomorphan family which made it to phase II clinical trials before development was ultimately halted and it was never marketed. It acts as a partial agonist of the μ- and δ-opioid receptors, with less intrinsic activity at the former receptor and more at the latter receptor (hence, it behaves more antagonistically at the former and more agonistically at the latter), and produces antinociceptive effects in animal studies. See also Tonazocine References Benzomorphans Kappa-opioid receptor agonists Ketones Opioids Hydroxyarenes
Zenazocine
Chemistry
146
15,510,119
https://en.wikipedia.org/wiki/Dirty%2C%20dangerous%20and%20demeaning
"Dirty, dangerous and demeaning" (often "dirty, dangerous and demanding" or "dirty, dangerous and difficult"), also known as the 3Ds, is an American neologism derived from the Asian concept, and refers to certain kinds of labor often performed by unionized blue-collar workers. The term originated from the Japanese expression 3K: , , (respectively "dirty", "dangerous", "demanding"), and has subsequently gained widespread use, particularly regarding labor done by migrant workers and burakumin. Any task fitting the criteria of a 3D job can qualify, regardless of industry. These jobs can bring higher wages due to a shortage of willing qualified individuals and in many world regions are filled by migrant workers looking for higher wages. Economic status Traditionally, workers in 3D professions are better paid in relation to comparable employment available, due to the undesirability of the work, and the resulting need to pay higher wages to attract workers. This has allowed the uneducated and unskilled to earn a living wage by foregoing comfort, personal safety and social status. This concept proves itself in the economic theory of quantity supplied and quantity demanded (see Quantity adjustment). The wages paid to these workers is higher due to the undesirable nature of their professions. However, in regions where certain classes of workers are restricted to this type of work or there are contributing regional conditions - for example, high unemployment, adjacency to regions with high poverty, or those that are recipient of driven labor migration - there will be workers willing to accept lower than equilibrium wages and then these jobs are not well paid by any definition. Large scale international labor migration, from developing to developed countries since the late 19th century and early 20th century has provided a pool of migrants willing to undertake employment for lower wages than native residents. Higher wages in developed countries are a strong 'pull' factor in international migration, and thus while a migrant worker is willing to accept a comparatively low-wage for a 3D job in a developed country it may mean a significant increase in wages compared to their originating country. Prominent current examples of migration for 3D wages include Filipino entertainment workers who migrate to Japan, and of Indians and Pakistanis going to the Middle East to work in the construction industry. Migration for 3D wages is not new. In the United States, 3D occupations once filled by Irish and German immigrants, are today held by many Latin Americans. The highest paying work available to these often unskilled and uneducated (or their foreign certificates of skills and education are not recognized) immigrants is work that is of lower social status, and has a higher risk of injury. As immigrants make up an increasing share of the labor market in countries such as the United States it will become increasingly important for employers to find ways of effectively promoting occupational safety and health among immigrant workers. These workers are susceptible to exploitation, and without representation can have a difficult time maintaining fair living wages. Since the beginning of the labor movement, immigrant workers in 3D jobs have formed the backbone of many labor unions. 
Concentrations of low-paid workers in 3D occupations are artificially created either by 'pull' mechanisms that create migrant worker flows or by mechanisms that create subclasses of indigenous populations, through a selective application or an absence of labor protections. Undocumented immigration status is one such mechanism which reinforces the social vulnerability of immigrant workers and can increase their risk for occupational injury and limit their access to institutional resources that protect worker health. In the worst case the concentration is exploitation and can become slavery in its various forms. Historically, the 3D occupations have at times been widely satisfied through forced employment due to the lack of available applicants, a supply of exploitable labor, and either legalization of forced employment or a disregard for the labor laws. People who find themselves working a 3D occupation will be well paid if they have the protection of the law, poorly paid if they have poor protection of the law or unfair laws, and unpaid if they exist under no protection of the law, no law, or in a society with legal slavery. Regardless of the hazards, engaging in high-risk, low-status work can be a way to escape poverty - captured by a line in the Irish folk song Finnegan's Wake, "to rise in the world he carried a hod." Risks As the name indicates, dirty, dangerous and demeaning work can impose severe physical and mental costs on workers. There is often a risk of early retirement due to injury, general joint depletion or mental fatigue. Witnessing constant physical and mental injury to coworkers, or even their deaths, can cause stress leading to mental fatigue and post-traumatic stress disorder. See also Statute of Labourers 1351 Dangerous jobs References External links Come back alive dangerous jobs U.S. Department of Labor, Bureau of Labor Statistics, Census of Fatal Occupational Injuries (CFOI) The Economist Pocket Asia, 1998. Andrews, John. The Economist, 1998. Copyright The Economist, 199 The Worst Jobs in History Construction Ethically disputed working conditions Social class in the United States
Dirty, dangerous and demeaning
Engineering
1,031
51,357,915
https://en.wikipedia.org/wiki/Cottage%20window
A cottage window is a double-hung window — i.e., a window with two sashes sliding up and down, hung with one atop the other in the same frame — in which the upper sash is smaller (shorter) than the lower one. The upper sash often contains smaller lights divided by muntins (often known as a "divided light pattern" or "grille"), although in some cases both sashes may be divided. Cottage windows are especially characteristic of bungalow or Craftsman-style houses. It is also called a "front window". Windows Architectural styles Architectural design Architectural history Architectural elements
Cottage window
Technology,Engineering
125
53,909,617
https://en.wikipedia.org/wiki/Zhan%20catalyst
A Zhan catalyst is a type of ruthenium-based organometallic complex used in olefin metathesis. This class of chemicals is named after the chemist who first synthesized them, Zheng-Yun J. Zhan. These catalysts are ruthenium complexes with functionally substituted alkoxybenzylidene carbene ligands, which can be chemically bonded to the surface of resins, PEG chains, and polymers. Like the structurally similar Hoveyda-Grubbs catalyst, they contain an isopropoxystyrene moiety, but include an extra electron-withdrawing sulfonamide group attached to the carbon para to the phenol oxygen. Of the three catalysts, Zhan Catalyst-1B and -1C both contain a dimethylsulfonamide moiety attached to the aryl ring, while Zhan Catalyst-II is connected to a resin via a sulfonamide linker. History The Zhan catalysts were inspired by previous work in the olefin metathesis field. Robert H. Grubbs first reported the first and second generation of Ru catalysts in 1992, with good metathesis activity. However, the catalysts containing the tricyclohexylphosphine ligand were unstable to air and water, and the catalytic activity was not good enough for some multiply substituted olefin substrates. In 1999, Amir H. Hoveyda showed that alkoxybenzylidene ligand based Ru catalysts offered higher activity and better stability than their Grubbs counterparts without these ligands. Later, Grela (2002) and Blechert (2003) further improved catalyst activity by incorporating substitution to Hoveyda's alkoxybenzylidene ligands. Zhan's catalysts were first reported in 2007, and include electron-withdrawing groups like dimethylsulfonamide on the aryl ring. Zhan's second generation catalysts are also tethered to a resin or PEG-linked support via the sulfonamide group on the isopropoxystyrene. As with other Grubbs-type catalysts with modified chelating benzylidenes, after one catalytic turnover, the chelate is no longer associated with the propagating catalyst, meaning that the initiation rate, the rate of o-alkoxystyrene rechelation, and the rates of various catalyst decomposition events are the factors that differ between the Zhan catalysts and the parent Hoveyda–Grubbs catalysts. A mechanistic study by Plenio and coworkers in 2012 suggested that the Zhan compounds, like other Hoveyda-type catalysts, initiate by competing dissociative and interchange mechanisms, with the relative activation energies being a function of catalyst structure, olefin identity, and reaction conditions. However, nobody had been able to rigorously establish through experimentation how the various changes to the structure affected the catalytic activity of the complex. Engle, Luo, Houk, Grubbs, and coworkers developed a model that could rationalize initiation rates of ruthenium olefin metathesis catalysts with chelated benzylidenes, using a combination of organometallic synthesis, reaction kinetics, NMR spectroscopy, X-ray crystallography, and DFT calculations. Preparation In order to make the catalysts, the pre-complex is treated with CuCl and the isopropoxystyrene ligand. The isopropoxystyrene ligand is prepared using an ortho-vinylation of the phenol with ethyne, using conditions first proposed by Masahiko Yamaguchi in 1998. Here, SnCl4 and Bu3N were added to ethyne to generate stannylacetylene, which is the active vinylating species in this C–C bond formation. After coupling, the phenol can be alkylated using i-PrBr and a base. Recycling The Zhan catalysts can be recovered and recycled by simple precipitation or filtration.
Zhan Catalyst-1B and -1C are soluble in dichloromethane, dichloroethane, chloroform, ether, and other solvents, but insoluble in methanol, ethanol, and other alcohols. Zhan Catalyst-II is linked to a resin- and PEG-linked support, offering a great advantage in recyclable utility, and leaving little or no trace of metal contamination within the product of olefin metathesis reactions. These catalysts can then be reused. References Organoruthenium compounds Catalysts Sulfonamides B Ruthenium(II) compounds
Zhan catalyst
Chemistry
965
1,079,629
https://en.wikipedia.org/wiki/Himetric
Himetric is a resolution-independent unit of length. Its role is similar to the twip, but it is one hundredth of a millimetre. It is mainly used in Object Linking and Embedding and derived technologies such as ActiveX, Active Template Library and Visual Basic up to version 6. References Typography Units of length Computer graphics Non-SI metric units
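Because one himetric unit is 0.01 mm (so 2,540 units per inch), conversions between himetric lengths, physical lengths and device pixels are simple arithmetic. The Python sketch below illustrates this; the function names are illustrative only and are not part of any OLE or Windows API.

```python
HIMETRIC_PER_MM = 100      # 1 himetric unit = 0.01 mm
HIMETRIC_PER_INCH = 2540   # 25.4 mm per inch x 100 units per mm

def mm_to_himetric(mm: float) -> int:
    """Convert millimetres to himetric units, rounded to the nearest unit."""
    return round(mm * HIMETRIC_PER_MM)

def himetric_to_pixels(units: int, dpi: float) -> float:
    """Convert a himetric length to device pixels at the given dots-per-inch resolution."""
    return units * dpi / HIMETRIC_PER_INCH

print(mm_to_himetric(25.4))           # 2540: one inch expressed in himetric units
print(himetric_to_pixels(2540, 96))   # 96.0: one inch rendered at 96 dpi
```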
Himetric
Mathematics
80
57,712,865
https://en.wikipedia.org/wiki/Chemical%20looping%20reforming%20and%20gasification
Chemical looping reforming (CLR) and gasification (CLG) are the operations that involve the use of gaseous carbonaceous feedstock and solid carbonaceous feedstock, respectively, in their conversion to syngas in the chemical looping scheme. The typical gaseous carbonaceous feedstocks used are natural gas and reducing tail gas, while the typical solid carbonaceous feedstocks used are coal and biomass. The feedstocks are partially oxidized to generate syngas using metal oxide oxygen carriers as the oxidant. The reduced metal oxide is then oxidized in the regeneration step using air. The syngas is an important intermediate for generation of such diverse products as electricity, chemicals, hydrogen, and liquid fuels. The motivation for developing the CLR and CLG processes lies in their advantages of being able to avoid the use of pure oxygen in the reaction, thereby circumventing the energy-intensive air separation requirement in the conventional reforming and gasification processes. The energy conversion efficiency of the processes can, thus, be significantly increased. Steam and carbon dioxide can also be used as the oxidants. As the metal oxide also serves as the heat transfer medium in the chemical looping process, the exergy efficiency of the reforming and gasification processes, like that of the combustion process, is also higher than that of the conventional processes. Description The CLR and CLG processes use solid metal oxides as the oxygen carrier instead of pure oxygen as the oxidant. In one reactor, termed the reducer or fuel reactor, the carbonaceous feedstock is partially oxidized to syngas, while the metal oxide is reduced to a lower oxidation state as given by: CHaOb + MeOx → CO + H2 + MeOx-δ where Me is a metal. It is noted that the reaction in the reducer of the CLR and CLG processes differs from that in the chemical looping combustion (CLC) process in that the feedstock in the CLC process is fully oxidized to CO2 and H2O. In another reactor, termed the oxidizer, combustor or air reactor (when air is used as the regeneration agent), the reduced metal oxide from the reducer is re-oxidized by air or steam as given by: MeOx-δ + O2 (air) → MeOx + (O2 depleted air) MeOx-δ + H2O → MeOx + H2 The solid metal oxide oxygen carrier is then circulated between these two reactors. That is, the reducer and the oxidizer/combustor are connected in a solids circulatory loop, while the gaseous reactants and products from each of the two reactors are isolated by the gas seals between the reactors. This streamlined configuration of the chemical looping system possesses a process intensification property with a smaller process footprint as compared to that for the traditional systems. Oxygen carriers The Ellingham diagram that provides the Gibbs free energy of formation of a variety of metal oxides is widely used in metallurgical processing for determining the relative reduction-oxidation potentials of metal oxides at different temperatures. It depicts the thermodynamic property of a variety of metal oxides to be used as potential oxygen carrier materials. It can be modified to provide the Gibbs free energy changes for metals and metal oxides under various oxidation states so that it can be directly used for the selection of metal oxide oxygen carrier materials based on their oxidation capabilities for specific chemical looping applications. The modified Ellingham diagram is given in Fig 1a.
As shown in Fig 1b, the diagram can be divided into four different sections based on the following four key reactions: Reaction line 1: 2CO + O2 → 2CO2 Reaction line 2: 2H2 + O2 → 2H2O Reaction line 3: 2C + O2 → 2CO Reaction line 4: 2CH4 + O2 → 2CO + 4H2 The sections identified in Fig 1b provide the information on metal oxide materials that can be selected as potential oxygen carriers for desired chemical looping applications. Specifically, highly oxidative metal oxides, such as NiO, CoO, CuO, Fe2O3 and Fe3O4, belong to the combustion section (Section A) and they all lie above the reaction lines 1 and 2. These metal oxides have a high oxidizing tendency and can be used as oxygen carriers for the chemical looping combustion, gasification or partial oxidation processes. The metal oxides in Section E, the small section between the reaction lines 1 and 2, can be used for CLR and CLG, although a significant amount of H2O may be present in the syngas product. The section for syngas production lies between reaction lines 2 and 3 (Section B). Metal oxides lying in this region, such as CeO2, have moderate oxidation tendencies and are suitable for CLR and CLG but not for the complete oxidation reactions. Metal oxides below reaction line 3 (Sections C and D) are not thermodynamically favored for oxidizing the fuels to syngas. Thus, they cannot be used as oxygen carriers and are generally considered to be inert. These materials include Cr2O3 and SiO2. They can, however, be used as support materials along with active oxygen carrier materials. In addition to the relative redox potentials of metal oxide materials illustrated in Fig 1b, the development of desired oxygen carriers for chemical looping applications requires consideration of such properties as oxygen carrying capacity, redox reactivity, reaction kinetics, recyclability, attrition resistance, heat carrying capacity, melting point, and production cost. Process configurations The CLR and CLG processes can be configured based on the types of carbonaceous feedstocks given and desired products to be produced. Among a broad range of products, the CLG process can produce electricity through chemical looping IGCC. The syngas produced from the CLR and the CLG can be used to synthesize a variety of chemicals, liquid fuels and hydrogen. Given below are some specific examples of the CLR and CLG processes. Steam methane reforming with chemical looping combustion (CLC-SMR) Hydrogen and syngas are currently produced largely by steam methane reforming (SMR). The main reaction in SMR is: CH4 + H2O → CO + 3H2 Steam can be further used to convert CO to H2 via the water-gas shift reaction (WGS): H2O + CO → CO2 + H2 The SMR reaction is endothermic, which requires heat input. The state-of-the-art SMR system places the tubular catalytic reactors in a furnace, in which fuel gas is burned to provide the required heat. In the SMR with chemical looping combustion (CLC-SMR) concepts shown in Fig 2, the syngas production is carried out by the SMR in a tubular catalytic reactor while the chemical looping combustion system is used to provide the heat for the catalytic reaction. Depending on which chemical looping reactor is used to provide the SMR reaction heat, two CLC-SMR schemes can be configured. In Scheme 1 (Fig 2a), the reaction heat is provided by the reducer (fuel reactor). In Scheme 2 (Fig 2b), the reaction heat is provided by the combustor (air reactor).
In either scheme, the combustion of metal oxide by air in the chemical looping system provides the heat source that sustains the endothermic SMR reactions. In the chemical looping system, natural gas and the recycled off-gas from the pressure swing adsorption (PSA) of the SMR process system are used as the feedstock for the CLC fuel reactor operation with CO2 and steam as the reaction products. The CLC-SMR concepts have mainly been studied from the perspective of the process simulation. It is seen that neither scheme directly engages the chemical looping system as a means for syngas production. Chemical looping reforming (CLR) Chemical looping systems can be directly engaged as an effective means for syngas production. Compared to the conventional partial oxidation (POX) or autothermal reforming (ATR) processes, the key advantage of the chemical looping reforming (CLR) process is the elimination of the air separation unit (ASU) for oxygen production. The gaseous fuel, typically natural gas, is fed to the fuel reactor, in which a solid metal oxide oxygen carrier partially oxidizes the fuel to generate syngas: CH4 + MeOx → CO + 2H2 + MeOx-δ Steam can be added to the reaction in order to increase the generation of H2, via the water-gas shift reaction (WGS) and/or steam methane reforming. The CLR process can produce a syngas with a H2:CO molar ratio of 2:1 or higher, which is suitable for Fischer–Tropsch synthesis, methanol synthesis, or hydrogen production. The reduced oxygen carrier from the reducer is oxidized by air in the combustor: MeOx-δ + O2 (air) → MeOx The overall reaction in the CLR system is a combination of the partial oxidation reaction of the fuel and the WGS reaction: CH4 + (1−a)/2 O2 + a H2O → CO + (2+a) H2 It is noted that the actual reaction products for such reactions as those given above can vary depending on the actual operating conditions. For example, the CLR reactions can also produce CO2 when highly oxidative oxygen carriers such as NiO and Fe2O3 are used. Carbon deposition occurs particularly when the oxygen carrier is highly reduced. Reduced oxygen carrier species, such as Ni and Fe, catalyze the hydrocarbon pyrolysis reactions. Fig 3 shows a CLR system that has been studied experimentally by Vienna University of Technology. The system consists of a fluidized bed reducer and a fluidized bed combustor, connected by loop seals and cyclones. Commonly used oxygen carriers are based on NiO or Fe2O3. The NiO-based oxygen carriers exhibit excellent reactivity, as shown by the high conversion of natural gas. The Fe2O3-based oxygen carriers have a lower material cost while their reactivity is lower than that of the NiO-based ones. Operating variables such as temperature, pressure, type of metal oxide, and molar ratio of metal oxide to gaseous fuel will influence the fuel conversion and product compositions. However, with the effects of the back mixing and distributed residence time for the metal oxide particles in the fluidized bed, the oxidation state of the metal oxide particles in the fluidized bed varies, which prevents a high-purity syngas from being produced from the reactor. The moving bed reactor that does not have the effects of back mixing of the metal oxide particles is another gas-solid contact configuration for CLR/CLG operation. This reactor system developed by Ohio State University is characterized by a co-current gas-solid moving bed reducer as given in Fig 4.
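The 2:1-or-higher H2:CO ratio quoted above follows from simple stoichiometric bookkeeping: partial oxidation of methane over the oxygen carrier gives two moles of H2 per mole of CO, and every mole of CO shifted with co-fed steam (via the WGS reaction given earlier) adds one mole of H2 while removing one mole of CO. The Python sketch below is only a minimal, idealized illustration of that bookkeeping; it assumes complete methane conversion and ignores CO2 selectivity, carbon deposition and equilibrium limits, so it is not a model of any actual CLR unit.

```python
# Idealized illustration (not a process model): partial oxidation of CH4 over the
# oxygen carrier gives 1 mol CO and 2 mol H2 per mol CH4; shifting a fraction x of
# that CO with co-fed steam (CO + H2O -> CO2 + H2) trades CO for additional H2.

def h2_to_co_ratio(shift_fraction: float) -> float:
    """Ideal H2:CO molar ratio after shifting a fraction of the CO from CH4 partial oxidation."""
    if not 0.0 <= shift_fraction < 1.0:
        raise ValueError("shift fraction must be in [0, 1)")
    h2 = 2.0 + shift_fraction   # 2 mol H2 from partial oxidation + 1 mol per mol CO shifted
    co = 1.0 - shift_fraction   # CO consumed by the water-gas shift
    return h2 / co

for x in (0.0, 0.2, 0.5):
    print(f"shift fraction {x:.1f} -> H2:CO = {h2_to_co_ratio(x):.2f}")
# 0.0 -> 2.00, 0.2 -> 2.75, 0.5 -> 5.00: the ratio of 2:1 or higher cited in the text
```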
The moving bed reducer can maintain the uniform oxidation state of the exit metal oxide particles from the reactor, thereby synchronizing the process operation to achieve the thermodynamic equilibrium conditions. The CLR moving bed process applied to the methane to syngas (MTS) reactions has the flexibility of co-feeding CO2 as a feedstock with such gaseous fuels as natural gas, shale gas, and reducing tail gases, yielding a CO2-negative process system. The CLR-MTS system can yield a higher energy efficiency and cost benefits over the conventional syngas technologies. In a benchmark study for production of 50,000 barrels per day of liquid fuels using natural gas as the feedstock, the CLR-MTS system for syngas production can reduce the natural gas usage by 20% over the conventional systems involving the Fischer–Tropsch technology. Chemical looping gasification (CLG) Chemical looping gasification (CLG) differs from the CLR in that it uses solid fuels such as coal and biomass instead of gaseous fuels as feedstocks. The operating principles for CLG are similar to those for CLR. For solid feedstocks, devolatilization and pyrolysis of the solid fuel occur when the solid fuels are introduced into the reducer and mixed with the oxygen carrier particles. With the fluidized bed reducer, the released volatiles, including light organic compounds and tars, may channel through the reducer and exit with the syngas. The light organic compounds may reduce the purity of the syngas, while the tars may accumulate in downstream pipelines and instruments. For example, the carbon efficiency using the coal CLG fluidized bed reducer may vary from 55% to 81%, whereas the carbon efficiency using the coal moving bed reducer can reach 85% to 98%. The syngas derived from the biomass CLG fluidized bed reducer may consist of up to 15% methane, while the syngas derived from the biomass CLG moving bed reducer can reach a methane concentration of less than 5%. In general, increasing the temperature of the CLG system can promote volatile and char conversion. This may also promote the full oxidation side reaction, resulting in an increased CO2 concentration in the syngas. Additional equipment for gas cleanup, including a scrubber, catalytic steam reformer and/or tar reformer, may be necessary downstream of the CLG system in order to remove or convert the unwanted byproducts in the syngas stream. Char, the remaining solid from the devolatilization and reactions, requires additional time for conversion. For a fluidized bed reducer with particle back mixing, unconverted char may leave the reducer with the reduced metal oxide particles. A carbon stripper may be needed at the solid outlet of the fluidized bed reducer to allow the unconverted char to be separated from the oxygen carriers. The char can be recycled back to the reducer for further conversion. In a similar operating scheme to the CLR-MTS system given in Fig 4, chemical looping gasification (CLG) of solid fuels carried out in a co-current moving bed reducer to partially oxidize solid fuels into syngas can reach an appropriate H2/CO ratio for downstream processing. Coal ash is removed through an in-situ gas-solid separation operation. The moving bed prevents the channeling or bypassing of the volatiles and chars, thereby maximizing the conversion of the solid fuel. The full oxidation side reactions can be impeded through the control of the oxidation state formed for the oxygen carriers in the moving bed reactor.
The CLR moving bed process applied to the coal to syngas (CTS) reactions also has the flexibility of co-feeding CO2 as a feedstock with coal yielding a CO2 negative process system with a high purity of syngas production. In a benchmark study for production of 10,000 ton/day of methanol from coal, the upstream gasification capital cost can be reduced by 50% when the chemical looping moving bed gasification system is used. Broader context In a general sense, the CLR and CLG processes for syngas production are part of the chemical looping partial oxidation or selective oxidation reaction schemes. The syngas production can lead to the hydrogen production from the downstream water-gas shift reaction. The CLG process can also be applied to electricity generation, resembling the IGCC based on the syngas generated from the chemical looping processes. The chemical looping three-reactor (including reducer, oxidizer and combustor) system using a moving bed reducer for metal oxide reduction by fuel followed by a moving bed oxidizer for the water splitting to produce hydrogen is given in Fig 5. For coal-based feedstock applications, this system is estimated to reduce the cost for electricity generation by 5-15% as compared to conventional systems. The selective oxidation based chemical looping processes can be used to produce directly in one step value-added products beyond syngas. These chemical looping processes require the use of designed metal oxide oxygen carrier that has a high product selectivity and a high feedstock conversion. An example is the chemical looping selective oxidation process developed by DuPont for producing maleic anhydride from butane. The oxygen carrier used in this process is vanadium phosphorus oxide (VPO) based material. This chemical looping process was advanced to the commercial level. Its commercial operation, however, was hampered in part by the inadequacies in the chemical and mechanical viability of the oxygen carrier VPO and its associated effects on the reaction kinetics of the particles. Chemical looping selective oxidation was also applied to the production of olefins from methane. In chemical looping oxidative coupling of methane (OCM), the oxygen carrier selectively converts methane into ethylene. References looping reforming and gasification Chemical process engineering Industrial gases Synthetic fuel technologies
Chemical looping reforming and gasification
Chemistry,Engineering
3,496
47,360,133
https://en.wikipedia.org/wiki/Lasiodiplodia%20gilanensis
Lasiodiplodia gilanensis is an endophytic fungus. It was first isolated in Gilan Province, Iran, hence its name. It has since been isolated in other plants in other continents, and is considered a plant pathogen. L. gilanensis is phylogenetically related to L. plurivora, but can be distinguished by its conidial dimensions. Also, the paraphyses of the former are up to 95μm long and 4μm wide, whereas those of L. plurivora are up to 130μm long and 10μm wide. At the same time, the basal 1–3 cells in the paraphyses of L. plurivora are broader than its apical cells. Description Its conidiomata are stromatic and pycnidial; its mycelium being uniloculate and non-papillate, with a central ostiole. Paraphyses are hyaline and cylindrical. Conidiophores are absent in this species. Its conidiogenous cells are holoblastic and also hyaline, while its conidia are aseptate and ellipsoid. References Further reading Van der Linde, Johannes Alwyn, et al. "Lasiodiplodia species associated with dying Euphorbia ingens in South Africa." Southern Forests: a Journal of Forest Science 73.3-4 (2011): 165–173. Machado, Alexandre Reis, Danilo Batista Pinho, and Olinto Liparini Pereira. "Phylogeny, identification and pathogenicity of the Botryosphaeriaceae associated with collar and root rot of the biofuel plant Jatropha curcas in Brazil, with a description of new species of Lasiodiplodia." Fungal Diversity 67.1 (2014): 231–247. Sakalidis, Monique L., et al. "Pathogenic Botryosphaeriaceae associated with Mangifera indica in the Kimberley region of Western Australia.” European journal of plant pathology 130.3 (2011): 379–391. External links MycoBank Botryosphaeriaceae Fungi described in 2010 Fungus species
Lasiodiplodia gilanensis
Biology
467
41,643,980
https://en.wikipedia.org/wiki/Natanz%20Steel%20Plant
Natanz Steel Plant ( – Mojtame`-ye Kārkhāneh Hāy Fūlād-e Naţanz) is a steel plant in Karkas Rural District, in the Central District of Natanz County, Isfahan Province, Iran. It was started in 1994 and began production in 2002. Steelworkers' strikes Steelworkers in Natanz held several protests inside the company in 2018 after they did not receive 10 months of salary before the Iranian New Year in March 2018. This resulted in steelworkers gathering at the factory grounds and holding a demonstration. The governor of Natanz, Yusef Baferani, mediated between the steelworkers and NSC. In 2018 the steelworkers' insurance was cut after payments from NSC to the insurance provider were halted. This issue received local coverage, with the Isfahan Municipality, the governor of Natanz and the parliamentary representative of Natanz expressing public sympathy for the steelworkers. Debt scandal In 2019 Tejarat Bank publicized a debt report in which it released the names of its debtors, among which several steel companies stood out. In this report, Javad Tavakoli holds first place on the list with an outstanding debt of 18.07 trillion Rials or 150 million dollars. The Mobarakeh Steel Company (MSC) also held debts to Tejarat Bank. MSC's debts stood at 6.3 million dollars in pure debt and 17.8 million dollars in non-pure/mutual debts, for a total of 24.1 million dollars. References Iron and steel mills Buildings and structures in Isfahan province 2002 establishments in Iran Industrial buildings in Iran
Natanz Steel Plant
Chemistry
333
176,550
https://en.wikipedia.org/wiki/Standard%20molar%20entropy
In chemistry, the standard molar entropy is the entropy content of one mole of pure substance at a standard state of pressure and any temperature of interest. These are often (but not necessarily) chosen to be the standard temperature and pressure. The standard molar entropy at the standard pressure P° is usually given the symbol S°, and has units of joules per mole per kelvin (J⋅mol−1⋅K−1). Unlike standard enthalpies of formation, the value of S° is absolute. That is, an element in its standard state has a definite, nonzero value of S° at room temperature. The entropy of a pure crystalline structure can be 0 J⋅mol−1⋅K−1 only at 0 K, according to the third law of thermodynamics. However, this assumes that the material forms a 'perfect crystal' without any residual entropy. This can be due to crystallographic defects, dislocations, and/or incomplete rotational quenching within the solid, as originally pointed out by Linus Pauling. These contributions to the entropy are always present, because crystals always grow at a finite rate and at finite temperature. However, the residual entropy is often quite negligible and can be accounted for when it occurs using statistical mechanics. Thermodynamics If a mole of a solid substance is a perfectly ordered solid at 0 K, then if the solid is warmed by its surroundings to 298.15 K without melting, its absolute molar entropy would be the sum of a series of stepwise and reversible entropy changes. The limit of this sum as the temperature steps become infinitesimally small is an integral: S° = ∫ (Cp/T) dT, evaluated from T = 0 to T = 298.15 K. Here Cp is the molar heat capacity at constant pressure of the substance in the reversible process. The molar heat capacity is not constant during the experiment because it changes depending on the (increasing) temperature of the substance. Therefore, a table of values for Cp is required to find the total molar entropy. The quantity dqrev/T = (Cp/T) dT represents the ratio of a very small exchange of heat energy to the temperature T. The total molar entropy is the sum of many small changes in molar entropy, where each small change can be considered a reversible process. Chemistry The standard molar entropy of a gas at STP includes contributions from: The heat capacity of one mole of the solid from 0 K to the melting point (including heat absorbed in any changes between different crystal structures). The latent heat of fusion of the solid. The heat capacity of the liquid from the melting point to the boiling point. The latent heat of vaporization of the liquid. The heat capacity of the gas from the boiling point to room temperature. Changes in entropy are associated with phase transitions and chemical reactions. Chemical equations make use of the standard molar entropy of reactants and products to find the standard entropy of reaction: ΔS°rxn = Σ S°(products) − Σ S°(reactants). The standard entropy of reaction helps determine whether the reaction will take place spontaneously. According to the second law of thermodynamics, a spontaneous reaction always results in an increase in total entropy of the system and its surroundings: ΔStotal = ΔSsystem + ΔSsurroundings > 0. Molar entropy is not the same for all gases. Under identical conditions, it is greater for a heavier gas. See also Entropy Heat Gibbs free energy Helmholtz free energy Standard state Third law of thermodynamics References External links Table of Standard Thermodynamic Properties for Selected Substances Chemical properties Thermodynamic entropy Molar quantities
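As a worked illustration of the integral and the reaction-entropy sum above, the short Python sketch below integrates Cp/T numerically with the trapezoidal rule and then combines tabulated S° values for the familiar reaction 2 H2(g) + O2(g) → 2 H2O(l). The heat-capacity curve is a made-up placeholder rather than measured data, and the S° values are common textbook figures quoted to one decimal place.

```python
import numpy as np

# Placeholder Debye-like heat capacity curve Cp(T) in J/(mol*K); not real data.
T = np.linspace(5.0, 298.15, 500)        # start above 0 K so Cp/T stays finite
Cp = 24.9 * T**3 / (T**3 + 150.0**3)     # rises roughly as T^3, saturates near 3R

# S°(298.15 K) ~ integral of (Cp/T) dT, evaluated with the trapezoidal rule.
integrand = Cp / T
S_standard = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T))
print(f"Estimated standard molar entropy: {S_standard:.1f} J/(mol*K)")

# Standard entropy of reaction from tabulated S° values: 2 H2(g) + O2(g) -> 2 H2O(l).
S = {"H2(g)": 130.7, "O2(g)": 205.2, "H2O(l)": 70.0}   # textbook values, J/(mol*K)
dS_rxn = 2 * S["H2O(l)"] - (2 * S["H2(g)"] + S["O2(g)"])
print(f"Standard entropy of reaction: {dS_rxn:.1f} J/(mol*K)")  # negative: gas is consumed
```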
Standard molar entropy
Physics,Chemistry
684
3,758
https://en.wikipedia.org/wiki/Berkelium
Berkelium is a synthetic chemical element; it has symbol Bk and atomic number 97. It is a member of the actinide and transuranium element series. It is named after the city of Berkeley, California, the location of the Lawrence Berkeley National Laboratory (then the University of California Radiation Laboratory) where it was discovered in December 1949. Berkelium was the fifth transuranium element discovered after neptunium, plutonium, curium and americium. The major isotope of berkelium, 249Bk, is synthesized in minute quantities in dedicated high-flux nuclear reactors, mainly at the Oak Ridge National Laboratory in Tennessee, United States, and at the Research Institute of Atomic Reactors in Dimitrovgrad, Russia. The longest-lived and second-most important isotope, 247Bk, can be synthesized via irradiation of 244Cm with high-energy alpha particles. Just over one gram of berkelium has been produced in the United States since 1967. There is no practical application of berkelium outside scientific research, which is mostly directed at the synthesis of heavier transuranium elements and superheavy elements. A 22-milligram batch of berkelium-249 was prepared during a 250-day irradiation period and then purified for a further 90 days at Oak Ridge in 2009. This sample was used to synthesize the new element tennessine for the first time in 2009 at the Joint Institute for Nuclear Research, Russia, after it was bombarded with calcium-48 ions for 150 days. This was the culmination of the Russia–US collaboration on the synthesis of the heaviest elements on the periodic table. Berkelium is a soft, silvery-white, radioactive metal. The berkelium-249 isotope emits low-energy electrons and thus is relatively safe to handle. It decays with a half-life of 330 days to californium-249, which is a strong emitter of ionizing alpha particles. This gradual transformation is an important consideration when studying the properties of elemental berkelium and its chemical compounds, since the formation of californium brings not only chemical contamination, but also free-radical effects and self-heating from the emitted alpha particles. Characteristics Physical Berkelium is a soft, silvery-white, radioactive actinide metal. In the periodic table, it is located to the right of the actinide curium, to the left of the actinide californium and below the lanthanide terbium, with which it shares many similarities in physical and chemical properties. Its density of 14.78 g/cm3 lies between those of curium (13.52 g/cm3) and californium (15.1 g/cm3), as does its melting point of 986 °C, below that of curium (1340 °C) but higher than that of californium (900 °C). Berkelium is relatively soft and has one of the lowest bulk moduli among the actinides, at about 20 GPa (2×10^10 Pa). Berkelium(III) ions show two sharp fluorescence peaks at 652 nanometers (red light) and 742 nanometers (deep red – near-infrared) due to internal transitions at the f-electron shell. The relative intensity of these peaks depends on the excitation power and temperature of the sample. This emission can be observed, for example, after dispersing berkelium ions in a silicate glass, by melting the glass in the presence of berkelium oxide or halide. Between 70 K and room temperature, berkelium behaves as a Curie–Weiss paramagnetic material with an effective magnetic moment of 9.69 Bohr magnetons (μB) and a Curie temperature of 101 K. This magnetic moment is almost equal to the theoretical value of 9.72 μB calculated within the simple atomic L-S coupling model.
Upon cooling to about 34 K, berkelium undergoes a transition to an antiferromagnetic state. The enthalpy of dissolution in hydrochloric acid at standard conditions is −600 kJ/mol, from which the standard enthalpy of formation (ΔfH°) of aqueous Bk3+ ions is obtained as −601 kJ/mol. The standard electrode potential Bk3+/Bk is −2.01 V. The ionization potential of a neutral berkelium atom is 6.23 eV. Allotropes At ambient conditions, berkelium assumes its most stable α form, which has a hexagonal symmetry, space group P63/mmc, and lattice parameters a = 341 pm and c = 1107 pm. The crystal has a double-hexagonal close packing structure with the layer sequence ABAC and so is isotypic (having a similar structure) with α-lanthanum and α-forms of actinides beyond curium. This crystal structure changes with pressure and temperature. When compressed at room temperature to 7 GPa, α-berkelium transforms to the β modification, which has a face-centered cubic (fcc) symmetry and space group Fm3m. This transition occurs without change in volume, but the enthalpy increases by 3.66 kJ/mol. Upon further compression to 25 GPa, berkelium transforms to an orthorhombic γ-berkelium structure similar to that of α-uranium. This transition is accompanied by a 12% volume decrease and delocalization of the electrons at the 5f electron shell. No further phase transitions are observed up to 57 GPa. Upon heating, α-berkelium transforms into another phase with an fcc lattice (but slightly different from β-berkelium), space group Fm3m and a lattice constant of 500 pm; this fcc structure is equivalent to the closest packing with the sequence ABC. This phase is metastable and will gradually revert to the original α-berkelium phase at room temperature. The temperature of the phase transition is believed to be quite close to the melting point. Chemical Like all actinides, berkelium dissolves in various aqueous inorganic acids, liberating gaseous hydrogen and converting into the Bk(III) state. This trivalent oxidation state (+3) is the most stable, especially in aqueous solutions, but tetravalent (+4), pentavalent (+5), and possibly divalent (+2) berkelium compounds are also known. The existence of divalent berkelium salts is uncertain and has only been reported in mixed lanthanum(III) chloride-strontium chloride melts. A similar behavior is observed for the lanthanide analogue of berkelium, terbium. Aqueous solutions of Bk3+ ions are green in most acids. The color of Bk4+ ions is yellow in hydrochloric acid and orange-yellow in sulfuric acid. Berkelium does not react rapidly with oxygen at room temperature, possibly due to the formation of a protective oxide surface layer. However, it reacts with molten metals, hydrogen, halogens, chalcogens and pnictogens to form various binary compounds. Isotopes Nineteen isotopes and six nuclear isomers (excited states of an isotope) of berkelium have been characterized, with mass numbers ranging from 233 to 253 (except 235 and 237). All of them are radioactive. The longest half-lives are observed for 247Bk (1,380 years), 248Bk (over 300 years), and 249Bk (330 days); the half-lives of the other isotopes range from microseconds to several days. The isotope which is the easiest to synthesize is berkelium-249. This emits mostly soft β-particles which are inconvenient for detection. Its alpha radiation is rather weak (about 1.45×10^−3% with respect to the β-radiation), but is sometimes used to detect this isotope. The second important berkelium isotope, berkelium-247, is an alpha-emitter, as are most actinide isotopes.
Occurrence All berkelium isotopes have a half-life far too short to be primordial. Therefore, any primordial berkelium − that is, berkelium present on the Earth during its formation − has decayed by now. On Earth, berkelium is mostly concentrated in certain areas, which were used for the atmospheric nuclear weapons tests between 1945 and 1980, as well as at the sites of nuclear incidents, such as the Chernobyl disaster, the Three Mile Island accident and the 1968 Thule Air Base B-52 crash. Analysis of the debris at the testing site of the United States' first thermonuclear weapon, Ivy Mike (1 November 1952, Enewetak Atoll), revealed high concentrations of various actinides, including berkelium. For reasons of military secrecy, this result was not published until 1956. Nuclear reactors produce mostly, among the berkelium isotopes, berkelium-249. During storage and before fuel disposal, most of it beta decays to californium-249. The latter has a half-life of 351 years, which is relatively long compared to the half-lives of other isotopes produced in the reactor, and is therefore undesirable in the disposal products. The transuranium elements from americium to fermium, including berkelium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Berkelium is also one of the elements that have theoretically been detected in Przybylski's Star. History Although very small amounts of berkelium were possibly produced in previous nuclear experiments, it was first intentionally synthesized, isolated and identified in December 1949 by Glenn T. Seaborg, Albert Ghiorso, Stanley Gerald Thompson, and Kenneth Street Jr. They used the 60-inch cyclotron at the University of California, Berkeley. Similar to the nearly simultaneous discovery of americium (element 95) and curium (element 96) in 1944, the new elements berkelium and californium (element 98) were both produced in 1949–1950. The name choice for element 97 followed the previous tradition of the Californian group to draw an analogy between the newly discovered actinide and the lanthanide element positioned above it in the periodic table. Previously, americium was named after a continent as its analogue europium, and curium honored scientists Marie and Pierre Curie as the lanthanide above it, gadolinium, was named after the explorer of the rare-earth elements Johan Gadolin. Thus the discovery report by the Berkeley group reads: "It is suggested that element 97 be given the name berkelium (symbol Bk) after the city of Berkeley in a manner similar to that used in naming its chemical homologue terbium (atomic number 65) whose name was derived from the town of Ytterby, Sweden, where the rare earth minerals were first found." This tradition ended with berkelium, though, as the naming of the next discovered actinide, californium, was not related to its lanthanide analogue dysprosium, but was instead inspired by the place of its discovery. The most difficult steps in the synthesis of berkelium were its separation from the final products and the production of sufficient quantities of americium for the target material. First, americium (241Am) nitrate solution was coated on a platinum foil, the solution was evaporated and the residue converted by annealing to americium dioxide (AmO2). This target was irradiated with 35 MeV alpha particles for 6 hours in the 60-inch cyclotron at the Lawrence Radiation Laboratory, University of California, Berkeley.
The (α,2n) reaction induced by the irradiation yielded the 243Bk isotope and two free neutrons: 241Am + 4He → 243Bk + 2n. After the irradiation, the coating was dissolved with nitric acid and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The product was centrifuged and re-dissolved in nitric acid. To separate berkelium from the unreacted americium, this solution was added to a mixture of ammonium persulfate and ammonium sulfate and heated to convert all the dissolved americium into the oxidation state +6. Unoxidized residual americium was precipitated by the addition of hydrofluoric acid as americium(III) fluoride (AmF3). This step yielded a mixture of the accompanying product curium and the expected element 97 in the form of trifluorides. The mixture was converted to the corresponding hydroxides by treating it with potassium hydroxide, and after centrifugation, was dissolved in perchloric acid. Further separation was carried out in the presence of a citric acid/ammonium buffer solution in a weakly acidic medium (pH≈3.5), using ion exchange at elevated temperature. The chromatographic separation behavior was unknown for the element 97 at the time, but was anticipated by analogy with terbium. The first results were disappointing because no alpha-particle emission signature could be detected from the elution product. With further analysis, searching for characteristic X-rays and conversion electron signals, a berkelium isotope was eventually detected. Its mass number was uncertain between 243 and 244 in the initial report, but was later established as 243. Synthesis and extraction Preparation of isotopes Berkelium is produced by bombarding lighter actinides uranium (238U) or plutonium (239Pu) with neutrons in a nuclear reactor. In the more common case of uranium fuel, plutonium is produced first by neutron capture (the so-called (n,γ) reaction or neutron fusion) followed by beta decay: 238U + n → 239U; 239U → 239Np (β−, 23.5 min); 239Np → 239Pu (β−, 2.3565 d) (the times are half-lives). Plutonium-239 is further irradiated by a source that has a high neutron flux, several times higher than a conventional nuclear reactor, such as the 85-megawatt High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory in Tennessee, US. The higher flux promotes fusion reactions involving not one but several neutrons, converting 239Pu to 244Cm and then to 249Cm. Curium-249 has a short half-life of 64 minutes, and thus its further conversion to 250Cm has a low probability. Instead, it transforms by beta decay into 249Bk: 249Cm → 249Bk (β−, 64.15 min); 249Bk → 249Cf (β−, 330 d). The thus-produced 249Bk has a long half-life of 330 days and thus can capture another neutron. However, the product, 250Bk, again has a relatively short half-life of 3.212 hours and thus does not yield any heavier berkelium isotopes. It instead decays to the californium isotope 250Cf: 249Bk + n → 250Bk; 250Bk → 250Cf (β−, 3.212 h). Although 247Bk is the most stable isotope of berkelium, its production in nuclear reactors is very difficult because its potential progenitor 247Cm has never been observed to undergo beta decay. Thus, 249Bk is the most accessible isotope of berkelium, which is still available only in small quantities (only 0.66 grams have been produced in the US over the period 1967–1983) at a high price of the order of 185 USD per microgram.
It is the only berkelium isotope available in bulk quantities, and thus the only berkelium isotope whose properties can be extensively studied. The isotope 248Bk was first obtained in 1956 by bombarding a mixture of curium isotopes with 25 MeV α-particles. Although its direct detection was hindered by strong signal interference with 245Bk, the existence of a new isotope was proven by the growth of the decay product 248Cf, which had been previously characterized. The half-life of 248Bk was estimated to be of the order of hours, though later 1965 work gave a half-life in excess of 300 years (which may be due to an isomeric state). Berkelium-247 was produced during the same year by irradiating 244Cm with alpha-particles. Berkelium-242 was synthesized in 1979 by bombarding 235U with 11B, 238U with 10B, 232Th with 14N or 232Th with 15N. It converts by electron capture to 242Cm with a half-life of the order of minutes. A search for an initially suspected isotope 241Bk was then unsuccessful; 241Bk has since been synthesized. Separation The fact that berkelium readily assumes oxidation state +4 in solids and is relatively stable in this state in liquids greatly assists the separation of berkelium away from many other actinides. These are inevitably produced in relatively large amounts during the nuclear synthesis and often favor the +3 state. This fact was not yet known in the initial experiments, which used a more complex separation procedure. Various inorganic oxidation agents can be applied to the solutions to convert berkelium to the +4 state, such as bromates (BrO3−), bismuthates (BiO3−), chromates (CrO42− and Cr2O72−), silver(I) thiolate, lead(IV) oxide (PbO2), ozone (O3), or photochemical oxidation procedures. More recently, it has been discovered that some organic and bio-inspired molecules, such as the chelator called 3,4,3-LI(1,2-HOPO), can also oxidize Bk(III) and stabilize Bk(IV) under mild conditions. Berkelium(IV) is then extracted with ion exchange, extraction chromatography or liquid-liquid extraction using HDEHP (bis-(2-ethylhexyl) phosphoric acid), amines, tributyl phosphate or various other reagents. These procedures separate berkelium from most trivalent actinides and lanthanides, except for the lanthanide cerium (lanthanides are absent in the irradiation target but are created in various nuclear fission decay chains). A more detailed procedure adopted at the Oak Ridge National Laboratory was as follows: the initial mixture of actinides is processed with ion exchange using lithium chloride reagent, then precipitated as hydroxides, filtered and dissolved in nitric acid. It is then treated with high-pressure elution from cation exchange resins, and the berkelium phase is oxidized and extracted using one of the procedures described above. Reduction of the thus-obtained berkelium(IV) to the +3 oxidation state yields a solution, which is nearly free from other actinides (but contains cerium). Berkelium and cerium are then separated with another round of ion-exchange treatment. Bulk metal preparation In order to characterize the chemical and physical properties of solid berkelium and its compounds, a program was initiated in 1952 at the Material Testing Reactor, Arco, Idaho, US. It resulted in the preparation of an eight-gram plutonium-239 target and in the first production of macroscopic quantities (0.6 micrograms) of berkelium by Burris B. Cunningham and Stanley Gerald Thompson in 1958, after a continuous reactor irradiation of this target for six years.
This irradiation method was and still is the only way of producing weighable amounts of the element, and most solid-state studies of berkelium have been conducted on microgram- or submicrogram-sized samples. The world's major irradiation sources are the 85-megawatt High Flux Isotope Reactor at the Oak Ridge National Laboratory in Tennessee, USA, and the SM-2 loop reactor at the Research Institute of Atomic Reactors (NIIAR) in Dimitrovgrad, Russia, which are both dedicated to the production of transcurium elements (atomic number greater than 96). These facilities have similar power and flux levels, and are expected to have comparable production capacities for transcurium elements, although the quantities produced at NIIAR are not publicly reported. In a "typical processing campaign" at Oak Ridge, tens of grams of curium are irradiated to produce decigram quantities of californium, milligram quantities of berkelium-249 and einsteinium, and picogram quantities of fermium. In total, just over one gram of berkelium-249 has been produced at Oak Ridge since 1967. The first berkelium metal sample, weighing 1.7 micrograms, was prepared in 1971 by the reduction of berkelium(III) fluoride with lithium vapor at 1000 °C; the fluoride was suspended on a tungsten wire above a tantalum crucible containing molten lithium. Later, metal samples weighing up to 0.5 milligrams were obtained with this method. Similar results are obtained with berkelium(IV) fluoride. Berkelium metal can also be produced by the reduction of berkelium oxide with thorium or lanthanum. Compounds Oxides Two oxides of berkelium are known, with the berkelium oxidation state of +3 (Bk2O3) and +4 (BkO2). Berkelium(IV) oxide (BkO2) is a brown solid, while berkelium(III) oxide (Bk2O3) is a yellow-green solid with a melting point of 1920 °C and is formed from BkO2 by reduction with molecular hydrogen: 2 BkO2 + H2 → Bk2O3 + H2O. Upon heating to 1200 °C, the Bk2O3 oxide undergoes a phase change; it undergoes another phase change at 1750 °C. Such three-phase behavior is typical for the actinide sesquioxides. Berkelium(II) oxide, BkO, has been reported as a brittle gray solid but its exact chemical composition remains uncertain. Halides In halides, berkelium assumes the oxidation states +3 and +4. The +3 state is the most stable, especially in solutions, while tetravalent halides are only known in the solid phase. The coordination of the berkelium atom in its trivalent fluoride and chloride is tricapped trigonal prismatic, with a coordination number of 9. In the trivalent bromide, it is bicapped trigonal prismatic (coordination 8) or octahedral (coordination 6), and in the iodide it is octahedral. Berkelium(IV) fluoride (BkF4) is a yellow-green ionic solid and is isotypic with uranium tetrafluoride or zirconium tetrafluoride. Berkelium(III) fluoride (BkF3) is also a yellow-green solid, but it has two crystalline structures. The most stable phase at low temperatures is isotypic with yttrium(III) fluoride, while upon heating to between 350 and 600 °C, it transforms to the structure found in lanthanum trifluoride. Visible amounts of berkelium(III) chloride (BkCl3) were first isolated and characterized in 1962, and weighed only 3 billionths of a gram. It can be prepared by introducing hydrogen chloride vapors into an evacuated quartz tube containing berkelium oxide at a temperature of about 500 °C. This green solid has a melting point of 600 °C, and is isotypic with uranium(III) chloride. Upon heating to nearly its melting point, BkCl3 converts into an orthorhombic phase. Two forms of berkelium(III) bromide are known: one with berkelium having coordination 6, and one with coordination 8.
The latter is less stable and transforms to the former phase upon heating to about 350 °C. An important phenomenon for radioactive solids has been studied on these two crystal forms: the structure of fresh and aged 249BkBr3 samples was probed by X-ray diffraction over a period longer than 3 years, so that various fractions of berkelium-249 had beta decayed to californium-249. No change in structure was observed upon the 249BkBr3–249CfBr3 transformation. However, other differences were noted for 249BkBr3 and 249CfBr3. For example, the latter could be reduced with hydrogen to 249CfBr2, but the former could not – this result was reproduced on individual 249BkBr3 and 249CfBr3 samples, as well as on samples containing both bromides. The in-growth of californium in berkelium occurs at a rate of 0.22% per day and is an intrinsic obstacle in studying berkelium properties. Besides chemical contamination, 249Cf, being an alpha emitter, brings undesirable self-damage of the crystal lattice and the resulting self-heating. The chemical effect, however, can be avoided by performing measurements as a function of time and extrapolating the obtained results. Other inorganic compounds The pnictides of berkelium-249 of the type BkX are known for the elements nitrogen, phosphorus, arsenic and antimony. They crystallize in the rock-salt structure and are prepared by the reaction of either berkelium hydride or metallic berkelium with these elements at elevated temperature (about 600 °C) under high vacuum. Berkelium(III) sulfide, Bk2S3, is prepared either by treating berkelium oxide with a mixture of hydrogen sulfide and carbon disulfide vapors at 1130 °C, or by directly reacting metallic berkelium with elemental sulfur. These procedures yield brownish-black crystals. Berkelium(III) and berkelium(IV) hydroxides are both stable in 1 molar solutions of sodium hydroxide. Berkelium(III) phosphate (BkPO4) has been prepared as a solid, which shows strong fluorescence under excitation with green light. Berkelium hydrides are produced by reacting the metal with hydrogen gas at temperatures of about 250 °C. They are non-stoichiometric, with the nominal formula BkH2+x (0 < x < 1). Several other salts of berkelium are known, including an oxysulfide (Bk2O2S) and hydrated nitrate, chloride, sulfate and oxalate salts. Thermal decomposition of the hydrated sulfate at about 600 °C in an argon atmosphere (to avoid oxidation) yields crystals of berkelium(III) oxysulfate (Bk2O2SO4). This compound is thermally stable to at least 1000 °C in an inert atmosphere. Organoberkelium compounds Berkelium forms a trigonal (η5–C5H5)3Bk metallocene complex with three cyclopentadienyl rings, which can be synthesized by reacting berkelium(III) chloride with molten beryllocene (Be(C5H5)2) at about 70 °C. It has an amber color and a density of 2.47 g/cm3. The complex is stable to heating to at least 250 °C, and sublimes without melting at about 350 °C. The high radioactivity of berkelium gradually destroys the compound (within a period of weeks). One cyclopentadienyl ring in (η5–C5H5)3Bk can be substituted by chlorine to yield a cyclopentadienyl berkelium chloride complex. The optical absorption spectra of this compound are very similar to those of (η5–C5H5)3Bk. Applications There is currently no use for any isotope of berkelium outside basic scientific research. Berkelium-249 is a common target nuclide for preparing still heavier transuranium and superheavy elements, such as lawrencium, rutherfordium and bohrium.
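Because 249Bk beta-decays to 249Cf with a half-life of about 330 days (see the Health issues discussion below), target stocks lose a noticeable fraction of their berkelium over the course of an irradiation-and-shipping campaign, and the roughly 0.22%-per-day californium in-growth quoted above for the bromide samples follows directly from that half-life. A minimal sketch of the arithmetic, assuming simple exponential decay (the half-life and campaign durations come from the text; the code itself is only illustrative):

```python
import math

HALF_LIFE_DAYS = 330.0                       # 249Bk -> 249Cf beta decay (from the text)
decay_const = math.log(2) / HALF_LIFE_DAYS   # per day

# Daily in-growth of 249Cf in an initially pure 249Bk sample:
daily_loss = 1 - math.exp(-decay_const)
print(f"Cf in-growth per day: {daily_loss:.2%}")   # ~0.21%, matching the quoted ~0.22%/day

# Fraction of a 249Bk target surviving a 250-day irradiation plus 90-day purification:
for days in (250, 250 + 90):
    remaining = math.exp(-decay_const * days)
    print(f"after {days:3d} days: {remaining:.1%} of the original 249Bk remains")
```

The same estimate explains why measurements on solid 249Bk samples are repeated over time and extrapolated back to zero californium content, as described in the preceding paragraph.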
Berkelium-249 is also useful as a source of the isotope californium-249, which is used for studies on the chemistry of californium in preference to the more radioactive californium-252 that is produced in neutron bombardment facilities such as the HFIR. A 22 milligram batch of berkelium-249 was prepared in a 250-day irradiation and then purified for 90 days at Oak Ridge in 2009. This target yielded the first 6 atoms of tennessine at the Joint Institute for Nuclear Research (JINR), Dubna, Russia, after bombarding it with calcium ions in the U400 cyclotron for 150 days. This synthesis was a culmination of the Russia-US collaboration between JINR and Lawrence Livermore National Laboratory on the synthesis of elements 113 to 118, which was initiated in 1989. Nuclear fuel cycle The nuclear fission properties of berkelium are different from those of the neighboring actinides curium and californium, and they suggest that berkelium would perform poorly as a fuel in a nuclear reactor. Specifically, berkelium-249 has a moderately large neutron capture cross section of 710 barns for thermal neutrons and a resonance integral of 1200 barns, but a very low fission cross section for thermal neutrons. In a thermal reactor, much of it will therefore be converted to berkelium-250, which quickly decays to californium-250. In principle, berkelium-249 can sustain a nuclear chain reaction in a fast breeder reactor. Its critical mass is relatively high at 192 kg; it can be reduced with a water or steel reflector but would still exceed the world production of this isotope. Berkelium-247 can maintain a chain reaction both in a thermal-neutron and in a fast-neutron reactor; however, its production is rather complex and thus its availability is much lower than its critical mass, which is about 75.7 kg for a bare sphere, 41.2 kg with a water reflector and 35.2 kg with a steel reflector (30 cm thickness). Health issues Little is known about the effects of berkelium on the human body, and analogies with other elements may not be drawn because of different radiation products (electrons for berkelium and alpha particles, neutrons, or both for most other actinides). The low energy of the electrons emitted from berkelium-249 (less than 126 keV) hinders its detection, due to signal interference with other decay processes, but also makes this isotope relatively harmless to humans as compared to other actinides. However, berkelium-249 transforms with a half-life of only 330 days to the strong alpha-emitter californium-249, which is rather dangerous and has to be handled in a glovebox in a dedicated laboratory. Most available berkelium toxicity data originate from research on animals. Upon ingestion by rats, only about 0.01% of the berkelium ends up in the bloodstream. From there, about 65% goes to the bones, where it remains for about 50 years, 25% to the lungs (biological half-life about 20 years), 0.035% to the testicles or 0.01% to the ovaries, where the berkelium stays indefinitely. The balance of about 10% is excreted. In all these organs berkelium might promote cancer, and in the skeleton its radiation can damage red blood cells. The maximum permissible amount of berkelium-249 in the human skeleton is 0.4 nanograms. References Bibliography External links Berkelium at The Periodic Table of Videos (University of Nottingham) Chemical elements Chemical elements with double hexagonal close-packed structure Actinides Synthetic elements
Berkelium
Physics,Chemistry
6,514
946,798
https://en.wikipedia.org/wiki/Entity%20integrity
Entity integrity is concerned with ensuring that each row of a table has a unique and non-null primary key value; this is the same as saying that each row in a table represents a single instance of the entity type modelled by the table. A requirement stated by E. F. Codd in his seminal paper is that a primary key of an entity, or any part of it, can never take a null value. The relational model states that every relation (or table) must have an identifier, called the primary key (abbreviated PK), such that every row of the same relation is identifiable by its content, that is, by a unique and minimal value. The PK is a non-empty set of attributes (or columns). The same requirement applies to the foreign key (abbreviated FK), because each FK matches a pre-existing PK. Each of the attributes that is part of a PK (or of an FK) must have data values (such as numbers, letters or typographic symbols) but not data marks (also known as NULL marks in the SQL world). Morphologically, a composite primary key is in a "steady state": if it is reduced, the PK loses its property of identifying every row of its relation, but if it is extended, the PK becomes redundant. See also Domain integrity Data integrity Referential integrity Null (SQL) References Data modeling Data quality
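A minimal sketch of how a relational database enforces the entity-integrity rule described above, using SQLite through Python's sqlite3 module (the table and column names are invented purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# PRIMARY KEY makes emp_id the identifier of each row; NOT NULL is stated
# explicitly so that null "data marks" are ruled out of the key.
conn.execute("CREATE TABLE employee (emp_id TEXT NOT NULL PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO employee VALUES ('E1', 'Ada')")

# A duplicate primary key value violates entity integrity and is rejected.
try:
    conn.execute("INSERT INTO employee VALUES ('E1', 'Grace')")
except sqlite3.IntegrityError as err:
    print("duplicate key rejected:", err)

# A NULL primary key value likewise violates entity integrity and is rejected.
try:
    conn.execute("INSERT INTO employee VALUES (NULL, 'Edsger')")
except sqlite3.IntegrityError as err:
    print("null key rejected:", err)
```

Both failed inserts raise an IntegrityError: the database refuses to store rows that cannot be uniquely identified by their key.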
Entity integrity
Engineering
288
46,288,985
https://en.wikipedia.org/wiki/George%20M.%20Marakas
George M. Marakas is an American author, scholar, research scientist, professor, consultant, entrepreneur, and an authority in specific areas within the field of information systems. He has been named a Distinguished Member - Cum Laude by the Association for Information Systems. His academic career includes faculty appointments at the Robert H. Smith School of Business at the University of Maryland, where he was a Center for Teaching Excellence Eli Lilly Fellow; the Kelley School of Business at Indiana University, where he held the British-American Tobacco Fellowship for Global Information Systems Strategy; the University of Kansas School of Business; and the Florida International University College of Business, where he holds the rank of full professor and is the Associate Dean for Research and Doctoral Studies. Marakas received his Ph.D. in Information Systems from Florida International University in 1995, his MBA from Colorado State University, and his bachelor's degree from Governors State University. Before his position at FIU, he was a faculty member at the University of Maryland and a tenured senior faculty member at the University of Kansas and the Kelley School of Business at Indiana University. He also served as an adjunct faculty member at the Helsinki School of Economics. Before his academic career, he worked in banking and real estate. His corporate experience includes senior management positions with Continental Illinois National Bank and the Federal Deposit Insurance Corporation. In addition, Marakas served for three years as the first President and CEO of CMC Group, Inc., a major real estate development firm in Miami, FL. Marakas has co-authored and published more than 50 scholarly papers and five textbooks within the field of information systems. He writes, speaks and researches in the area of computer and technological self-efficacy from both a theoretical and an applied perspective. He is also considered an expert on doctoral education and doctoral program management. Marakas has served as a Senior Editor for The DATA BASE for Advances in Information Systems and an Associate Editor for Information Systems Research. Marakas holds senior executive positions in several entrepreneurial ventures in the motorcycle and olive oil industries. His writing has appeared in several motorcycle publications in both print and digital form, and he is a regular contributor to blogs in both industries. Selected publications Aguirre-Urreta, M., Ronkko, M., and Marakas, G. (2023) “Reconsidering the Implications of Formative vs. Reflective Measurement Model Misspecification,” Information Systems Journal, in press. Lopez, A.M. and Marakas, G.M. (2023) “Public-Private Partnership (P3) Success: Critical Success Factors for Local Government Services and Infrastructure Delivery”, Engaged Management Review, 6:1. Marakas, G.M., Aguirre-Urreta, M., Shoja, A., Kim, E, and Wang, S. (2023) “The Computer Self-Efficacy Construct: A History of Application in Information Systems Research,” Foundations and Trends® in Information Systems: Vol. 6, No. 2, pp 94–170. DOI: 10.1561/2900000023. Lee, KJ, Choi, J., Marakas, G.M., Singh, S. (2018) "Two Distinct Routes for Inducing Emotions in HCI Design: Achieving Delight versus Avoiding Hatred," International Journal of Human-Computer Studies, 124, 67-80. Cousins, K., Zadeh, P.E., Marakas, G.M., Klein, R. (2018) “Frictionless Commerce,” Cutter Business Technology Journal, 31:5.
Ellis, M., Aguirre-Urreta, M., Lee, K., Sun, W. Liu, Y., Mao, J, & Marakas, G.M. (2016) “Categorization of technologies: insights from the technology acceptance literature,” Journal of Applied Business and Economics, 18:4. Aguirre-Urreta, M., Marakas, G., and Ronkko, M. “Omission of Causal Indicators: Consequences and Implications for Measurement,” Measurement: Interdisciplinary Research and Perspectives, 14:3, 75–97. Aguirre-Urreta, M., Marakas, G., and Ronkko, M. “Omission of Causal Indicators: Consequences and Implications for Measurement – A Rejoinder,” Measurement: Interdisciplinary Research and Perspectives, 14:4, 170–175. Sun, W, Aguirre-Urreta, M. and Marakas, G., (2016). “Effectiveness of Pair Programming: Perceptions of Software Professionals,” IEEE Software, 33, 72–79. Aguirre-Urreta, M. and Marakas, G., (2014). "Research Commentary: A Rejoinder to Rigdon, et al. (2014),” Information Systems Research, 25:4, 785–788. Aguirre-Urreta, M. and Marakas, G., (2014). “PLS and Models with Formatively-Specified Endogenous Constructs: A Cautionary Note,” Information Systems Research, 25:4, 761–778. Aguirre-Urreta, M., Marakas, G.M., and Ellis, M. (2013). “Measurement of Composite Reliability in Research: Using Partial Least Squares Research: Some Issues and an Alternative Approach,” The DATA BASE for Advances in Information Systems, 44:4, pp. 11–43. Aguirre-Urreta, M. and Marakas, G., (2012). “Revisiting Bias Due To Construct Misspecification: Different Results From Considering Coefficients In Standardized Form,” MIS Quarterly, 36:1, 123–138. Aguirre-Urreta, M. and Marakas, G.M., (2012). “Exploring Choice as an Antecedent to Behavior: Incorporating Alternatives into the Technology Acceptance Process,” Journal of Organizational and End User Computing, 24:1, 82–107. Aguirre-Urreta, M. and Marakas, G., (2010). “Is it Really Gender?: An Empirical Investigation into the Moderating Gender Variable in Technology Adoption,” Human Technology: An Interdisciplinary Journal on Humans in ICT Environments, 6:2, 155–190. Lowry, P.B., Dean, D.L., Roberts, T. & Marakas, G.M., (2009). “Toward building self-sustaining groups in PCR-based tasks through implicit coordination: The case of heuristic evaluation,” Journal of the Association for Information Systems, 10:3, 170–195. Marakas, G.M., Johnson, R.D., & Clay, P., (2009). “Formative versus Reflective Measurement: A Reply to Hardin, Chang, and Fuller,” Journal of the Association for Information Systems, 9:9. Aguirre-Urreta, M. & Marakas, G.M. (2008). “Comparing Conceptual Modeling Techniques: A Critical Review of the EER vs. OO Empirical Literature,” The DATABASE for Advances in Information Systems, 39 (2), 9–32. Marakas, G.M., Johnson, R.D., & Clay, P., (2007). “The Evolving Nature Of the Computer Self-Efficacy Construct: An Empirical Investigation of Measurement, Construction, Reliability, and Stability Over Time,” Journal of the Association for Information Systems, 8:1. Wu. J.T. and Marakas, G.M., (2006). "The Impact of Non-leader User Participation to Perceived System Success in Stages of the System Development Life Cycle," Journal of Computer Information Systems, 46:5, 127–140. Johnson, R.D., Marakas, G.M., & Palmer, J. “Beliefs about the social roles and capabilities of computing technology: Development of the computing technology continuum of perspective.” Behaviour and Information Technology, in press, 2007. Johnson, R.D., Marakas, G.M., & Palmer, J. 
(2006) “Differential Social Attributions Toward Computing Technology: An Empirical Investigation” International Journal of Human Computer Studies, 64:5, 446–460. Sabherwal, R., Sein, M.K. and Marakas, G.M. (2003) “Escalating commitment to information systems projects: Findings from two simulated experiments,” Information and Management, 40, 781–798. Alavi, M., Marakas, G.M., & Yoo, Y. (2002) "A comparative study of technology-supported distributed learning environments on cognitive and perceived learning outcomes," Information Systems Research, Dec. 404–415. Wheeler, B.C., Marakas, G.M., & Brickley, P. (2002) “Taking IT From the backoffice to the boardroom: Educating the line to lead at British-American Tobacco.” MISQ Executive, 1:1, 47–62. Johnson, R.D., and Marakas, G.M. (2000)"The role of behavioral modeling in computer skills acquisition: Toward refinement of the model," Information Systems Research, Dec., 402–417. Marakas, G.M., Johnson, R.D., & Palmer, J. (2000) "A theoretical model of differential social attributions toward computing technology: When the metaphor becomes the model," International Journal of Human-Computer Studies, 52:4, 719–750. Wheeler, B.C. and Marakas, G.M. (1999) “Facts, faith, or fear: Making the business case for IT investments,” Two teaching cases and a teaching note. IS World Case Repository Marakas, G.M., Yi, M.Y., and Johnson, R.,(1998) “The multilevel and multifaceted character of computer self-efficacy: Toward a clarification of the construct and an integrative framework for research, Information Systems Research, June 1998. Marakas, G.M. and Elam, J.J., (1998) “The impact of semantic structuring on the representation of facts: With implications for improving analysts’ interview behavior,” Information Systems Research, March 1998. Marakas, G.M. and Elam, J.J., (1997) “Creativity enhancement in problem-solving: Through software or process?” Management Science, August. Marakas, G.M. and Hornik, S.,(1996) “Passive resistance misuse: Overt support and covert resistance in IS implementation,” European Journal of Information Systems, May. Batra, D. and Marakas, G.M., (1995) “Conceptual data modeling in theory and practice,” European Journal of Information Systems, August. References External links FIU Faculty Expert Guide Living people Academics from Florida Motorcycle builders American technology writers Information systems researchers American businesspeople American business theorists 1953 births
George M. Marakas
Technology
2,321
23,253,708
https://en.wikipedia.org/wiki/IEEE%20Herman%20Halperin%20Electric%20Transmission%20and%20Distribution%20Award
The IEEE Herman Halperin Electric Transmission and Distribution Award is a Technical Field Award of the IEEE that is presented for outstanding contributions to electric transmission and distribution. The award may be presented annually to an individual or a team of up to three people. It was instituted by the IEEE Board of Directors in 1986. Prior to 1987, the award was called the William M. Habirshaw Award. Starting in 1987, the award became renamed in honor of Herman Halperin, who had been a recipient of the Habirshaw Award in 1962 and had worked for 40 years for the Commonwealth Edison Company. The award is sponsored by the Robert and Ruth Halperin Foundation, in memory of Herman and Edna Halperin, and the IEEE Power and Energy Society. The funds for the award were contributed by the Halperins, and are administered by the IEEE Foundation. Recipients of this award receive a certificate and honorarium. Recipients Source 1987: Robert F. Lawrence 1988: Luigi Paris 1989: John J. Daugherty 1990: John A. Casazza 1991: John G. Anderson 1992: Andrew R. Hileman 1993: Mat Darveniza 1994: Abdel-Aziz A Fouad 1995: Vernon L. Chartier 1996: Farouk A. M. Rizk 1997: B. Don Russell 1998: Vincent T. Morgan 1999: Charles L. Wagner 2000: Arun G. Phadke 2001: Arthur C. Westrom 2002: John J. Vithayathil 2003: Sarma Maruvada 2004: Andrew J. Eriksson 2005: James J. Burke 2006: Anjan Bose 2007: Eric B. Forsyth 2008: Robert C. Degeneff 2009: Carson W. Taylor 2010: Carlos Katz 2011: John H. Brunke 2012: Michel Duval 2013: Vijay Vittal 2014: Willem Boone 2015: Wolfram Boeck 2016: George Anders 2017: George Dorwart Rockefeller 2018: Jinliang He 2019: Steven A. Boggs 2020: Dusan Povh 2021: Brian Stott References External links IEEE Herman Halperin Electric Transmission and Distribution Award page at IEEE List of recipients of the IEEE Herman Halperin Electric Transmission and Distribution Award Herman Halperin Electric Transmission and Distribution Award
IEEE Herman Halperin Electric Transmission and Distribution Award
Technology
455
1,495,744
https://en.wikipedia.org/wiki/Intrinsic%20equation
In geometry, an intrinsic equation of a curve is an equation that defines the curve using a relation between the curve's intrinsic properties, that is, properties that do not depend on the location and possibly the orientation of the curve. Therefore an intrinsic equation defines the shape of the curve without specifying its position relative to an arbitrarily defined coordinate system. The intrinsic quantities used most often are the arc length s, the tangential angle φ, the curvature κ or radius of curvature, and, for 3-dimensional curves, the torsion τ. Specifically: The natural equation describes a curve by its curvature and torsion. The Whewell equation is obtained as a relation between arc length and tangential angle. The Cesàro equation is obtained as a relation between arc length and curvature. The equation of a circle (including a line), for example, is given by the Cesàro equation κ(s) = 1/r, where s is the arc length, κ the curvature and r the radius of the circle; a straight line is the limiting case of zero curvature. These coordinates greatly simplify some physical problems. For elastic rods, for example, the potential energy is given by E = ½ ∫ B κ(s)² ds, where B is the bending modulus. Moreover, since the curvature is the derivative of the tangential angle with respect to arc length, κ = dφ/ds, the elasticity of rods can be given a simple variational form. References External links Curves Equations
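A curve given only by an intrinsic equation can be recovered, up to position and orientation, by integrating the tangential angle and then the unit tangent. A small numerical sketch of this idea, using the constant-curvature Cesàro equation κ(s) = 1/r as the test case (the discretisation and names are my own, for illustration only):

```python
import math

def curve_from_curvature(kappa, length, steps=10_000):
    """Reconstruct x(s), y(s) from an intrinsic equation kappa(s).

    Integrates phi(s) = integral of kappa ds (Whewell form), then
    x(s) = integral of cos(phi) ds and y(s) = integral of sin(phi) ds,
    starting at the origin with a horizontal tangent.
    """
    ds = length / steps
    phi, x, y = 0.0, 0.0, 0.0
    points = [(x, y)]
    for i in range(steps):
        phi += kappa(i * ds) * ds      # tangential angle
        x += math.cos(phi) * ds
        y += math.sin(phi) * ds
        points.append((x, y))
    return points

r = 2.0
pts = curve_from_curvature(lambda s: 1.0 / r, length=2 * math.pi * r)
# Every reconstructed point lies (numerically) at distance r from the centre (0, r):
print(max(abs(math.hypot(px, py - r) - r) for px, py in pts))  # ~0, up to step error
```

Replacing the constant 1/r with any other function of s reconstructs the corresponding Cesàro curve; for instance, a curvature proportional to s traces an Euler spiral.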
Intrinsic equation
Mathematics
236
11,344,743
https://en.wikipedia.org/wiki/Biomolecular%20engineering
Biomolecular engineering is the application of engineering principles and practices to the purposeful manipulation of molecules of biological origin. Biomolecular engineers integrate knowledge of biological processes with the core knowledge of chemical engineering in order to focus on molecular-level solutions to issues and problems in the life sciences related to the environment, agriculture, energy, industry, food production, biotechnology and medicine. Biomolecular engineers purposefully manipulate carbohydrates, proteins, nucleic acids and lipids within the framework of the relation between their structure (see: nucleic acid structure, carbohydrate chemistry, protein structure), function (see: protein function) and properties, and in relation to their applicability to such areas as environmental remediation, crop and livestock production, biofuel cells and biomolecular diagnostics. The thermodynamics and kinetics of molecular recognition in enzymes, antibodies, DNA hybridization, bio-conjugation/bio-immobilization and bioseparations are studied. Attention is also given to the rudiments of engineered biomolecules in cell signaling, cell growth kinetics, biochemical pathway engineering and bioreactor engineering. Timeline History During World War II, the need for large quantities of penicillin of acceptable quality brought together chemical engineers and microbiologists to focus on penicillin production. This created the right conditions to start a chain of reactions that led to the creation of the field of biomolecular engineering. Biomolecular engineering was first defined in 1992 by the U.S. National Institutes of Health as "research at the interface of chemical engineering and biology with an emphasis at the molecular level". Although first defined as research, biomolecular engineering has since become an academic discipline and a field of engineering practice. Herceptin, a humanized monoclonal antibody (mAb) for breast cancer treatment, became the first drug designed by a biomolecular engineering approach and was approved by the U.S. FDA. Also, Biomolecular Engineering was a former name of the journal New Biotechnology. Future Bio-inspired technologies of the future help illustrate where biomolecular engineering may be heading. Extrapolating from Moore's law, quantum and biology-based processors are expected to be major future technologies. With the use of biomolecular engineering, processors could be designed to function in much the same way a biological cell works. Biomolecular engineering has the potential to become one of the most important scientific disciplines because of its advancements in the analysis of gene expression patterns as well as the purposeful manipulation of many important biomolecules to improve functionality. Research in this field may lead to new drug discoveries, improved therapies, and advances in new bioprocess technology. With the increasing knowledge of biomolecules, the rate of finding new high-value molecules, including but not limited to antibodies, enzymes, vaccines, and therapeutic peptides, will continue to accelerate. Biomolecular engineering will produce new designs for therapeutic drugs and high-value biomolecules for treatment or prevention of cancers, genetic diseases, and other types of metabolic diseases. There is also anticipation of industrial enzymes that are engineered to have desirable properties for process improvement, as well as the manufacturing of high-value biomolecular products at a much lower production cost.
Recombinant technology will also be used to produce new antibiotics that are active against resistant strains. Basic biomolecules Biomolecular engineering deals with the manipulation of many key biomolecules. These include, but are not limited to, proteins, carbohydrates, nucleic acids, and lipids. These molecules are the basic building blocks of life, and by controlling, creating, and manipulating their form and function, many new avenues and advantages become available to society. Since every biomolecule is different, there are a number of techniques used to manipulate each one. Proteins Proteins are polymers that are made up of amino acid chains linked with peptide bonds. They have four distinct levels of structure: primary, secondary, tertiary, and quaternary. Primary structure refers to the amino acid backbone sequence. Secondary structure focuses on local conformations that develop as a result of hydrogen bonding along the amino acid chain. If most of the protein contains intermolecular hydrogen bonds, it is said to be fibrillar, and the majority of its secondary structure will be beta sheets. However, if the majority of the orientation contains intramolecular hydrogen bonds, then the protein is referred to as globular and mostly consists of alpha helices. There are also conformations that consist of a mix of alpha helices and beta sheets, as well as beta helices and alpha sheets. The tertiary structure of proteins deals with their folding process and how the overall molecule is arranged. Finally, a quaternary structure is a group of tertiary-structure proteins coming together and binding. With all of these levels, proteins have a wide variety of places in which they can be manipulated and adjusted. Techniques are used to affect the amino acid sequence of the protein (site-directed mutagenesis), the folding and conformation of the protein, or the folding of a single tertiary protein within a quaternary protein matrix. Proteins that are the main focus of manipulation are typically enzymes. These are proteins that act as catalysts for biochemical reactions. By manipulating these catalysts, the reaction rates, products, and effects can be controlled. Enzymes and proteins are so important to the biological field and to research that there are specific divisions of engineering focusing only on proteins and enzymes. Carbohydrates Carbohydrates are another important class of biomolecule. Many are polymers, called polysaccharides, which are made up of chains of simple sugars connected via glycosidic bonds. These monosaccharides consist of a five- to six-carbon ring that contains carbon, hydrogen, and oxygen, typically in a 1:2:1 ratio. Common monosaccharides are glucose, fructose, and ribose. When linked together, monosaccharides can form disaccharides, oligosaccharides, and polysaccharides: the nomenclature depends on the number of monosaccharides linked together. Common disaccharides (two monosaccharides joined) are sucrose, maltose, and lactose. Important polysaccharides (chains of many monosaccharides) are cellulose, starch, and chitin. Cellulose is a polysaccharide made up of beta 1-4 linkages between repeating glucose monomers. It is the most abundant source of sugar in nature and is a major part of the paper industry. Starch is also a polysaccharide made up of glucose monomers; however, the monomers are connected via alpha 1-4 linkages instead of beta. Starches, particularly amylose, are important in many industries, including the paper, cosmetics, and food industries.
Chitin is a derivative of cellulose, possessing an acetamide group instead of an –OH on one of its carbons. When the acetamide group is deacetylated, the polymer chain is called chitosan. Both of these cellulose derivatives are a major subject of research in the biomedical and food industries. They have been shown to assist with blood clotting, to have antimicrobial properties, and to have dietary applications. Much engineering and research focuses on the degree of deacetylation that provides the most effective result for specific applications. Nucleic acids Nucleic acids, namely DNA and RNA, are macromolecules: biopolymers consisting of chains of nucleotides. These two molecules are the genetic code and template that make life possible. Manipulation of these molecules and structures causes major changes in function and expression of other macromolecules. Nucleosides are glycosylamines containing a nucleobase bound to either ribose or deoxyribose sugar via a beta-glycosidic linkage. The sequence of the bases determines the genetic code. Nucleotides are nucleosides that are phosphorylated by specific kinases via a phosphodiester bond. Nucleotides are the repeating structural units of nucleic acids. The nucleotides are made of a nitrogenous base, a pentose (ribose for RNA or deoxyribose for DNA), and three phosphate groups. See Site-directed mutagenesis, recombinant DNA, and ELISA below. Lipids Lipids are biomolecules that are made up of glycerol derivatives bonded with fatty acid chains. Glycerol is a simple polyol that has a formula of C3H5(OH)3. Fatty acids are long carbon chains that have a carboxylic acid group at the end. The carbon chains can be either saturated with hydrogen (every carbon bond is occupied by a hydrogen atom or by a single bond to another carbon in the chain) or unsaturated (there are double bonds between some of the carbon atoms in the chain). Common fatty acids include lauric acid, stearic acid, and oleic acid. The study and engineering of lipids typically focuses on the manipulation of lipid membranes and encapsulation. Cellular membranes and other biological membranes typically consist of a phospholipid bilayer, or a derivative thereof. Along with the study of cellular membranes, lipids are also important molecules for energy storage. Because of their encapsulation properties and thermodynamic characteristics, lipids are significant assets for controlling structure and energy when engineering molecules. Of molecules Recombinant DNA Recombinant DNA consists of DNA molecules that contain genetic sequences not native to the organism's genome. Using recombinant techniques, it is possible to insert, delete, or alter a DNA sequence precisely without depending on the location of restriction sites. Recombinant DNA is used for a wide range of applications. Method The traditional method for creating recombinant DNA typically involves the use of plasmids in the host bacteria. The plasmid contains a genetic sequence corresponding to the recognition site of a restriction endonuclease, such as EcoRI. After foreign DNA fragments, which have also been cut with the same restriction endonuclease, have been inserted into the host cell, the restriction endonuclease gene is expressed by applying heat, or by introducing a biomolecule, such as arabinose. Upon expression, the enzyme will cleave the plasmid at its corresponding recognition site, creating sticky ends on the plasmid (sketched below for EcoRI).
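A small illustration of what the "sticky ends" left by a restriction endonuclease look like, using the EcoRI recognition sequence GAATTC (EcoRI cleaves between the G and the first A on each strand, leaving complementary single-stranded AATT overhangs). The sequence and helper function below are invented for illustration only:

```python
SITE = "GAATTC"   # EcoRI recognition sequence; the cut falls after the leading G

def ecori_fragments(seq: str):
    """Split a linear top-strand sequence at EcoRI sites.

    The left fragment ends in ...G and the right fragment starts with AATTC...,
    so the two pieces carry complementary AATT "sticky" overhangs.
    """
    cut = seq.find(SITE)
    if cut == -1:
        return [seq]
    left, right = seq[:cut + 1], seq[cut + 1:]
    return [left] + ecori_fragments(right)

plasmid_piece = "TTACGAATTCGGCATGAATTCAA"   # invented sequence with two EcoRI sites
print(ecori_fragments(plasmid_piece))
# ['TTACG', 'AATTCGGCATG', 'AATTCAA']
```

Any two fragments produced this way (from the plasmid or from foreign DNA cut with the same enzyme) expose matching overhangs, which is what lets them anneal before ligation.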
A ligase then joins these sticky ends to the corresponding sticky ends of the foreign DNA fragments, creating a recombinant DNA plasmid. Advances in genetic engineering have made the modification of genes in microbes quite efficient, allowing constructs to be made in about a week's time. These advances have also made it possible to modify the organism's genome itself. Specifically, genes from the bacteriophage lambda are used in recombination. This mechanism, known as recombineering, utilizes the three proteins Exo, Beta, and Gam, which are created by the genes exo, bet, and gam, respectively. Exo is a double-stranded DNA exonuclease with 5' to 3' activity. It cuts the double-stranded DNA, leaving 3' overhangs. Beta is a protein that binds to single-stranded DNA and assists homologous recombination by promoting annealing between the homology regions of the inserted DNA and the chromosomal DNA. Gam functions to protect the DNA insert from being destroyed by native nucleases within the cell. Applications Recombinant DNA can be engineered for a wide variety of purposes. The techniques utilized allow for specific modification of genes, making it possible to modify any biomolecule. It can be engineered for laboratory purposes, where it can be used to analyze genes in a given organism. In the pharmaceutical industry, proteins can be modified using recombination techniques. Some of these proteins include human insulin. Recombinant insulin is synthesized by inserting the human insulin gene into E. coli, which then produces insulin for human use. Other proteins, such as human growth hormone, factor VIII, and hepatitis B vaccine, are produced by similar means. Recombinant DNA can also be used for diagnostic methods involving the ELISA method. This makes it possible to engineer antigens, as well as the enzymes attached, to recognize different substrates or be modified for bioimmobilization. Recombinant DNA is also responsible for many products found in the agricultural industry. Genetically modified food, such as golden rice, has been engineered to have increased production of vitamin A for use in societies and cultures where dietary vitamin A is scarce. Other properties that have been engineered into crops include herbicide resistance and insect resistance. Site-directed mutagenesis Site-directed mutagenesis is a technique that has been around since the 1970s. The early days of research in this field yielded discoveries about the potential of certain chemicals, such as bisulfite and aminopurine, to change certain bases in a gene. This research continued, and other processes were developed to create certain nucleotide sequences on a gene, such as the use of restriction enzymes to fragment certain viral strands and use them as primers for bacterial plasmids. The modern method, developed by Michael Smith in 1978, uses an oligonucleotide that is complementary to a bacterial plasmid with a single base pair mismatch or a series of mismatches. General procedure Site-directed mutagenesis is a valuable technique that allows for the replacement of a single base in an oligonucleotide or gene. The basics of this technique involve the preparation of a primer that will be a complementary strand to a wild-type bacterial plasmid. This primer will have a base pair mismatch at the site where the replacement is desired. The primer must also be long enough that it will anneal to the wild-type plasmid. After the primer anneals, a DNA polymerase will complete the primer (a toy sketch of this complementary-primer design follows below).
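The mutagenic primer is essentially the reverse complement of a stretch of the template with one deliberately mismatched base at the position to be changed. The sequences, helper functions and the particular substitution below are invented for illustration only; real primer design must also account for melting temperature, primer length and secondary structure:

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Reverse complement of a DNA sequence (5'->3' of the opposite strand)."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def mutagenic_primer(template: str, position: int, new_base: str) -> str:
    """Primer complementary to `template` except for one mismatch.

    `position` is the 0-based index of the template base to replace, and
    `new_base` is the base the mutant strand should carry at that site.
    """
    mutant_strand = template[:position] + new_base + template[position + 1:]
    return reverse_complement(mutant_strand)

template = "ATGGCTAAGGTTCCA"          # hypothetical wild-type stretch
primer = mutagenic_primer(template, position=7, new_base="C")  # A -> C at index 7
print(primer)   # anneals to the template everywhere except the single mismatch
```

When this primer is annealed to the wild-type strand and extended by a polymerase, the newly synthesized strand carries the substitution, which is then propagated as the plasmid replicates.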
When the bacterial plasmid is replicated, the mutated strand will be replicated as well. The same technique can be used to create a gene insertion or deletion. Often, an antibiotic-resistance gene is inserted along with the modification of interest, and the bacteria are cultured on an antibiotic medium. The bacteria that were not successfully mutated will not survive on this medium, and the mutated bacteria can easily be cultured. Applications Site-directed mutagenesis can be helpful for many different reasons. A single base-pair replacement will change the codon, potentially replacing an amino acid in a protein. Mutagenesis can help determine the function of proteins and the roles of specific amino acids. If an amino acid near the active site is mutated, the kinetic parameters may change drastically, or the enzyme might behave differently. Another application of site-directed mutagenesis is exchanging an amino acid residue far from the active site with a lysine residue or cysteine residue. These amino acids make it easier to covalently bond the enzyme to a solid surface, which allows for enzyme re-use and the use of enzymes in continuous processes. Sometimes, amino acids with non-natural functional groups (such as an aldehyde introduced through an aldehyde tag) are added to proteins. These additions may be for ease of bioconjugation or to study the effects of amino acid changes on the form and function of the proteins. One example of how mutagenesis is used is found in the coupling of site-directed mutagenesis and PCR to reduce interleukin-6 activity in cancerous cells. In another example, Bacillus subtilis is used in site-directed mutagenesis to secrete the enzyme subtilisin through the cell wall. Biomolecular engineers can purposely manipulate this gene to essentially make the cell a factory for producing whatever protein the inserted gene codes for. Bio-immobilization and bio-conjugation Bio-immobilization and bio-conjugation are the purposeful manipulation of a biomolecule's mobility by chemical or physical means to obtain a desired property. Immobilization of biomolecules allows the characteristics of the molecule to be exploited under controlled environments. For example, the immobilization of glucose oxidase on calcium alginate gel beads can be used in a bioreactor. The resulting product will not need purification to remove the enzyme because it will remain linked to the beads in the column. Examples of types of biomolecules that are immobilized are enzymes, organelles, and complete cells. Biomolecules can be immobilized using a range of techniques. The most popular are physical entrapment, adsorption, and covalent modification. Physical entrapment - the use of a polymer to contain the biomolecule in a matrix without chemical modification. Entrapment can be between lattices of polymer, known as gel entrapment, or within micro-cavities of synthetic fibers, known as fiber entrapment. Examples include entrapment of enzymes such as glucose oxidase in a gel column for use as a bioreactor. An important characteristic of entrapment is that the biocatalyst remains structurally unchanged, but it creates large diffusion barriers for substrates. Adsorption - immobilization of biomolecules due to interactions between the biomolecule and groups on the support. The interaction can be physical adsorption, ionic bonding, or metal-binding chelation. Such techniques can be performed under mild conditions and are relatively simple, although the linkages are highly dependent upon pH, solvent and temperature.
Examples include enzyme-linked immunosorbent assays. Covalent modification - involves chemical reactions between certain functional groups on the biomolecule and the matrix. This method forms a stable complex between the biomolecule and the matrix and is suited for mass production. Due to the formation of chemical bonds to functional groups, loss of activity can occur. Examples of chemistries used are DCC coupling, PDC coupling, and EDC/NHS coupling, all of which take advantage of the reactive amines on the biomolecule's surface. Because immobilization restricts the biomolecule, care must be given to ensure that functionality is not entirely lost. Variables to consider are pH, temperature, solvent choice, ionic strength, and the orientation of active sites due to conjugation. For enzymes, the conjugation will lower the kinetic rate due to a change in the 3-dimensional structure, so care must be taken to ensure functionality is not lost. Bio-immobilization is used in technologies such as diagnostic bioassays, biosensors, ELISA, and bioseparations. Interleukin-6 (IL-6) can also be bioimmobilized on biosensors. The ability to observe changes in IL-6 levels is important in diagnosing an illness. A cancer patient will have elevated IL-6 levels, and monitoring those levels allows the physician to follow the disease's progress. Direct immobilization of IL-6 on the surface of a biosensor offers a fast alternative to ELISA. Polymerase chain reaction The polymerase chain reaction (PCR) is a scientific technique that is used to replicate a piece of a DNA molecule by several orders of magnitude. PCR implements cycles of repeated heating and cooling, known as thermal cycling, along with the addition of DNA primers and DNA polymerases, to selectively replicate the DNA fragment of interest. The technique was developed by Kary Mullis in 1983 while he was working for the Cetus Corporation. Mullis would go on to win the Nobel Prize in Chemistry in 1993 as a result of the impact that PCR had in many areas such as DNA cloning, DNA sequencing, and gene analysis. Biomolecular engineering techniques involved in PCR A number of biomolecular engineering strategies have played a very important role in the development and practice of PCR. For instance, a crucial step in ensuring the accurate replication of the desired DNA fragment is the creation of the correct DNA primer. The most common method of primer synthesis is the phosphoramidite method. This method includes the biomolecular engineering of a number of molecules to attain the desired primer sequence. The most prominent biomolecular engineering technique seen in this primer design method is the initial bioimmobilization of a nucleotide to a solid support. This step is commonly done via the formation of a covalent bond between the 3'-hydroxy group of the first nucleotide of the primer and the solid support material. Furthermore, as the DNA primer is created, certain functional groups of the nucleotides to be added to the growing primer require blocking to prevent undesired side reactions. This blocking of functional groups, as well as the subsequent de-blocking of the groups, coupling of subsequent nucleotides, and eventual cleaving from the solid support, are all methods of manipulation of biomolecules that can be attributed to biomolecular engineering. The increase in interleukin levels is directly proportional to the increased death rate in breast cancer patients. PCR paired with Western blotting and ELISA helps define the relationship between cancer cells and IL-6.
Enzyme-linked immunosorbent assay (ELISA) Enzyme-linked immunosorbent assay is an assay that utilizes the principle of antibody-antigen recognition to test for the presence of certain substances. The three main types of ELISA tests, which are indirect ELISA, sandwich ELISA, and competitive ELISA, all rely on the fact that antibodies have an affinity for only one specific antigen. Furthermore, these antigens or antibodies can be attached to enzymes, which can react to create a colorimetric result indicating the presence of the antibody or antigen of interest. Enzyme-linked immunosorbent assays are used most commonly as diagnostic tests to detect HIV antibodies in blood samples to test for HIV, human chorionic gonadotropin molecules in urine to indicate pregnancy, and Mycobacterium tuberculosis antibodies in blood to test patients for tuberculosis. Furthermore, ELISA is also widely used as a toxicology screen to test people's serum for the presence of illegal drugs. Techniques involved in ELISA Although there are three different types of solid-state enzyme-linked immunosorbent assays, all three types begin with the bioimmobilization of either an antibody or antigen to a surface. This bioimmobilization is the first instance of biomolecular engineering that can be seen in ELISA implementation. This step can be performed in a number of ways, including covalent linkage to a surface, which may be coated with protein or another substance. The bioimmobilization can also be performed via hydrophobic interactions between the molecule and the surface. Because there are many different types of ELISAs used for many different purposes, the biomolecular engineering that this step requires varies depending on the specific purpose of the ELISA. Another biomolecular engineering technique that is used in ELISA development is the bioconjugation of an enzyme to either an antibody or antigen, depending on the type of ELISA. There is much to consider in this enzyme bioconjugation, such as avoiding interference with the active site of the enzyme as well as the antibody binding site in the case that the antibody is conjugated with the enzyme. This bioconjugation is commonly performed by creating crosslinks between the two molecules of interest and can require a wide variety of different reagents depending on the nature of the specific molecules. Interleukin-6 (IL-6) is a signaling protein known to be present during an immune response. The use of the sandwich-type ELISA quantifies the presence of this cytokine within spinal fluid or bone marrow samples. Applications and fields In industry Biomolecular engineering is an extensive discipline with applications in many different industries and fields. As such, it is difficult to pinpoint a general perspective on the biomolecular engineering profession. The biotechnology industry, however, provides an adequate representation. The biotechnology industry, or biotech industry, encompasses all firms that use biotechnology to produce goods or services or to perform biotechnology research and development. In this way, it encompasses many of the industrial applications of the biomolecular engineering discipline. By examination of the biotech industry, it can be gathered that the principal leader of the industry is the United States, followed by France and Spain. It is also true that the focus of the biotechnology industry and the application of biomolecular engineering is primarily clinical and medical.
People are willing to pay for good health, so most of the money directed towards the biotech industry stays in health-related ventures. Scale-up Scaling up a process involves using data from an experimental-scale operation (model or pilot plant) for the design of a large, scaled-up unit of commercial size. Scaling up is a crucial part of commercializing a process. For example, insulin produced by genetically modified Escherichia coli bacteria was initially produced at laboratory scale, but to be made commercially viable it had to be scaled up to an industrial level. To achieve this scale-up, a large amount of laboratory data had to be used to design commercial-sized units. For example, one of the steps in insulin production involves the crystallization of high-purity insulin glargine. To carry out this process on a large scale, the power-to-volume (P/V) ratio of the lab-scale and large-scale crystallizers is kept the same in order to achieve homogeneous mixing. We also assume that the lab-scale crystallizer is geometrically similar to the large-scale crystallizer. Under these assumptions (turbulent mixing at constant power number), the power draw scales as P ∝ Ni³di⁵ and the volume as V ∝ di³, so P/V ∝ Ni³di², where di is the crystallizer impeller diameter and Ni is the impeller rotation rate. Related industries Bioengineering A broad term encompassing all engineering applied to the life sciences. This field of study utilizes the principles of biology along with engineering principles to create marketable products. Some bioengineering applications include: Biomimetics - The study and development of synthetic systems that mimic the form and function of natural biologically produced substances and processes. Bioprocess engineering - The study and development of process equipment and optimization that aids in the production of many products such as food and pharmaceuticals. Industrial microbiology - The implementation of microorganisms in the production of industrial products such as food and antibiotics. Another common application of industrial microbiology is the treatment of wastewater in chemical plants through the use of certain microorganisms. Biochemistry Biochemistry is the study of chemical processes in living organisms and, more broadly, in living matter. Biochemical processes govern all living organisms and living processes, and the field of biochemistry seeks to understand and manipulate these processes. Biochemical engineering Biocatalysis – Chemical transformations using enzymes. Bioseparations – Separation of biologically active molecules. Thermodynamics and Kinetics (chemistry) – Analysis of reactions involving cell growth and biochemicals. Bioreactor design and analysis – Design of reactors for performing biochemical transformations. Biotechnology Biomaterials – Design, synthesis and production of new materials to support cells and tissues. Genetic engineering – Purposeful manipulation of the genomes of organisms to produce new phenotypic traits. Bioelectronics, Biosensor and Biochip – Engineered devices and systems to measure, monitor and control biological processes. Bioprocess engineering – Design and maintenance of cell-based and enzyme-based processes for the production of fine chemicals and pharmaceuticals. Bioelectrical engineering Bioelectrical engineering involves the electrical fields generated by living cells or organisms. Examples include the electric potential developed between muscles or nerves of the body. This discipline requires knowledge in the fields of electricity and biology to understand and utilize these concepts to improve current bioprocesses or technology.
Bioelectrochemistry - Chemistry concerned with electron/proton transport throughout the cell Bioelectronics - Field of research coupling biology and electronics Biomedical engineering Biomedical engineering is a sub category of bioengineering that uses many of the same principles but focuses more on the medical applications of the various engineering developments. Some applications of biomedical engineering include: Biomaterials - Design of new materials for implantation in the human body and analysis of their effect on the body. Cellular engineering – Design of new cells using recombinant DNA and development of procedures to allow normal cells to adhere to artificial implanted biomaterials Tissue engineering – Design of new tissues from the basic biological building blocks to form new tissues Artificial organs – Application of tissue engineering to whole organs Medical imaging – Imaging of tissues using CAT scan, MRI, ultrasound, x-ray or other technologies Medical Optics and Lasers – Application of lasers to medical diagnosis and treatment Rehabilitation engineering – Design of devices and systems used to aid disabled people Man-machine interfacing - Control of surgical robots and remote diagnostic and therapeutic systems using eye tracking, voice recognition and muscle and brain wave controls Human factors and ergonomics – Design of systems to improve human performance in a wide range of applications Chemical engineering Chemical engineering is the processing of raw materials into chemical products. It involves preparation of raw materials to produce reactants, the chemical reaction of these reactants under controlled conditions, the separation of products, the recycle of byproducts, and the disposal of wastes. Each step involves certain basic building blocks called "unit operations," such as extraction, filtration, and distillation. These unit operations are found in all chemical processes. Biomolecular engineering is a subset of Chemical Engineering that applies these same principles to the processing of chemical substances made by living organisms. Education and programs Newly developed and offered undergraduate programs across the United States, often coupled to the chemical engineering program, allow students to achieve a B.S. degree. According to ABET (Accreditation Board for Engineering and Technology), biomolecular engineering curricula "must provide thorough grounding in the basic sciences including chemistry, physics, and biology, with some content at an advanced level… [and] engineering application of these basic sciences to design, analysis, and control, of chemical, physical, and/or biological processes." Common curricula consist of major engineering courses including transport, thermodynamics, separations, and kinetics, with additions of life sciences courses including biology and biochemistry, and including specialized biomolecular courses focusing on cell biology, nano- and biotechnology, biopolymers, etc. See also Biomimetics Biopharmaceuticals Bioprocess engineering List of biomolecules Molecular engineering References Further reading Biomolecular engineering at interfaces (article) Recent Progress in Biomolecular Engineering Biomolecular sensors (alk. paper) External links AIChE International Conference on Biomolecular Engineering Biological processes Biotechnology
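As a worked illustration of the constant power-per-volume scale-up rule quoted in the Scale-up section above, the sketch below computes the impeller speed a geometrically similar large crystallizer would need in order to match the lab unit's P/V. The numbers, variable names and the assumption of turbulent mixing at constant power number are illustrative only and are not taken from any specific insulin process:

```python
def scaled_impeller_speed(n_lab_rpm: float, d_lab: float, d_large: float) -> float:
    """Impeller speed for the large unit at equal P/V.

    Assumes geometric similarity and turbulent mixing with a constant
    power number, so P/V scales as N**3 * D**2.  Setting the two P/V
    values equal gives N_large = N_lab * (D_lab / D_large) ** (2/3).
    """
    return n_lab_rpm * (d_lab / d_large) ** (2.0 / 3.0)

# Hypothetical example: a 0.1 m lab impeller at 300 rpm scaled to a 1.0 m impeller.
print(round(scaled_impeller_speed(300.0, 0.1, 1.0), 1))  # ~64.6 rpm
```

The large impeller turns more slowly, roughly as the inverse two-thirds power of its diameter, which is why production-scale vessels rotate far more slowly than bench-scale equipment at the same specific power input.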
Biomolecular engineering
Biology
6,436
43,758,477
https://en.wikipedia.org/wiki/Pristionchus%20pacificus
Pristionchus pacificus is a species of free-living nematodes (roundworms) in the family Diplogastridae. The species has been established as a satellite model organism to Caenorhabditis elegans, with which it shared a common ancestor 200–300 million years ago. The genome of P. pacificus has been fully sequenced, which in combination with other tools for genetic analysis make this species a tractable model in the laboratory, especially for studies of developmental biology. Mouth dimorphism Like other species of Pristionchus and many other free-living nematodes, P. pacificus exhibits a polyphenism in its mouthparts that allows individual nematodes to specialize on different food sources, which has made the species a case study in phenotypic plasticity. The polyphenism has two forms (morphs). The most common type, at least in wild-type lab strains, is the "eurystomatous" morph, which can feed on both bacteria and other nematode species. The "stenostomatous" morph, on the other hand, is specialised for feeding on bacteria exclusively. Differentiation into one or the other morph depends on a combination of environmental conditions and stochasticity. The main morphological differences can be seen in the mouthparts. The eurystomatous morph has a secondary tooth and a wider buccal cavity. The secondary tooth allows the eurystomatous morph to feed on other nematode worms. The two feeding morphs, which allow the nematodes to respond quickly to changing environments, are specified by a hormonal and genetic cascade during larval development. Self-recognition As a predatory species that feeds on related species, it is likely that there is a selective pressure for self-recognition, i.e. recognition of conspecifics. P. pacificus does not feed on conspecifics and therefore must be capable of distinguishing them from other nematode species. Self-recognition is not cilia-dependent, unlike prey recognition. Genomics The Pristionchus pacificus genome was sequenced in 2005 and 2006. The analysis of P. pacificus has provided ecological information about this organism. It was determined that the genome of P. pacificus is larger than that of the widely studied nematode C. elegans, and was predicted that the genome of P. pacificus contains more than 26,000 protein-coding genes. Ecology It has been indicated that Pristionchus nematodes live in a necromenic association with scarab beetles. "After the beetle dies, the nematode continues to develop and feed on microbes growing inside the dead beetle. The collection of bacteria, fungi and the nematodes work hand in hand to decompose the beetle carcass". Thus, Pristionchus is an omnivore that can utilize bacteria, protozoa and fungi as food sources, all of which grow on the carcasses of scarab beetles. References External links Rhabditida Nematodes described in 1996 Animal models
Pristionchus pacificus
Biology
652
1,709,152
https://en.wikipedia.org/wiki/Palmette
The palmette is a motif in decorative art which, in its most characteristic expression, resembles the fan-shaped leaves of a palm tree. It has a far-reaching history, originating in ancient Egypt with a subsequent development through the art of most of Eurasia, often in forms that bear relatively little resemblance to the original. In ancient Greek and Roman uses it is also known as the anthemion (from the Greek ανθέμιον, a flower). It is found in most artistic media, but especially as an architectural ornament, whether carved or painted, and painted on ceramics. It is very often a component of the design of a frieze or border. The complex evolution of the palmette was first traced by Alois Riegl in his Stilfragen of 1893. The half-palmette, bisected vertically, is also a very common motif, found in many mutated and vestigial forms, and especially important in the development of plant-based scroll ornament. Description The essence of the palmette is a symmetrical group of spreading "fronds" that spread out from a single base, normally widening as they go out, before ending at a rounded or fairly blunt pointed tip. There may be a central frond that is larger than the rest. The number of fronds is variable, but typically between five and about fifteen. In the repeated border design commonly referred to as anthemion the palm fronds more closely resemble petals of the honeysuckle flower, as if designed to attract fertilizing insects. Some compare the shape to an open hamsa hand – explaining the commonality and derivation of the 'palm' of the hand. In some forms of the motif the volutes or scrolls resemble a pair of eyes, like those on the harmika of the Tibetan or Nepalese stupa and the eyes and sun-disk at the crown of Egyptian stelae. In some variants the features of a more fully developed face become discernible in the palmette itself, while in certain architectural uses, usually at the head of pilasters or herms, the fan of palm-fronds transforms into a male or female face and the volutes sometimes appear as breasts. Common to all these forms is the pair of volutes at the base of the fan – constituting the defining characteristics of the palmette. Evolution It is thought that the palmette originated in ancient Egypt 2,500 years BC, and has influenced Greek art. Egyptian palmettes (Greek anthemia) were originally based on features of various flowers, including the papyrus and the lotus or lily representing lower and upper Egypt and their fertile union, before it became associated with the palm tree. From earliest times there was a strong association with the sun and it is probably an early form of the halo. Among the oldest forms of the palmette in ancient Egypt was a 'rosette' or daisy-like lotus flower emerging from a 'V' of foliage or petals resembling the akhet hieroglyph depicting the setting or rising sun at the point where it touches the two mountains of the horizon – 'dying', being 'reborn' and giving life to the earth. A second form, apparently evolved from this, is a more fully developed palmette similar to the forms found in Ancient Greece. Third is a version consisting of a clump of lotus or papyrus blooms on tall stems, with a drooping bud or flower on either side, arising from a (primal) swamp. The lotus and papyrus clump occur in association with Hapy, the god of the crucial life-giving annual Nile inundation, who binds their stems together around an offering table in the sema-tawi motif – itself echoing the shapes of the 'akhet' of the horizon. 
This unification scene appeared on the base of the throne of several kings, who were thought of as preserving the union of the two lands of (upper and lower, but also physical and spiritual) Egypt and thereby mastering the forces of renewal. These 'binding' scenes, and the heliotropic swamp plants appearing in them, evoked the necessity of discerning and revealing the underlying harmony, the origin of all manifest forms, that re-connects the dispersed and separate-seeming fragments of everyday experience. The further implication is that it is from this apparently occult and magical, undivided source that fertility and new life spring. Another variant of this motif is a single lotus bloom between two upright buds, a favourite fragrant offering. The god of fragrance, Nefertem, is represented by such a lotus, or is shown bearing a lotus as his crown. The lotus in Nefertem's head dress typically incorporates twin 'menats' or necklace counterpoises (commonly said to represent fertility) hanging down from the base of the flower on either side of the stem, recalling the symmetrically drooping pair of stems in the lotus and papyrus clumps mentioned above. When depicted on Egyptian tomb walls and in formalized garden scenes, date palms are invariably shown in a similar stylistic convention with a cluster of dates hanging down on either side below the crown in this same position. The link between these hanging clusters and the volutes of the palmette is visually clear, but remains inexplicit. Rising and setting sun and opening and closing lotus are linked by the Osiris legend to day and night, life and death and the nightly ordeal of the setting sun to be swallowed by night-sky goddess Nut, to pass through the Duat ("underworld") and be born anew each morning. The plants depicted with this solar fan of fronds or petals and 'supported' by pairs of pendant blooms, buds or fruit clusters all seem especially to emulate and share in the sun's sacrificial cycle of death and rebirth and to point to the lessons it holds for mankind. It seems likely that the underlying model for all these fertile shapes, echoed by the curling cows-horn wig and sistrum-volutes of maternity-goddess Hathor, was the womb, with the twin egg-clusters of its ovaries. When the sun is reborn in the morning it is said to be born from the womb of Nut. The stylized palmette-forms of the lotus and papyrus showing the solar rosette or daisy-wheel emerging from the volutes of the calyx are similar magical enactments of the 'akhet' – this sacred moment of enhanced creation, the act of transcending or surpassing one's mortal form and 'going forth by day' as an akh or higher, winged, shining, all-encompassing and all-seeing form of life. Most early Egyptian forms of the motif appear later in Crete, Mesopotamia, Assyria and Ancient Persia, including the daisy-wheel-style lotus and bud border. In the form of the palmette that appears most frequently on Greek pottery, often interspersed with scenes of heroic deeds, the same motif is bound within a leaf-shaped or lotus-bud shaped outer line. The outer line can be seen to have evolved from an alternating frieze of stylized lotus and palmette. This anticipates the form it often took – from Renaissance sculpture through to Baroque fountains – of the inside of a half scallop shell, in which the palm fronds have become the fan of the shell and the scrolls remain at the convergence of the fan. 
Here the shape was associated with Venus or Neptune and was typically flanked by a pair of dolphins or became a vehicle drawn by sea-horses. Later, this circular or oval outer line became a motif in itself, forming an open C-shape with the two in-growing scrolls at its tips. Much Baroque and Rococo furniture, stucco ornament or wrought-iron work of gates and balconies is made up of ever-varying combinations these C-scrolls, either on their own, back to back, or in support of full palmettes. Classical architecture As an ornamental motif found in classical architecture, the palmette and anthemion take many and varied forms. Typically, the upper part of the motif consists of five or more leaves or petals fanning rhythmically upwards from a single triangular or lozenge-shaped source at the base. In some instances fruits resembling palm fruits hang down on either side above the base and below the lowest leaves. The lower part consists of a symmetrical pair of elegant 'S' scrolls or volutes curling out sideways and downwards from the base of the leaves. The upper part recalls the thrusting growth of leaves and flowers, while the volutes of the lower part seem to suggest both contributing fertile energies and resulting fruits. It is often present on the necking of the capital of Ionic order columns; however in column capitals of the Corinthian order it takes the shape of a 'fleuron' or flower resting against the abacus (top-most slab) of the capital and springing out from a pair of volutes which, in some versions, give rise to the elaborate volutes and acanthus ornament of the capital. Botanical combinations According to Boardman, although lotus friezes or palmette friezes were known in Mesopotamia centuries before, the unnatural combination of various botanical elements which have no relationship in the wild, such as the palmette, the lotus and sometimes rosette flowers, is a purely Greek innovation, which was then adopted on a very broad geographical scale throughout the Hellenistic world. Hellenistic "Flame palmettes" From the 5th century, palmettes tended to have sharply splaying leaves. From the 4th century however, the end of the leaves tend to turn in, forming what is called the "flame palmette" design. This is the design that was adopted in Hellenistic architecture and became very popular on a wide geographical scale. This is the design that was adopted by India in the 3rd century BC for some of its sculptural friezes, such as on the abaci of the Pillars of Ashoka, or the central design of the Pataliputra capital, probably through the Seleucid Empire or Hellenistic cities such as Ai-Khanoum. Usage In classical architecture the motif had specific uses, including: the fronts of ante-fixae, acroteria, the upper portion of the stele or vertical tombstones, the necking of the Ionic columns of the Erechtheum and its continuation as a decorative frieze on the walls of the same, and the cymatium of a cornice. Variants and related motifs The palmette is related to a range of motifs in differing cultures and periods. In ancient Egypt palmette motifs existed both as a form of flower and as a stylized tree, often referred to as a Tree of life. Other examples from ancient Egypt are the alternating lotus flower and bud border designs, the winged disk of Horus with its pair of Uraeus serpents, the Eye of Horus and curve-topped commemorative stele. 
In later Assyrian versions of the Tree of Life, the feathered falcon wings of the Egyptian winged disk have become associated with the fronds of the palm tree. Similar lotus flower and bud borders, closely associated with palmettes and rosettes, also appeared in Mesopotamia. There appears to be an equivalence between the horns of horned creatures, the wings of winged beings including angels, griffins and sphinxes and both the fan and the volutes of the palmette; there is also an underlying 'V' shape in each of these forms that parallels the association of the palm itself with victory, energy and optimism. An image of Nike, winged goddess of victory, from an Attic vase of the 6th century BC (see gallery), shows how the sacrificial offering alluded to by the voluted altar and flame, the wings of the goddess and the victory being celebrated, all resonate with the same multiple underlying associations carried within the component forms of the palmette motif. Similar forms are found in the hovering winged disc and sacred trees of Mesopotamia, the caduceus wand of Hermes, the ubiquitous scrolled scallop shells in the canopy of the Renaissance sculptural niche, originating in Greek and Roman sarcophagi, echoed above theatrical proscenium arches and on the doors, windows, wrought iron gates and balconies of palaces and grand houses; the shell-like fanlight over the door in Georgian and similar urban architecture, the gul and boteh motifs of Central Asian carpets and textiles, the trident of Neptune/Poseidon, both the trident and lingam of Shiva, the 'bai sema' lotus-petal-shaped boundary markers of the Thai inner-temple, Vishnu's mount, Garuda, the vajra thunderbolt, diamond mace or enlightenment jewel-in-the-lotus of Tibet and South-East Asia, the symmetrically scrolled cloud and bat motifs and the similarly scrolled ruyi or ju-i scepter and lingzhi or fungus of longevity of the Chinese tradition. Both as a form of the lotus rising from the swamps to touch the sun and as a (palm) tree reaching from earth to heaven, the palmette carries the characteristics of the axis mundi or world tree. The fleur-de-lis, which became a potent and enigmatic emblem of the divine right of kings, said to have been bestowed on early French kings by an angel, evolved in Egypt and Mesopotamia as a variant of the palmette. Similarly, from the early 13th century to 1806 the divine right of the Holy Roman Emperors was conferred by investiture in the Imperial Regalia, which included the coronation mantle displaying the twin lions (recalling the twin lions of Aker above) guarding the palm in the form of a tree of life, with its two pendant clusters of fruit. Even everyday garden gates throughout Western suburbia are topped with almost identical pairs of scrolls seemingly derived from the motifs associated with the akhet and the palmette, including the related winged sun and sun disk flanked with a pair of eyes. Churchyard gates, tombs and gravestones bear the motif over and again in different forms. The anthemion is also the mint mark of the Mint of Greece, and it shows in all Greek euro coins destined for circulation, as well as in all Greek collectors' coins. Gallery See also Acroterion Blue Egyptian Water Lily Tomb of the Palmettes Indo-Corinthian capital Pataliputra capital Notes References Jessica Rawson, Chinese Ornament: The Lotus and the Dragon; , British Museum Pubns Ltd, 1984 Alois Riegl, Stilfragen. Grundlegungen zu einer Geschichte der Ornamentik. Berlin 1893 Helene J. 
Kantor, Plant Ornament in the Ancient Near East, Revised: 11 August 1999, Copyright 1999 Oriental Institute, University of Chicago Idris Parry, Speak Silence, , Carcanet Press Ltd., 1988 Gombrich, Symbolic Images: Studies in the Art of the Renaissance, London, Phaidon, 1972 Ernst H. Gombrich, The Sense of Order, A Study in the Psychology of Decorative Art, Phaidon, 1985 External links Ancient Egypt, the tree of life Plant Ornament : Its Origin and Development in the Ancient Near East Palmettes in Fine Weavings Visual motifs Ornaments Ornaments (architecture)
Palmette
Mathematics
3,122
515,758
https://en.wikipedia.org/wiki/Fungicide
Fungicides are pesticides used to kill parasitic fungi or their spores. Fungi can cause serious damage in agriculture, resulting in losses of yield and quality. Fungicides are used both in agriculture and to fight fungal infections in animals. Fungicides are also used to control oomycetes, which are not taxonomically/genetically fungi, although sharing similar methods of infecting plants. Fungicides can be contact, translaminar or systemic. Contact fungicides are not taken up into the plant tissue and protect only the plant where the spray is deposited. Translaminar fungicides redistribute the fungicide from the upper, sprayed leaf surface to the lower, unsprayed surface. Systemic fungicides are taken up and redistributed through the xylem vessels. Few fungicides move to all parts of a plant. Some are locally systemic, and some move upward. Most fungicides that can be bought retail are sold in liquid form, the active ingredient being present at 0.08% in weaker concentrates, and as high as 0.5% for less potent fungicides. Fungicides in powdered form are usually around 90% sulfur. Major fungi in agriculture Some major fungal and fungus-like threats to agriculture (and the associated diseases) are ascomycetes (powdery mildews and many leaf-spot diseases), basidiomycetes (rusts and smuts), deuteromycetes (an artificial group of asexually reproducing fungi), and oomycetes (potato late blight and the downy mildews). Types of fungicides Like other pesticides, fungicides are numerous and diverse. This complexity has led to diverse schemes for classifying fungicides. Classifications are based on composition (inorganic, such as elemental sulfur and copper salts, versus organic), on chemical structure (dithiocarbamates versus phthalimides), and, most successfully, on mechanism of action (MOA). These respective classifications reflect the evolution of the underlying science. Traditional Traditional fungicides are simple inorganic compounds like sulfur and copper salts. While cheap, they must be applied repeatedly and are relatively ineffective. Other active ingredients in fungicides include neem oil, rosemary oil, jojoba oil, the bacterium Bacillus subtilis, and the beneficial fungus Ulocladium oudemansii. Nonspecific In the 1930s, dithiocarbamate-based fungicides, the first organic compounds used for this purpose, became available. These include ferbam, ziram, zineb, maneb, and mancozeb. These compounds are non-specific and are thought to inhibit cysteine-based protease enzymes. Similarly nonspecific are the N-substituted phthalimides. Members include captafol, captan, and folpet. Chlorothalonil is also non-specific. Specific Specific fungicides target a particular biological process in the fungus. Nucleic acid metabolism bupirimate metalaxyl Cytoskeleton and motor proteins carbendazim pencycuron Respiration Some fungicides target succinate dehydrogenase, a metabolically central enzyme. Fungi of the class Basidiomycetes were the initial focus of these fungicides. These fungi attack cereals. 
azoxystrobin binapacryl boscalid carboxin cyazofamid pydiflumetofen Amino acid and protein synthesis blasticidin-S kasugamycin pyrimethanil Signal transduction fludioxonil procymidone Lipid synthesis / membrane integrity propamocarb pyrazophos tecnazene Melanin synthesis in cell wall tricyclazole Sterol biosynthesis in membranes fenpropimorph hexaconazole imazalil myclobutanil propiconazole Cell wall biosynthesis dimethomorph polyoxins Host plant defence induction acibenzolar fosetyl-Al phosphorous acid Mycoviruses Some of the most common fungal crop pathogens are known to suffer from mycoviruses, and it is likely that they are as common as for plant and animal viruses, although not as well studied. Given the obligately parasitic nature of mycoviruses, it is likely that all of these are detrimental to their hosts, and thus are potential biocontrols/biofungicides. Resistance Doses that provide the most control of the disease also provide the largest selection pressure to acquire resistance. In some cases, the pathogen evolves resistance to multiple fungicides, a phenomenon known as cross resistance. These additional fungicides typically belong to the same chemical family, act in the same way, or have a similar mechanism for detoxification. Sometimes negative cross-resistance occurs, where resistance to one chemical class of fungicides increases sensitivity to a different chemical class of fungicides. This has been seen with carbendazim and diethofencarb. Also possible is resistance to two chemically different fungicides by separate mutation events. For example, Botrytis cinerea is resistant to both azoles and dicarboximide fungicides. A common mechanism for acquiring resistance is alteration of the target enzyme. For example, Black Sigatoka, an economically important pathogen of banana, is resistant to the QoI fungicides, due to a single nucleotide change resulting in the replacement of one amino acid (glycine) by another (alanine) in the target protein of the QoI fungicides, cytochrome b. It is presumed that this disrupts the binding of the fungicide to the protein, rendering the fungicide ineffective. Upregulation of target genes can also render the fungicide ineffective. This is seen in DMI-resistant strains of Venturia inaequalis. Resistance to fungicides can also be developed by efficient efflux of the fungicide out of the cell. Septoria tritici has developed multiple drug resistance using this mechanism. The pathogen had five ABC-type transporters with overlapping substrate specificities that together work to pump toxic chemicals out of the cell. In addition to the mechanisms outlined above, fungi may also develop metabolic pathways that circumvent the target protein, or acquire enzymes that enable the metabolism of the fungicide to a harmless substance. Fungicides that are at risk of losing their potency due to resistance include Strobilurins such as azoxystrobin. Cross-resistance can occur because the active ingredients share a common mode of action. FRAC is organized by CropLife International. Safety Fungicides pose risks for humans. Fungicide residues have been found on food for human consumption, mostly from post-harvest treatments. Some fungicides are dangerous to human health, such as vinclozolin, which has now been removed from use. Ziram is also a fungicide that is toxic to humans with long-term exposure, and fatal if ingested. A number of fungicides are also used in human health care. 
See also Antifungal drug Index of pesticide articles PHI-base (Pathogen-Host-Interaction database) Phytopathology Plant disease forecasting Further reading References External links Fungicide Resistance Action Committee Fungicide Resistance Action Group, United Kingdom General Pesticide Information - National Pesticide Information Center, Oregon State University, United States Mycology Biocides
Fungicide
Biology,Environmental_science
1,540
41,557,441
https://en.wikipedia.org/wiki/C18H29NO4
{{DISPLAYTITLE:C18H29NO4}} The molecular formula C18H29NO4 (molar mass: 323.43 g/mol, exact mass: 323.2097 u) may refer to: Bufetolol Cicloprolol
C18H29NO4
Chemistry
61
890,862
https://en.wikipedia.org/wiki/L-attributed%20grammar
L-attributed grammars are a special type of attribute grammars. They allow the attributes to be evaluated in one depth-first left-to-right traversal of the abstract syntax tree. As a result, attribute evaluation in L-attributed grammars can be incorporated conveniently in top-down parsing. A syntax-directed definition is L-attributed if each inherited attribute of a symbol X_j on the right side of a production A → X_1 X_2 … X_n depends only on the attributes (either inherited or synthesized) associated with the occurrences of the symbols X_1, …, X_(j−1) to the left of X_j, and on the inherited attributes of A (but not its synthesized attributes). Every S-attributed syntax-directed definition is also L-attributed. Implementing L-attributed definitions in bottom-up parsers requires rewriting the L-attributed definitions into translation schemes. Many programming languages are L-attributed. Special types of compilers, the narrow compilers, are based on some form of L-attributed grammar; these grammars are a strict superset of S-attributed grammars and are used for code synthesis. References Formal languages Compiler construction
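To illustrate the single left-to-right pass that the L-attributed property permits, here is a minimal sketch in Python; the declaration grammar, the attribute names and the helper functions are invented for illustration and are not taken from the article.

```python
# A hedged, minimal sketch of one-pass L-attributed evaluation for the
# classic declaration grammar (names invented for illustration):
#   D -> T L              L.in  := T.type     (inherited attribute of L)
#   T -> 'int' | 'float'  T.type := lexeme    (synthesized attribute of T)
#   L -> id (',' id)*     every id receives L.in
# Each inherited attribute depends only on symbols to its left and on the
# parent's inherited attributes, so one depth-first left-to-right pass suffices.

symbol_table = {}

def eval_D(tokens):
    t_type, rest = eval_T(tokens)    # synthesized attribute T.type
    eval_L(rest, inherited=t_type)   # inherited attribute L.in = T.type

def eval_T(tokens):
    head, *rest = tokens
    if head not in ("int", "float"):
        raise ValueError("expected a type keyword")
    return head, rest                # T.type is synthesized from the leaf

def eval_L(tokens, inherited):
    for tok in tokens:
        if tok != ",":
            symbol_table[tok] = inherited   # semantic action uses only L.in

if __name__ == "__main__":
    eval_D(["float", "x", ",", "y"])
    print(symbol_table)              # {'x': 'float', 'y': 'float'}
```

Because L.in depends only on an already-computed synthesized attribute of a left sibling, no second traversal is needed; an S-attributed definition is the special case with no inherited attributes at all.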
L-attributed grammar
Mathematics
210
75,919,362
https://en.wikipedia.org/wiki/Jelleine
Jelleine is a family of peptides isolated from the royal jelly of Apis mellifera iberiensis, a subspecies of the honey bee. This family has the potential to be used in the development of new drugs. Discovery Jelleines were first isolated in 2004 by the research group of Professor Mario Sergio Palma at São Paulo State University, Brazil. The group collected royal jelly from honey bee larvae and purified it by reverse-phase high-performance liquid chromatography. This purified royal jelly showed antimicrobial activity against different bacteria. So far, four peptides have been found in this family, each with an amidated (carboxamide) C-terminus. Health benefits Fungal spores lead to respiratory disease in more than 10 million people. Compared to current antifungal agents, jelleine has the potential to be a less toxic and more effective agent. Jelleine-I is known to be active against Candida albicans, C. tropicalis, C. parapsilosis, and C. glabrata. Jelleine-I causes damage that promotes microbial lysis. In addition, jelleine has been shown to stimulate the formation of reactive oxygen species, which enhances its action against Candida. In one test, Kunming mice were infected with C. albicans and, beginning one hour after infection, were given different doses of jelleine-I over a period of 7 days. At the end of the experiment, jelleine-I had kept 60% of its group alive, while only 40% of a separate group given fluconazole survived, and the untreated control group had a 100% mortality rate. Jelleine also has antiparasitic activity against pathogens such as Leishmania, for which most drugs currently administered are toxic and prone to side effects. Jelleine-I has low anti-leishmanial activity, being able to stop promastigotes but having no effect on the amastigotes. Jelleine-I and its halogenated analogues show potential as an immunologic adjuvant in the treatment of colorectal cancer (CRC). These peptides inhibit Fusobacterium nucleatum, an anaerobic bacterium of the oral microbiota that is highly active in the altered microecology of the gut and is closely associated with the initiation and progression of CRC. References Peptides Bee products Antiparasitic agents Colorectal cancer
Jelleine
Chemistry,Biology
518
371,468
https://en.wikipedia.org/wiki/Semiring
In abstract algebra, a semiring is an algebraic structure. Semirings are a generalization of rings, dropping the requirement that each element must have an additive inverse. At the same time, semirings are a generalization of bounded distributive lattices. The smallest semiring that is not a ring is the two-element Boolean algebra, for instance with logical disjunction as addition. A motivating example that is neither a ring nor a lattice is the set of natural numbers (including zero) under ordinary addition and multiplication. Semirings are abundant because a suitable multiplication operation arises as the function composition of endomorphisms over any commutative monoid. Terminology Some authors define semirings without the requirement for there to be a or . This makes the analogy between ring and on the one hand and and on the other hand work more smoothly. These authors often use rig for the concept defined here. This originated as a joke, suggesting that rigs are rings without negative elements. (Akin to using rng to mean a ring without a multiplicative identity.) The term dioid (for "double monoid") has been used to mean semirings or other structures. It was used by Kuntzmann in 1972 to denote a semiring. (It is alternatively sometimes used for naturally ordered semirings but the term was also used for idempotent subgroups by Baccelli et al. in 1992.) Definition A semiring is a set equipped with two binary operations and called addition and multiplication, such that: is a commutative monoid with an identity element called : is a monoid with an identity element called : Further, the following axioms tie to both operations: Through multiplication, any element is left- and right-annihilated by the additive identity: Multiplication left- and right-distributes over addition: Notation The symbol is usually omitted from the notation; that is, is just written Similarly, an order of operations is conventional, in which is applied before . That is, denotes . For the purpose of disambiguation, one may write or to emphasize which structure the units at hand belong to. If is an element of a semiring and , then -times repeated multiplication of with itself is denoted , and one similarly writes for the -times repeated addition. Construction of new semirings The zero ring with underlying set is a semiring called the trivial semiring. This triviality can be characterized via and so when speaking of nontrivial semirings, is often silently assumed as if it were an additional axiom. Now given any semiring, there are several ways to define new ones. As noted, the natural numbers with its arithmetic structure form a semiring. Taking the zero and the image of the successor operation in a semiring , i.e., the set together with the inherited operations, is always a sub-semiring of . If is a commutative monoid, function composition provides the multiplication to form a semiring: The set of endomorphisms forms a semiring where addition is defined from pointwise addition in . The zero morphism and the identity are the respective neutral elements. If with a semiring, we obtain a semiring that can be associated with the square matrices with coefficients in , the matrix semiring using ordinary addition and multiplication rules of matrices. Given and a semiring, is always a semiring also. It is generally non-commutative even if was commutative. Dorroh extensions: If is a semiring, then with pointwise addition and multiplication given by defines another semiring with multiplicative unit . 
Very similarly, if is any sub-semiring of , one may also define a semiring on , just by replacing the repeated addition in the formula by multiplication. Indeed, these constructions even work under looser conditions, as the structure is not actually required to have a multiplicative unit. Zerosumfree semirings are in a sense furthest away from being rings. Given a semiring, one may adjoin a new zero to the underlying set and thus obtain such a zerosumfree semiring that also lacks zero divisors. In particular, now and the old semiring is actually not a sub-semiring. One may then go on and adjoin new elements "on top" one at a time, while always respecting the zero. These two strategies also work under looser conditions. Sometimes the notations resp. are used when performing these constructions. Adjoining a new zero to the trivial semiring, in this way, results in another semiring which may be expressed in terms of the logical connectives of disjunction and conjunction: . Consequently, this is the smallest semiring that is not a ring. Explicitly, it violates the ring axioms as for all , i.e. has no additive inverse. In the self-dual definition, the fault is with . (This is not to be conflated with the ring , whose addition functions as xor .) In the von Neumann model of the naturals, , and . The two-element semiring may be presented in terms of the set theoretic union and intersection as . Now this structure in fact still constitutes a semiring when is replaced by any inhabited set whatsoever. The ideals on a semiring , with their standard operations on subset, form a lattice-ordered, simple and zerosumfree semiring. The ideals of are in bijection with the ideals of . The collection of left ideals of (and likewise the right ideals) also have much of that algebraic structure, except that then does not function as a two-sided multiplicative identity. If is a semiring and is an inhabited set, denotes the free monoid and the formal polynomials over its words form another semiring. For small sets, the generating elements are conventionally used to denote the polynomial semiring. For example, in case of a singleton such that , one writes . Zerosumfree sub-semirings of can be used to determine sub-semirings of . Given a set , not necessarily just a singleton, adjoining a default element to the set underlying a semiring one may define the semiring of partial functions from to . Given a derivation on a semiring , another the operation "" fulfilling can be defined as part of a new multiplication on , resulting in another semiring. The above is by no means an exhaustive list of systematic constructions. Derivations Derivations on a semiring are the maps with and . For example, if is the unit matrix and , then the subset of given by the matrices with is a semiring with derivation . Properties A basic property of semirings is that is not a left or right zero divisor, and that but also squares to itself, i.e. these have . Some notable properties are inherited from the monoid structures: The monoid axioms demand unit existence, and so the set underlying a semiring cannot be empty. Also, the 2-ary predicate defined as , here defined for the addition operation, always constitutes the right canonical preorder relation. Reflexivity is witnessed by the identity. Further, is always valid, and so zero is the least element with respect to this preorder. Considering it for the commutative addition in particular, the distinction of "right" may be disregarded. 
In the non-negative integers , for example, this relation is anti-symmetric and strongly connected, and thus in fact a (non-strict) total order. Below, more conditional properties are discussed. Semifields Any field is also a semifield, which in turn is a semiring in which also multiplicative inverses exist. Rings Any field is also a ring, which in turn is a semiring in which also additive inverses exist. Note that a semiring omits such a requirement, i.e., it requires only a commutative monoid, not a commutative group. The extra requirement for a ring itself already implies the existence of a multiplicative zero. This contrast is also why for the theory of semirings, the multiplicative zero must be specified explicitly. Here , the additive inverse of , squares to . As additive differences always exist in a ring, is a trivial binary relation in a ring. Commutative semirings A semiring is called a commutative semiring if also the multiplication is commutative. Its axioms can be stated concisely: It consists of two commutative monoids and on one set such that and . The center of a semiring is a sub-semiring and being commutative is equivalent to being its own center. The commutative semiring of natural numbers is the initial object among its kind, meaning there is a unique structure preserving map of into any commutative semiring. The bounded distributive lattices are partially ordered, commutative semirings fulfilling certain algebraic equations relating to distributivity and idempotence. Thus so are their duals. Ordered semirings Notions or order can be defined using strict, non-strict or second-order formulations. Additional properties such as commutativity simplify the axioms. Given a strict total order (also sometimes called linear order, or pseudo-order in a constructive formulation), then by definition, the positive and negative elements fulfill resp. . By irreflexivity of a strict order, if is a left zero divisor, then is false. The non-negative elements are characterized by , which is then written . Generally, the strict total order can be negated to define an associated partial order. The asymmetry of the former manifests as . In fact in classical mathematics the latter is a (non-strict) total order and such that implies . Likewise, given any (non-strict) total order, its negation is irreflexive and transitive, and those two properties found together are sometimes called strict quasi-order. Classically this defines a strict total order – indeed strict total order and total order can there be defined in terms of one another. Recall that "" defined above is trivial in any ring. The existence of rings that admit a non-trivial non-strict order shows that these need not necessarily coincide with "". Additively idempotent semirings A semiring in which every element is an additive idempotent, that is, for all elements , is called an (additively) idempotent semiring. Establishing suffices. Be aware that sometimes this is just called idempotent semiring, regardless of rules for multiplication. In such a semiring, is equivalent to and always constitutes a partial order, here now denoted . In particular, here . So additively idempotent semirings are zerosumfree and, indeed, the only additively idempotent semiring that has all additive inverses is the trivial ring and so this property is specific to semiring theory. Addition and multiplication respect the ordering in the sense that implies , and furthermore implies as well as , for all and . 
If is additively idempotent, then so are the polynomials in . A semiring such that there is a lattice structure on its underlying set is lattice-ordered if the sum coincides with the meet, , and the product lies beneath the join . The lattice-ordered semiring of ideals on a semiring is not necessarily distributive with respect to the lattice structure. More strictly than just additive idempotence, a semiring is called simple iff for all . Then also and for all . Here then functions akin to an additively infinite element. If is an additively idempotent semiring, then with the inherited operations is its simple sub-semiring. An example of an additively idempotent semiring that is not simple is the tropical semiring on with the 2-ary maximum function, with respect to the standard order, as addition. Its simple sub-semiring is trivial. A c-semiring is an idempotent semiring and with addition defined over arbitrary sets. An additively idempotent semiring with idempotent multiplication, , is called additively and multiplicatively idempotent semiring, but sometimes also just idempotent semiring. The commutative, simple semirings with that property are exactly the bounded distributive lattices with unique minimal and maximal element (which then are the units). Heyting algebras are such semirings and the Boolean algebras are a special case. Further, given two bounded distributive lattices, there are constructions resulting in commutative additively-idempotent semirings, which are more complicated than just the direct sum of structures. Number lines In a model of the ring , one can define a non-trivial positivity predicate and a predicate as that constitutes a strict total order, which fulfills properties such as , or classically the law of trichotomy. With its standard addition and multiplication, this structure forms the strictly ordered field that is Dedekind-complete. By definition, all first-order properties proven in the theory of the reals are also provable in the decidable theory of the real closed field. For example, here is mutually exclusive with . But beyond just ordered fields, the four properties listed below are also still valid in many sub-semirings of , including the rationals, the integers, as well as the non-negative parts of each of these structures. In particular, the non-negative reals, the non-negative rationals and the non-negative integers are such a semirings. The first two properties are analogous to the property valid in the idempotent semirings: Translation and scaling respect these ordered rings, in the sense that addition and multiplication in this ring validate In particular, and so squaring of elements preserves positivity. Take note of two more properties that are always valid in a ring. Firstly, trivially for any . In particular, the positive additive difference existence can be expressed as Secondly, in the presence of a trichotomous order, the non-zero elements of the additive group are partitioned into positive and negative elements, with the inversion operation moving between them. With , all squares are proven non-negative. Consequently, non-trivial rings have a positive multiplicative unit, Having discussed a strict order, it follows that and , etc. Discretely ordered semirings There are a few conflicting notions of discreteness in order theory. Given some strict order on a semiring, one such notion is given by being positive and covering , i.e. there being no element between the units, . 
Now in the present context, an order shall be called discrete if this is fulfilled and, furthermore, all elements of the semiring are non-negative, so that the semiring starts out with the units. Denote by the theory of a commutative, discretely ordered semiring also validating the above four properties relating a strict order with the algebraic structure. All of its models have the model as its initial segment and Gödel incompleteness and Tarski undefinability already apply to . The non-negative elements of a commutative, discretely ordered ring always validate the axioms of . So a slightly more exotic model of the theory is given by the positive elements in the polynomial ring , with positivity predicate for defined in terms of the last non-zero coefficient, , and as above. While proves all -sentences that are true about , beyond this complexity one can find simple such statements that are independent of . For example, while -sentences true about are still true for the other model just defined, inspection of the polynomial demonstrates -independence of the -claim that all numbers are of the form or ("odd or even"). Showing that also can be discretely ordered demonstrates that the -claim for non-zero ("no rational squared equals ") is independent. Likewise, analysis for demonstrates independence of some statements about factorization true in . There are characterizations of primality that does not validate for the number . In the other direction, from any model of one may construct an ordered ring, which then has elements that are negative with respect to the order, that is still discrete the sense that covers . To this end one defines an equivalence class of pairs from the original semiring. Roughly, the ring corresponds to the differences of elements in the old structure, generalizing the way in which the initial ring can be defined from . This, in effect, adds all the inverses and then the preorder is again trivial in that . Beyond the size of the two-element algebra, no simple semiring starts out with the units. Being discretely ordered also stands in contrast to, e.g., the standard ordering on the semiring of non-negative rationals , which is dense between the units. For another example, can be ordered, but not discretely so. Natural numbers plus mathematical induction gives a theory equivalent to first-order Peano arithmetic . The theory is also famously not categorical, but is of course the intended model. proves that there are no zero divisors and it is zerosumfree and so no model of it is a ring. The standard axiomatization of is more concise and the theory of its order is commonly treated in terms of the non-strict "". However, just removing the potent induction principle from that axiomatization does not leave a workable algebraic theory. Indeed, even Robinson arithmetic , which removes induction but adds back the predecessor existence postulate, does not prove the monoid axiom . Complete semirings A complete semiring is a semiring for which the additive monoid is a complete monoid, meaning that it has an infinitary sum operation for any index set and that the following (infinitary) distributive laws must hold: Examples of a complete semiring are the power set of a monoid under union and the matrix semiring over a complete semiring. For commutative, additively idempotent and simple semirings, this property is related to residuated lattices. Continuous semirings A continuous semiring is similarly defined as one for which the addition monoid is a continuous monoid. 
That is, partially ordered with the least upper bound property, and for which addition and multiplication respect order and suprema. The semiring with usual addition, multiplication and order extended is a continuous semiring. Any continuous semiring is complete: this may be taken as part of the definition. Star semirings A star semiring (sometimes spelled starsemiring) is a semiring with an additional unary operator , satisfying A Kleene algebra is a star semiring with idempotent addition and some additional axioms. They are important in the theory of formal languages and regular expressions. Complete star semirings In a complete star semiring, the star operator behaves more like the usual Kleene star: for a complete semiring we use the infinitary sum operator to give the usual definition of the Kleene star: where Note that star semirings are not related to *-algebra, where the star operation should instead be thought of as complex conjugation. Conway semiring A Conway semiring is a star semiring satisfying the sum-star and product-star equations: Every complete star semiring is also a Conway semiring, but the converse does not hold. An example of Conway semiring that is not complete is the set of extended non-negative rational numbers with the usual addition and multiplication (this is a modification of the example with extended non-negative reals given in this section by eliminating irrational numbers). An iteration semiring is a Conway semiring satisfying the Conway group axioms, associated by John Conway to groups in star-semirings. Examples By definition, any ring and any semifield is also a semiring. The non-negative elements of a commutative, discretely ordered ring form a commutative, discretely (in the sense defined above) ordered semiring. This includes the non-negative integers . Also the non-negative rational numbers as well as the non-negative real numbers form commutative, ordered semirings. The latter is called . Neither are rings or distributive lattices. These examples also have multiplicative inverses. New semirings can conditionally be constructed from existing ones, as described. The extended natural numbers with addition and multiplication extended so that . The set of polynomials with natural number coefficients, denoted forms a commutative semiring. In fact, this is the free commutative semiring on a single generator Also polynomials with coefficients in other semirings may be defined, as discussed. The non-negative terminating fractions , in a positional number system to a given base , form a sub-semiring of the rationals. One has if divides . For , the set is the ring of all terminating fractions to base and is dense in . The log semiring on with addition given by with multiplication zero element and unit element Similarly, in the presence of an appropriate order with bottom element, Tropical semirings are variously defined. The semiring is a commutative semiring with serving as semiring addition (identity ) and ordinary addition (identity 0) serving as semiring multiplication. In an alternative formulation, the tropical semiring is and min replaces max as the addition operation. A related version has as the underlying set. They are an active area of research, linking algebraic varieties with piecewise linear structures. The Łukasiewicz semiring: the closed interval with addition of and given by taking the maximum of the arguments () and multiplication of and given by appears in multi-valued logic. 
The Viterbi semiring is also defined over the base set and has the maximum as its addition, but its multiplication is the usual multiplication of real numbers. It appears in probabilistic parsing. Note that . More regarding additively idempotent semirings, The set of all ideals of a given semiring form a semiring under addition and multiplication of ideals. Any bounded, distributive lattice is a commutative, semiring under join and meet. A Boolean algebra is a special case of these. A Boolean ring is also a semiring (indeed, a ring) but it is not idempotent under . A is a semiring isomorphic to a sub-semiring of a Boolean algebra. The commutative semiring formed by the two-element Boolean algebra and defined by . It is also called the . Now given two sets and binary relations between and correspond to matrices indexed by and with entries in the Boolean semiring, matrix addition corresponds to union of relations, and matrix multiplication corresponds to composition of relations. Any unital quantale is a semiring under join and multiplication. A normal skew lattice in a ring is a semiring for the operations multiplication and nabla, where the latter operation is defined by More using monoids, The construction of semirings from a commutative monoid has been described. As noted, give a semiring , the matrices form another semiring. For example, the matrices with non-negative entries, form a matrix semiring. Given an alphabet (finite set) Σ, the set of formal languages over (subsets of ) is a semiring with product induced by string concatenation and addition as the union of languages (that is, ordinary union as sets). The zero of this semiring is the empty set (empty language) and the semiring's unit is the language containing only the empty string. Generalizing the previous example (by viewing as the free monoid over ), take to be any monoid; the power set of all subsets of forms a semiring under set-theoretic union as addition and set-wise multiplication: Similarly, if is a monoid, then the set of finite multisets in forms a semiring. That is, an element is a function ; given an element of the function tells you how many times that element occurs in the multiset it represents. The additive unit is the constant zero function. The multiplicative unit is the function mapping to and all other elements of to The sum is given by and the product is given by Regarding sets and similar abstractions, Given a set the set of binary relations over is a semiring with addition the union (of relations as sets) and multiplication the composition of relations. The semiring's zero is the empty relation and its unit is the identity relation. These relations correspond to the matrix semiring (indeed, matrix semialgebra) of square matrices indexed by with entries in the Boolean semiring, and then addition and multiplication are the usual matrix operations, while zero and the unit are the usual zero matrix and identity matrix. The set of cardinal numbers smaller than any given infinite cardinal form a semiring under cardinal addition and multiplication. The class of of an inner model form a (class) semiring under (inner model) cardinal addition and multiplication. 
The family of (isomorphism equivalence classes of) combinatorial classes (sets of countably many objects with non-negative integer sizes such that there are finitely many objects of each size) with the empty class as the zero object, the class consisting only of the empty set as the unit, disjoint union of classes as addition, and Cartesian product of classes as multiplication. Isomorphism classes of objects in any distributive category, under coproduct and product operations, form a semiring known as a Burnside rig. A Burnside rig is a ring if and only if the category is trivial. Star semirings Several structures mentioned above can be equipped with a star operation. The aforementioned semiring of binary relations over some base set in which for all This star operation is actually the reflexive and transitive closure of (that is, the smallest reflexive and transitive binary relation over containing ). The semiring of formal languages is also a complete star semiring, with the star operation coinciding with the Kleene star (for sets/languages). The set of non-negative extended reals together with the usual addition and multiplication of reals is a complete star semiring with the star operation given by for (that is, the geometric series) and for The Boolean semiring with The semiring on with extended addition and multiplication, and for Applications The and tropical semirings on the reals are often used in performance evaluation on discrete event systems. The real numbers then are the "costs" or "arrival time"; the "max" operation corresponds to having to wait for all prerequisites of an events (thus taking the maximal time) while the "min" operation corresponds to being able to choose the best, less costly choice; and + corresponds to accumulation along the same path. The Floyd–Warshall algorithm for shortest paths can thus be reformulated as a computation over a algebra. Similarly, the Viterbi algorithm for finding the most probable state sequence corresponding to an observation sequence in a hidden Markov model can also be formulated as a computation over a algebra on probabilities. These dynamic programming algorithms rely on the distributive property of their associated semirings to compute quantities over a large (possibly exponential) number of terms more efficiently than enumerating each of them. Generalizations A generalization of semirings does not require the existence of a multiplicative identity, so that multiplication is a semigroup rather than a monoid. Such structures are called or . A further generalization are , which additionally do not require right-distributivity (or , which do not require left-distributivity). Yet a further generalization are : in addition to not requiring a neutral element for product, or right-distributivity (or left-distributivity), they do not require addition to be commutative. Just as cardinal numbers form a (class) semiring, so do ordinal numbers form a near-semiring, when the standard ordinal addition and multiplication are taken into account. However, the class of ordinals can be turned into a semiring by considering the so-called natural (or Hessenberg) operations instead. In category theory, a is a category with functorial operations analogous to those of a rig. That the cardinal numbers form a rig can be categorified to say that the category of sets (or more generally, any topos) is a 2-rig. See also Notes Citations Bibliography Golan, Jonathan S. (1999) Semirings and their applications. 
Updated and expanded version of The theory of semirings, with applications to mathematics and theoretical computer science (Longman Sci. Tech., Harlow, 1992, ). Kluwer Academic Publishers, Dordrecht. xii+381 pp. Further reading Algebraic structures Ring theory
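To make the semiring axioms and the min-plus ("tropical") application described above concrete, the following is a minimal, hedged sketch in Python; the Semiring record, the two instances and the shortest-path routine are illustrative constructions of my own, not a reference implementation.

```python
# Minimal sketch (illustrative, not from the article): a semiring as a small
# record of (zero, one, add, mul), two instances, and a generic matrix
# "product" that turns all-pairs shortest paths into repeated squaring over
# the min-plus (tropical) semiring.

from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass(frozen=True)
class Semiring:
    zero: Any                       # additive identity; annihilates under mul
    one: Any                        # multiplicative identity
    add: Callable[[Any, Any], Any]  # commutative and associative
    mul: Callable[[Any, Any], Any]  # associative, distributes over add

boolean = Semiring(False, True, lambda a, b: a or b, lambda a, b: a and b)
min_plus = Semiring(float("inf"), 0.0, min, lambda a, b: a + b)

def mat_mul(S: Semiring, A: List[list], B: List[list]) -> List[list]:
    # Ordinary matrix product with + and * replaced by the semiring operations.
    n = len(A)
    C = [[S.zero] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = S.zero
            for k in range(n):
                acc = S.add(acc, S.mul(A[i][k], B[k][j]))
            C[i][j] = acc
    return C

def shortest_paths(weights: List[list]) -> List[list]:
    # Repeated min-plus squaring; assuming zeros on the diagonal and no
    # negative cycles, entry (i, j) ends up holding the lightest path weight.
    n, D = len(weights), weights
    steps = 1
    while steps < n:
        D = mat_mul(min_plus, D, D)
        steps *= 2
    return D

if __name__ == "__main__":
    inf = float("inf")
    W = [[0, 3, inf],
         [inf, 0, 1],
         [7, inf, 0]]
    print(shortest_paths(W))   # [[0, 3, 4], [8, 0, 1], [7, 10, 0]]
```

The same mat_mul runs unchanged over the boolean instance, where repeated squaring of an adjacency matrix (with True on the diagonal) computes reachability rather than distances; this interchangeability is exactly what the distributivity axiom buys in the dynamic-programming applications mentioned above.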
Semiring
Mathematics
6,020
243,323
https://en.wikipedia.org/wiki/Trochlear%20nerve
The trochlear nerve (), (lit. pulley-like nerve) also known as the fourth cranial nerve, cranial nerve IV, or CN IV, is a cranial nerve that innervates a single muscle - the superior oblique muscle of the eye (which operates through the pulley-like trochlea). Unlike most other cranial nerves, the trochlear nerve is exclusively a motor nerve (somatic efferent nerve). The trochlear nerve is unique among the cranial nerves in several respects: It is the smallest nerve in terms of the number of axons it contains. It has the greatest intracranial length. It is the only cranial nerve that exits from the dorsal (rear) aspect of the brainstem. It innervates a muscle, the superior oblique muscle, on the opposite side (contralateral) from its nucleus. The trochlear nerve decussates within the brainstem before emerging on the contralateral side of the brainstem (at the level of the inferior colliculus). An injury to the trochlear nucleus in the brainstem will result in an contralateral superior oblique muscle palsy, whereas an injury to the trochlear nerve (after it has emerged from the brainstem) results in an ipsilateral superior oblique muscle palsy. The superior oblique muscle which the trochlear nerve innervates ends in a tendon that passes through a fibrous loop, the trochlea, located anteriorly on the medial aspect of the orbit. Trochlea means “pulley” in Latin; the fourth nerve is thus also named after this structure. The words trochlea and trochlear (, ) come from Ancient Greek trokhiléa, “pulley; block-and-tackle equipment”. Structure The trochlear nerve provides motor innervation to the superior oblique muscle of the eye, a skeletal muscle; the trochlear nerve thus carries axons of general somatic efferent type. Course Each trochlear nerve originates from a trochlear nucleus in the medial midbrain. From their respective nuclei, the two trochlear nerves then travel dorsal-ward through the substance of the midbrain surrounded by the periaqueductal gray, crossing over (decussating) within the midbrain before emerging from the dorsal midbrain just inferior to the inferior colliculus. Each trochlear nerve thus comes to course on the contralateral side, first passing laterally (to the side) and then anteriorly around the pons, then running forward toward the eye in the subarachnoid space. It passes between the posterior cerebral artery and the superior cerebellar artery. It then pierces the dura just under free margin of the tentorium cerebelli, close to the crossing of the attached margin of the tentorium and within millimeters of the posterior clinoid process. It runs on the outer wall of the cavernous sinus. Finally, it enters the orbit through the superior orbital fissure and to innervate the superior oblique muscle. Development The human trochlear nerve is derived from the basal plate of the embryonic midbrain. Clinical significance Vertical diplopia Injury to the trochlear nerve cause weakness of downward eye movement with consequent vertical diplopia (double vision). The affected eye drifts upward relative to the normal eye, due to the unopposed actions of the remaining extraocular muscles. The patient sees two visual fields (one from each eye), separated vertically. To compensate for this, patients learn to tilt the head forward (tuck the chin in) in order to bring the fields back together—to fuse the two images into a single visual field. This accounts for the “dejected” appearance of patients with “pathetic nerve” palsies. 
Torsional diplopia Trochlear nerve palsy also affects torsion (rotation of the eyeball in the plane of the face). Torsion is a normal response to tilting the head sideways. The eyes automatically rotate in an equal and opposite direction, so that the orientation of the environment remains unchanged—vertical things remain vertical. Weakness of intorsion results in torsional diplopia, in which two different visual fields, tilted with respect to each other, are seen at the same time. To compensate for this, patients with trochlear nerve palsies tilt their heads to the opposite side, in order to fuse the two images into a single visual field. The characteristic appearance of patients with fourth nerve palsies (head tilted to one side, chin tucked in) suggests the diagnosis, but other causes must be ruled out. For example, torticollis can produce a similar appearance. Causes The clinical syndromes can originate from both peripheral and central lesions. Peripheral lesion A peripheral lesion is damage to the bundle of nerves, in contrast to a central lesion, which is damage to the trochlear nucleus. Acute symptoms are probably a result of trauma or disease, while chronic symptoms probably are congenital. Acute palsy The most common cause of acute fourth nerve palsy is head trauma. Even relatively minor trauma can transiently stretch the fourth nerve (by transiently displacing the brainstem relative to the posterior clinoid process). Patients with minor damage to the fourth nerve will complain of “blurry” vision. Patients with more extensive damage will notice frank diplopia and rotational (torsional) disturbances of the visual fields. The usual clinical course is complete recovery within weeks to months. Isolated injury to the fourth nerve can be caused by any process that stretches or compresses the nerve. A generalized increase in intracranial pressure—hydrocephalus, pseudotumor cerebri, hemorrhage, edema—will affect the fourth nerve, but the abducens nerve (VI) is usually affected first (producing horizontal diplopia, not vertical diplopia). Infections (meningitis, herpes zoster), demyelination (multiple sclerosis), diabetic neuropathy and cavernous sinus disease can affect the fourth nerve, as can orbital tumors and Tolosa–Hunt syndrome. In general, these diseases affect other cranial nerves as well. Isolated damage to the fourth nerve is uncommon in these settings. Chronic palsy The most common cause of chronic fourth nerve palsy is a congenital defect, in which the development of the fourth nerve (or its nucleus) is abnormal or incomplete. Congenital defects may be noticed in childhood, but minor defects may not become evident until adult life, when compensatory mechanisms begin to fail. Congenital fourth nerve palsies are amenable to surgical treatment. Central lesion Central damage is damage to the trochlear nucleus. It affects the contralateral eye. The nuclei of other cranial nerves generally affect ipsilateral structures (for example, the optic nerves - cranial nerves II - innervate both eyes). The trochlear nucleus and its axons within the brainstem can be damaged by infarctions, hemorrhage, arteriovenous malformations, tumors and demyelination. Collateral damage to other structures will usually dominate the clinical picture. The fourth nerve is one of the final common pathways for cortical systems that control eye movement in general. Cortical control of eye movement (saccades, smooth pursuit, accommodation) involves conjugate gaze, not unilateral eye movement. 
Clinical assessment The trochlear nerve is tested by examining the action of its muscle, the superior oblique. When acting on its own this muscle depresses and abducts the eyeball. However, movements of the eye by the extraocular muscles are synergistic (working together). Therefore, the trochlear nerve is tested by asking the patient to look 'down and in' as the contribution of the superior oblique is greatest in this motion. Common activities requiring this type of convergent gaze are reading the newspaper and walking down stairs. Diplopia associated with these activities may be the initial symptom of a fourth nerve palsy. Alfred Bielschowsky's head tilt test is a test for palsy of the superior oblique muscle caused by damage to cranial nerve IV (trochlear nerve). Other animals Homologous trochlear nerves are found in all jawed vertebrates. The unique features of the trochlear nerve, including its dorsal exit from the brainstem and its contralateral innervation, are seen in the primitive brains of sharks. References Bibliography Blumenfeld H. Neuroanatomy Through Clinical Cases. Sinauer Associates, 2002 Brodal A. Neurological Anatomy in Relation to Clinical Medicine, 3rd ed. Oxford University Press, 1981 Brodal P. The Central Nervous System, 3rded. Oxford University Press, 2004 Butler AB, Hodos W. Comparative Vertebrate Neuroanatomy, 2nd ed. Wiley-Interscience, 2005 Carpenter MB. Core Text of Neuroanatomy, 4th ed. Williams & Wilkins, 1991 Kandel ER, Schwartz JH, Jessell TM. Principles of Neural Science, 4th ed. McGraw-Hill, 2000 Martin JH. Neuroanatomy Text and Atlas, 3rd ed. McGraw-Hill, 2003 Patten J. Neurological Differential Diagnosis, 2nd ed. Springer, 1996 Ropper, AH, Brown RH. Victor's Principles of Neurology, 8th ed. McGraw-Hill, 2005 Standring S (ed.) Gray's Anatomy, 39th edition. Elsevier Churchill Livingstone, 2005 Wilson-Pauwels L, Akesson EJ, Stewart PA. Cranial Nerves: Anatomy and Clinical Comments. Decker, 1998 Additional images External links - "Trochlear Nerve Palsy" () () Animations of extraocular cranial nerve and muscle function and damage (University of Liverpool) Trochlear nerve at Neurolex Cranial nerves Human head and neck Nervous system Neurology Nerves of the head and neck Ophthalmology
Trochlear nerve
Biology
2,101
11,738,269
https://en.wikipedia.org/wiki/Valsaria%20insitiva
Valsaria insitiva is a plant pathogen that causes perennial canker in apples and almonds. See also List of apple diseases List of almond diseases References External links Index Fungorum USDA ARS Fungal Database Enigmatic Dothideomycetes taxa Fungi described in 1863 Fungal tree pathogens and diseases Apple tree diseases Fruit tree diseases Fungus species
Valsaria insitiva
Biology
72
16,108,069
https://en.wikipedia.org/wiki/Phenyltriazines
Phenyltriazines are a class of molecules containing a phenyl group and a triazine group. These molecules are pharmacologically important. As an example, lamotrigine is a phenyltriazine derivative used as an anticonvulsant drug and has been shown to be useful for alleviating epilepsy and bipolar disorder. References Triazines
Phenyltriazines
Chemistry
85
36,952,773
https://en.wikipedia.org/wiki/Gerronema%20viridilucens
Gerronema viridilucens is a species of agaric fungus in the family Porotheleaceae. Found in South America, the mycelium and fruit bodies of the fungus are bioluminescent. See also List of bioluminescent fungi References External links Porotheleaceae Bioluminescent fungi Fungi described in 2005 Fungi of South America Fungus species
Gerronema viridilucens
Biology
79
57,650,474
https://en.wikipedia.org/wiki/Loop%20sectioning
In computer science and compiler optimization, loop sectioning, also known as loop strip-mining, is a special case of tiling, namely 1-dimensional tiling: a loop is transformed into a depth-2 loop nest, where the outer loop is called the tile/block loop and the innermost loop is called the element loop. Strip-mining was introduced for vector processors. It is a loop-transformation technique for enabling vectorization of loops and improving memory performance. The term strip-mine is borrowed from the strip mining of coal, in which an excavator uses a bucket (or bucket wheel) to "strip" away the material one layer at a time. Tiling
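For illustration, a minimal sketch of the transformation in Python (the array names, the element-wise operation, and the strip length are illustrative assumptions, not part of any particular compiler): the single element loop is rewritten as a depth-2 nest in which the outer tile/block loop walks the index space in fixed-size strips and the inner element loop works on one strip at a time. In a vectorizing compiler, the short inner loop is what gets mapped onto vector instructions; the strip length would typically be chosen to match the vector register length or a cache-friendly block size.

TILE = 256  # assumed strip length; in practice matched to vector width or cache size

def scale_naive(a, b, out):
    # original single loop over the whole index space
    for i in range(len(a)):
        out[i] = a[i] * b[i]

def scale_strip_mined(a, b, out):
    n = len(a)
    for start in range(0, n, TILE):      # tile/block loop over strips
        end = min(start + TILE, n)
        for i in range(start, end):      # element loop over one strip
            out[i] = a[i] * b[i]

Both functions compute the same result; the rewrite changes only the loop structure, exposing a short, fixed-length inner loop that can be vectorized or kept resident in fast memory.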
Loop sectioning
Mathematics
134
17,088,944
https://en.wikipedia.org/wiki/Synthetic%20biodegradable%20polymer
Many opportunities exist for the application of synthetic biodegradable polymers in the biomedical area, particularly in the fields of tissue engineering and controlled drug delivery. Degradation is important in biomedicine for many reasons. Degradation of the polymeric implant means surgical intervention may not be required in order to remove the implant at the end of its functional life, eliminating the need for a second surgery. In tissue engineering, biodegradable polymers can be designed to approximate tissues, providing a polymer scaffold that can withstand mechanical stresses, provide a suitable surface for cell attachment and growth, and degrade at a rate that allows the load to be transferred to the new tissue. In the field of controlled drug delivery, biodegradable polymers offer tremendous potential either as a drug delivery system alone or in conjunction with a medical device. In the development of applications of biodegradable polymers, the chemistry of some polymers, including synthesis and degradation, is reviewed below. Also discussed are how properties can be controlled through synthetic choices such as copolymer composition, special requirements for processing and handling, and some of the commercial devices based on these materials. Polymer chemistry and material selection When investigating the selection of a polymer for biomedical applications, important criteria to consider are: The mechanical properties must match the application and remain sufficiently strong until the surrounding tissue has healed. The degradation time must match the time required by the application. It does not invoke a toxic response. It is metabolized in the body after fulfilling its purpose. It is easily processable in the final product form, with an acceptable shelf life, and easily sterilized. Mechanical performance of a biodegradable polymer depends on various factors, which include monomer selection, initiator selection, process conditions and the presence of additives. These factors influence the polymer's crystallinity, melt and glass transition temperatures, and molecular weight. Each of these factors needs to be assessed for how it affects the biodegradation of the polymer. Biodegradation can be accomplished by synthesizing polymers with hydrolytically unstable linkages in the backbone. This is commonly achieved by the use of chemical functional groups such as esters, anhydrides, orthoesters and amides. Most biodegradable polymers are synthesized by ring opening polymerization. Processing Biodegradable polymers can be melt processed by conventional means such as compression or injection molding. Special consideration must be given to the need to exclude moisture from the material, so the polymers must be dried before processing. As most biodegradable polymers are synthesized by ring opening polymerization, a thermodynamic equilibrium exists between the forward polymerization reaction and the reverse reaction that results in monomer formation. Care therefore needs to be taken to avoid an excessively high processing temperature that may result in monomer formation during the molding and extrusion process. Resorbable polymers can also be 3D printed. Degradation Once implanted, a biodegradable device should maintain its mechanical properties until it is no longer needed and then be absorbed by the body, leaving no trace. The backbone of the polymer is hydrolytically unstable.
That is, the polymer is unstable in a water based environment. This is the prevailing mechanism for the polymers degradation. This occurs in two stages. 1. Water penetrates the bulk of the device, attacking the chemical bonds in the amorphous phase and converting long polymer chains into shorter water-soluble fragments. This causes a reduction in molecular weight without the loss of physical properties as the polymer is still held together by the crystalline regions. Water penetrates the device leading to metabolization of the fragments and bulk erosion. 2. Surface erosion of the polymer occurs when the rate at which the water penetrating the device is slower than the rate of conversion of the polymer into water-soluble materials. Biomedical engineers can tailor a polymer to slowly degrade and transfer stress at the appropriate rate to surrounding tissues as they heal by balancing the chemical stability of the polymer backbone, the geometry of the device, and the presence of catalysts, additives or plasticisers. Applications Biodegradable polymers are used commercially in both the tissue engineering and drug delivery field of biomedicine. Specific applications include. Sutures Dental devices (PLGA) Orthopedic fixation devices Tissue engineering scaffolds Biodegradable vascular stents Biodegradable soft tissue anchors References Further reading Some biodegradable polymers, their properties and degradation times can be found in Table 2 in this document. An example of the structure of some of the types of polymer degradation can be viewed in Fig. 1 in this article Bellin, I., Kelch, S., Langer, R. & Lendlein, A. Polymeric triple-shape materials. Proc. Natl. Acad. Sci. U.S.A. 103, 18043-18047 (2006. Copyright (2006) National Academy of Sciences, U.S.A. Lendlein, A., Jiang, H., Jünger, O. & Langer, R. Light-induced shape-memory polymers. Nature 434, 879–882 (2005). Lendlein, A., Langer, R.: Biodegradable, Elastic Shape Memory Polymers for Potential Biomedical Applications, Science 296, 1673–1675 (2002). Lendlein, A., Schmidt, A.M. & Langer, R. AB-polymer networks based on oligo (e-caprolactone) segments showing shape-memory properties and this article. Proc. Natl. Acad. Sci. U.S.A. 98(3), 842–847 (2001). Copyright (2001) National Academy of Sciences, U.S.A. Damodaran, V., Bhatnagar, D., Murthy, Sanjeeva.: Biomedical Polymers Synthesis and Processing, SpringerBriefs in Applied Sciences and Technology, DOI: 10.1007/978-3-319-32053-3 (2016). External links Biodegradable plastics a year in review, Environment and Plastics Industry Council Biodegradable materials Biomaterials Polymers
Synthetic biodegradable polymer
Physics,Chemistry,Materials_science,Biology
1,277
6,808,830
https://en.wikipedia.org/wiki/Advanced%20Functional%20Materials
Advanced Functional Materials is a peer-reviewed scientific journal, published by Wiley-VCH. Established in February 2001, the journal began to publish monthly in 2002 and moved to 18/year in 2006, biweekly in 2008, and weekly in 2013. It has been published under other titles since 1985. Scope Coverage of this journal encompasses all topics pertaining to materials science. Topical coverage includes photovoltaics, organic electronics, carbon materials, nanotechnology, liquid crystals, magnetic materials, surfaces and interfaces, and biomaterials. Publishing formats include original research papers, feature articles and highlights. History It was established in 2001 by Peter Gregory, the Editor of Advanced Materials, when the Wiley journal Advanced Materials for Optics and Electronics (starting in 1992) was discontinued; the volume numbering continued, however. Advanced Functional Materials is the sister journal to Advanced Materials and publishes full papers and feature articles on the development and applications of functional materials, including topics in chemistry, physics, nanotechnology, ceramics, metallurgy, and biomaterials. Frequent topics covered by the journal also include liquid crystals, semiconductors, superconductors, optics, lasers, sensors, porous materials, light-emitting materials, magnetic materials, thin films, and colloids. The current editor-in-chief is Joern Ritterbusch; David Flanagan was previously the editor-in-chief. Abstracting and indexing Advanced Functional Materials is indexed in the following bibliographic databases: Thomson Reuters Web of Science CSA Illunina Chemical Abstracts Service (ACS) Compendex FIZ Karlsruhe Databases INSPEC Polymer Library SCOPUS (Elsevier) See also Advanced Materials Advanced Engineering Materials Small Small Science Journal of Materials Chemistry Chemistry of Materials Nature Materials References External links Advanced Functional Materials (ISSN) Chemistry journals Materials science journals Academic journals established in 2001 Wiley-Blackwell academic journals Nanotechnology journals Engineering journals
Advanced Functional Materials
Materials_science,Engineering
386
71,272,183
https://en.wikipedia.org/wiki/Leucoagaricus%20meleagris
Leucoagaricus meleagris is a species of fungus in the family Agaricaceae. Taxonomy It was first described in 1799 by the British mycologist James Sowerby who classified it as Agaricus meleagris and illustrated it in volume II of Coloured Figures of English Fungi or Mushrooms'. Sowerby stated that the specimens were found in a hot-bed by Lady Arden on May 24, 1798. In 1821, the species was reclassified as Gymnopus meleagris by the British mycologist Samuel Frederick Gray and the common name Turkey-fowl naked-foot was suggested. In 1887, it was reclassified as Lepiota meleagris by the Italian mycologist Pier Andrea Saccardo. In 1891, it was included in the German botanist Otto Kunze's exhaustive list of reclassifications as Mastocephalus biornatus, however Kunze's Mastocephalus genus, along with most of 'Revisio generum plantarum was not widely accepted by the scientific community of the age and so this classification was not accepted and nothing remains in this genus. In 1936, it was reclassified as Hiatula meleagris by the German mycologist Rolf Singer and then as Leucocoprinus meleagris by Marcel Locquin in 1945. In 1949 Singer reclassified it as Leucoagaricus meleagris. Sclerotia Included in the taxonomy of this species by some sources is that of a Cenococcum species which was suspected to be an asexual morph of this species. However, there are issues with these classifications and it is not clear if this species actually produces sclerotia although some Leucoagaricus and Leucocoprinus species do. In 1829, the Swedish mycologist Elias Magnus Fries described the novel species Cenococcum xylophilum which he described as being similar to Cenococcum geophilum in appearing like small black vetch seeds that are found beneath the soil. The exterior of C. xylophilum was noted as differing in the pale purple floccose (woolly) coating and the white-floury interior. This was reclassified as Coccobotrys xylophilus in 1900 by the French mycologists Jean Louis Émile Boudier and Narcisse Théophile Patouillard who described the species as having ochre-yellow mycelium producing numerous round, 1-2mm wide structures with a hard outer surface of the same colour as the mycelium. When dissected there is a black layer beneath the exterior and then a red layer of a similar thickness beneath that, finally with a pale ochre centre that may tinge red or become whitish when dry. In this interior section are the sclerotic cells along with short hyphae similar to those surrounding the exterior. The species was found growing amongst tanbark in a hothouse in Angers, France that was growing palm trees. In 1900, Charles van Bambeke classified Coccobotrys xylophilus as the mycelium and asexual morph of Lepiota meleagris. However the description of Coccobotrys xylophilus given by Boudier and Patouillard appears to significantly differ from that of Fries' Cenococcum xylophilum in colouration. Else Vellinga suggested that the material examined by Boudier and Patouillard and then later Bambeke was not the same as the original collection of Cenococcum xylophilum and so this reclassification had to be rejected. Coccobotrys chilensis however was reclassified as Leucoagaricus chilensis. The description of the sclerotia given by Boudier and Patouillard may be similar to that of the sclerotia of Leucocoprinus birnbaumii. Description Leucoagaricus meleagris is a small dapperling mushrooms with white flesh in the cap and brown flesh in the stem. 
Cap: 2–4.5 cm wide, starting hemispherical before expanding to campanulate (bell shaped) then plano-convex with a broad umbo. The surface background is white and covered with brownish-red coarse fibrils and scales. The surface discolours to a dirty red with age or when bruised. This can occur just from handling it. Stem: 6–8 cm long with a clavate taper up from the slightly wider base. The surface is white with a fibrillose coating and also discolours brownish-red when old or bruised. The white, ascending stem ring has reddish scales on the underside and is located towards the top of the stem (superior) but it may disappear. Gills: Free, crowded and white but discolouring like the rest of the mushroom so may be yellowish or brownish with age. Spore print: White. Spores: Ellipsoid with a somewhat thick wall and tiny germ pore. Smooth. Hyaline. Dextrinoid. 8-11 x 6-8 μm. Basidia: Four spored. Taste: Slightly farinaceous (floury). Smell: Indistinct. Habitat and Distribution Leucoagaricus meleagris grows in small groups and tufts in the Autumn. It is reported as being widespread but rarely recorded in the United Kingdom. In the early taxonomy of this species the observations are from greenhouses and amongst bark beds in hothouses so it may be more common in these warm environments. It has also been documented more recently from woodchips in England and Skåne, Sweden as well as in greenhouses in Warsaw, Poland. Observations of it appear to be uncommon in Europe with the most common locations for purported observations being the East Coast of the United States. Similar species Leucoagaricus americanus may appear similar, grow in the same human-made environments and exhibits similar yellow and then red staining when handled. These species may be confused in books. Leucoagaricus meleagris can be distinguished by the smaller size of the mushrooms and different cap surface. References meleagris Taxa described in 1799 Taxa named by James Sowerby Fungus species
Leucoagaricus meleagris
Biology
1,314
73,868,948
https://en.wikipedia.org/wiki/Hay%20meadow
A hay meadow is an area of land set aside for the production of hay. In Britain hay meadows are typically meadows with high botanical diversity supporting a diverse assemblage of organisms ranging from soil microbes, fungi, arthropods including many insects through to small mammals such as voles and their predators, and up to insectivorous birds and bats. History Up until the turn of the 20th century, most farms in Britain were relatively small and each farm relied on the power of horses for transport and traction including ploughing. Even in the towns and cities, many horses were still in use pulling carriages and carts and delivering milk and bread to the door and Pit ponies were in widespread use in all the coal mining regions. The onset of war in 1914 required many horses and young men to be deployed in the European battlefields, many of whom never returned. This pattern was repeated in 1939. The two world wars made enormous technological strides in devising mechanised forms of transport which were built on to provide oil powered farm equipment including the ubiquitous tractors. During the same decades, British governments were strongly encouraging the population to grow more food especially at times when Atlantic convoys of food from the Americas were being lost to enemy torpedo activities. As a consequence of all these pressures, British farms became steadily larger and abandoned the use of horses in favour of oil fuelled farm machinery. Without the need to feed horses, there was no apparent need to maintain hay-meadows and most were ploughed up and re-sown to provide fodder crops such as mono-culture grass species for silage, brassica or turned over to direct food production such as cereal crops, potatoes or oil-seed rape. Types Northern hay meadows Northern hay meadows are largely restricted to the northern counties of England including Northumberland, County Durham and Yorkshire with a few in the Scottish border counties. Water meadows Some pastures close to rivers have traditionally been managed as Water meadows. These occur on land that either floods naturally in the wintertime such as those on the River Thames around Oxford or is deliberately flooded using sluices such as those on the Somerset levels. Flooding deposits new nutrient rich sediment on the land but also changes the plant distribution towards those plants that are tolerant of periodic inundation. Lowland meadows and pastures Probably the most frequently encountered, lowland meadows are often relics that have been retained since horses were last used on farms. Their species richness and diversity depend on their ongoing management. This involves the winter grazing, often with sheep and then the land being left until mid-summer when the hay crop is taken. Once growth has re-established the such meadows are often grazed by cattle. The lack of any artificial fertilisers or pesticides allow a very diverse flora to establish in which no one species dominates. The presence of hemi-parasitic plants such as Yellow Rattle and Eye-bright assist in controlling over-growth of grasses. Orchids are common components of these meadow communities and these rely on fungal mycelium in the earth both for germination of orchid seeds but also as part of a commensal relationship with the orchids. References Grasslands Meadows
Hay meadow
Biology
634
23,678,615
https://en.wikipedia.org/wiki/Recoil%20pad
A recoil pad is a piece of rubber, foam, leather, or other soft material usually attached to the buttstock of a rifle or shotgun. Recoil pads may also be worn around the shoulder with straps, placing the soft material between the buttstock and the shoulder of the person firing the gun. The purpose of this device is to provide additional padding between the typically hard buttstock surface and the user's shoulder, to reduce the amount of felt recoil of the firearm, and to prevent slippage on the shooter's clothing while aiming. See also Recoil buffer References Firearm components
Recoil pad
Technology
118
7,658,869
https://en.wikipedia.org/wiki/MICAD
The Molecular Imaging and Contrast Agent Database or MICAD is a freely accessible online source of information on in vivo molecular imaging agents. It was established as a key component of the "Molecular Libraries and Imaging" program of the NIH Roadmap, a set of major inter-agency initiatives accelerating medical research and the development of new, more specific therapies for a wide range of diseases. Content MICAD includes agents developed for imaging modalities such as positron emission tomography (PET), single photon emission computed tomography (SPECT), magnetic resonance imaging (MRI), ultrasound, computed tomography, optical imaging, and planar gamma imaging. It contains textual information, references, numerous links to MEDLINE and to other relevant resources from the National Center for Biotechnology Information (NCBI). Process MICAD is edited by a team of scientific editors and curators at the National Library of Medicine, NIH. It is being developed under the guidance of a trans-NIH panel of experts in the field. Members of the imaging community are invited to contribute to the MICAD database by writing and submitting entries (chapters) on agents of their choice for online publication. The MICAD staff will work with individual guest authors to prepare the chapters. Interested members of the imaging community should contact the MICAD staff at micad@ncbi.nlm.nih.gov. References Online databases Medical imaging Biological databases
MICAD
Biology
290
15,969,980
https://en.wikipedia.org/wiki/Jerzy%20Giedymin
Jerzy Giedymin (September 18, 1925 – June 24, 1993) was a philosopher and historian of mathematics and science. Life Giedymin, of Polish origin, was born in 1925. He studied at the University of Poznań under Kazimierz Ajdukiewicz. In 1953 Jerzy Giedymin succeeded Adam Wiegner at the Chair of Logic at the Faculty of Philosophy. The so-called Poznań School was a Marxist current of philosophy marked by an idealisational theory of science which emphasised the scientific features of Marxism in close confrontation with contemporary logic and epistemology. In 1968 Giedymin moved to England and attended seminars by Karl Popper at the London School of Economics. In 1971 he came to Sussex to become Professor at the School of Mathematical and Physical Sciences of the University of Sussex. Giedymin died during a trip to Poland on 24 June 1993. Work Giedymin was convinced that Henri Poincaré's conventionalist philosophy was fundamentally misunderstood and thus underestimated. Giedymin argues that Poincaré was at the origin of much of the 20th century's innovations in relativity theory and quantum physics. Giedymin's standpoint was much influenced by his exposure to Kazimierz Ajdukiewicz's perception of the history of ideas which in defiance of traditional empiricism reviews the philosophy of science of the early 20th century in the light of pragmatic conventionalism. Bibliography Books Jerzy Giedymin, Z problemow logicznych analizy historycznej [Some Logical Problems of Historical Analysis], Poznanskie towarzystwo przyjaciol nauk. Wydzial filologiczno-filozoficzny. Prace Komisji filozoficznej. tom 10. zesz. 3., Poznań, 1961. Jerzy Giedymin, Problemy, zalozenia, rozstrzygniecia. Studia nad logicznymi podstawami nauk spolecznych [Questions, assumptions, decidability. Essays concerning the logical functions of the social sciences], Polskie Towarzystwo Ekonomiczne. Oddzial w Poznaniu. Rozprawy i monografie. No. 10, Poznań, 1964. Jerzy Giedymin ed., Kazimierz Ajdukiewicz: The scientific world-perspective and other essays, 1931-1963, Dordrecht: D. 
Reidel Publishing Co., 1974 Jerzy Giedymin, Science and convention: essays on Henri Poincaré’s philosophy of science and the conventionalist tradition, Oxford: Pergamon, 1982 Articles (selection) Jerzy Giedymin, "Confirmation, critical region and empirical content of hypotheses", in Studia Logica, Volume 10, Number 1 (1960) Jerzy Giedymin, "A generalization of the refutability postulate", in Studia Logica, Volume 10, Number 1 (1960) Jerzy Giedymin, "Authorship hypotheses and reliability of informants", in Studia Logica, Volume 12, Number 1 (1961) Jerzy Giedymin, "Reliability of Informants", in British Journal for the Philosophy of Science, XIII (1963) Jerzy Giedymin, "The Paradox of Meaning Variance", in British Journal for the Philosophy of Science, 21 (1970) Jerzy Giedymin, "Consolations for the Irrationalist", in British Journal for the Philosophy of Science, 22 (1971) Jerzy Giedymin, "Antipositivism in Contemporary Philosophy of Social Sciences and Humanities", in British Journal for the Philosophy of Science, 26 (1975) Jerzy Giedymin, "On the origin and significance of Poincaré's conventionalism", in Studies in History and Philosophy of Science, Vol.8, No.4 (1977) Jerzy Giedymin, "Revolutionary changes, non-translatability and crucial experiments", in Problems of the Philosophy of Science, Amsterdam: North Holland, 1968 Jerzy Giedymin, "The Physics of the Principles and Its Philosophy: Hamilton, Poincaré and Ramsey", in Science and Convention: Essays on Henri Poincaré's Philosophy of Science and the Conventionalist Tradition. Oxford: Pergamon (1982) Jerzy Giedymin, "Geometrical and Physical Conventionalism of Henri Poincaré in Epistemologial Formulation", in Studies in History and Philosophy of Science, 22 (1991) Jerzy Giedymin, "Conventionalism, the Pluralist Conception of Theories and the Nature of Interpretation", in Studies in History and Philosophy of Science, 23 (1992) Jerzy Giedymin, "Radical Conventionalism, Its Background and Evolution: Poincare, Leroy, Ajdukiewicz", in Vito Sinisi & Jan Wolenski (ed.), The Heritage of Kazimierz Ajdukiewicz, Amsterdam, Rodopi, 1995 Jerzy Giedymin, "Ajdukiewicz's Life and Personality", in Vito Sinisi & Jan Wolenski (ed.), The Heritage of Kazimierz Ajdukiewicz, Amsterdam, Rodopi, 1995 Jerzy Giedymin, "Strength, Confirmation, Compatibility", in Mario Bunge (ed.), in Critical Approaches to Science and Philosophy (Science and Technology Studies). Piscataway, N.J.:Transaction Publishers (1998) About Jerzy Giedymin Laurent Rollet, Le conventionnalisme géométrique de Henri Poincaré : empirisme ou apriorisme ? Une étude des thèses de Adolf Grünbaum et Jerzy Giedymin, Université de Nancy 2, 1993 Laurent Rollet, "The Grünbaum-Giedymin Controversy Concerning the Philosophical Interpretation of Poincaré's Geometrical Conventionalism" in Krystyna Zamiara (ed.) 
The Problems Concerning the Philosophy of Science and Science Itself, Poznań, Wydawnictwo Fundacji Humaniora (1995) Krystyna Zamaria, "Jerzy Giedymin – From the Logic of Science to the Theoretical History of Science", in Wladyslaw Krajewski (ed.), Polish Philosophers of Science and Nature in the 20th Century, Amsterdam, 2001 External links Obituary: Jerzy Giedymin Obituary published in The Independent The Poznań School Presentation of the Poznań school Polish Logic of the Postwar Period Article by Jan Zygmunt of the University of Wrocław The Giedymin - Grünbaum Controversy Concerning the Philosophical Interpretation of Geometrical Conventionalism Article by Laurent Rollet French Conventionalism and its Influence on Polish Philosophy Article by Anna Jedynak in Parerga – MIĘDZYNARODOWE STUDIA FILOZOFICZNE, No. 2 (2007) 20th-century Polish philosophers 1925 births 1993 deaths Polish logicians Academics of the University of Sussex Philosophers of science Mathematical logicians
Jerzy Giedymin
Mathematics
1,493
32,789,261
https://en.wikipedia.org/wiki/Pedagogical%20relation
The pedagogical relation refers to special kind of personal relationship between adult and child or adult or student for the sake of the child or student. The pedagogical relation is described by Hermann Nohl, Klaus Mollenhauer, and others in the Northern European human science pedagogical tradition. It has been discussed more recently in English by Max van Manen, Norm Friesen, Tone Saevi and others.. In the pedagogical relation, adult and child encounter each other in ways that are different from other relationships (e.g., friendship) In the pedagogical relation the adult is directed toward the child. The relation is asymmetrical.. The adult is "there" for the child in a way that the child is not "there" for the adult. In the pedagogical relation the adult wants or intends both what is good for the child in the present and in the future. This relationship is oriented to what the child or young person may become (without trying to predetermine it), but without ignoring what is important for the child in the present. These two, present needs and the likely requirements of the future, exist in constant tension this relation. The pedagogical relation comes to an end. The child grows up and the asymmetry of the relation dissolves. As Klaus Mollenhauer explains, "upbringing comes to an end when the child no longer needs to be "called" to self-activity, but instead has the wherewithal to educate himself." In the pedagogical relation the adult is tactful. It is not about following rules and guidelines, but rather, about acting and also not acting according to what is appropriate for both the present and the future of a specific child in question. In the pedagogical relation, the adult mediates the relationship of the child with the world. This can happen by protecting the child from certain aspects of the world; it often happens by simplifying certain aspects of the world for the child, by directing the child's attention through gestures of pointing and guiding. In a text from 1933, educationist Herman Nohl describes the pedagogical relation as a relationship between a particular stance of the educator in relationship to the one being educated (educand): The pedagogical relation, finally, has as its interest not necessarily the "success" of the student, but rather their "subjectivation"--their becoming a subject, a person, something that is to be pursued as an end in itself. References Biesta, G. (2012). No education without hesitation: Exploring the limits of educational relations. Keynote address to the Society for the Philosophy of Education. Philosophy of Education. Retrieved from Friesen, N. (2017). The pedagogical relation past and present: experience, subjectivity and failure. Journal of Curriculum Studies, 49(6), 743-756. Nohl, H. (1933/2019). The Pedagogical Relation and the Community of Formation. Unpublished translation by Norm Friesen and Sophia Zedlitz. Pedagogy Interpersonal relationships
Pedagogical relation
Biology
654
373,216
https://en.wikipedia.org/wiki/Kahan%20summation%20algorithm
In numerical analysis, the Kahan summation algorithm, also known as compensated summation, significantly reduces the numerical error in the total obtained by adding a sequence of finite-precision floating-point numbers, compared to the naive approach. This is done by keeping a separate running compensation (a variable to accumulate small errors), in effect extending the precision of the sum by the precision of the compensation variable. In particular, simply summing numbers in sequence has a worst-case error that grows proportional to , and a root mean square error that grows as for random inputs (the roundoff errors form a random walk). With compensated summation, using a compensation variable with sufficiently high precision the worst-case error bound is effectively independent of , so a large number of values can be summed with an error that only depends on the floating-point precision of the result. The algorithm is attributed to William Kahan; Ivo Babuška seems to have come up with a similar algorithm independently (hence Kahan–Babuška summation). Similar, earlier techniques are, for example, Bresenham's line algorithm, keeping track of the accumulated error in integer operations (although first documented around the same time) and the delta-sigma modulation. The algorithm In pseudocode, the algorithm will be: function KahanSum(input) // Prepare the accumulator. var sum = 0.0 // A running compensation for lost low-order bits. var c = 0.0 // The array input has elements indexed input[1] to input[input.length]. for i = 1 to input.length do // c is zero the first time around. var y = input[i] - c // Alas, sum is big, y small, so low-order digits of y are lost. var t = sum + y // (t - sum) cancels the high-order part of y; // subtracting y recovers negative (low part of y) c = (t - sum) - y // Algebraically, c should always be zero. Beware // overly-aggressive optimizing compilers! sum = t // Next time around, the lost low part will be added to y in a fresh attempt. next i return sum This algorithm can also be rewritten to use the Fast2Sum algorithm: function KahanSum2(input) // Prepare the accumulator. var sum = 0.0 // A running compensation for lost low-order bits. var c = 0.0 // The array input has elements indexed for i = 1 to input.length do // c is zero the first time around. var y = input[i] + c // sum + c is an approximation to the exact sum. (sum,c) = Fast2Sum(sum,y) // Next time around, the lost low part will be added to y in a fresh attempt. next i return sum Worked example The algorithm does not mandate any specific choice of radix, only for the arithmetic to "normalize floating-point sums before rounding or truncating". Computers typically use binary arithmetic, but to make the example easier to read, it will be given in decimal. Suppose we are using six-digit decimal floating-point arithmetic, sum has attained the value 10000.0, and the next two values of input[i] are 3.14159 and 2.71828. The exact result is 10005.85987, which rounds to 10005.9. With a plain summation, each incoming value would be aligned with sum, and many low-order digits would be lost (by truncation or rounding). The first result, after rounding, would be 10003.1. The second result would be 10005.81828 before rounding and 10005.8 after rounding. This is not correct. However, with compensated summation, we get the correctly rounded result of 10005.9. Assume that c has the initial value zero. Trailing zeros shown where they are significant for the six-digit floating-point number. 
y = 3.14159 - 0.00000 y = input[i] - c t = 10000.0 + 3.14159 t = sum + y = 10003.14159 Normalization done, next round off to six digits. = 10003.1 Few digits from input[i] met those of sum. Many digits have been lost! c = (10003.1 - 10000.0) - 3.14159 c = (t - sum) - y (Note: Parenthesis must be evaluated first!) = 3.10000 - 3.14159 The assimilated part of y minus the original full y. = -0.0415900 Because c is close to zero, normalization retains many digits after the floating point. sum = 10003.1 sum = t The sum is so large that only the high-order digits of the input numbers are being accumulated. But on the next step, c, an approximation of the running error, counteracts the problem. y = 2.71828 - (-0.0415900) Most digits meet, since c is of a size similar to y. = 2.75987 The shortfall (low-order digits lost) of previous iteration successfully reinstated. t = 10003.1 + 2.75987 But still only few meet the digits of sum. = 10005.85987 Normalization done, next round to six digits. = 10005.9 Again, many digits have been lost, but c helped nudge the round-off. c = (10005.9 - 10003.1) - 2.75987 Estimate the accumulated error, based on the adjusted y. = 2.80000 - 2.75987 As expected, the low-order parts can be retained in c with no or minor round-off effects. = 0.0401300 In this iteration, t was a bit too high, the excess will be subtracted off in next iteration. sum = 10005.9 Exact result is 10005.85987, sum is correct, rounded to 6 digits. The algorithm performs summation with two accumulators: sum holds the sum, and c accumulates the parts not assimilated into sum, to nudge the low-order part of sum the next time around. Thus the summation proceeds with "guard digits" in c, which is better than not having any, but is not as good as performing the calculations with double the precision of the input. However, simply increasing the precision of the calculations is not practical in general; if input is already in double precision, few systems supply quadruple precision, and if they did, input could then be in quadruple precision. Accuracy A careful analysis of the errors in compensated summation is needed to appreciate its accuracy characteristics. While it is more accurate than naive summation, it can still give large relative errors for ill-conditioned sums. Suppose that one is summing values , for . The exact sum is (computed with infinite precision). With compensated summation, one instead obtains , where the error is bounded by where is the machine precision of the arithmetic being employed (e.g. for IEEE standard double-precision floating point). Usually, the quantity of interest is the relative error , which is therefore bounded above by In the expression for the relative error bound, the fraction is the condition number of the summation problem. Essentially, the condition number represents the intrinsic sensitivity of the summation problem to errors, regardless of how it is computed. The relative error bound of every (backwards stable) summation method by a fixed algorithm in fixed precision (i.e. not those that use arbitrary-precision arithmetic, nor algorithms whose memory and time requirements change based on the data), is proportional to this condition number. An ill-conditioned summation problem is one in which this ratio is large, and in this case even compensated summation can have a large relative error. For example, if the summands are uncorrelated random numbers with zero mean, the sum is a random walk, and the condition number will grow proportional to . 
On the other hand, for random inputs with nonzero mean the condition number asymptotes to a finite constant as . If the inputs are all non-negative, then the condition number is 1. Given a condition number, the relative error of compensated summation is effectively independent of . In principle, there is the that grows linearly with , but in practice this term is effectively zero: since the final result is rounded to a precision , the term rounds to zero, unless is roughly or larger. In double precision, this corresponds to an of roughly , much larger than most sums. So, for a fixed condition number, the errors of compensated summation are effectively , independent of . In comparison, the relative error bound for naive summation (simply adding the numbers in sequence, rounding at each step) grows as multiplied by the condition number. This worst-case error is rarely observed in practice, however, because it only occurs if the rounding errors are all in the same direction. In practice, it is much more likely that the rounding errors have a random sign, with zero mean, so that they form a random walk; in this case, naive summation has a root mean square relative error that grows as multiplied by the condition number. This is still much worse than compensated summation, however. However, if the sum can be performed in twice the precision, then is replaced by , and naive summation has a worst-case error comparable to the term in compensated summation at the original precision. By the same token, the that appears in above is a worst-case bound that occurs only if all the rounding errors have the same sign (and are of maximal possible magnitude). In practice, it is more likely that the errors have random sign, in which case terms in are replaced by a random walk, in which case, even for random inputs with zero mean, the error grows only as (ignoring the term), the same rate the sum grows, canceling the factors when the relative error is computed. So, even for asymptotically ill-conditioned sums, the relative error for compensated summation can often be much smaller than a worst-case analysis might suggest. Further enhancements Neumaier introduced an improved version of Kahan algorithm, which he calls an "improved Kahan–Babuška algorithm", which also covers the case when the next term to be added is larger in absolute value than the running sum, effectively swapping the role of what is large and what is small. In pseudocode, the algorithm is: function KahanBabushkaNeumaierSum(input) var sum = 0.0 var c = 0.0 // A running compensation for lost low-order bits. for i = 1 to input.length do var t = sum + input[i] if |sum| >= |input[i]| then c += (sum - t) + input[i] // If sum is bigger, low-order digits of input[i] are lost. else c += (input[i] - t) + sum // Else low-order digits of sum are lost. endif sum = t next i return sum + c // Correction only applied once in the very end. This enhancement is similar to the replacement of Fast2Sum by 2Sum in Kahan's algorithm rewritten with Fast2Sum. For many sequences of numbers, both algorithms agree, but a simple example due to Peters shows how they can differ. For summing in double precision, Kahan's algorithm yields 0.0, whereas Neumaier's algorithm yields the correct value 2.0. Higher-order modifications of better accuracy are also possible. For example, a variant suggested by Klein, which he called a second-order "iterative Kahan–Babuška algorithm". 
In pseudocode, the algorithm is: function KahanBabushkaKleinSum(input) var sum = 0.0 var cs = 0.0 var ccs = 0.0 var c = 0.0 var cc = 0.0 for i = 1 to input.length do var t = sum + input[i] if |sum| >= |input[i]| then c = (sum - t) + input[i] else c = (input[i] - t) + sum endif sum = t t = cs + c if |cs| >= |c| then cc = (cs - t) + c else cc = (c - t) + cs endif cs = t ccs = ccs + cc end loop return sum + cs + ccs Alternatives Although Kahan's algorithm achieves error growth for summing n numbers, only slightly worse growth can be achieved by pairwise summation: one recursively divides the set of numbers into two halves, sums each half, and then adds the two sums. This has the advantage of requiring the same number of arithmetic operations as the naive summation (unlike Kahan's algorithm, which requires four times the arithmetic and has a latency of four times a simple summation) and can be calculated in parallel. The base case of the recursion could in principle be the sum of only one (or zero) numbers, but to amortize the overhead of recursion, one would normally use a larger base case. The equivalent of pairwise summation is used in many fast Fourier transform (FFT) algorithms and is responsible for the logarithmic growth of roundoff errors in those FFTs. In practice, with roundoff errors of random signs, the root mean square errors of pairwise summation actually grow as . Another alternative is to use arbitrary-precision arithmetic, which in principle need no rounding at all with a cost of much greater computational effort. A way of performing correctly rounded sums using arbitrary precision is to extend adaptively using multiple floating-point components. This will minimize computational cost in common cases where high precision is not needed. Another method that uses only integer arithmetic, but a large accumulator, was described by Kirchner and Kulisch; a hardware implementation was described by Müller, Rüb and Rülling. Possible invalidation by compiler optimization In principle, a sufficiently aggressive optimizing compiler could destroy the effectiveness of Kahan summation: for example, if the compiler simplified expressions according to the associativity rules of real arithmetic, it might "simplify" the second step in the sequence t = sum + y; c = (t - sum) - y; to c = ((sum + y) - sum) - y; and then to c = 0; thus eliminating the error compensation. In practice, many compilers do not use associativity rules (which are only approximate in floating-point arithmetic) in simplifications, unless explicitly directed to do so by compiler options enabling "unsafe" optimizations, although the Intel C++ Compiler is one example that allows associativity-based transformations by default. The original K&R C version of the C programming language allowed the compiler to re-order floating-point expressions according to real-arithmetic associativity rules, but the subsequent ANSI C standard prohibited re-ordering in order to make C better suited for numerical applications (and more similar to Fortran, which also prohibits re-ordering), although in practice compiler options can re-enable re-ordering, as mentioned above. 
A portable way to inhibit such optimizations locally is to break one of the lines in the original formulation into two statements, and make two of the intermediate products volatile: function KahanSum(input) var sum = 0.0 var c = 0.0 for i = 1 to input.length do var y = input[i] - c volatile var t = sum + y volatile var z = t - sum c = z - y sum = t next i return sum Support by libraries In general, built-in "sum" functions in computer languages typically provide no guarantees that a particular summation algorithm will be employed, much less Kahan summation. The BLAS standard for linear algebra subroutines explicitly avoids mandating any particular computational order of operations for performance reasons, and BLAS implementations typically do not use Kahan summation. The standard library of the Python computer language specifies an fsum function for accurate summation. Starting with Python 3.12, the built-in "sum()" function uses the Neumaier summation. In the Julia language, the default implementation of the sum function does pairwise summation for high accuracy with good performance, but an external library provides an implementation of Neumaier's variant named sum_kbn for the cases when higher accuracy is needed. In the C# language, HPCsharp nuget package implements the Neumaier variant and pairwise summation: both as scalar, data-parallel using SIMD processor instructions, and parallel multi-core. See also Algorithms for calculating variance, which includes stable summation References External links Floating-point Summation, Dr. Dobb's Journal September, 1996 Computer arithmetic Numerical analysis Articles with example pseudocode
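As a concrete illustration of the basic algorithm, a minimal runnable sketch in Python follows (the helper names and test data are illustrative assumptions; CPython floats are IEEE double precision, and the interpreter does not reassociate floating-point expressions, so the compensation step is not optimized away). It compares a plain running sum against compensated summation, with math.fsum as a correctly rounded reference.

import math

def naive_sum(values):
    total = 0.0
    for x in values:
        total += x            # low-order bits of each addend can be lost here
    return total

def kahan_sum(values):
    total = 0.0
    c = 0.0                   # running compensation for lost low-order bits
    for x in values:
        y = x - c             # apply the correction carried over from the previous step
        t = total + y         # low-order digits of y may be lost in this addition
        c = (t - total) - y   # recover (the negation of) what was just lost
        total = t
    return total

data = [0.1] * 1_000_000      # 0.1 is not exactly representable in binary
print(naive_sum(data))        # accumulates roundoff; noticeably off from 100000.0
print(kahan_sum(data))        # compensated result; much closer to the exact sum
print(math.fsum(data))        # correctly rounded reference value

The exact printed values depend on the platform, but the naive sum typically drifts by many units in the last place while the compensated sum stays within roundoff of the reference.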
Kahan summation algorithm
Mathematics
3,597
208,382
https://en.wikipedia.org/wiki/List%20%28abstract%20data%20type%29
In computer science, a list or sequence is a collection of items that are finite in number and in a particular order. An instance of a list is a computer representation of the mathematical concept of a tuple or finite sequence. A list may contain the same value more than once, and each occurrence is considered a distinct item. The term list is also used for several concrete data structures that can be used to implement abstract lists, especially linked lists and arrays. In some contexts, such as in Lisp programming, the term list may refer specifically to a linked list rather than an array. In class-based programming, lists are usually provided as instances of subclasses of a generic "list" class, and traversed via separate iterators. Many programming languages provide support for list data types, and have special syntax and semantics for lists and list operations. A list can often be constructed by writing the items in sequence, separated by commas, semicolons, and/or spaces, within a pair of delimiters such as parentheses '()', brackets '[]', braces '{}', or angle brackets '<>'. Some languages may allow list types to be indexed or sliced like array types, in which case the data type is more accurately described as an array. In type theory and functional programming, abstract lists are usually defined inductively by two operations: nil that yields the empty list, and cons, which adds an item at the beginning of a list. A stream is the potentially infinite analog of a list. Operations Implementation of the list data structure may provide some of the following operations: create test for empty add item to beginning or end access the first or last item access an item by index Implementations Lists are typically implemented either as linked lists (either singly or doubly linked) or as arrays, usually variable length or dynamic arrays. The standard way of implementing lists, originating with the programming language Lisp, is to have each element of the list contain both its value and a pointer indicating the location of the next element in the list. This results in either a linked list or a tree, depending on whether the list has nested sublists. Some older Lisp implementations (such as the Lisp implementation of the Symbolics 3600) also supported "compressed lists" (using CDR coding) which had a special internal representation (invisible to the user). Lists can be manipulated using iteration or recursion. The former is often preferred in imperative programming languages, while the latter is the norm in functional languages. Lists can be implemented as self-balancing binary search trees holding index-value pairs, providing equal-time access to any element (e.g. all residing in the fringe, and internal nodes storing the right-most child's index, used to guide the search), taking the time logarithmic in the list's size, but as long as it doesn't change much will provide the illusion of random access and enable swap, prefix and append operations in logarithmic time as well. Programming language support Some languages do not offer a list data structure, but offer the use of associative arrays or some kind of table to emulate lists. For example, Lua provides tables. Although Lua stores lists that have numerical indices as arrays internally, they still appear as dictionaries. In Lisp, lists are the fundamental data type and can represent both program code and data. In most dialects, the list of the first three prime numbers could be written as (list 2 3 5). 
In several dialects of Lisp, including Scheme, a list is a collection of pairs, consisting of a value and a pointer to the next pair (or null value), making a singly linked list. Applications Unlike in an array, a list can expand and shrink. In computing, lists are easier to implement than sets. A finite set in the mathematical sense can be realized as a list with additional restrictions; that is, duplicate elements are disallowed and order is irrelevant. Sorting the list speeds up determining if a given item is already in the set, but in order to ensure the order, it requires more time to add new entry to the list. In efficient implementations, however, sets are implemented using self-balancing binary search trees or hash tables, rather than a list. Lists also form the basis for other abstract data types including the queue, the stack, and their variations. Abstract definition The abstract list type L with elements of some type E (a monomorphic list) is defined by the following functions: nil: () → L cons: E × L → L first: L → E rest: L → L with the axioms first (cons (e, l)) = e rest (cons (e, l)) = l for any element e and any list l. It is implicit that cons (e, l) ≠ l cons (e, l) ≠ e cons (e1, l1) = cons (e2, l2) if e1 = e2 and l1 = l2 Note that first (nil ()) and rest (nil ()) are not defined. These axioms are equivalent to those of the abstract stack data type. In type theory, the above definition is more simply regarded as an inductive type defined in terms of constructors: nil and cons. In algebraic terms, this can be represented as the transformation 1 + E × L → L. first and rest are then obtained by pattern matching on the cons constructor and separately handling the nil case. The list monad The list type forms a monad with the following functions (using E* rather than L to represent monomorphic lists with elements of type E): where append is defined as: Alternatively, the monad may be defined in terms of operations return, fmap and join, with: Note that fmap, join, append and bind are well-defined, since they're applied to progressively deeper arguments at each recursive call. The list type is an additive monad, with nil as the monadic zero and append as monadic sum. Lists form a monoid under the append operation. The identity element of the monoid is the empty list, nil. In fact, this is the free monoid over the set of list elements. See also References Data types Composite data types Abstract data types
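As an illustration of the abstract definition and the monad structure described above, here is a minimal sketch in Python that represents a list as nested (head, tail) pairs with None standing for nil; all names are illustrative, not a standard library API.

nil = None                          # the empty list

def cons(e, l):
    return (e, l)                   # prepend element e to list l

def first(l):
    if l is nil:
        raise ValueError("first is undefined for the empty list")
    return l[0]                     # first(cons(e, l)) == e

def rest(l):
    if l is nil:
        raise ValueError("rest is undefined for the empty list")
    return l[1]                     # rest(cons(e, l)) == l

def append(l1, l2):
    if l1 is nil:
        return l2
    return cons(first(l1), append(rest(l1), l2))

def unit(e):                        # monadic return: a one-element list
    return cons(e, nil)

def bind(l, f):                     # monadic bind: apply f to each element, then append the results
    if l is nil:
        return nil
    return append(f(first(l)), bind(rest(l), f))

primes = cons(2, cons(3, cons(5, nil)))                    # the list (2 3 5)
pairs = bind(primes, lambda x: cons(x, cons(x * x, nil)))  # nested pairs encoding (2 4 3 9 5 25)

With these definitions, the two list axioms hold by construction, nil and append give the monoid (and additive monad) structure, and the monad laws can be checked directly on small examples such as the one above.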
List (abstract data type)
Mathematics
1,331