Dataset schema (one record per article): id (int64) · url (string) · text (string) · source (string) · categories (list) · token_count (int64) · subcategories (list)
153,106
https://en.wikipedia.org/wiki/Dedekind%20group
In group theory, a Dedekind group is a group G such that every subgroup of G is normal. All abelian groups are Dedekind groups. A non-abelian Dedekind group is called a Hamiltonian group. The most familiar (and smallest) example of a Hamiltonian group is the quaternion group of order 8, denoted by Q8. Dedekind and Baer have shown (in the finite and infinite order case, respectively) that every Hamiltonian group is a direct product of the form G = Q8 × B × D, where B is an elementary abelian 2-group and D is a torsion abelian group with all elements of odd order.

Dedekind groups are named after Richard Dedekind, who investigated them in 1897, proving a form of the above structure theorem (for finite groups). He named the non-abelian ones after William Rowan Hamilton, the discoverer of quaternions.

In 1898 George Miller delineated the structure of a Hamiltonian group in terms of its order and that of its subgroups. For instance, he showed that "a Hamilton group of order 2^a has quaternion groups as subgroups". In 2005 Horvat et al. used this structure to count the number of Hamiltonian groups of any order n = 2^e · o, where o is an odd integer. When e ≤ 2 there are no Hamiltonian groups of order n; otherwise there are the same number as there are abelian groups of order o.

References: Baer, R., "Situation der Untergruppen und Struktur der Gruppe", Sitz.-Ber. Heidelberg. Akad. Wiss. 2, 12–17, 1933.
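The Horvat et al. count reduces, via the structure theorem, to counting abelian groups of odd order o; by the fundamental theorem of finite abelian groups, that count is a product of partition numbers over the exponents in the prime factorization of o. A minimal Python sketch of the count (function names are my own):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, max_part=None):
    """Number of partitions of n into parts of size <= max_part."""
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    return sum(partitions(n - k, k) for k in range(1, min(n, max_part) + 1))

def prime_factorization(n):
    """Return {prime: exponent} for n >= 1, by trial division."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def num_abelian_groups(n):
    """Number of abelian groups of order n: product of partition numbers."""
    result = 1
    for e in prime_factorization(n).values():
        result *= partitions(e)
    return result

def num_hamiltonian_groups(n):
    """Hamiltonian groups of order n = 2^e * o (o odd): 0 if e <= 2,
    otherwise the number of abelian groups of order o."""
    e = 0
    while n % 2 == 0:
        n //= 2
        e += 1
    return 0 if e <= 2 else num_abelian_groups(n)

assert num_hamiltonian_groups(8) == 1    # Q8 itself, of order 8 = 2^3
assert num_hamiltonian_groups(24) == 1   # Q8 x Z3
assert num_hamiltonian_groups(4) == 0    # e <= 2: no Hamiltonian groups
```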
Dedekind group
[ "Mathematics" ]
351
[ "Mathematical structures", "Properties of groups", "Group theory", "Fields of abstract algebra", "Algebraic structures" ]
153,130
https://en.wikipedia.org/wiki/Quaternion%20group
In group theory, the quaternion group Q8 (sometimes just denoted by Q) is a non-abelian group of order eight, isomorphic to the eight-element subset {1, −1, i, −i, j, −j, k, −k} of the quaternions under multiplication. It is given by the group presentation Q8 = ⟨e̅, i, j, k | e̅² = e, i² = j² = k² = ijk = e̅⟩, where e is the identity element and e̅ commutes with the other elements of the group. These relations, discovered by W. R. Hamilton, also generate the quaternions as an algebra over the real numbers. Another presentation of Q8 is ⟨a, b | a⁴ = e, a² = b², b⁻¹ab = a⁻¹⟩. Like many other finite groups, it can be realized as the Galois group of a certain field of algebraic numbers.

Compared to dihedral group
The quaternion group Q8 has the same order as the dihedral group D4, but a different structure, as shown by their Cayley and cycle graphs. In the diagrams for D4, the group elements are marked with their action on a letter F in the defining representation R². The same cannot be done for Q8, since it has no faithful representation in R² or R³. D4 can be realized as a subset of the split-quaternions in the same way that Q8 can be viewed as a subset of the quaternions.

Cayley table
The Cayley table (multiplication table) for Q8 is given by:

·     1    −1    i    −i    j    −j    k    −k
1     1    −1    i    −i    j    −j    k    −k
−1   −1     1   −i     i   −j     j   −k     k
i     i    −i   −1     1    k    −k   −j     j
−i   −i     i    1    −1   −k     k    j    −j
j     j    −j   −k     k   −1     1    i    −i
−j   −j     j    k    −k    1    −1   −i     i
k     k    −k    j    −j   −i     i   −1     1
−k   −k     k   −j     j    i    −i    1    −1

Properties
The elements i, j, and k all have order four in Q8 and any two of them generate the entire group. Another presentation of Q8, based on only two elements to skip this redundancy, is ⟨x, y | x⁴ = 1, x² = y², y⁻¹xy = x⁻¹⟩. For instance, writing the group elements in lexicographically minimal normal forms, one may identify e = 1, e̅ = x², i = x, j = y and k = xy.

The quaternion group has the unusual property of being Hamiltonian: Q8 is non-abelian, but every subgroup is normal. Every Hamiltonian group contains a copy of Q8. The quaternion group Q8 and the dihedral group D4 are the two smallest examples of a nilpotent non-abelian group.

The center and the commutator subgroup of Q8 is the subgroup {1, −1}. The inner automorphism group of Q8 is given by the group modulo its center, i.e. the factor group Q8/{1, −1}, which is isomorphic to the Klein four-group V. The full automorphism group of Q8 is isomorphic to S4, the symmetric group on four letters (see Matrix representations below), and the outer automorphism group of Q8 is thus S4/V, which is isomorphic to S3.

The quaternion group Q8 has five conjugacy classes, {1}, {−1}, {i, −i}, {j, −j}, {k, −k}, and so five irreducible representations over the complex numbers, with dimensions 1, 1, 1, 1, 2:

Trivial representation.

Sign representations with i-, j-, k-kernel: Q8 has three maximal normal subgroups: the cyclic subgroups generated by i, j, and k respectively. For each maximal normal subgroup N, we obtain a one-dimensional representation factoring through the 2-element quotient group G/N. The representation sends elements of N to 1, and elements outside N to −1.

2-dimensional representation: Described below in Matrix representations. It is not realizable over the real numbers, but is a complex representation: indeed, it is just the quaternions H considered as an algebra over C, and the action is that of left multiplication by the elements of Q8.

The character table of Q8 turns out to be the same as that of D4:

                  {1}  {−1}  {±i}  {±j}  {±k}
trivial             1    1     1     1     1
sign, kernel ⟨i⟩    1    1     1    −1    −1
sign, kernel ⟨j⟩    1    1    −1     1    −1
sign, kernel ⟨k⟩    1    1    −1    −1     1
2-dimensional       2   −2     0     0     0

Nevertheless, all the irreducible characters in the rows above have real values; this gives the decomposition of the real group algebra R[Q8] into minimal two-sided ideals, R[Q8] ≅ R ⊕ R ⊕ R ⊕ R ⊕ H, where the idempotents correspond to the irreducibles. Each of these irreducible ideals is isomorphic to a real central simple algebra, the first four to the real field R.
The last ideal is isomorphic to the skew field of quaternions H, by the correspondence sending the classes of i, j, k to the quaternion units. Furthermore, the projection homomorphism R[Q8] → H has kernel generated by the idempotent (1 + e̅)/2, so the quaternions can also be obtained as the quotient ring R[Q8]/(1 + e̅). Note that this is irreducible as a real representation of Q8, but splits into two copies of the two-dimensional irreducible when extended to the complex numbers. Indeed, the complex group algebra is C[Q8] ≅ C ⊕ C ⊕ C ⊕ C ⊕ M2(C), where M2(C) ≅ H ⊗ C is the algebra of biquaternions.

Matrix representations
The two-dimensional irreducible complex representation described above gives the quaternion group Q8 as a subgroup of the general linear group GL(2, C). The quaternion group is a multiplicative subgroup of the quaternion algebra H = R + Ri + Rj + Rk, which has a regular representation by left multiplication on itself considered as a complex vector space with basis {1, j}. The resulting representation is given by:

i ↦ ( i  0 ;  0  −i ),   j ↦ ( 0  1 ;  −1  0 ),   k ↦ ( 0  i ;  i  0 ).

Since all of the above matrices have unit determinant, this is a representation of Q8 in the special linear group SL(2, C). A variant gives a representation by unitary matrices, realizing Q8 as a subgroup of the special unitary group SU(2). It is worth noting that physicists use a different convention for the matrix representation to make contact with the usual Pauli matrices (for instance, sending i, j, k to −iσ1, −iσ2, −iσ3). This particular choice is convenient and elegant when one describes spin-1/2 states in the eigenbasis of σ3 and considers the angular momentum ladder operators.

There is also an important action of Q8 on the 2-dimensional vector space over the finite field F3. A modular representation Q8 → SL(2, 3) is given by

i ↦ ( 1  1 ;  1  −1 ),   j ↦ ( −1  1 ;  1  1 ),   k ↦ ( 0  −1 ;  1  0 ).

This representation can be obtained from the extension field F9 = F3[w], where w² = −1 and the multiplicative group of F9 is cyclic of order 8 with four generators. For each w in F9, the two-dimensional F3-vector space F9 admits the linear mapping μw(z) = wz; in addition we have the Frobenius automorphism σ(z) = z³, satisfying σ² = 1 and σμw = μw³σ. The above representation matrices correspond to suitable compositions of these mappings. This representation realizes Q8 as a normal subgroup of GL(2, 3). Thus, for each matrix m in GL(2, 3), we have a group automorphism of Q8 given by conjugation with m. In fact, these give the full automorphism group as Aut(Q8) ≅ PGL(2, 3) ≅ S4. This is isomorphic to the symmetric group S4 since the linear mappings permute the four one-dimensional subspaces of F3², i.e., the four points of the projective line over F3. Also, this representation permutes the eight non-zero vectors of F3², giving an embedding of Q8 in the symmetric group S8, in addition to the embeddings given by the regular representations.

Galois group
Richard Dedekind considered the field Q(√2, √3) in attempting to relate the quaternion group to Galois theory. In 1936 Ernst Witt published his approach to the quaternion group through Galois theory. In 1981, Richard Dean showed the quaternion group can be realized as the Galois group Gal(T/Q), where Q is the field of rational numbers and T is the splitting field of the polynomial x⁸ − 72x⁶ + 180x⁴ − 144x² + 36. The development uses the fundamental theorem of Galois theory in specifying four intermediate fields between Q and T and their Galois groups, as well as two theorems on cyclic extension of degree four over a field.

Generalized quaternion group
A generalized quaternion group Q4n of order 4n is defined by the presentation ⟨x, y | x^(2n) = 1, x^n = y², y⁻¹xy = x⁻¹⟩ for an integer n ≥ 2, with the usual quaternion group given by n = 2. Coxeter calls Q4n the dicyclic group ⟨2, 2, n⟩, a special case of the binary polyhedral groups and related to the corresponding polyhedral and dihedral groups. The generalized quaternion group can be realized as the subgroup of GL2(C) generated by

a = ( ω  0 ;  0  ω̄ )  and  b = ( 0  −1 ;  1  0 ),

where ω = e^(iπ/n). It can also be realized as the subgroup of unit quaternions generated by x = e^(iπ/n) and y = j.
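The matrix realization just described is easy to check computationally. A minimal Python sketch (using numpy; the helper names are my own) that generates Q4n from the two stated matrices, assuming that realization, and verifies the group order and defining relations:

```python
import numpy as np

def generalized_quaternion(n):
    """Build Q_{4n} in GL(2, C) from a = diag(w, conj(w)), w = exp(i*pi/n),
    and b = [[0, -1], [1, 0]]; return the generators and the element set."""
    w = np.exp(1j * np.pi / n)
    a = np.array([[w, 0], [0, np.conj(w)]])
    b = np.array([[0, -1], [1, 0]], dtype=complex)
    identity = np.eye(2, dtype=complex)
    elements = {identity.tobytes(): identity}
    frontier = [identity]
    while frontier:  # close the set under multiplication by the generators
        m = frontier.pop()
        for g in (a, b):
            # round, and add 0.0 to normalize -0.0, so equal matrices collide
            prod = np.round(m @ g, 12) + 0.0
            key = prod.tobytes()
            if key not in elements:
                elements[key] = prod
                frontier.append(prod)
    return a, b, list(elements.values())

for n in (2, 3, 4):
    a, b, elems = generalized_quaternion(n)
    assert len(elems) == 4 * n                                      # |Q_{4n}| = 4n
    assert np.allclose(np.linalg.matrix_power(a, n), b @ b)         # a^n = b^2
    assert np.allclose(np.linalg.inv(b) @ a @ b, np.linalg.inv(a))  # b^-1 a b = a^-1
# n = 2 recovers the ordinary quaternion group Q8.
```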
The generalized quaternion groups have the property that every abelian subgroup is cyclic. It can be shown that a finite p-group with this property (every abelian subgroup is cyclic) is either cyclic or a generalized quaternion group as defined above. Another characterization is that a finite p-group in which there is a unique subgroup of order p is either cyclic or a 2-group isomorphic to a generalized quaternion group. In particular, for a finite field F with odd characteristic, the 2-Sylow subgroup of SL2(F) is non-abelian and has only one subgroup of order 2, so this 2-Sylow subgroup must be a generalized quaternion group. Letting p^r be the size of F, where p is prime, the size of the 2-Sylow subgroup of SL2(F) is 2^n, where n = ord2(p² − 1) + ord2(r). The Brauer–Suzuki theorem shows that groups whose Sylow 2-subgroups are generalized quaternion cannot be simple.

Another terminology reserves the name "generalized quaternion group" for a dicyclic group of order a power of 2, which admits the presentation ⟨x, y | x^(2^m) = 1, y² = x^(2^(m−1)), y⁻¹xy = x⁻¹⟩.

See also
16-cell, Binary tetrahedral group, Clifford algebra, Dicyclic group, Hurwitz integral quaternion, List of small groups

References
Dean, Richard A. (1981). "A rational polynomial whose group is the quaternions", American Mathematical Monthly 88:42–45.
Girard, P.R. (1984). "The quaternion group and modern physics", European Journal of Physics 5:25–32.

External links
Quaternion groups on GroupNames. Quaternion group on GroupProps. Conrad, Keith, "Generalized Quaternions".
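As a numerical check of the Sylow size formula quoted above (here with r = 1, so n = ord2(p² − 1)), a brute-force count over small prime fields; function names are my own:

```python
from itertools import product

def sl2_order(p):
    """Count 2x2 matrices over F_p with determinant 1, by brute force."""
    return sum(1 for a, b, c, d in product(range(p), repeat=4)
               if (a * d - b * c) % p == 1)

def ord2(m):
    """2-adic valuation of m: the exponent of 2 in m."""
    n = 0
    while m % 2 == 0:
        m //= 2
        n += 1
    return n

for p in (3, 5, 7, 11, 13):
    order = sl2_order(p)
    assert order == p * (p * p - 1)          # |SL2(F_p)| = p(p^2 - 1)
    assert ord2(order) == ord2(p * p - 1)    # 2-Sylow size is 2^ord2(p^2 - 1)
# For p = 3 the 2-Sylow subgroup has order 8, and is in fact exactly Q8.
```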
Quaternion group
[ "Mathematics" ]
1,899
[ "Mathematical structures", "Finite groups", "Group theory", "Fields of abstract algebra", "Algebraic structures" ]
153,158
https://en.wikipedia.org/wiki/Autocatalytic%20set
An autocatalytic set is a collection of entities, each of which can be created catalytically by other entities within the set, such that as a whole, the set is able to catalyze its own production. In this way the set as a whole is said to be autocatalytic. Autocatalytic sets were originally and most concretely defined in terms of molecular entities, but have more recently been metaphorically extended to the study of systems in sociology, ecology, and economics.

Autocatalytic sets also have the ability to replicate themselves if they are split apart into two physically separated spaces. Computer models illustrate that split autocatalytic sets will reproduce all of the reactions of the original set in each half, much like cellular mitosis. In effect, using the principles of autocatalysis, a small metabolism can replicate itself with very little high-level organization. This property is why autocatalysis is a contender as the foundational mechanism for complex evolution.

Prior to Watson and Crick, biologists considered autocatalytic sets the way metabolism functions in principle, i.e. one protein helps to synthesize another protein and so on. After the discovery of the double helix, the central dogma of molecular biology was formulated, which is that DNA is transcribed to RNA which is translated to protein. The molecular structure of DNA and RNA, as well as the metabolism that maintains their reproduction, are believed to be too complex to have arisen spontaneously in one step from a soup of chemistry.

Several models of the origin of life are based on the notion that life may have arisen through the development of an initial molecular autocatalytic set which evolved over time. Most of these models, which have emerged from the study of complex systems, predict that life arose not from a molecule with any particular trait (such as self-replicating RNA) but from an autocatalytic set. The first empirical support came from Lincoln and Joyce, who obtained autocatalytic sets in which "two [RNA] enzymes catalyze each other's synthesis from a total of four component substrates." Furthermore, an evolutionary process that began with a population of these self-replicators yielded a population dominated by recombinant replicators. Modern life has the traits of an autocatalytic set, since no particular molecule, nor any class of molecules, is able to replicate itself. There are several models based on autocatalytic sets, including those of Stuart Kauffman and others.

Formal definition
Given a set M of molecules, chemical reactions can be roughly defined as pairs r = (A, B) of subsets of M:

a1 + a2 + ... + ak → b1 + b2 + ... + bm

Let R be the set of allowable reactions. A pair (M, R) is a reaction system (RS). Let C ⊆ M × R be the set of molecule-reaction pairs specifying which molecules can catalyze which reactions. Let F ⊆ M be a set of food molecules (small molecules freely available from the environment), and let R' ⊆ R be some subset of reactions. We define the closure of the food set relative to this subset of reactions, ClR'(F), as the set of molecules that contains the food set plus all molecules that can be produced starting from the food set and using only reactions from this subset.
Formally, ClR'(F) is the minimal subset of M such that F ⊆ ClR'(F) and, for each reaction r' = (A, B) ∈ R':

A ⊆ ClR'(F) ⇒ B ⊆ ClR'(F)

A reaction system (ClR'(F), R') is autocatalytic if and only if for each reaction r' = (A, B) ∈ R' there exists a molecule c ∈ ClR'(F) such that (c, r') ∈ C, and A ⊆ ClR'(F).

Example
Let M = {a, b, c, d, f, g} and F = {a, b}. Let the set R contain the following reactions:

a + b → c + d, catalyzed by g
a + f → c + b, catalyzed by d
c + b → g + a, catalyzed by d or f

From F = {a, b} we can produce {c, d}, and then from {c, b} we can produce {g, a}, so the closure is equal to ClR'(F) = {a, b, c, d, g}. According to the definition, the maximal autocatalytic subset R' consists of two reactions:

a + b → c + d, catalyzed by g
c + b → g + a, catalyzed by d

The reaction a + f → c + b does not belong to R' because f does not belong to the closure. Similarly, the reaction c + b → g + a in the autocatalytic set can only be catalyzed by d and not by f.

Probability that a random set is autocatalytic
Studies of the above model show that random reaction systems can be autocatalytic with high probability under some assumptions. This comes from the fact that, with a growing number of molecules, the number of possible reactions and catalysations grows even larger if the molecules grow in complexity, producing stochastically enough reactions and catalysations to make a part of the RS self-supported. An autocatalytic set then extends very quickly with a growing number of molecules, for the same reason. These theoretical results make autocatalytic sets attractive for the scientific explanation of the very early origin of life.

Formal limitations
Formally, it is difficult to treat molecules as anything but unstructured entities, since the set of possible reactions (and molecules) would become infinite. Therefore, a derivation of arbitrarily long polymers, as needed to model DNA, RNA or proteins, is not yet possible. Studies of the RNA World suffer from the same problem.

Linguistic aspects
Contrary to the above definition, which applies to the field of artificial chemistry, no agreed-upon notion of autocatalytic sets exists today. While above, the notion of catalyst is secondary insofar as only the set as a whole has to catalyse its own production, it is primary in other definitions, giving the term "autocatalytic set" a different emphasis. There, every reaction (or function, transformation) has to be mediated by a catalyst. As a consequence, while mediating its respective reaction, every catalyst denotes its reaction too, resulting in a self-denoting system, which is interesting for two reasons. First, real metabolism is structured in this manner. Second, self-denoting systems can be considered an intermediate step towards self-describing systems. From both a structural and a natural-historical point of view, one can identify the autocatalytic set captured in the formal definition as the more original concept, while in the second, the reflection of the system in itself is already brought to an explicit presentation, since catalysts represent the reaction induced by them. In the literature, both concepts are present, but differently emphasised. To complete the classification from the other side, generalised self-reproducing systems move beyond self-denotation. There, no unstructured entities carry the transformations anymore, but structured, described ones.
Formally, a generalised self-reproducing system consists of two functions, u and c, together with their descriptions Desc(u) and Desc(c), along the following definition:

u : Desc(X) -> X
c : Desc(X) -> Desc(X)

where the function u is the "universal" constructor, which constructs everything in its domain from appropriate descriptions, while c is a copy function for any description. Practically, u and c can fall apart into many subfunctions or catalysts. Note that the (trivial) copy function c is necessary because, though the universal constructor u would be able to construct any description too, the description it would be based on would in general be longer than the result, rendering full self-replication impossible. This last concept can be attributed to von Neumann's work on self-reproducing automata, where he held a self-description to be necessary for any nontrivial (generalised) self-reproducing system to avoid interference. Von Neumann planned to design such a system for a model chemistry, too.

Non-autonomous autocatalytic sets
Virtually all articles on autocatalytic sets leave open whether the sets are to be considered autonomous or not. Often, autonomy of the sets is silently assumed. Likely, the above context has a strong emphasis on autonomous self-replication and the early origin of life. But the concept of autocatalytic sets is really more general and in practical use in various technical areas, e.g. where self-sustaining tool chains are handled. Clearly, such sets are not autonomous and are objects of human agency. Examples of the practical importance of non-autonomous autocatalytic sets can be found e.g. in the fields of compiler construction and operating systems, where the self-referential nature of the respective constructions is explicitly discussed, very often as bootstrapping.

Comparison with other theories of life
Autocatalytic sets constitute just one of several current theories of life, including the chemoton of Tibor Gánti, the hypercycle of Manfred Eigen and Peter Schuster, the (M,R) systems of Robert Rosen, and the autopoiesis (or self-building) of Humberto Maturana and Francisco Varela. All of these (including autocatalytic sets) found their original inspiration in Erwin Schrödinger's book What is Life?, but at first they appear to have little in common with one another, largely because the authors did not communicate with one another, and none of them made any reference in their principal publications to any of the other theories. Nonetheless, there are more similarities than may be obvious at first sight, for example between Gánti and Rosen. Until recently there have been almost no attempts to compare the different theories and discuss them together.

Last Universal Common Ancestor (LUCA)
Some authors equate models of the origin of life with LUCA, the Last Universal Common Ancestor of all extant life. This is a serious error resulting from failure to recognize that L refers to the last common ancestor, not to the first ancestor, which is much older: a large amount of evolution occurred before the appearance of LUCA. Gill and Forterre expressed the essential point as follows: LUCA should not be confused with the first cell, but was the product of a long period of evolution. Being the "last" means that LUCA was preceded by a long succession of older "ancestors."
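Returning to the formal definition: a minimal Python sketch of the closure computation and the autocatalysis check, run on the worked example above (the data structures and function names are my own):

```python
def closure(food, reactions):
    """Cl_R'(F): the food set plus everything reachable using only the
    given reactions. Each reaction is (reactants, products, catalysts)."""
    produced = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions:
            if reactants <= produced and not products <= produced:
                produced |= products
                changed = True
    return produced

def is_autocatalytic(food, reactions):
    """Every reaction must have all reactants inside the closure and at
    least one catalyst inside the closure."""
    cl = closure(food, reactions)
    return all(reactants <= cl and catalysts & cl
               for reactants, products, catalysts in reactions)

F = {"a", "b"}
R = [
    (frozenset("ab"), frozenset("cd"), frozenset("g")),   # a + b -> c + d, cat. g
    (frozenset("af"), frozenset("cb"), frozenset("d")),   # a + f -> c + b, cat. d
    (frozenset("cb"), frozenset("ga"), frozenset("df")),  # c + b -> g + a, cat. d or f
]
R_max = [R[0], R[2]]                  # the maximal autocatalytic subset
print(closure(F, R_max))              # {'a', 'b', 'c', 'd', 'g'}
print(is_autocatalytic(F, R_max))     # True
print(is_autocatalytic(F, R))         # False: f is never produced
```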
Autocatalytic set
[ "Biology" ]
2,257
[ "Biological hypotheses", "Origin of life" ]
153,182
https://en.wikipedia.org/wiki/Leotard
A leotard is a unisex skin-tight one-piece garment that covers the torso from the crotch to the shoulder. The garment was made famous by the French acrobatic performer Jules Léotard (1838–1870). There are sleeveless, short-sleeved, and long-sleeved leotards. A variation is the unitard, which also covers the legs. It provides a degree of modesty and style while allowing for freedom of movement. Leotards are worn by acrobats, gymnasts, dancers, figure skaters, athletes, actors, wrestlers, and circus performers, both as practice garments and performance costumes. They are often worn with ballet skirts on top and tights or sometimes bike shorts as underwear. As a casual garment, a leotard can be worn with a belt and under overalls or short skirts. Leotards are entered by stepping into the legs and pulling the sleeves over the shoulders. Scoop-necked leotards have wide neck openings and are held in place by the garment's elasticity. Others are crew-necked or polo-necked and close at the back of the neck with a zipper or snaps.

Use
Leotards are used for a variety of purposes, including yoga, exercise, dance (particularly ballet and modern), as pajamas, for additional layered warmth under clothing, and as recreational and casual wear. They may form a part of children's dressing-up and play outfits and can also be worn as a top. Leotards are commonly worn in figure skating, postwar modern dance, acrobatic rock'n'roll, traditional ballet and gymnastics, especially by young children. Practice leotards and those worn in podium training sessions are usually sleeveless. Female competition garments for gymnastics and skating are almost always long-sleeved. In contrast, male competition leotards may be sleeved or sleeveless, the latter more common in gymnastics, the former in figure skating. Leotards come in many styles, either with a full seated bottom or as a thong or T-front thong, for maximum comfort and avoidance of visible panty lines when worn under leggings or tights.

History
The first known use of the name leotard came only in 1886, years after Jules Léotard's death. Léotard called the garment a maillot, a general French word for different types of tight-fitting shirts or sports shirts. In the early 20th century, leotards were mainly confined to circus and acrobatic shows, worn by the specialists who performed these acts. Leotards influenced the style of swimsuits in the 1920s and 1930s, and women's one-piece swimsuits today still resemble leotards in appearance. Leotards are worn by professional dancers such as the showgirls of Broadway. Stage use of the leotard typically coordinates the garment with stockings or tights. In the 1950s, traditionally styled leotards continued to be worn mainly by stage performers and circus actors, but leotards also began to be used as simple and functional exercise garments, often in institutional settings like schools and fitness training. These were almost always black and worn together with thick tights. Leotards changed little in appearance between 1950 and 1970; a style change came in the 1970s, with more colorful leotards appearing on the scene, most often in ballet and exercise. Leotards were a staple of aerobic exercise attire during the 1970s and 1980s, but their popularity waned in the 1990s as they were largely replaced by Lycra pants, similar to those worn by cyclists. By the 2000s, they had given way to trousers and leggings, which offered a more modern and flexible alternative.
Nevertheless, leotards continue to be worn by female cyclists and athletes in competitive events, where their functionality and comfort remain valued. In the late 2010s, leotards began to be frequently worn by pop stars in their performances, such as Beyoncé, Little Mix, and Taylor Swift.

Crossover to fashion activewear
By the late 1970s, leotards had become common both as exercise and street wear, popularized by the disco craze and the aerobics fashion craze of the time. These leotards were produced in a variety of nylon and spandex materials, as well as the more traditional cotton previously used for uni-colored leotards and tights. Exercise videos by celebrities such as Jane Fonda also popularized the garment. The dancewear company Danskin flourished during this period, producing various leotards for both dance and streetwear. Other companies, such as Gilda Marx, produced leotards during this period and then ceased production when the garment fell out of fashion. By the late 1980s, leotards for exercise wear had become little more than bikini bottoms with straps over the shoulders, generally worn with cropped shirts. From the mid-1980s to the mid-1990s, leotards, usually in a cap-sleeved style or, in colder weather, a long-sleeved turtleneck style, were popularly worn as tops with jeans (especially skinny jeans and high-waisted, ankle-length mom jeans), under shortalls, or with casual or dress pants as part of everyday wear. They were also worn with skirt outfits. By the mid-1990s, leotards had been almost completely replaced for exercise wear by the sports bra and shorts.

Gymnastics attire
Women
For women, the standard gymnastic competition uniform is a leotard. Traditionally, competition leotards have always had long sleeves; however, half-length sleeved and sleeveless garments are now permitted under the Code of Points and have been worn by teams at the Gymnastics World Championships and other significant events. Practice leotards and those worn in podium training sessions are generally sleeveless. In the 1970s, leotards were typically made from polyester and related fabrics. Since the 1980s, however, they have been made from lycra or spandex. Since the 1990s, leotards have become more elaborate and have employed a variety of textiles, including velvet, velour, mesh, metallic fabrics, foils, and iridescent "hologram" fabric. They can also be decorated with rhinestones and metallic jewels that are heat-set onto the garments and will not fall or wash off. Leotards that conform to regulations cannot be cut above hip height or past the shoulder blades, back, or front. Any somewhat see-through leotard is also against the rules. Usage of white tights is not standard. In rare instances, gymnasts and teams have been penalized with score deductions for their attire.

Men
For competitions, male gymnasts wear two layers of clothing. The first, a singlet (or comp shirt, short for competition shirt), is a sleeveless garment like a leotard. For floor and vault, gymnasts wear a pair of very short shorts over the singlet. For their other events, they wear a pair of long pants attached to the bottom of their feet with stirrups. Unlike women's uniforms, which generally employ metallic or iridescent fabrics, men's uniforms are usually matte-colored and less ornate. Singlets usually employ one or more of the national team colors, but there are no restrictions on design. Shorts and pants are generally solid, typically white, blue, red, or black.
History
Olympic gymnastics team leotards have changed dramatically from their first memorable designs. Over time, the emphasis on what leotards are intended to do has changed. Originally, the intent was to cover as much of a woman's body as possible; today, leotards must breathe, improve aerodynamics, and seamlessly re-shape as female athletes bend, twist, and contort their way through increasingly difficult routines.

Men's leotards
When Léotard created the maillot, it was intended for men. This style of leotard can be seen in early 20th-century photos of the circus strong man. Men's leotards evolved along with the women's style, eventually resembling it, except that the men's version had a slightly lower-cut leg opening and a lower-cut front. Men's leotards come in two styles: with a full seated bottom or as a thong. The reason for this is apparent when worn with tights, such as in ballet, where lines created by the garment underneath the tights may be considered unsightly. A dance belt is also worn in such instances. Leotards are commonly worn by male dancers (particularly in ballet) and gymnasts. Leotard-like garments (often of the "biketard" or singlet type) are also often worn by men in sports such as rowing, wrestling, cycling, and running to maintain a tight fit and prevent the upper part of the clothing from riding up. During the Dangerous World Tour, American superstar Michael Jackson wore a gold leotard.

See also
Athleisure, Bodystocking, Bodysuit, Catsuit, Jumpsuit, Spandex, Sportswear, Underwear as outerwear, Wrestling singlet
Leotard
[ "Engineering" ]
1,894
[ "Costume design", "Design" ]
153,187
https://en.wikipedia.org/wiki/Intracytoplasmic%20sperm%20injection
Intracytoplasmic sperm injection (ICSI) is an in vitro fertilization (IVF) procedure in which a single sperm cell is injected directly into the cytoplasm of an egg. This technique is used to prepare the gametes for obtaining embryos that may be transferred to a maternal uterus. With this method, the acrosome reaction is skipped.

There are several differences between classic IVF and ICSI; however, the steps to be followed before and after insemination are the same. In terms of insemination, ICSI needs only one sperm cell per oocyte, while IVF needs 50,000–100,000, because in IVF the acrosome reaction has to take place and thousands of sperm cells have to be involved. Once fertilized, the egg is transformed into a pre-embryo, which has to be transferred to the uterus to continue its development. The first human pregnancy generated by ICSI was carried out in 1991 by Gianpiero Palermo and his team.

Round spermatid injection (ROSI)
Round spermatid injection (ROSI) is a technique of assisted reproduction whereby a round spermatid is injected into oocyte cytoplasm in order to achieve fertilization. This technique can be used to enable genetic fatherhood in some men who have no spermatozoa in the ejaculate (azoospermia) and in whom spermatozoa cannot be obtained surgically from the testicles. This condition is called non-obstructive, or secretory, azoospermia, as opposed to obstructive azoospermia, in which complete sperm production does occur in the testicles and potentially fertilizing spermatozoa can be obtained by testicular sperm extraction (TESE) and used for ICSI. In cases of non-obstructive (secretory) azoospermia, on the other hand, testicular sperm production is blocked at different stages of the process of sperm formation (spermatogenesis). In those men in whom spermatogenesis is blocked at the stage of round spermatids, in which meiosis has already been completed, these round cells can successfully fertilize oocytes after being injected into their cytoplasm.

Even though many technical aspects of ROSI are similar to those of ICSI, there are also significant differences between the two techniques. In the first place, as compared to spermatozoa, round spermatids do not possess easily perceptible morphological characteristics and are immotile. Consequently, the distinction between round spermatids and other round cells of similar size, such as leukocytes, is not an easy task. Moreover, the distinction between living round spermatids, to be used in ROSI, and dead round spermatids, to be discarded, requires specific methods and skills not needed in the case of ICSI, where sperm cell viability can in most cases be easily evaluated on the basis of sperm motility. The microinjection procedure for ROSI also differs slightly from that of ICSI, since additional stimuli are needed to ensure proper oocyte activation after spermatid injection. If all requirements for round spermatid selection and injection are successfully met, the injected oocytes develop to early embryos and can be transferred to the mother's uterus to produce pregnancy.

The first successful pregnancies and births with the use of ROSI were achieved in 1995 by Jan Tesarik and his team. The clinical potential of ROSI in the treatment of male infertility due to the total absence of spermatozoa has been corroborated recently by a publication reporting on the postnatal development of 90 babies born in Japan and 17 in Spain.
Based on the evaluation of the babies born, no abnormalities attributable to the ROSI technique have been identified.

Indications
This procedure is most commonly used to overcome male infertility problems, although it may also be used where eggs cannot easily be penetrated by sperm, and occasionally in addition to sperm donation. It can be used in teratozoospermia, because once the egg is fertilized, abnormal sperm morphology does not appear to influence blastocyst development or blastocyst morphology. Even with severe teratozoospermia, microscopy can still detect the few sperm cells that have a "normal" morphology, allowing for an optimal success rate. Additionally, specialists use ICSI in cases of azoospermia (when there are no spermatozoa in the ejaculate but they can be found in the testis), when banked spermatozoa are available (sperm samples frozen to preserve fertility for use after chemotherapy), or after failed fertilization in previous IVF cycles.

Sperm selection
Before performing ICSI, in vitro sperm selection and capacitation have to be done. Apart from the most common techniques of in vitro sperm capacitation (swim-up, density gradients, filtration and simple wash), some newer techniques are useful and have advantages over older methods. One of these is the use of microfluidic chips, like the Zymot ICSI chip invented by Prof. Utkan Demirci. This chip is a device that helps identify the highest-quality spermatozoa for the ICSI technique. It reproduces the conditions of the vagina, resulting in a more natural spermatozoa selection. One of the main advantages of this method is spermatozoa quality, as the selected cells have better motility and morphology, little DNA fragmentation, and a lower quantity of reactive oxygen species (ROS). Another way to perform the selection is the MACS technique, which uses tiny magnetic particles linked to an antibody (annexin V) able to identify the more viable spermatozoa. When the semen sample is passed through a column with a magnetic field, apoptotic spermatozoa are retained in the column while the healthy ones are easily obtained at the bottom of it. PICSI is a further method derived from this one; the only difference is the selection process of the spermatozoa. In this case, they are placed on a plate containing drops of a synthetic compound similar to hyaluronic acid (HA). Mature spermatozoa can be identified because they bind to the HA drops: only mature sperm have a receptor for hyaluronic acid, which they need because this acid surrounds the oocyte, and sperm must be able to bind to it and digest it in order to fertilize the oocyte. After the mature spermatozoa have been selected, they can be used for the microinjection of oocytes. Sperm selected by hyaluronic acid binding appears to have little or no effect on whether a live birth results, but may reduce miscarriage.

History
The first child born from gamete micromanipulation (a technique in which special tools and inverted microscopes are used to help embryologists choose and pick an individual sperm for ICSI IVF) was a Singapore-born child in April 1989. The technique was developed by Gianpiero Palermo at the Vrije Universiteit Brussel, in the Center for Reproductive Medicine headed by Paul Devroey and Andre Van Steirteghem. In fact, the discovery was made by mistake. The procedure itself was first performed in 1987, though it only went to the pronuclear stage.
The first activated embryo by ICSI was produced in 1990, but the first successful birth by ICSI took place on January 14, 1992, after an April 1991 conception. Sharpe et al. comment on the success of ICSI since 1992: "[t]hus, the woman carries the treatment burden for male infertility, a fairly unique scenario in medical practice. ICSI's success has effectively diverted attention from identifying what causes male infertility and focused research onto the female, to optimize the provision of eggs and a receptive endometrium, on which ICSI's success depends."

Procedure
ICSI is generally performed following a transvaginal oocyte retrieval procedure to extract one or several oocytes from a woman. In ICSI IVF, the male partner or a donor provides a sperm sample on the same day the eggs are collected. The sample is checked in the lab, and if no sperm is present, doctors will extract sperm from the epididymis or testicle. Extraction of sperm from the epididymis is known as percutaneous epididymal sperm aspiration (PESA), and extraction of sperm from the testicle is known as testicular sperm aspiration (TESA). Depending on the total amount of spermatozoa in the semen sample, either low or high, it can be just washed or capacitated via swim-up or gradients, respectively.

The procedure is done under a microscope using multiple micromanipulation devices (micromanipulator, microinjectors and micropipettes). A holding pipette stabilizes the mature oocyte with gentle suction applied by a microinjector. From the opposite side, a thin, hollow glass micropipette is used to collect a single sperm, having immobilised it by cutting its tail with the point of the micropipette. The oocyte is pierced through the oolemma and the sperm is directed into the inner part of the oocyte (cytoplasm), where it is then released. A mature oocyte shows an extruded polar body; the polar body is positioned at the 12 or 6 o'clock position to ensure that the inserted micropipette does not disrupt the spindle inside the egg. After the procedure, the oocyte is placed into cell culture and checked on the following day for signs of fertilization.

In contrast, in natural fertilization sperm compete, and when the first sperm penetrates the oolemma, the oolemma hardens to block the entry of any other sperm. Concern has been raised that in ICSI this sperm selection process is bypassed and the sperm is selected by the embryologist without any specific testing. However, in mid-2006 the FDA cleared a device that allows embryologists to select mature sperm for ICSI based on sperm binding to hyaluronan, the main constituent of the gel layer (cumulus oophorus) surrounding the oocyte. The device provides microscopic droplets of hyaluronan hydrogel attached to the culture dish. The embryologist places the prepared sperm on the microdot, then selects and captures sperm that bind to the dot. Basic research on the maturation of sperm shows that hyaluronan-binding sperm are more mature and show fewer DNA strand breaks and significantly lower levels of aneuploidy than the sperm population from which they were selected. A brand name for one such sperm selection device is PICSI. A recent clinical trial showed a sharp reduction in miscarriage with embryos derived from PICSI sperm selection. "Washed" or "unwashed" sperm may be used in the process. Live birth rates are significantly higher when progesterone is used to assist implantation in ICSI cycles.
Also, the addition of a GnRH agonist has been estimated to increase success rates. Ultra-high magnification sperm injection (IMSI) has shown no evidence of increased live birth or miscarriage rates compared to standard ICSI. A newer variation of the standard ICSI procedure, called Piezo-ICSI, uses small axial mechanical pulses (Piezo pulses) to lower stress on the cytoskeleton during zona pellucida and oolemma breakage. The procedure includes specialized Piezo actuators, microcapillaries, and filling medium to transfer the mechanical pulses to the cell membranes. The Piezo technique itself was established, for example, for animal ICSI and animal ES cell transfer.

Assisted zona hatching (AH)
People who have experienced repeated implantation failure, or whose embryos have a thick zona pellucida (covering), are ideal candidates for assisted zona hatching. The procedure involves creating a hole in the zona to improve the chances of normal implantation of the embryo in the uterus.

Preimplantation genetic diagnosis (PGD)
PGD is a process in which one or two cells from an embryo on Day 3 or Day 5 are extracted and genetically analyzed. Couples who are at high risk of having an abnormal number of chromosomes, or who have a history of single gene defects or chromosome defects, are ideal candidates for this procedure. It is used at present to diagnose a large number of genetic defects.

Success or failure factors
One of the areas in which sperm injection can be useful is vasectomy reversal. However, potential factors that may influence pregnancy rates (and live birth rates) in ICSI include the level of DNA fragmentation as measured e.g. by the comet assay, advanced maternal age, and semen quality. It is uncertain whether ICSI improves live birth rates or reduces the risk of miscarriage compared with ultra-high magnification (IMSI) sperm selection. A systematic meta-analysis of 24 estimates of DNA damage based on a variety of techniques concluded that sperm DNA damage negatively affects clinical pregnancy following ICSI. Numerous biochemical markers have been shown to be associated with oocyte quality for ICSI. For example, it was shown that after ICSI the follicular fluid of unfertilized oocytes contains high levels of cytotoxicity and oxidative stress markers, such as Cu,Zn-superoxide dismutase, catalase, and the lipoperoxidation product 4-hydroxynonenal (4-HNE)-protein conjugates.

Complications
There is some suggestion that birth defects are increased with the use of IVF in general, and ICSI specifically, though different studies show contradictory results. In a summary position paper, the Practice Committee of the American Society for Reproductive Medicine said it considers ICSI a safe and effective therapy for male factor infertility, but one that may carry an increased risk for the transmission of selected genetic abnormalities to offspring, either through the procedure itself or through the increased inherent risk of such abnormalities in parents undergoing the procedure. There is not enough evidence to say that ICSI procedures are safe in females with hepatitis B in regard to vertical transmission to the offspring, since the puncture of the oocyte can potentially allow vertical transmission to the offspring.

Follow-up on fetus
In addition to regular prenatal care, prenatal aneuploidy screening based on maternal age, nuchal translucency scan and biomarkers is appropriate. However, biomarkers seem to be altered in pregnancies resulting from ICSI, causing a higher false-positive rate.
Correction factors have been developed and should be used when screening for Down syndrome in singleton pregnancies after ICSI, but in twin pregnancies such correction factors have not been fully elucidated. In vanishing twin pregnancies with a second gestational sac containing a dead fetus, first trimester screening should be based solely on the maternal age and the nuchal translucency scan, as biomarkers are significantly altered in these cases.

See also
Reproductive technology, Ernestine Gwet Bell

External links
The Human Fertilisation and Embryology Authority (HFEA). The Epigenome Network of Excellence (NoE). Test tube baby process. Assisted zona hatching.
Intracytoplasmic sperm injection
[ "Biology" ]
3,227
[ "Assisted reproductive technology", "Medical technology" ]
153,197
https://en.wikipedia.org/wiki/Periodic%20table%20%28electron%20configurations%29
Configurations of elements 109 and above are not available; predictions from reliable sources have been used for these elements. In the table, grayed-out electron numbers indicate subshells filled to their maximum, and bracketed noble gas symbols on the left represent inner configurations that are the same in each period. Written out, these are:

He, 2, helium: 1s2
Ne, 10, neon: 1s2 2s2 2p6
Ar, 18, argon: 1s2 2s2 2p6 3s2 3p6
Kr, 36, krypton: 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6
Xe, 54, xenon: 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6
Rn, 86, radon: 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2 4f14 5d10 6p6
Og, 118, oganesson: 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2 4f14 5d10 6p6 7s2 5f14 6d10 7p6

Note that these electron configurations are given for neutral atoms in the gas phase, which are not the same as the electron configurations for the same atoms in chemical environments. In many cases, multiple configurations are within a small range of energies, and the small irregularities that arise in the d- and f-blocks are quite irrelevant chemically. The construction of the periodic table ignores these irregularities and is based on ideal electron configurations. Note the non-linear shell ordering, which comes about due to the different energies of smaller and larger shells.

References: see the list of sources at Electron configurations of the elements (data page).
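The idealized configurations the table is built on follow the Madelung (aufbau) ordering: subshells fill in order of increasing n + l, with ties broken by smaller n. A short Python sketch that generates these ideal configurations, deliberately ignoring the d- and f-block irregularities mentioned above (function names are my own):

```python
def ideal_configuration(z):
    """Idealized ground-state configuration of a neutral atom with z
    electrons, filled strictly by the Madelung rule (n + l, then n)."""
    letters = "spdfghik"  # subshell letters for l = 0..7 (j is skipped)
    # All subshells up to n = 8, sorted Madelung-style: 1s, 2s, 2p, 3s, 3p, 4s, 3d, ...
    subshells = sorted(((n, l) for n in range(1, 9) for l in range(n)),
                       key=lambda nl: (nl[0] + nl[1], nl[0]))
    parts = []
    for n, l in subshells:
        if z <= 0:
            break
        fill = min(z, 2 * (2 * l + 1))  # subshell capacity is 2(2l + 1)
        parts.append(f"{n}{letters[l]}{fill}")
        z -= fill
    return " ".join(parts)

print(ideal_configuration(18))  # Ar: 1s2 2s2 2p6 3s2 3p6
print(ideal_configuration(26))  # Fe: 1s2 2s2 2p6 3s2 3p6 4s2 3d6
print(ideal_configuration(29))  # ideal Cu: ...4s2 3d9 (the real atom is 4s1 3d10)
```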
Periodic table (electron configurations)
[ "Chemistry" ]
419
[ "Periodic table" ]
153,208
https://en.wikipedia.org/wiki/System%20dynamics
System dynamics (SD) is an approach to understanding the nonlinear behaviour of complex systems over time using stocks, flows, internal feedback loops, table functions and time delays.

Overview
System dynamics is a methodology and mathematical modeling technique to frame, understand, and discuss complex issues and problems. Originally developed in the 1950s to help corporate managers improve their understanding of industrial processes, SD is currently being used throughout the public and private sector for policy analysis and design. Convenient graphical user interface (GUI) system dynamics software was developed into user-friendly versions by the 1990s and has been applied to diverse systems. SD models solve the problem of simultaneity (mutual causation) by updating all variables in small time increments, with positive and negative feedbacks and time delays structuring the interactions and control. The best-known SD model is probably the 1972 study The Limits to Growth. This model forecast that exponential growth of population and capital, with finite resource sources and sinks and perception delays, would lead to economic collapse during the 21st century under a wide variety of growth scenarios.

System dynamics is an aspect of systems theory as a method to understand the dynamic behavior of complex systems. The basis of the method is the recognition that the structure of any system, the many circular, interlocking, sometimes time-delayed relationships among its components, is often just as important in determining its behavior as the individual components themselves. Examples are chaos theory and social dynamics. It is also claimed that because there are often properties-of-the-whole which cannot be found among the properties-of-the-elements, in some cases the behavior of the whole cannot be explained in terms of the behavior of the parts.

History
System dynamics was created during the mid-1950s by Professor Jay Forrester of the Massachusetts Institute of Technology. In 1956, Forrester accepted a professorship in the newly formed MIT Sloan School of Management. His initial goal was to determine how his background in science and engineering could be brought to bear, in some useful way, on the core issues that determine the success or failure of corporations. Forrester's insights into the common foundations that underlie engineering, which led to the creation of system dynamics, were triggered, to a large degree, by his involvement with managers at General Electric (GE) during the mid-1950s. At that time, the managers at GE were perplexed because employment at their appliance plants in Kentucky exhibited a significant three-year cycle. The business cycle was judged to be an insufficient explanation for the employment instability. From hand simulations (or calculations) of the stock-flow-feedback structure of the GE plants, which included the existing corporate decision-making structure for hiring and layoffs, Forrester was able to show how the instability in GE employment was due to the internal structure of the firm and not to an external force such as the business cycle. These hand simulations were the start of the field of system dynamics.

During the late 1950s and early 1960s, Forrester and a team of graduate students moved the emerging field of system dynamics from the hand-simulation stage to the formal computer modeling stage.
Richard Bennett created the first system dynamics computer modeling language, called SIMPLE (Simulation of Industrial Management Problems with Lots of Equations), in the spring of 1958. In 1959, Phyllis Fox and Alexander Pugh wrote the first version of DYNAMO (DYNAmic MOdels), an improved version of SIMPLE, and the system dynamics language became the industry standard for over thirty years. Forrester published the first, and still classic, book in the field, titled Industrial Dynamics, in 1961.

From the late 1950s to the late 1960s, system dynamics was applied almost exclusively to corporate/managerial problems. In 1968, however, an unexpected occurrence caused the field to broaden beyond corporate modeling. John F. Collins, the former mayor of Boston, was appointed a visiting professor of Urban Affairs at MIT. The result of the Collins-Forrester collaboration was a book titled Urban Dynamics. The Urban Dynamics model presented in the book was the first major non-corporate application of system dynamics.

In 1967, Richard M. Goodwin published the first edition of his paper "A Growth Cycle", which was the first attempt to apply the principles of system dynamics to economics. He devoted most of his life to teaching what he called "Economic Dynamics", which could be considered a precursor of modern non-equilibrium economics.

The second major non-corporate application of system dynamics came shortly after the first. In 1970, Jay Forrester was invited by the Club of Rome to a meeting in Bern, Switzerland. The Club of Rome is an organization devoted to solving what its members describe as the "predicament of mankind", that is, the global crisis that may appear sometime in the future due to the demands being placed on the Earth's carrying capacity (its sources of renewable and nonrenewable resources and its sinks for the disposal of pollutants) by the world's exponentially growing population. At the Bern meeting, Forrester was asked if system dynamics could be used to address the predicament of mankind. His answer, of course, was that it could. On the plane back from the Bern meeting, Forrester created the first draft of a system dynamics model of the world's socioeconomic system. He called this model WORLD1. Upon his return to the United States, Forrester refined WORLD1 in preparation for a visit to MIT by members of the Club of Rome. Forrester called the refined version of the model WORLD2. Forrester published WORLD2 in a book titled World Dynamics.

Topics in system dynamics
The primary elements of system dynamics diagrams are feedback, accumulation of flows into stocks, and time delays. As an illustration of the use of system dynamics, imagine an organisation that plans to introduce an innovative new durable consumer product. The organisation needs to understand the possible market dynamics in order to design marketing and production plans.

Causal loop diagrams
In the system dynamics methodology, a problem or a system (e.g., ecosystem, political system or mechanical system) may be represented as a causal loop diagram. A causal loop diagram is a simple map of a system with all its constituent components and their interactions. By capturing interactions and consequently the feedback loops (see figure below), a causal loop diagram reveals the structure of a system. By understanding the structure of a system, it becomes possible to ascertain a system's behavior over a certain time period.
The causal loop diagram of the new product introduction contains two feedback loops. The positive reinforcement loop (labeled R) on the right indicates that the more people have already adopted the new product, the stronger the word-of-mouth impact: there will be more references to the product, more demonstrations, and more reviews. This positive feedback should generate sales that continue to grow. The second feedback loop, on the left, is negative reinforcement (or "balancing", and hence labeled B). Clearly, growth cannot continue forever, because as more and more people adopt, there remain fewer and fewer potential adopters. Both feedback loops act simultaneously, but at different times they may have different strengths. Thus one might expect growing sales in the initial years, and then declining sales in the later years. However, in general a causal loop diagram does not specify the structure of a system sufficiently to permit determination of its behavior from the visual representation alone.

Stock and flow diagrams
Causal loop diagrams aid in visualizing a system's structure and behavior, and in analyzing the system qualitatively. To perform a more detailed quantitative analysis, a causal loop diagram is transformed to a stock and flow diagram. A stock and flow model helps in studying and analyzing the system in a quantitative way; such models are usually built and simulated using computer software. A stock is the term for any entity that accumulates or depletes over time. A flow is the rate of change in a stock. In this example, there are two stocks: Potential adopters and Adopters. There is one flow: New adopters. For every new adopter, the stock of potential adopters declines by one, and the stock of adopters increases by one.

Equations
The real power of system dynamics is utilised through simulation. Although it is possible to perform the modeling in a spreadsheet, there are a variety of software packages that have been optimised for this. The steps involved in a simulation are:

Define the problem boundary.
Identify the most important stocks and the flows that change these stock levels.
Identify sources of information that impact the flows.
Identify the main feedback loops.
Draw a causal loop diagram that links the stocks, flows and sources of information.
Write the equations that determine the flows.
Estimate the parameters and initial conditions. These can be estimated using statistical methods, expert opinion, market research data or other relevant sources of information.
Simulate the model and analyse results.

In this example, the equations that change the two stocks via the flow are:

Potential adopters = Potential adopters − New adopters
Adopters = Adopters + New adopters

Equations in discrete time
The equations in discrete time are executed in order in each year, for years 1 to 15; a runnable sketch of the model is given below.

Dynamic simulation results
The dynamic simulation results show that the behaviour of the system would be growth in adopters following a classic s-curve shape: the increase in adopters is very slow initially, then there is exponential growth for a period, followed ultimately by saturation.

Equations in continuous time
To get intermediate values and better accuracy, the model can run in continuous time: we multiply the number of units of time and we proportionally divide the values that change stock levels. In this example we multiply the 15 years by 4 to obtain 60 quarters, and we divide the value of the flow by 4.
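A runnable Python sketch of the adoption model follows. The flow equation uses the classic Bass diffusion structure suggested by the two loops above (outside influence plus word of mouth); the parameters p, q, and the population size are my own illustrative assumptions, not values from the original model:

```python
def simulate(years=15, steps_per_year=4, population=1_000_000,
             p=0.03, q=0.4):
    """Euler integration of the two-stock adoption model.
    p: adoption fraction from outside influence (innovators)
    q: adoption fraction from word of mouth (imitators)"""
    dt = 1.0 / steps_per_year
    potential, adopters = float(population), 0.0
    history = [(0.0, adopters)]
    for step in range(years * steps_per_year):
        # Flow: innovators plus imitators, the two loops acting together
        new_adopters = (p * potential
                        + q * adopters * potential / population) * dt
        potential -= new_adopters  # balancing loop B: the pool drains
        adopters += new_adopters   # reinforcing loop R: word of mouth grows
        history.append(((step + 1) * dt, adopters))
    return history

for t, a in simulate()[::12]:  # print every 3 years; output traces the s-curve
    print(f"year {t:4.1f}: {a:9.0f} adopters")
```

Setting steps_per_year=1 gives the discrete-time (yearly) version; larger values correspond to the finer continuous-time runs described in the text.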
Dividing the value is simplest with the Euler method, but other methods could be employed instead, such as Runge–Kutta methods. The equations in continuous time, for quarters 1 to 60, are the same as in the section Equations in discrete time above, except that the two stock-update equations (4.1 and 4.2) are replaced by versions that add only the fraction of the flow corresponding to one quarter. In the stock and flow diagram, the intermediate flow "Valve New adopters" calculates this scaled value: Valve New adopters = New adopters × TimeStep.

Application
System dynamics has found application in a wide range of areas, for example population, agricultural, epidemiological, ecological and economic systems, which usually interact strongly with each other. System dynamics has various "back of the envelope" management applications. It is a potent tool to:

Teach system thinking reflexes to persons being coached
Analyze and compare assumptions and mental models about the way things work
Gain qualitative insight into the workings of a system or the consequences of a decision
Recognize archetypes of dysfunctional systems in everyday practice

Computer software is used to simulate a system dynamics model of the situation being studied. Running "what if" simulations to test certain policies on such a model can greatly aid in understanding how the system changes over time. System dynamics is very similar to systems thinking and constructs the same causal loop diagrams of systems with feedback. However, system dynamics typically goes further and utilises simulation to study the behaviour of systems and the impact of alternative policies. System dynamics has been used to investigate resource dependencies, and resulting problems, in product development.

A system dynamics approach to macroeconomics, known as Minsky, has been developed by the economist Steve Keen. This has been used to successfully model world economic behaviour from the apparent stability of the Great Moderation to the sudden unexpected Financial crisis of 2007–08.

Example: Growth and decline of companies
The figure referred to here is a causal loop diagram of a system dynamics model created to examine forces that may be responsible for the growth or decline of life insurance companies in the United Kingdom. A number of this figure's features are worth mentioning. The first is that the model's negative feedback loops are identified by C's, which stand for Counteracting loops. The second is that double slashes are used to indicate places where there is a significant delay between causes (i.e., variables at the tails of arrows) and effects (i.e., variables at the heads of arrows). This is a common causal loop diagramming convention in system dynamics. Third, thicker lines are used to identify the feedback loops and links that the author wishes the audience to focus on. This is also a common system dynamics diagramming convention. Last, it is clear that a decision maker would find it impossible to think through the dynamic behavior inherent in the model from inspection of the figure alone.

Example: Piston motion
The objective is the study of a crank-connecting rod system, modeled through a system dynamics model. Two different full descriptions of the physical system, with related systems of equations, can be found in the linked references; they give the same results. In this example, the crank, with variable radius and angular frequency, will drive a piston with a variable connecting rod length. The system is now modeled according to a stock and flow system dynamics logic.
Application System dynamics has found application in a wide range of areas, for example population, agriculture, epidemiological, ecological and economic systems, which usually interact strongly with each other. System dynamics also has various "back of the envelope" management applications. It is a potent tool to: Teach system thinking reflexes to persons being coached Analyze and compare assumptions and mental models about the way things work Gain qualitative insight into the workings of a system or the consequences of a decision Recognize archetypes of dysfunctional systems in everyday practice Computer software is used to simulate a system dynamics model of the situation being studied. Running "what if" simulations to test certain policies on such a model can greatly aid in understanding how the system changes over time. System dynamics is very similar to systems thinking and constructs the same causal loop diagrams of systems with feedback. However, system dynamics typically goes further and utilises simulation to study the behaviour of systems and the impact of alternative policies. System dynamics has been used to investigate resource dependencies, and the resulting problems, in product development. A system dynamics approach to macroeconomics, known as Minsky, has been developed by the economist Steve Keen. This has been used to model world economic behaviour from the apparent stability of the Great Moderation to the sudden and unexpected financial crisis of 2007–08. Example: Growth and decline of companies The figure above is a causal loop diagram of a system dynamics model created to examine forces that may be responsible for the growth or decline of life insurance companies in the United Kingdom. A number of this figure's features are worth mentioning. The first is that the model's negative feedback loops are identified by C's, which stand for counteracting loops. The second is that double slashes are used to indicate places where there is a significant delay between causes (i.e., variables at the tails of arrows) and effects (i.e., variables at the heads of arrows). This is a common causal loop diagramming convention in system dynamics. The third is that thicker lines are used to identify the feedback loops and links that the author wishes the audience to focus on. This is also a common system dynamics diagramming convention. Last, it is clear that a decision maker would find it impossible to think through the dynamic behavior inherent in the model from inspection of the figure alone. Example: Piston motion Objective: study of a crank-connecting rod system. We want to model a crank-connecting rod system through a system dynamics model. Two different full descriptions of the physical system, with the related systems of equations, can be found in the literature; they give the same results. In this example, the crank, with variable radius and angular frequency, will drive a piston with a variable connecting rod length. System dynamic modeling: the system is now modeled according to stock-and-flow system dynamics logic. The figure below shows the stock and flow diagram. Simulation: the behavior of the crank-connecting rod dynamic system can then be simulated. The next figure is a 3D simulation created using procedural animation. Variables of the model animate all parts of this animation: crank, radius, angular frequency, rod length, and piston position. See also Related subjects Causal loop diagram Comparison of system dynamics software Ecosystem model Plateau Principle System archetypes System Dynamics Society Twelve leverage points Wicked problems World3 Population dynamics Predator-prey interaction Related fields Dynamical systems theory Grey box model Operations research Social dynamics System identification Systems theory Systems thinking Cybernetics TRIZ Related scientists Jay Forrester Dennis Meadows Donella Meadows Peter Senge Graeme Snooks John Sterman References Further reading External links System Dynamics Society Study prepared for the U.S. Department of Energy's Introducing System Dynamics Desert Island Dynamics: "An Annotated Survey of the Essential System Dynamics Literature" True World: Temporal Reasoning Universal Elaboration: system dynamics software used for diagrams in this article (free) Dynamics Operations research Problem structuring methods
System dynamics
[ "Mathematics" ]
2,836
[ "Applied mathematics", "Operations research" ]
153,209
https://en.wikipedia.org/wiki/Air-augmented%20rocket
Air-augmented rockets use the supersonic exhaust of a rocket engine to further compress air collected by ram effect during flight, using it as additional working mass, leading to greater effective thrust for any given amount of fuel than either the rocket or a ramjet alone. They represent a hybrid class of rocket/ramjet engines, similar to a ramjet but able to give useful thrust from zero speed, and in some cases able to operate outside the atmosphere, with fuel efficiency no worse than that of a comparable ramjet or rocket at any point. There are a wide variety of variations on the basic concept, and a wide variety of resulting names. Those that burn additional fuel downstream of the rocket are generally known as ramrockets, rocket-ejectors, integral rocket/ramjets or ejector ramjets, whilst those that do not include additional burning are known as ducted rockets or shrouded rockets, depending on the details of the expander. Operation In a conventional chemical rocket engine, the rocket carries both its fuel and oxidizer in its fuselage. The chemical reaction between the fuel and the oxidizer produces reaction products which are nominally gases at the pressures and temperatures in the rocket's combustion chamber. The reaction is also highly energetic (exothermic), releasing tremendous energy in the form of heat; this is imparted to the reaction products in the combustion chamber, giving the mass enormous internal energy which, when expanded through a nozzle, is capable of producing very high exhaust velocities. The exhaust is directed rearward through the nozzle, thereby producing a thrust forward. In this conventional design, the fuel/oxidizer mixture is both the working mass and the energy source that accelerates it. It is easy to demonstrate that the best performance is obtained when the working mass has the lowest molecular weight possible. Hydrogen, by itself, is the theoretical best rocket fuel. Mixing it with oxygen in order to burn it lowers the overall performance of the system by raising the mass of the exhaust, as well as greatly increasing the mass that has to be carried aloft: oxygen is much heavier than hydrogen. One potential method of increasing the overall performance of the system is to collect either the fuel or the oxidizer during flight. Fuel is hard to come by in the atmosphere, but oxidizer in the form of gaseous oxygen makes up about 20% of the air. There are a number of designs that take advantage of this fact; such systems have been explored in the liquid air cycle engine (LACE). Another idea is to collect the working mass. With an air-augmented rocket, an otherwise conventional rocket engine is mounted in the center of a long tube, open at the front. As the rocket moves through the atmosphere the air enters the front of the tube, where it is compressed via the ram effect. As it travels down the tube it is further compressed and mixed with the fuel-rich exhaust from the rocket engine, which heats the air much as a combustor would in a ramjet. In this way a fairly small rocket can be used to accelerate a much larger working mass than normal, leading to significantly higher thrust within the atmosphere. Advantages The effectiveness of this simple method can be dramatic. Typical solid rockets have a specific impulse of about 260 seconds (2.5 kN·s/kg), but using the same fuel in an air-augmented design can improve this to over 500 seconds (4.9 kN·s/kg), a figure beyond even high-specific-impulse hydrolox engines.
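The scale of the gain can be estimated with a simple momentum balance. The sketch below assumes ideal, loss-free mixing in which the rocket's jet power is conserved and shared with the entrained air, and it ignores intake momentum drag, so the numbers are a static upper bound rather than a prediction of flight performance.

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def augmented_isp(isp_rocket_s: float, air_ratio: float) -> float:
    """Idealized effective specific impulse when `air_ratio` kilograms of
    air are entrained per kilogram of propellant, assuming the jet's
    kinetic power is shared loss-free with the entrained air (static
    conditions, intake momentum drag ignored)."""
    v_exhaust = isp_rocket_s * G0              # rocket-only exhaust velocity
    jet_power = 0.5 * v_exhaust ** 2           # jet kinetic energy per kg of propellant
    total_mass = 1.0 + air_ratio               # propellant + entrained air, per kg
    v_mixed = math.sqrt(2.0 * jet_power / total_mass)
    return total_mass * v_mixed / G0           # thrust per unit propellant weight flow

for ratio in (0.0, 1.0, 3.0):
    print(f"air/propellant = {ratio:.0f}: effective Isp ≈ "
          f"{augmented_isp(260.0, ratio):.0f} s")
```

Under these assumptions the effective specific impulse scales with the square root of the total-to-propellant mass ratio, so entraining roughly three kilograms of air per kilogram of propellant would double a 260-second solid motor to the 500-second class quoted above; real ducts recover only part of this.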
This design can even be slightly more efficient than a ramjet, as the exhaust from the rocket engine helps compress the air more than a ramjet normally would; this raises the combustion efficiency, as a longer, more efficient nozzle can be employed. Another advantage is that the rocket works even at zero forward speed, whereas a ramjet requires forward motion to feed air into the engine. Disadvantages One might expect such an increase in performance to be widely deployed, but various issues frequently preclude this. The intakes of high-speed engines are difficult to design, and require careful positioning on the airframe in order to achieve reasonable performance; in general, the entire airframe needs to be built around the intake design. Another problem is that the air thins out as the rocket climbs, so the amount of additional thrust is limited by how quickly the vehicle gains altitude. Finally, the air ducting adds considerable weight, which hurts performance toward the end of the burn. Variations Shrouded rocket The simplest version of an air-augmentation system is found in the shrouded rocket. This consists largely of a rocket motor or motors positioned in a duct. The rocket exhaust entrains the air, pulling it through the duct, while also mixing with it and heating it, causing the pressure to increase downstream of the rocket. The resulting hot gas is then further expanded through an expanding nozzle. Ducted rocket A slight variation on the shrouded rocket, the ducted rocket adds only a convergent-divergent nozzle. This ensures the combustion takes place at subsonic speeds, improving the range of vehicle speeds over which the system remains useful. Ejector ramjet (et al.) The ejector ramjet is a more complex system with potentially higher performance. Like the shrouded and ducted rocket, the system begins with one or more rocket engines in an air intake. It differs in that the mixed exhaust enters a diffuser, slowing the airflow to subsonic speeds. Additional fuel is then injected, burning in this expanded section. The exhaust of that combustion then enters a convergent-divergent nozzle, as in a conventional ramjet or the ducted rocket case. History The first serious attempt to make a production air-augmented rocket was the Soviet Gnom rocket design, initiated by Decree 708-336 of the Soviet Council of Ministers of 2 July 1958. Around 2002, NASA re-examined similar technology for the GTX program as part of an effort to develop SSTO spacecraft. Air-augmented rockets finally entered mass production in 2016, when the Meteor air-to-air missile was introduced into service. See also Index of aviation articles Liquid air cycle engine (collecting oxidizer instead of working mass) References Citations Bibliography Gnom NASA GTX Rocket propulsion Ramjet engines Industrial design Soviet inventions
Air-augmented rocket
[ "Engineering" ]
1,311
[ "Industrial design", "Design engineering", "Design" ]
153,215
https://en.wikipedia.org/wiki/Working%20mass
Working mass, also referred to as reaction mass, is a mass against which a system operates in order to produce acceleration. In the case of a chemical rocket, for example, the reaction mass is the product of the burned fuel, shot backwards to provide propulsion. All acceleration requires an exchange of momentum, which can be thought of as the "unit of movement". Momentum is related to mass and velocity by the formula P = mv, where P is the momentum, m the mass, and v the velocity. The velocity of a body is easily changed, but in most cases its mass is not, which is what makes the working mass important. Rockets and rocket-like reaction engines In rockets, the total velocity change can be calculated using the Tsiolkovsky rocket equation: v = u ln((M + m) / M), where: v = ship velocity. u = exhaust velocity. M = ship mass, not including the working mass. m = total mass ejected from the ship (working mass). The term working mass is used primarily in the aerospace field. In more "down to earth" examples, the working mass is typically provided by the Earth, which contains so much momentum in comparison to most vehicles that the amount it gains or loses can be ignored. However, in the case of an aircraft the working mass is the air, and in the case of a rocket it is the rocket fuel itself. Most rocket engines use lightweight propellants (liquid hydrogen, oxygen, or kerosene) accelerated to supersonic speeds. However, ion engines often use heavier elements like xenon as the reaction mass, accelerated to much higher speeds using electric fields. In many cases the working mass is separate from the energy used to accelerate it. In a car, the engine provides power to the wheels, which then accelerate the Earth backward to make the car move forward. This is not the case for most rockets, however, where the rocket propellant is the working mass as well as the energy source. This means that rockets stop accelerating as soon as they run out of fuel, regardless of any other power sources they may have. This can be a problem for satellites that need to be repositioned often, as it limits their useful life. In general, the exhaust velocity should be close to the ship velocity for optimum energy efficiency. This limitation of rocket propulsion is one of the main motivations for the ongoing interest in field propulsion technology.
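As a quick illustration of the equation above, the sketch below computes the velocity change for a vehicle whose working mass is a multiple of its dry mass; the numbers are arbitrary illustrative values.

```python
import math

def delta_v(u: float, dry_mass: float, working_mass: float) -> float:
    """Tsiolkovsky rocket equation: velocity change from expelling
    `working_mass` at exhaust velocity `u` (all SI units)."""
    return u * math.log((dry_mass + working_mass) / dry_mass)

# Example: 4500 m/s exhaust velocity (hydrolox-class), 10 t dry vehicle.
for m_work in (10_000, 30_000, 90_000):   # kg of working mass
    dv = delta_v(4500.0, 10_000.0, m_work)
    print(f"{m_work / 1000:>3.0f} t expelled -> delta-v = {dv:6.0f} m/s")
```

The logarithm is the crux: each multiplication of the mass ratio buys only an additive increment of velocity, which is why carrying the working mass on board becomes so punishing for high delta-v missions.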
See also Rocket equation Aerospace engineering Mass
Working mass
[ "Physics", "Mathematics", "Engineering" ]
493
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Mass", "Size", "Aerospace engineering", "Wikipedia categories named after physical quantities", "Matter" ]
153,217
https://en.wikipedia.org/wiki/Carrier%20wave
In telecommunications, a carrier wave, carrier signal, or just carrier, is a periodic waveform (usually sinusoidal) that carries no information by itself. Through a process called modulation, one or more of the wave's properties are modified by an information-bearing signal (called the message signal or modulation signal) to convey information. The carrier wave usually has a much higher frequency than the message signal, because it is generally impractical to transmit signals directly at low frequencies. The purpose of the carrier is usually either to transmit the information through space as an electromagnetic wave (as in radio communication), or to allow several carriers at different frequencies to share a common physical transmission medium by frequency division multiplexing (as in a cable television system). The term originated in radio communication, where the carrier wave creates the radio waves which carry the information (modulation) through the air from the transmitter to the receiver. The term is also used for an unmodulated emission in the absence of any modulating signal. In music production, carrier signals can be controlled by a modulating signal to change the sonic character of an audio recording and add a sense of depth and movement. Overview The term carrier wave originated with radio. In a radio communication system, such as radio or television broadcasting, information is transmitted across space by radio waves. At the sending end, the information, in the form of a modulation signal, is applied to an electronic device called a transmitter. In the transmitter, an electronic oscillator generates a sinusoidal alternating current of radio frequency; this is the carrier wave. The information signal is used to modulate the carrier wave, altering some aspect of the carrier to impress the information on the wave. The alternating current is amplified and applied to the transmitter's antenna, radiating radio waves that carry the information to the receiver's location. At the receiver, the radio waves strike the antenna, inducing a tiny oscillating current in it, which is applied to the receiver circuitry. There the modulation signal is extracted from the modulated carrier wave, a process called demodulation. Most radio systems in the 20th century used frequency modulation (FM) or amplitude modulation (AM) to add information to the carrier. The frequency spectrum of a modulated AM or FM signal from a radio transmitter consists of a strong component at the carrier frequency, with the modulation contained in narrow sidebands above and below the carrier frequency. The frequency of a radio or television station is considered to be the carrier frequency. However, the carrier itself is not useful in transmitting the information, so the energy in the carrier component is a waste of transmitter power. Therefore, in many modern modulation methods the carrier is not transmitted. For example, in single-sideband modulation (SSB) the carrier is suppressed (and in some forms of SSB, eliminated). The carrier must then be reintroduced at the receiver by a beat frequency oscillator (BFO).
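The carrier-plus-sidebands structure is easy to verify numerically. The sketch below generates a tone-modulated AM signal and inspects its spectrum; the sample rate, carrier and message frequencies, and modulation index are arbitrary illustrative values chosen so the components fall on exact FFT bins.

```python
import numpy as np

FS = 100_000          # sample rate, Hz (illustrative)
F_CARRIER = 10_000    # carrier frequency, Hz
F_MESSAGE = 1_000     # message tone, Hz
M = 0.5               # modulation index

t = np.arange(0, 0.1, 1 / FS)                       # 100 ms of signal
am = (1 + M * np.cos(2 * np.pi * F_MESSAGE * t)) \
     * np.cos(2 * np.pi * F_CARRIER * t)

spectrum = np.abs(np.fft.rfft(am)) / len(t)         # normalized magnitudes
freqs = np.fft.rfftfreq(len(t), 1 / FS)

# The three largest components: the carrier and the two sidebands.
for idx in np.argsort(spectrum)[-3:][::-1]:
    print(f"{freqs[idx]:7.0f} Hz  amplitude {spectrum[idx]:.3f}")
```

The printout shows the strong component at the carrier frequency and the two sidebands offset by the message frequency, matching the spectrum described above; the large carrier term is why suppressed-carrier schemes such as SSB save so much transmitter power.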
Carriers are also widely used to transmit multiple information channels through a single cable or other communication medium using the technique of frequency division multiplexing (FDM). For example, in a cable television system, hundreds of television channels are distributed to consumers through a single coaxial cable by modulating each television channel on a carrier wave of a different frequency, then sending all the carriers through the cable. At the receiver, the individual channels can be separated by bandpass filters using tuned circuits, so that the desired television channel can be displayed. A similar technique called wavelength division multiplexing is used to transmit multiple channels of data through an optical fiber by modulating them on separate light carriers: light beams of different wavelengths. Carrierless modulation systems The information in a modulated radio signal is contained in the sidebands, while the power in the carrier frequency component does not itself transmit information. For this reason, newer forms of radio communication, such as spread spectrum and ultra-wideband, as well as OFDM, which is widely used in Wi-Fi networks, digital television, and digital audio broadcasting (DAB), do not use a conventional sinusoidal carrier wave. Carrier leakage Carrier leakage is interference caused by crosstalk or a DC offset. It is present as an unmodulated sine wave within the signal's bandwidth, whose amplitude is independent of the signal's amplitude (see frequency mixers). See also Carrier recovery Carrier system Carrier tone Frequency-division multiplexing Sideband References Communication circuits Waveforms
Carrier wave
[ "Physics", "Engineering" ]
913
[ "Physical phenomena", "Telecommunications engineering", "Waves", "Waveforms", "Communication circuits" ]
153,221
https://en.wikipedia.org/wiki/Heat%20exchanger
A heat exchanger is a system used to transfer heat between a source and a working fluid. Heat exchangers are used in both cooling and heating processes. The fluids may be separated by a solid wall to prevent mixing, or they may be in direct contact. They are widely used in space heating, refrigeration, air conditioning, power stations, chemical plants, petrochemical plants, petroleum refineries, natural-gas processing, and sewage treatment. The classic example of a heat exchanger is found in an internal combustion engine, in which a circulating fluid known as engine coolant flows through radiator coils while air flows past the coils, which cools the coolant and heats the incoming air. Another example is the heat sink, a passive heat exchanger that transfers the heat generated by an electronic or mechanical device to a fluid medium, often air or a liquid coolant. Flow arrangement There are three primary classifications of heat exchangers according to their flow arrangement. In parallel-flow heat exchangers, the two fluids enter the exchanger at the same end and travel in parallel to one another to the other side. In counter-flow heat exchangers the fluids enter the exchanger from opposite ends. The counter-current design is the most efficient, in that it can transfer the most heat from the heat (transfer) medium per unit mass, because the average temperature difference along any unit length is higher (see countercurrent exchange). In a cross-flow heat exchanger, the fluids travel roughly perpendicular to one another through the exchanger. For efficiency, heat exchangers are designed to maximize the surface area of the wall between the two fluids, while minimizing resistance to fluid flow through the exchanger. The exchanger's performance can also be affected by the addition of fins or corrugations in one or both directions, which increase surface area and may channel fluid flow or induce turbulence. The driving temperature across the heat transfer surface varies with position, but an appropriate mean temperature can be defined. In most simple systems this is the "log mean temperature difference" (LMTD). Sometimes direct knowledge of the LMTD is not available, and the NTU method is used instead.
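The sketch below computes the LMTD for both arrangements from arbitrary illustrative inlet and outlet temperatures, showing why the counter-flow figure comes out higher for the same four temperatures.

```python
import math

def lmtd(dt_in: float, dt_out: float) -> float:
    """Log mean temperature difference from the two terminal differences."""
    if math.isclose(dt_in, dt_out):
        return dt_in                      # limit case: equal end differences
    return (dt_in - dt_out) / math.log(dt_in / dt_out)

# Illustrative temperatures (°C): hot stream 80 -> 50, cold stream 20 -> 40.
# Parallel flow: the differences at the two ends are (80-20) and (50-40).
print(f"parallel LMTD = {lmtd(80 - 20, 50 - 40):.1f} K")
# Counter flow: hot inlet faces cold outlet, so (80-40) and (50-20).
print(f"counter  LMTD = {lmtd(80 - 40, 50 - 20):.1f} K")
```

With heat duty Q = U·A·LMTD, the larger counter-flow mean difference means less surface area for the same duty, which is the quantitative content of the efficiency claim above.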
Types By maximum operating temperature, heat exchangers can be divided into low-temperature and high-temperature ones. The former work up to 500–650 °C depending on the industry and generally do not require special design and material considerations; the latter work up to 1000 or even 1400 °C. Double pipe heat exchangers are the simplest exchangers used in industry. On one hand, these heat exchangers are cheap in both design and maintenance, making them a good choice for small industries. On the other hand, their low efficiency, coupled with the large space they occupy at large scales, has led modern industries to use more efficient heat exchangers like shell and tube or plate. However, since double pipe heat exchangers are simple, they are used to teach heat exchanger design basics to students, as the fundamental rules for all heat exchangers are the same. 1. Double-pipe heat exchanger One fluid flows through the smaller pipe while the other flows through the annular gap between the two pipes. These flows may be parallel or counter-flows in a double pipe heat exchanger. (a) Parallel flow, where both hot and cold liquids enter the heat exchanger from the same side, flow in the same direction and exit at the same end. This configuration is preferable when the two fluids are intended to reach exactly the same temperature, as it reduces thermal stress and produces a more uniform rate of heat transfer. (b) Counter-flow, where hot and cold fluids enter opposite sides of the heat exchanger, flow in opposite directions, and exit at opposite ends. This configuration is preferable when the objective is to maximize heat transfer between the fluids, as it creates a larger temperature differential when used under otherwise similar conditions. 2. Shell-and-tube heat exchanger In a shell-and-tube heat exchanger, two fluids at different temperatures flow through the heat exchanger. One of the fluids flows through the tube side and the other fluid flows outside the tubes, but inside the shell (shell side). Baffles are used to support the tubes, direct the fluid flow across the tubes, and maximize the turbulence of the shell fluid. There are many kinds of baffles, and the choice of baffle form, spacing, and geometry depends on the allowable shell-side pressure drop, the need for tube support, and flow-induced vibration. There are several variations of shell-and-tube exchangers available; the differences lie in the arrangement of flow configurations and in the details of construction. When shell-and-tube technology is applied to cooling air (such as an intercooler / charge air cooler for combustion engines), fins can be added on the tubes to increase the heat transfer area on the air side and create a tubes-and-fins configuration. 3. Plate heat exchanger A plate heat exchanger contains a stack of thin, shaped heat transfer plates bundled together. The gasket arrangement of each pair of plates provides two separate channel systems. Each pair of plates forms a channel where the fluid can flow through; the pairs are attached by welding and bolting methods. In single channels the configuration of the gaskets enables flow through, allowing the main and secondary media to flow in counter-current. A gasketed plate heat exchanger has a heat transfer region made of corrugated plates. The gaskets function as seals between plates, and they are located between the frame and pressure plates. Fluid flows in a counter-current direction throughout the heat exchanger, producing efficient thermal performance. Plates are produced in different depths, sizes and corrugated shapes. Different types of plate exchangers are available, including plate-and-frame, plate-and-shell and spiral plate heat exchangers. The distribution area guarantees the flow of fluid to the whole heat transfer surface, helping to prevent stagnant areas that can cause accumulation of unwanted material on solid surfaces. High flow turbulence between plates results in a greater transfer of heat, at the cost of a pressure drop. 4. Condensers and boilers Heat exchangers using a two-phase heat transfer system include condensers, boilers and evaporators. Condensers are instruments that take hot gas or vapor and cool it to the point of condensation, transforming the gas into a liquid. The point at which a liquid transforms into a gas is called vaporization, and the reverse is called condensation. The surface condenser is the most common type of condenser, and it includes a cooling water supply.
At the turbine outlet, the steam pressure is low, the steam density is very low, and the flow rate is very high. To prevent a pressure drop as the steam moves from the turbine to the condenser, the condenser unit is placed underneath and connected directly to the turbine. Inside the tubes the cooling water runs in parallel, while the steam enters through the wide opening at the top, moves vertically downward, and passes over the tubes. Boilers, furthermore, are among the earliest applications of heat exchangers. The term steam generator is regularly used to describe a boiler unit in which a hot liquid stream is the source of heat, rather than combustion products. Boilers are manufactured in a range of dimensions and configurations; some are only able to produce hot fluid, while others are manufactured for steam production. Shell and tube Shell and tube heat exchangers consist of a series of tubes which contain fluid that must be either heated or cooled. A second fluid runs over the tubes that are being heated or cooled, so that it can either provide the heat or absorb the heat required. A set of tubes is called the tube bundle and can be made up of several types of tubes: plain, longitudinally finned, etc. Shell and tube heat exchangers are typically used for high-pressure applications (with pressures greater than 30 bar and temperatures greater than 260 °C), because they are robust due to their shape. Several thermal design features must be considered when designing the tubes in shell and tube heat exchangers: There can be many variations on the shell and tube design. Typically, the ends of each tube are connected to plenums (sometimes called water boxes) through holes in tubesheets. The tubes may be straight or bent in the shape of a U, called U-tubes. Tube diameter: Using a small tube diameter makes the heat exchanger both economical and compact. However, the heat exchanger is then likely to foul up faster, and the small size makes mechanical cleaning of the fouling difficult. To overcome the fouling and cleaning problems, larger tube diameters can be used. Thus, to determine the tube diameter, the available space, cost and fouling nature of the fluids must be considered. Tube thickness: The thickness of the wall of the tubes is usually determined to ensure: There is enough room for corrosion That flow-induced vibration has resistance Axial strength Availability of spare parts Hoop strength (to withstand internal tube pressure) Buckling strength (to withstand overpressure in the shell) Tube length: heat exchangers are usually cheaper when they have a smaller shell diameter and a long tube length. Thus, typically there is an aim to make the heat exchanger as long as physically possible whilst not exceeding production capabilities. However, there are many limitations to this, including the space available at the installation site and the need to ensure tubes are available in lengths that are twice the required length (so they can be withdrawn and replaced). Also, long, thin tubes are difficult to take out and replace. Tube pitch: when designing the tubes, it is practical to ensure that the tube pitch (i.e., the centre-centre distance of adjoining tubes) is not less than 1.25 times the tubes' outside diameter. A larger tube pitch leads to a larger overall shell diameter, which leads to a more expensive heat exchanger.
Tube corrugation: this type of tube, mainly used as the inner tube, increases the turbulence of the fluids; the effect is very important in heat transfer, giving better performance. Tube layout: refers to how tubes are positioned within the shell. There are four main types of tube layout: triangular (30°), rotated triangular (60°), square (90°) and rotated square (45°). The triangular patterns are employed to give greater heat transfer, as they force the fluid to flow in a more turbulent fashion around the piping. Square patterns are employed where high fouling is experienced and cleaning is more regular. Baffle design: baffles are used in shell and tube heat exchangers to direct fluid across the tube bundle. They run perpendicular to the shell and hold the bundle, preventing the tubes from sagging over a long length. They can also prevent the tubes from vibrating. The most common type of baffle is the segmental baffle. The semicircular segmental baffles are oriented at 180 degrees to the adjacent baffles, forcing the fluid to flow upward and downward across the tube bundle. Baffle spacing is of large thermodynamic concern when designing shell and tube heat exchangers: baffles must be spaced with consideration for the trade-off between pressure drop and heat transfer. For thermoeconomic optimization it is suggested that the baffles be spaced no closer than 20% of the shell's inner diameter. Having baffles spaced too closely causes a greater pressure drop because of flow redirection; conversely, having the baffles spaced too far apart means that there may be cooler spots in the corners between baffles. It is also important to ensure the baffles are spaced closely enough that the tubes do not sag. The other main type of baffle is the disc-and-doughnut baffle, which consists of two concentric baffles: an outer, wider baffle that looks like a doughnut, and an inner baffle shaped like a disc. This type of baffle forces the fluid to pass around each side of the disc and then through the doughnut baffle, generating a different type of fluid flow. Tubes-and-fins design: when shell-and-tube technology is applied to cooling air (such as an intercooler / charge air cooler for combustion engines), the difference in heat transfer between the air and the cold fluid can be such that the heat transfer area on the air side needs to be increased. For this purpose, fins can be added on the tubes to increase the heat transfer area on the air side and create a tubes-and-fins configuration. Fixed-tube liquid-cooled heat exchangers, especially suitable for marine and harsh applications, can be assembled with brass shells, copper tubes, brass baffles, and forged brass integral end hubs (see: copper in heat exchangers). Plate Another type of heat exchanger is the plate heat exchanger. These exchangers are composed of many thin, slightly separated plates that have very large surface areas and small fluid flow passages for heat transfer. Advances in gasket and brazing technology have made the plate-type heat exchanger increasingly practical. In HVAC applications, large heat exchangers of this type are called plate-and-frame; when used in open loops, these heat exchangers are normally of the gasket type to allow periodic disassembly, cleaning, and inspection. There are many types of permanently bonded plate heat exchangers, such as dip-brazed, vacuum-brazed, and welded plate varieties, and they are often specified for closed-loop applications such as refrigeration.
Plate heat exchangers also differ in the types of plates that are used, and in the configurations of those plates. Some plates may be stamped with "chevron", dimpled, or other patterns, whereas others may have machined fins and/or grooves. When compared to shell and tube exchangers, the stacked-plate arrangement typically has lower volume and cost. Another difference between the two is that plate exchangers typically serve low- to medium-pressure fluids, compared to the medium and high pressures of shell and tube. A third and important difference is that plate exchangers employ predominantly countercurrent rather than cross-current flow, which allows lower approach temperature differences, high temperature changes, and increased efficiencies. Plate and shell A third type of heat exchanger is the plate and shell heat exchanger, which combines plate heat exchanger and shell and tube heat exchanger technologies. The heart of the heat exchanger contains a fully welded circular plate pack, made by pressing and cutting round plates and welding them together. Nozzles carry flow in and out of the platepack (the 'plate side' flowpath). The fully welded platepack is assembled into an outer shell that creates a second flowpath (the 'shell side'). Plate and shell technology offers high heat transfer, high pressure, high operating temperature, compact size, low fouling and close approach temperature. In particular, it dispenses with gaskets entirely, which provides security against leakage at high pressures and temperatures. Adiabatic wheel A fourth type of heat exchanger uses an intermediate fluid or solid store to hold heat, which is then moved to the other side of the heat exchanger to be released. Two examples of this are adiabatic wheels, which consist of a large wheel with fine threads rotating through the hot and cold fluids, and fluid heat exchangers. Plate fin This type of heat exchanger uses "sandwiched" passages containing fins to increase the effectiveness of the unit. The designs include crossflow and counterflow, coupled with various fin configurations such as straight fins, offset fins and wavy fins. Plate and fin heat exchangers are usually made of aluminium alloys, which provide high heat transfer efficiency. The material enables the system to operate at a lower temperature difference and reduces the weight of the equipment. Plate and fin heat exchangers are mostly used for low-temperature services such as natural gas, helium and oxygen liquefaction plants, air separation plants, and transport industries such as motor and aircraft engines. Advantages of plate and fin heat exchangers: High heat transfer efficiency, especially in gas treatment Larger heat transfer area Approximately five times lighter in weight than a comparable shell and tube heat exchanger Able to withstand high pressure Disadvantages of plate and fin heat exchangers: Possible clogging, as the pathways are very narrow Difficulty cleaning the pathways Susceptibility of aluminium alloys to mercury liquid embrittlement failure Finned tube The use of fins in a tube-based heat exchanger is common when one of the working fluids is a low-pressure gas, and is typical for heat exchangers that operate using ambient air, such as automotive radiators and HVAC air condensers. Fins dramatically increase the surface area with which heat can be exchanged, which improves the efficiency of conducting heat to a fluid with very low thermal conductivity, such as air.
The fins are typically made from aluminium or copper, since they must conduct heat from the tube along the length of the fins, which are usually very thin. The main construction types of finned tube exchangers are: A stack of evenly spaced metal plates act as the fins and the tubes are pressed through pre-cut holes in the fins, good thermal contact usually being achieved by deformation of the fins around the tube. This is typical construction for HVAC air coils and large refrigeration condensers. Fins are spiral-wound onto individual tubes as a continuous strip; the tubes can then be assembled in banks, bent in a serpentine pattern, or wound into large spirals. Zig-zag metal strips are sandwiched between flat rectangular tubes, often being soldered or brazed together for good thermal and mechanical strength. This is common in low-pressure heat exchangers such as water-cooling radiators. Regular flat tubes will expand and deform if exposed to high pressures, but flat microchannel tubes allow this construction to be used for high pressures. Stacked-fin or spiral-wound construction can be used for the tubes inside shell-and-tube heat exchangers when high-efficiency thermal transfer to a gas is required. In electronics cooling, heat sinks, particularly those using heat pipes, can have a stacked-fin construction. Pillow plate A pillow plate heat exchanger is commonly used in the dairy industry for cooling milk in large direct-expansion stainless steel bulk tanks. Nearly the entire surface area of a tank can be integrated with this heat exchanger, without gaps that would occur between pipes welded to the exterior of the tank. Pillow plates can also be constructed as flat plates that are stacked inside a tank. The relatively flat surface of the plates allows easy cleaning, especially in sterile applications. The pillow plate can be constructed using either a thin sheet of metal welded to the thicker surface of a tank or vessel, or two thin sheets welded together. The surface of the plate is welded with a regular pattern of dots or a serpentine pattern of weld lines. After welding, the enclosed space is pressurised with sufficient force to cause the thin metal to bulge out around the welds, providing a space for heat exchanger liquids to flow, and creating the characteristic appearance of a swelled pillow formed out of metal. Waste heat recovery units A waste heat recovery unit (WHRU) is a heat exchanger that recovers heat from a hot gas stream while transferring it to a working medium, typically water or oils. The hot gas stream can be the exhaust gas from a gas turbine or a diesel engine, or a waste gas from industry or a refinery. Large systems with high-volume, high-temperature gas streams, typical in industry, can benefit from a steam Rankine cycle (SRC) in a waste heat recovery unit, but these cycles are too expensive for small systems. The recovery of heat from low-temperature systems requires working fluids other than steam. An organic Rankine cycle (ORC) waste heat recovery unit can be more efficient in the low-temperature range, using refrigerants that boil at lower temperatures than water. Typical organic refrigerants are ammonia, pentafluoropropane (R-245fa and R-245ca), and toluene. The refrigerant is boiled by the heat source in the evaporator to produce superheated vapor. This fluid is expanded in the turbine to convert thermal energy into kinetic energy, which is then converted to electricity in the electrical generator. This energy transfer process decreases the temperature of the refrigerant, which in turn condenses. The cycle is closed and completed by a pump that sends the fluid back to the evaporator.
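The appeal of the ORC at low source temperatures can be bounded with a one-line estimate: no cycle can beat the Carnot efficiency set by the source and sink temperatures. The sketch below compares that bound across illustrative source temperatures; actual ORC machines recover only a fraction of it.

```python
def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Upper bound on heat-to-work conversion between two temperatures,
    given in °C and converted to kelvin internally."""
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

# Illustrative source temperatures against a 25 °C sink:
# low-grade waste heat, mid-range process heat, hot engine exhaust.
for source in (120, 300, 550):
    eta = carnot_efficiency(source, 25)
    print(f"source {source:3d} °C: Carnot limit = {eta:.1%}")
```

The steep drop at low source temperatures is why the choice of working fluid matters so much there: a refrigerant boiling well below 100 °C lets the cycle actually operate across the small temperature difference that is available.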
Dynamic scraped surface Another type of heat exchanger is the (dynamic) scraped surface heat exchanger. This is mainly used for heating or cooling high-viscosity products, crystallization processes, evaporation and high-fouling applications. Long running times are achieved due to the continuous scraping of the surface, thus avoiding fouling and achieving a sustainable heat transfer rate during the process. Phase-change In addition to heating or cooling fluids in just a single phase, heat exchangers can be used either to heat a liquid to evaporate (or boil) it, or as condensers to cool a vapor and condense it to a liquid. In chemical plants and refineries, reboilers used to heat incoming feed for distillation towers are often heat exchangers. Distillation set-ups typically use condensers to condense distillate vapors back into liquid. Power plants that use steam-driven turbines commonly use heat exchangers to boil water into steam. Heat exchangers or similar units for producing steam from water are often called boilers or steam generators. In the nuclear power plants called pressurized water reactors, special large heat exchangers pass heat from the primary (reactor plant) system to the secondary (steam plant) system, producing steam from water in the process. These are called steam generators. All fossil-fueled and nuclear power plants using steam-driven turbines have surface condensers to convert the exhaust steam from the turbines into condensate (water) for re-use. To conserve energy and cooling capacity in chemical and other plants, regenerative heat exchangers can transfer heat from a stream that must be cooled to another stream that must be heated, such as distillate cooling and reboiler feed pre-heating. The term can also refer to heat exchangers that contain within their structure a material that undergoes a change of phase, usually from solid to liquid, owing to the small volume difference between these states. This change of phase effectively acts as a buffer, because it occurs at a constant temperature while still allowing the heat exchanger to accept additional heat. One example where this has been investigated is for use in high-power aircraft electronics. Heat exchangers functioning in multiphase flow regimes may be subject to the Ledinegg instability. Direct contact Direct contact heat exchangers involve heat transfer between hot and cold streams of two phases in the absence of a separating wall. Such heat exchangers can be classified as: Gas-liquid Immiscible liquid-liquid Solid-liquid or solid-gas Most direct contact heat exchangers fall under the gas-liquid category, where heat is transferred between a gas and a liquid in the form of drops, films or sprays. Such heat exchangers are used predominantly in air conditioning, humidification, industrial hot water heating, water cooling and condensing plants. Microchannel Microchannel heat exchangers are multi-pass parallel flow heat exchangers consisting of three main elements: manifolds (inlet and outlet), multi-port tubes with hydraulic diameters smaller than 1 mm, and fins. All the elements are usually brazed together using a controlled-atmosphere brazing process.
Microchannel heat exchangers are characterized by high heat transfer rates, low refrigerant charges, compact size, and lower air-side pressure drops compared to finned tube heat exchangers. They are widely used in the automotive industry as car radiators, and as condensers, evaporators, and cooling/heating coils in the HVAC industry. Micro heat exchangers, micro-scale heat exchangers, or microstructured heat exchangers are heat exchangers in which at least one fluid flows in lateral confinements with typical dimensions below 1 mm. The most typical such confinements are microchannels, which are channels with a hydraulic diameter below 1 mm. Microchannel heat exchangers can be made from metal or ceramics, and can be used for many applications, including: high-performance aircraft gas turbine engines heat pumps microprocessor and microchip cooling air conditioning HVAC and refrigeration air coils One of the widest uses of heat exchangers is for refrigeration and air conditioning. This class of heat exchangers is commonly called air coils, or just coils due to their often-serpentine internal tubing, or condensers in the case of refrigeration, and is typically of the finned tube type. Liquid-to-air or air-to-liquid HVAC coils are typically of a modified crossflow arrangement. In vehicles, heat coils are often called heater cores. On the liquid side of these heat exchangers, the common fluids are water, a water-glycol solution, steam, or a refrigerant. For heating coils, hot water and steam are the most common, and this heated fluid is supplied by boilers, for example. For cooling coils, chilled water and refrigerant are most common. Chilled water is supplied from a chiller that is potentially located very far away, but refrigerant must come from a nearby condensing unit. When a refrigerant is used, the cooling coil is the evaporator, and the heating coil is the condenser, in the vapor-compression refrigeration cycle. HVAC coils that use this direct expansion of refrigerants are commonly called DX coils; some DX coils are of the "microchannel" type. On the air side of HVAC coils, a significant difference exists between those used for heating and those for cooling. Due to psychrometrics, air that is cooled often has moisture condensing out of it, except with extremely dry air flows. Heating some air increases that airflow's capacity to hold water, so heating coils need not consider moisture condensation on their air side, but cooling coils must be adequately designed and selected to handle their particular latent (moisture) as well as sensible (cooling) loads. The water that is removed is called condensate. For many climates, water or steam HVAC coils can be exposed to freezing conditions. Because water expands upon freezing, these somewhat expensive and difficult-to-replace thin-walled heat exchangers can easily be damaged or destroyed by just one freeze. As such, freeze protection of coils is a major concern of HVAC designers, installers, and operators. The introduction of indentations placed within the heat exchange fins controls condensation, allowing water molecules to remain in the cooled air. The heat exchangers in direct-combustion furnaces, typical in many residences, are not 'coils'. They are, instead, gas-to-air heat exchangers that are typically made of stamped steel sheet metal. The combustion products pass on one side of these heat exchangers, and the air to be heated on the other.
A cracked heat exchanger is therefore a dangerous situation that requires immediate attention, because combustion products may enter the living space. Helical-coil Although double-pipe heat exchangers are the simplest to design, the better choice in the following cases would be the helical-coil heat exchanger (HCHE): When space is limited and not enough straight pipe can be laid; the main advantage of the HCHE, like that of the spiral heat exchanger (SHE), is its highly efficient use of space. Under conditions of low flowrates (or laminar flow), such that typical shell-and-tube exchangers have low heat-transfer coefficients and become uneconomical. When there is low pressure in one of the fluids, usually from accumulated pressure drops in other process equipment. When one of the fluids has components in multiple phases (solids, liquids, and gases), which tends to create mechanical problems during operation, such as plugging of small-diameter tubes. Cleaning of helical coils for these multiple-phase fluids can prove more difficult than for their shell and tube counterparts; however, the helical coil unit would require cleaning less often. These have been used in the nuclear industry as a method for exchanging heat in sodium systems for large liquid metal fast breeder reactors since the early 1970s, using an HCHE device invented by Charles E. Boardman and John H. Germer. There are several simple methods for designing HCHEs for all types of manufacturing industries, such as the Ramachandra K. Patil (et al.) method from India and the Scott S. Haraburda method from the United States. However, these are based upon assumptions: estimating the inside heat transfer coefficient, predicting flow around the outside of the coil, and constant heat flux. Spiral A modification to the perpendicular flow of the typical HCHE involves the replacement of the shell with another coiled tube, allowing the two fluids to flow parallel to one another, which requires the use of different design calculations. These are the spiral heat exchangers (SHE), a term which may refer to a helical (coiled) tube configuration; more generally, it refers to a pair of flat surfaces that are coiled to form the two channels in a counter-flow arrangement. Each of the two channels has one long, curved path. A pair of fluid ports are connected tangentially to the outer arms of the spiral; axial ports are common but optional. The main advantage of the SHE is its highly efficient use of space. This attribute is often leveraged and partially reallocated to gain other improvements in performance, according to well-known tradeoffs in heat exchanger design. (A notable tradeoff is capital cost vs operating cost.) A compact SHE may be used to have a smaller footprint and thus lower all-around capital costs, or an oversized SHE may be used to have less pressure drop, less pumping energy, higher thermal efficiency, and lower energy costs. Construction The distance between the sheets in the spiral channels is maintained by spacer studs welded prior to rolling. Once the main spiral pack has been rolled, alternate top and bottom edges are welded, and each end is closed by a gasketed flat or conical cover bolted to the body. This ensures no mixing of the two fluids occurs. Any leakage is from the periphery cover to the atmosphere, or to a passage that contains the same fluid.
Self cleaning Spiral heat exchangers are often used in the heating of fluids that contain solids, which thus tend to foul the inside of the heat exchanger. The low pressure drop lets the SHE handle fouling more easily. The SHE uses a "self cleaning" mechanism, whereby fouled surfaces cause a localized increase in fluid velocity, thus increasing the drag (or fluid friction) on the fouled surface and helping to dislodge the blockage and keep the heat exchanger clean. "The internal walls that make up the heat transfer surface are often rather thick, which makes the SHE very robust, and able to last a long time in demanding environments." They are also easily cleaned, opening out like an oven, where any buildup of foulant can be removed by pressure washing. Self-cleaning water filters are used to keep the system clean and running without the need to shut down or replace cartridges and bags. Flow arrangements There are three main types of flow in a spiral heat exchanger: Counter-current flow: the fluids flow in opposite directions. This is used for liquid-liquid, condensing and gas cooling applications. Units are usually mounted vertically when condensing vapour and horizontally when handling high concentrations of solids. Spiral flow/cross flow: one fluid is in spiral flow and the other in cross flow. Spiral flow passages are welded at each side for this type of spiral heat exchanger. This type of flow is suitable for handling low-density gas, which passes through the cross flow, avoiding pressure loss. It can be used for liquid-liquid applications if one liquid has a considerably greater flow rate than the other. Distributed vapour/spiral flow: this design is that of a condenser, and is usually mounted vertically. It is designed to cater for the sub-cooling of both condensate and non-condensables. The coolant moves in a spiral and leaves via the top. Hot gases that enter leave as condensate via the bottom outlet. Applications The spiral heat exchanger is good for applications such as pasteurization, digester heating, heat recovery, pre-heating (see: recuperator), and effluent cooling. For sludge treatment, SHEs are generally smaller than other types of heat exchangers used to transfer the same heat. Selection Due to the many variables involved, selecting an optimal heat exchanger is challenging. Hand calculations are possible, but many iterations are typically needed. As such, heat exchangers are most often selected via computer programs, either by system designers, who are typically engineers, or by equipment vendors. To select an appropriate heat exchanger, the system designers (or equipment vendors) first consider the design limitations for each heat exchanger type. Though cost is often the primary criterion, several other selection criteria are important: High/low pressure limits Thermal performance Temperature ranges Product mix (liquid/liquid, particulates or high-solids liquid) Pressure drops across the exchanger Fluid flow capacity Cleanability, maintenance and repair Materials required for construction Ability and ease of future expansion Material selection, such as copper, aluminium, carbon steel, stainless steel, nickel alloys, ceramic, polymer, and titanium.
Small-diameter coil technologies are becoming more popular in modern air conditioning and refrigeration systems because they have better rates of heat transfer than conventionally sized condenser and evaporator coils with round copper tubes and aluminum or copper fins, which have been the standard in the HVAC industry. Small-diameter coils can withstand the higher pressures required by the new generation of environmentally friendlier refrigerants. Two small-diameter coil technologies are currently available for air conditioning and refrigeration products: copper microgroove and brazed aluminum microchannel. Choosing the right heat exchanger (HX) requires some knowledge of the different heat exchanger types, as well as the environment where the unit must operate. Typically in the manufacturing industry, several differing types of heat exchangers are used for just one process or system to derive the final product: for example, a kettle HX for pre-heating, a double pipe HX for the 'carrier' fluid and a plate and frame HX for final cooling. With sufficient knowledge of heat exchanger types and operating requirements, an appropriate selection can be made to optimise the process. Monitoring and maintenance Online monitoring of commercial heat exchangers is done by tracking the overall heat transfer coefficient, which tends to decline over time due to fouling. By periodically calculating the overall heat transfer coefficient from exchanger flow rates and temperatures, the owner of the heat exchanger can estimate when cleaning the heat exchanger is economically attractive.
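The monitoring calculation just described is straightforward to script. The sketch below recovers the overall heat transfer coefficient U from routinely measured flow rates and terminal temperatures using the relation Q = U·A·LMTD; all the measurement values are illustrative assumptions.

```python
import math

CP_WATER = 4186.0   # J/(kg·K), specific heat of water

def overall_u(m_dot: float, t_cold_in: float, t_cold_out: float,
              t_hot_in: float, t_hot_out: float, area: float) -> float:
    """Overall heat transfer coefficient, W/(m^2·K), for a counter-flow
    exchanger, from the cold-side heat balance and the LMTD."""
    q = m_dot * CP_WATER * (t_cold_out - t_cold_in)    # duty from cold side
    dt1 = t_hot_in - t_cold_out                        # terminal differences
    dt2 = t_hot_out - t_cold_in
    lmtd = (dt1 - dt2) / math.log(dt1 / dt2)
    return q / (area * lmtd)

# Illustrative readings: 5 kg/s of cooling water heated 25 -> 45 °C,
# process stream cooled 80 -> 55 °C, 40 m^2 of heat transfer surface.
u_now = overall_u(5.0, 25, 45, 80, 55, 40.0)
print(f"U = {u_now:.0f} W/(m2*K)")   # compare against the clean-exchanger baseline
```

Trending U against its clean-condition baseline shows the fouling resistance building up, and comparing the cost of the lost duty against the cleaning cost gives the economic trigger mentioned above.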
The integrity of plate and tubular heat exchangers can be tested in situ by conductivity or helium gas methods. These methods confirm the integrity of the plates or tubes, preventing cross contamination, and check the condition of the gaskets. Mechanical integrity monitoring of heat exchanger tubes may be conducted through nondestructive methods such as eddy current testing. Fouling Fouling occurs when impurities deposit on the heat exchange surface. Deposition of these impurities can decrease heat transfer effectiveness significantly over time, and is caused by: Low wall shear stress Low fluid velocities High fluid velocities Reaction product solid precipitation Precipitation of dissolved impurities due to elevated wall temperatures The rate of heat exchanger fouling is determined by the rate of particle deposition less re-entrainment/suppression. This model was originally proposed in 1959 by Kern and Seaton. Crude oil exchanger fouling. In commercial crude oil refining, crude oil is heated before entering the distillation column. A series of shell and tube heat exchangers typically exchange heat between the crude oil and other oil streams to heat the crude before final heating in a furnace. Fouling occurs on the crude side of these exchangers due to asphaltene insolubility. The nature of asphaltene solubility in crude oil was successfully modeled by Wiehe and Kennedy. The precipitation of insoluble asphaltenes in crude preheat trains has been successfully modeled as a first-order reaction by Ebert and Panchal, who expanded on the work of Kern and Seaton. Cooling water fouling. Cooling water systems are susceptible to fouling. Cooling water typically has a high total dissolved solids content and suspended colloidal solids. Localized precipitation of dissolved solids occurs at the heat exchange surface when wall temperatures are higher than the bulk fluid temperature. Low fluid velocities (less than 3 ft/s) allow suspended solids to settle on the heat exchange surface. Cooling water is typically on the tube side of a shell and tube exchanger because it is easy to clean there. To prevent fouling, designers typically ensure that the cooling water velocity stays above this settling threshold and that the bulk fluid temperature is kept low enough to limit precipitation. Other approaches to fouling control combine the "blind" application of biocides and anti-scale chemicals with periodic lab testing. Maintenance Plate and frame heat exchangers can be disassembled and cleaned periodically. Tubular heat exchangers can be cleaned by such methods as acid cleaning, sandblasting, high-pressure water jets, bullet cleaning, or drill rods. In large-scale cooling water systems for heat exchangers, water treatment such as purification, addition of chemicals, and testing is used to minimize fouling of the heat exchange equipment. Other water treatment is also used in steam systems for power plants, etc., to minimize fouling and corrosion of the heat exchange and other equipment. A variety of companies have started using water-borne oscillation technology to prevent biofouling. Without the use of chemicals, this type of technology has helped in providing a low pressure drop in heat exchangers. Design and manufacturing regulations The design and manufacturing of heat exchangers is subject to numerous regulations, which vary according to the region in which they will be used. Design and manufacturing codes include: ASME Boiler and Pressure Vessel Code (US); PD 5500 (UK); BS 1566 (UK); EN 13445 (EU); CODAP (French); Pressure Equipment Safety Regulations 2016 (PER) (UK); Pressure Equipment Directive (EU); NORSOK (Norwegian); TEMA; API 12; and API 560. In nature Humans The human nasal passages serve as a heat exchanger, with cool air being inhaled and warm air being exhaled. Their effectiveness can be demonstrated by putting a hand in front of the face and exhaling, first through the nose and then through the mouth: air exhaled through the nose is substantially cooler. This effect can be enhanced with clothing, by, for example, wearing a scarf over the face while breathing in cold weather. In species that have external testes (such as humans), the artery to the testis is surrounded by a mesh of veins called the pampiniform plexus. This cools the blood heading to the testes, while reheating the returning blood. Birds, fish, marine mammals "Countercurrent" heat exchangers occur naturally in the circulatory systems of fish, whales and other marine mammals. Arteries to the skin carrying warm blood are intertwined with veins from the skin carrying cold blood, causing the warm arterial blood to exchange heat with the cold venous blood. This reduces overall heat loss in cold water. Heat exchangers are also present in the tongues of baleen whales, as large volumes of water flow through their mouths. Wading birds use a similar system to limit heat losses from their bodies through their legs into the water. Carotid rete The carotid rete is a counter-current heat-exchanging organ in some ungulates. The blood ascending the carotid arteries on its way to the brain flows via a network of vessels where heat is discharged to the veins of cooler blood descending from the nasal passages.
The carotid rete allows Thomson's gazelle to maintain its brain almost 3 °C (5.4 °F) cooler than the rest of the body, and therefore aids in tolerating bursts of metabolic heat production, such as those associated with outrunning cheetahs (during which the body temperature exceeds the maximum temperature at which the brain could function). Humans, like other primates, lack a carotid rete. In industry Heat exchangers are widely used in industry both for cooling and for heating large-scale industrial processes. The type and size of heat exchanger used can be tailored to suit a process depending on the type of fluid, its phase, temperature, density, viscosity, pressure, chemical composition and various other thermodynamic properties. In many industrial processes energy is wasted as a heat stream that is being exhausted; heat exchangers can be used to recover this heat and put it to use by heating a different stream in the process. This practice saves a lot of money in industry, as the heat supplied to other streams from the heat exchangers would otherwise come from an external source that is more expensive and more harmful to the environment. Heat exchangers are used in many industries, including: Waste water treatment Refrigeration Wine and beer making Petroleum refining Nuclear power In waste water treatment, heat exchangers play a vital role in maintaining optimal temperatures within anaerobic digesters to promote the growth of microbes that remove pollutants. Common types of heat exchangers used in this application are the double pipe heat exchanger and the plate and frame heat exchanger. In aircraft In commercial aircraft, heat exchangers are used to take heat from the engine's oil system to heat cold fuel. This improves fuel efficiency and reduces the possibility of water entrapped in the fuel freezing in components. Current market and forecast Estimated at US$17.5 billion in 2021, global demand for heat exchangers is expected to experience robust growth of about 5% annually over the coming years, with the market value expected to reach US$27 billion by 2030. With an expanding desire for environmentally friendly options and increased development of offices, retail sectors, and public buildings, the market is expected to keep growing. A model of a simple heat exchanger A simple heat exchanger might be thought of as two straight pipes with fluid flow, which are thermally connected. Let the pipes be of equal length L, carrying fluids with heat capacity $c_i$ (energy per unit mass per unit change in temperature), and let the mass flow rate of the fluids through the pipes, both in the same direction, be $j_i$ (mass per unit time), where the subscript i applies to pipe 1 or pipe 2. Temperature profiles for the pipes are $T_1(x)$ and $T_2(x)$, where x is the distance along the pipe. Assume a steady state, so that the temperature profiles are not functions of time. Assume also that the only transfer of heat from a small volume of fluid in one pipe is to the fluid element in the other pipe at the same position, i.e., there is no transfer of heat along a pipe due to temperature differences in that pipe.
By Newton's law of cooling the rate of change in energy of a small volume of fluid is proportional to the difference in temperatures between it and the corresponding element in the other pipe:

$$\frac{\partial u_1}{\partial t} = \gamma (T_2 - T_1), \qquad \frac{\partial u_2}{\partial t} = \gamma (T_1 - T_2)$$

(this is for parallel flow in the same direction and opposite temperature gradients; for counter-flow heat exchange (countercurrent exchange) the sign of the right-hand side of the second equation is reversed), where $u_i$ is the thermal energy per unit length and $\gamma$ is the thermal connection constant per unit length between the two pipes. This change in internal energy results in a change in the temperature of the fluid element. The time rate of change for the fluid element being carried along by the flow is:

$$\frac{\partial u_1}{\partial t} = J_1 \frac{\partial T_1}{\partial x}, \qquad \frac{\partial u_2}{\partial t} = J_2 \frac{\partial T_2}{\partial x},$$

where $J_i = c_i j_i$ is the "thermal mass flow rate". The differential equations governing the heat exchanger may now be written as:

$$J_1 \frac{\partial T_1}{\partial x} = \gamma (T_2 - T_1), \qquad J_2 \frac{\partial T_2}{\partial x} = \gamma (T_1 - T_2).$$

Since the system is in a steady state, there are no partial derivatives of temperature with respect to time, and since there is no heat transfer along the pipe, there are no second derivatives in $x$ as is found in the heat equation. These two coupled first-order differential equations may be solved to yield:

$$T_1 = A - \frac{B k_1}{k}\, e^{-kx}, \qquad T_2 = A + \frac{B k_2}{k}\, e^{-kx},$$

where $k_1 = \gamma / J_1$, $k_2 = \gamma / J_2$, and $k = k_1 + k_2$ (this is for parallel-flow; for counter-flow the sign in front of $k_2$ is negative, so that if $k_1 = k_2$ – the same "thermal mass flow rate" in both opposite directions – the gradient of temperature is constant and the temperatures linear in position $x$ with a constant difference along the exchanger, explaining why the counter-current design (countercurrent exchange) is the most efficient), and $A$ and $B$ are two as yet undetermined constants of integration. Let $T_{10}$ and $T_{20}$ be the temperatures at $x = 0$ and let $T_{1L}$ and $T_{2L}$ be the temperatures at the end of the pipe at $x = L$. Define the average temperatures in each pipe as:

$$\overline{T}_1 = \frac{1}{L} \int_0^L T_1(x)\,dx, \qquad \overline{T}_2 = \frac{1}{L} \int_0^L T_2(x)\,dx.$$

Using the solutions above, these temperatures are:

$$T_{10} = A - \frac{B k_1}{k}, \qquad T_{20} = A + \frac{B k_2}{k},$$
$$T_{1L} = A - \frac{B k_1}{k}\, e^{-kL}, \qquad T_{2L} = A + \frac{B k_2}{k}\, e^{-kL},$$
$$\overline{T}_1 = A - \frac{B k_1}{k}\, \frac{1 - e^{-kL}}{kL}, \qquad \overline{T}_2 = A + \frac{B k_2}{k}\, \frac{1 - e^{-kL}}{kL}.$$

Choosing any two of the temperatures above eliminates the constants of integration, letting us find the other four temperatures. We find the total energy transferred by integrating the expressions for the time rate of change of internal energy per unit length:

$$\frac{dU_1}{dt} = \int_0^L \frac{\partial u_1}{\partial t}\,dx = \gamma L \left(\overline{T}_2 - \overline{T}_1\right), \qquad \frac{dU_2}{dt} = \int_0^L \frac{\partial u_2}{\partial t}\,dx = \gamma L \left(\overline{T}_1 - \overline{T}_2\right).$$

By the conservation of energy, the sum of the two energies is zero. The quantity $\overline{T}_2 - \overline{T}_1$ is known as the log mean temperature difference, and is a measure of the effectiveness of the heat exchanger in transferring heat energy (see the numerical sketch below). See also Architectural engineering Chemical engineering Cooling tower Copper in heat exchangers Heat pipe Heat pump Heat recovery ventilation Jacketed vessel Log mean temperature difference (LMTD) Marine heat exchangers Mechanical engineering Micro heat exchanger Moving bed heat exchanger Packed bed and in particular Packed columns Pumpable ice technology Reboiler Recuperator, or cross plate heat exchanger Regenerator Run around coil Steam generator (nuclear power) Surface condenser Toroidal expansion joint Thermosiphon Thermal wheel, or rotary heat exchanger (including enthalpy wheel and desiccant wheel) Tube tool Waste heat References
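As a numerical illustration of the model above, the following is a minimal Python sketch (added here for illustration; it is not part of the original article, and all parameter values are invented assumptions chosen only for demonstration). It evaluates the parallel-flow solution and the mean temperature difference:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the article)
L = 2.0                  # pipe length (m)
J1, J2 = 500.0, 800.0    # "thermal mass flow rates" J_i = c_i * j_i (W/K)
gamma = 300.0            # thermal connection constant per unit length (W/(m*K))
T10, T20 = 90.0, 20.0    # inlet temperatures at x = 0 (deg C)

k1, k2 = gamma / J1, gamma / J2
k = k1 + k2              # parallel flow; counter-flow would use k1 - k2

# Fit the constants of integration to the inlet conditions:
# T1(0) = A - B*k1/k = T10  and  T2(0) = A + B*k2/k = T20.
B = T20 - T10
A = T10 + B * k1 / k

x = np.linspace(0.0, L, 6)
T1 = A - (B * k1 / k) * np.exp(-k * x)   # hot-side profile
T2 = A + (B * k2 / k) * np.exp(-k * x)   # cold-side profile
print("T1(x):", np.round(T1, 2))
print("T2(x):", np.round(T2, 2))

# Mean temperature difference; gamma * L * dT_mean is the heat duty.
dT_mean = -B * (1.0 - np.exp(-k * L)) / (k * L)
print("mean temperature difference:", round(dT_mean, 2))
print("heat transferred from pipe 1 to pipe 2 (W):", round(gamma * L * dT_mean, 1))
```

Running this sketch shows the two profiles converging toward a common temperature along the pipe, as expected for parallel flow.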
External links Shell and Tube Heat Exchanger Design Software for Educational Applications (PDF) EU Pressure Equipment Guideline A Thermal Management Concept For More Electric Aircraft Power System Application (PDF) Heat transfer Gas technologies
Heat exchanger
[ "Physics", "Chemistry", "Engineering" ]
9,768
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Chemical equipment", "Heat exchangers", "Thermodynamics" ]
153,250
https://en.wikipedia.org/wiki/Fowler%27s%20solution
Fowler's solution is a solution containing 1% potassium arsenite (KAsO2), first described and published as a treatment for malaria and syphilis in the late 1700s, and once prescribed as a remedy or a tonic. Thomas Fowler (1736–1801) of Stafford, England, proposed the solution in 1786 as a substitute for a patent medicine, "tasteless ague drop". From 1865, Fowler's solution was used as a leukemia treatment. From 1905, inorganic arsenicals like Fowler's solution saw diminished use as attention turned to organic arsenicals, starting with Atoxyl. As arsenical compounds are notably toxic and carcinogenic – with side effects such as cirrhosis of the liver, idiopathic portal hypertension, urinary bladder cancer, and skin cancers – Fowler's solution fell from use. (In 2001, however, the U.S. Food and Drug Administration (FDA) approved an arsenic trioxide drug to treat acute promyelocytic leukaemia, and interest in arsenic has returned.) References External links Withdrawn drugs Hepatotoxins Arsenic(III) compounds Potassium compounds Patent medicines
Fowler's solution
[ "Chemistry" ]
239
[ "Drug safety", "Withdrawn drugs" ]
153,316
https://en.wikipedia.org/wiki/%CE%91-Linolenic%20acid
α-Linolenic acid, also known as alpha-linolenic acid (ALA) (from Greek alpha meaning "first" and linon meaning flax), is an n−3, or omega-3, essential fatty acid. ALA is found in many seeds and oils, including flaxseed, walnuts, chia, hemp, and many common vegetable oils. In terms of its structure, it is named all-cis-9,12,15-octadecatrienoic acid. In physiological literature, it is listed by its lipid number, 18:3 (n−3). It is a carboxylic acid with an 18-carbon chain and three cis double bonds. The first double bond is located at the third carbon from the methyl end of the fatty acid chain, known as the n end. Thus, α-linolenic acid is a polyunsaturated n−3 (omega-3) fatty acid. It is a regioisomer of gamma-linolenic acid (GLA), an 18:3 (n−6) fatty acid (i.e., a polyunsaturated omega-6 fatty acid with three double bonds). Etymology The word linolenic is an irregular derivation from linoleic, which itself is derived from the Greek word linon (flax). Oleic means "of or relating to oleic acid", because saturating an omega-6 double bond of linoleic acid produces oleic acid. Similarly, saturating one of linolenic acid's double bonds produces linoleic acid. Dietary sources Seed oils are the richest sources of α-linolenic acid, notably those of hempseed, chia, perilla, flaxseed (linseed oil), rapeseed (canola), and soybeans. α-Linolenic acid is also obtained from the thylakoid membranes in the leaves of Pisum sativum (pea leaves). Plant chloroplasts, consisting of more than 95 percent photosynthetic thylakoid membranes, are highly fluid due to the large abundance of ALA, evident as sharp resonances in high-resolution carbon-13 NMR spectra. Some studies state that ALA remains stable during processing and cooking. However, other studies state that ALA might not be suitable for baking, as it will polymerize with itself, a feature exploited in paint with transition metal catalysts. Some ALA may also oxidize at baking temperatures. Metabolism α-Linolenic acid can be obtained by humans only through their diets. Humans lack the desaturase enzymes required for processing stearic acid into α-linolenic acid or other unsaturated fatty acids. Dietary α-linolenic acid is metabolized to stearidonic acid, a precursor to a collection of polyunsaturated 20-, 22-, 24-, etc. fatty acids (eicosatetraenoic acid, eicosapentaenoic acid, docosapentaenoic acid, tetracosapentaenoic acid, 6,9,12,15,18,21-tetracosahexaenoic acid, docosahexaenoic acid). Because the efficacy of n−3 long-chain polyunsaturated fatty acid (LC-PUFA) synthesis decreases down the cascade of α-linolenic acid conversion, DHA synthesis from α-linolenic acid is even more restricted than that of EPA. Conversion of ALA to DHA is higher in women than in men. Stability and hydrogenation Compared to many other oils, α-linolenic acid is more susceptible to oxidation. It becomes rancid more quickly in air. Oxidative instability of α-linolenic acid is one reason why producers choose to partially hydrogenate oils containing α-linolenic acid, such as soybean oil. Soybeans are the largest source of edible oils in the U.S., and, as of a 2007 study, 40% of soy oil production was partially hydrogenated. Hydrogenation of ALA-containing fats can introduce trans fats. Consumers are increasingly avoiding products that contain trans fats, and governments have begun to ban trans fats in food products, including the US government as of May 2018.
These regulations and market pressures have spurred the development of soybeans low in α-linolenic acid. These new soybean varieties yield a more stable oil that often does not require hydrogenation for many applications. Health ALA consumption is associated with a lower risk of cardiovascular disease and a reduced risk of fatal coronary heart disease. Dietary ALA intake can improve lipid profiles by decreasing triglycerides, total cholesterol, high-density lipoprotein, and low-density lipoprotein. A 2021 review found that ALA intake is associated with a reduced risk of mortality from all causes, cardiovascular disease, and coronary heart disease, but a slightly higher risk of cancer mortality. History In 1887, linolenic acid was discovered and named by the Austrian chemist Karl Hazura of the Imperial Technical Institute at Vienna (although he did not separate its isomers). α-Linolenic acid was first isolated in pure form in 1909 by Ernst Erdmann and F. Bedford of the University of Halle an der Saale, Germany, and by Adolf Rollett of the Universität Berlin, Germany, working independently, as cited in J. W. McCutcheon's synthesis in 1942, and referred to in Green and Hilditch's 1930s survey. It was first artificially synthesized in 1995 from C6 homologating agents. A Wittig reaction of the phosphonium salt of [(Z,Z)-nona-3,6-dien-1-yl]triphenylphosphonium bromide with methyl 9-oxononanoate, followed by saponification, completed the synthesis. See also Canola oil Flax seed oil γ-Linolenic acid Drying oil Essential fatty acid List of n−3 fatty acids Essential nutrient Wheat germ oil References 5α-Reductase inhibitors Fatty acids Essential fatty acids Essential nutrients Alkenoic acids Semiochemicals Insect pheromones
Α-Linolenic acid
[ "Chemistry" ]
1,319
[ "Insect pheromones", "Chemical ecology", "Semiochemicals" ]
153,353
https://en.wikipedia.org/wiki/Andromeda%20%28constellation%29
Andromeda is one of the 48 constellations listed by the 2nd-century Greco-Roman astronomer Ptolemy, and one of the 88 modern constellations. Located in the northern celestial hemisphere, it is named for Andromeda, daughter of Cassiopeia, in the Greek myth, who was chained to a rock to be eaten by the sea monster Cetus. Andromeda is most prominent during autumn evenings in the Northern Hemisphere, along with several other constellations named for characters in the Perseus myth. Because of its northern declination, Andromeda is visible only north of 40° south latitude; for observers farther south, it lies below the horizon. It is one of the largest constellations, with an area of 722 square degrees. This is over 1,400 times the size of the full moon, 55% of the size of the largest constellation, Hydra, and over 10 times the size of the smallest constellation, Crux. Its brightest star, Alpheratz (Alpha Andromedae), is a binary star that has also been counted as a part of Pegasus, while Gamma Andromedae (Almach) is a colorful binary and a popular target for amateur astronomers. With a variable brightness similar to Alpheratz, Mirach (Beta Andromedae) is a red giant, its color visible to the naked eye. The constellation's most obvious deep-sky object is the naked-eye Andromeda Galaxy (M31, also called the Great Galaxy of Andromeda), the closest spiral galaxy to the Milky Way and one of the brightest Messier objects. Several fainter galaxies, including M31's companions M110 and M32, as well as the more distant NGC 891, lie within Andromeda. The Blue Snowball Nebula, a planetary nebula, is visible in a telescope as a blue circular object. In Chinese astronomy, the stars that make up Andromeda were members of four different constellations that had astrological and mythological significance; a constellation related to Andromeda also exists in Hindu mythology. Andromeda is the location of the radiant for the Andromedids, a weak meteor shower that occurs in November. History and mythology The uranography of Andromeda has its roots most firmly in the Greek tradition, though a female figure in Andromeda's location had appeared earlier in Babylonian astronomy. The stars that make up Pisces and the middle portion of modern Andromeda formed a constellation representing a fertility goddess, sometimes named as Anunitum or the Lady of the Heavens. Andromeda is known as "the Chained Lady" or "the Chained Woman" in English. It was known as Mulier Catenata ("chained woman") in Latin and al-Mar'at al Musalsalah in Arabic. It has also been called Persea ("Perseus's wife") or Cepheis ("Cepheus's daughter"), all names that refer to Andromeda's role in the Greco-Roman myth of Perseus, in which Cassiopeia, the queen of Aethiopia, bragged that her daughter was more beautiful than the Nereids, sea nymphs blessed with incredible beauty. Offended at her remark, the nymphs petitioned Poseidon to punish Cassiopeia for her insolence, which he did by commanding the sea monster Cetus to attack Aethiopia. Andromeda's panicked father, Cepheus, was told by the Oracle of Ammon that the only way to save his kingdom was to sacrifice his daughter to Cetus. She was chained to a rock by the sea but was saved by the hero Perseus, who in one version of the story used the head of Medusa to turn the monster into stone; in another version, by the Roman poet Ovid in his Metamorphoses, Perseus slew the monster with his diamond sword. 
Perseus and Andromeda then married; the myth recounts that the couple had nine children together – seven sons and two daughters – and founded Mycenae and its Persideae dynasty. After Andromeda's death Athena placed her in the sky as a constellation, to honor her. Three of the neighboring constellations (Perseus, Cassiopeia and Cepheus) represent characters in the Perseus myth, while Cetus retreats to beyond Pisces. It is connected with the constellation Pegasus. Andromeda was one of the original 48 constellations formulated by Ptolemy in his 2nd-century Almagest, in which it was defined as a specific pattern of stars. She is typically depicted with α Andromedae as her head, ο and λ Andromedae as her chains, and δ, π, μ, β, and γ Andromedae representing her body and legs. However, there is no universal depiction of Andromeda and the stars used to represent her body, head, and chains. Arab astronomers were aware of Ptolemy's constellations, but they included a second constellation representing a fish overlapping Andromeda's body; the nose of this fish was marked by a hazy patch that we now know as the Andromeda Galaxy, M31. Several stars from Andromeda and most of the stars in Lacerta were combined in 1787 by German astronomer Johann Bode to form Honores Friderici (also called Friedrichs Ehre). It was designed to honour King Frederick II of Prussia, but quickly fell into disuse. Since the time of Ptolemy, Andromeda has remained a constellation and is officially recognized by the International Astronomical Union. Like all constellations dating back to a pattern known to Ptolemy, it is attributed to a wider zone and thus to many surrounding stars. In 1922, the IAU defined its recommended three-letter abbreviation, "And". The official boundaries of Andromeda were defined in 1930 by Belgian astronomer Eugène Delporte as a polygon of 36 segments. Its right ascension is between 22h 57.5m and 2h 39.3m and its declination is between +53.19° and +21.68° in the equatorial coordinate system. In non-Western astronomy In traditional Chinese astronomy, nine stars from Andromeda (including Beta Andromedae, Mu Andromedae, and Nu Andromedae), along with seven stars from Pisces, formed an elliptical constellation called "Legs" (奎宿). This constellation either represented the foot of a walking person or a wild boar. Gamma Andromedae and its neighbors were called "Teen Ta Tseang Keun" (天大将军, heaven's great general), representing honour in astrology and a great general in mythology. Alpha Andromedae and Gamma Pegasi together made "Wall" (壁宿), representing the eastern wall of the imperial palace and/or the emperor's personal library. For the Chinese, the northern swath of Andromeda formed a stable for changing horses (天厩, "stable in the sky") and the far western part, along with most of Lacerta, became Tengshe, a flying snake. An Arab constellation called "al-Hut" (the fish) was composed of several stars in Andromeda, M31, and several stars in Pisces. ν And, μ And, β And, η And, ζ And, ε And, δ And, π And, and 32 And were all included from Andromeda; ν Psc, φ Psc, χ Psc, and ψ1 Psc were included from Pisces. In Hindu astronomy, Andromeda is known as the Devyani constellation, while Cassiopeia is the Sharmishta constellation. Devyani and Sharmishta are wives of King Yayati (the Perseus constellation), the earliest patriarch of the Kuru and Yadu clans, who are mentioned frequently in the epic Mahabharata. The story of these three characters is told in the Mahabharata: Devyani is the daughter of Guru Shukracharya, while Sharmishta is the daughter of the Asura king Vrishparva.
Hindu legends surrounding Andromeda are similar to the Greek myths. Ancient Sanskrit texts depict Antarmada chained to a rock, as in the Greek myth. Scholars believe that the Hindu and Greek astrological myths were closely linked; one piece of evidence cited is the similarity between the names "Antarmada" and "Andromeda". Andromeda is also associated with the Mesopotamian creation story of Tiamat, the goddess of Chaos. She bore many demons for her husband, Apsu, but eventually decided to destroy them in a war that ended when Marduk killed her. He used her body to create the constellations as markers of time for humans. In the Marshall Islands, Andromeda, Cassiopeia, Triangulum, and Aries are incorporated into a constellation representing a porpoise. Andromeda's bright stars are mostly in the body of the porpoise; Cassiopeia represents its tail and Aries its head. In the Tuamotu Islands, Alpha Andromedae was called Takurua-e-te-tuki-hanga-ruki, meaning "Star of the wearisome toil", and Beta Andromedae was called Piringa-o-Tautu. Features Stars α And (Alpheratz, Sirrah) is the brightest star in this constellation. It is an A0p class binary star with an overall apparent visual magnitude of 2.1. It is 97 light-years from Earth. It represents Andromeda's head in Western mythology; however, the star's traditional Arabic names – Alpheratz and Sirrah, from the phrase surrat al-faras – are sometimes translated as "navel of the steed". The Arabic names are a reference to the fact that α And forms an asterism known as the "Great Square of Pegasus" with three stars in Pegasus: α, β, and γ Peg. As such, the star was formerly considered to belong to both Andromeda and Pegasus, and was co-designated as "Delta Pegasi (δ Peg)", although this name is no longer formally used. β And (Mirach) is a red-hued giant star of type M0 located in an asterism known as the "girdle". It is 198 light-years away and has a magnitude of 2.06; a planet (b) has been discovered orbiting it. Its name comes from the Arabic phrase al-Maraqq meaning "the loins" or "the loincloth", a phrase translated from Ptolemy's writing. However, β And was mostly considered by the Arabs to be a part of al-Hut, a constellation representing a larger fish than Pisces at Andromeda's feet. γ And (Almach) is an orange-hued bright giant star of type K3 found at the southern tip of the constellation with an overall magnitude of 2.14. Almach is a multiple star with a yellow primary of magnitude 2.3 and a blue-green secondary of magnitude 5, separated by 9.7 arcseconds. British astronomer William Herschel said of the star: "[the] striking difference in the color of the 2 stars, suggests the idea of a sun and its planet, to which the contrast of their unequal size contributes not a little." The secondary, described by Herschel as a "fine light sky-blue, inclining to green", is itself a double star, with a secondary of magnitude 6.3 and a period of 61 years. The system is 358 light-years away. Almach was named for the Arabic phrase ʿAnaq al-Ard, which means "the earth-kid", an obscure reference to an animal that aids a lion in finding prey. δ And is an orange-hued giant star of type K3 with a magnitude of 3.3. It is 105 light-years from Earth. ι And, κ, λ, ο, and ψ And form an asterism known as "Frederick's Glory", a name derived from a former constellation (Frederici Honores).
ι And is a blue-white hued main-sequence star of type B8, 502 light-years from Earth; κ And is a white-hued main-sequence star of type B9 IVn, 168 light-years from Earth; λ And is a yellow-hued giant star of type G8, 86 light-years from Earth; ο And is a blue-white hued giant star of type B6, 679 light-years from Earth; and ψ And is a blue-white hued main-sequence star of type B7, 988 light-years from Earth. μ And is a white-hued main-sequence star of type A5 and magnitude 3.9. It is 130 light-years away. υ And (Titawin) is a magnitude 4.1 binary system that consists of one F-type dwarf and an M-type dwarf. The primary star has a planetary system with 4 confirmed planets, with masses 0.96, 14.57, 10.19, and 1.06 times the mass of Jupiter. The system is 44 light-years from Earth. ξ And (Adhil) is a binary star 217 light-years away. The primary is an orange-hued giant star of type K0. π And is a blue-white hued binary star of magnitude 4.3 that is 598 light-years away. The primary is a main-sequence star of type B5. Its companion star is of magnitude 8.9. 51 And (Nembus) was assigned by Johann Bayer to Perseus, where he designated it "Upsilon Persei (υ Per)", but it was moved to Andromeda by the International Astronomical Union. It is 177 light-years from Earth and is an orange-hued giant star of type K3. 54 And was a former designation for φ Per. 56 And is an optical double star. The primary is a yellow-hued giant star of type K0 with an apparent magnitude of 5.7 that is 316 light-years away. The secondary is an orange-hued giant star of type K0 and magnitude 5.9 that is 990 light-years from Earth. R And is a Mira-type variable star with a period of 409 days. Its maximum magnitude is 5.8 and its minimum magnitude is 14.8, and it is at a distance of 1,250 light-years. There are 6 other Mira variables in Andromeda. Z And is the M-type prototype for its class of variable stars. It ranges in magnitude from a minimum of 12.4 to a maximum of 8. It is 2,720 light-years away. Ross 248 (HH Andromedae) is the ninth-closest star to Earth at a distance of 10.3 light-years. It is a red-hued main-sequence BY Draconis variable star of type M6. 14 And (Veritate) is a yellow-hued giant star of type G8 that is 251 light-years away. It has one planet, 14 Andromedae b, discovered in 2008, which orbits at a distance of 0.83 astronomical units from its parent star every 186 days. Of the stars brighter than 4th magnitude (and those with measured luminosity), Andromeda has a relatively even distribution of evolved and main-sequence stars. Deep-sky objects Andromeda's borders contain many visible distant galaxies. The most famous deep-sky object in Andromeda is the spiral galaxy cataloged as Messier 31 (M31) or NGC 224, but known colloquially as the Andromeda Galaxy for the constellation. M31 is one of the most distant objects visible to the naked eye, 2.2 million light-years from Earth (estimates range up to 2.5 million light-years). It is seen under a dark, transparent sky as a hazy patch in the north of the constellation. M31 is the largest neighboring galaxy to the Milky Way and the largest member of the Local Group of galaxies. In absolute terms, M31 is approximately 200,000 light-years in diameter, twice the size of the Milky Way. It is an enormous – 192.4 by 62.2 arcminutes in apparent size – barred spiral galaxy similar in form to the Milky Way and, at an approximate magnitude of 3.5, is one of the brightest deep-sky objects in the northern sky.
Despite being visible to the naked eye, the "little cloud" near Andromeda's figure was not recorded until AD 964, when the Arab astronomer al-Sufi wrote his Book of Fixed Stars. M31 was first observed telescopically shortly after the telescope's invention, by Simon Marius in 1612. The future of the Andromeda and Milky Way galaxies may be interlinked: in about five billion years, the two could potentially begin an Andromeda–Milky Way collision that would spark extensive new star formation. American astronomer Edwin Hubble included M31 (then known as the Andromeda Nebula) in his groundbreaking 1923 research on galaxies. Using the 100-inch Hooker Telescope at Mount Wilson Observatory in California, he observed Cepheid variable stars in M31 during a search for novae, allowing him to determine their distance by using the stars as standard candles. The distance he found was far greater than the size of the Milky Way, which led him to the conclusion that many similar objects were "island universes" on their own. Hubble originally estimated that the Andromeda Galaxy was 900,000 light-years away, but Ernst Öpik's estimate in 1925 put the distance closer to 1.5 million light-years. The Andromeda Galaxy's two main companions, M32 and M110 (also known as NGC 221 and NGC 205, respectively), are faint elliptical galaxies that lie near it. M32, with a far smaller apparent size of 8.7 by 6.4 arcminutes, appears superimposed on the larger galaxy in a telescopic view as a hazy smudge, while M110 appears slightly larger and distinct from the larger galaxy; M32 is 0.5° south of the core, and M110 is 1° northwest of the core. M32 was discovered in 1749 by French astronomer Guillaume Le Gentil and has since been found to lie closer to Earth than the Andromeda Galaxy itself. It is viewable in binoculars from a dark site owing to its high surface brightness of 10.1 and overall magnitude of 9.0. M110 is classified as either a dwarf spheroidal galaxy or simply a generic elliptical galaxy. It is far fainter than M31 and M32, but larger than M32, with a surface brightness of 13.2, a magnitude of 8.9, and a size of 21.9 by 10.9 arcminutes. The Andromeda Galaxy has a total of 15 satellite galaxies, including M32 and M110. Nine of these lie in a plane, which has caused astronomers to infer that they have a common origin. These satellite galaxies, like the satellites of the Milky Way, tend to be older, gas-poor dwarf elliptical and dwarf spheroidal galaxies. Along with the Andromeda Galaxy and its companions, the constellation also features NGC 891 (Caldwell 23), a smaller galaxy just east of Almach. It is a barred spiral galaxy seen edge-on, with a dark dust lane visible down the middle. NGC 891 is incredibly faint and small despite its magnitude of 9.9, as its surface brightness of 14.6 indicates; it is 13.5 by 2.8 arcminutes in size. NGC 891 was discovered by the brother-and-sister team of William and Caroline Herschel in August 1783. This galaxy is at an approximate distance of 30 million light-years from Earth, calculated from its redshift of 0.002. Andromeda's most celebrated open cluster is NGC 752 (Caldwell 28), at an overall magnitude of 5.7. It is a loosely scattered cluster in the Milky Way that measures 49 arcminutes across and features approximately twelve bright stars, although more than 60 stars of approximately 9th magnitude become visible at low magnifications in a telescope. It is considered to be one of the more inconspicuous open clusters.
The other open cluster in Andromeda is NGC 7686, which has a similar magnitude of 5.6 and is also a part of the Milky Way. It contains approximately 20 stars in a diameter of 15 arcminutes, making it a tighter cluster than NGC 752. There is one prominent planetary nebula in Andromeda: NGC 7662 (Caldwell 22). Lying approximately 3 degrees southwest of Iota Andromedae at a distance of about 4,000 light-years from Earth, the "Blue Snowball Nebula" is a popular target for amateur astronomers. It earned its popular name because it appears as a faint, round, blue-green object in a telescope, with an overall magnitude of 9.2. Upon further magnification, it is visible as a slightly elliptical annular disk that gets darker towards the center, with a magnitude 13.2 central star. The nebula is 20 by 130 arcseconds in size. Meteor showers Each November, the Andromedids meteor shower appears to radiate from Andromeda. The shower peaks in mid-to-late November every year, but has a low peak rate of fewer than two meteors per hour. Astronomers have often associated the Andromedids with Biela's Comet, which was destroyed in the 19th century, but that connection is disputed. Andromedid meteors are known for being very slow, and the shower itself is considered to be diffuse, as meteors can be seen coming from nearby constellations as well as from Andromeda itself. Andromedid meteors sometimes appear as red fireballs. The Andromedids were associated with the most spectacular meteor showers of the 19th century; the storms of 1872 and 1885 were estimated to have a peak rate of 2 meteors per second (a zenithal hourly rate of 10,000), prompting a Chinese astronomer to compare the meteors to falling rain. The Andromedids had another outburst on December 3–5, 2011, the most active shower since 1885, with a maximum zenithal hourly rate of 50 meteors per hour. The 2011 outburst was linked to ejecta from Comet Biela, which passed close to the Sun in 1649. None of the meteoroids observed were associated with material from the comet's 1846 disintegration. The observers of the 2011 outburst predicted outbursts in 2018, 2023, and 2036. See also Andromeda (Chinese astronomy) Qatar-3 References Citations Bibliography Online sources SIMBAD External links The Deep Photographic Guide to the Constellations: Andromeda The clickable Andromeda Ian Ridpath's Star Tales – Andromeda Warburg Institute Iconographic Database (medieval and early modern images of Andromeda) Constellations Constellations listed by Ptolemy Northern constellations
Andromeda (constellation)
[ "Astronomy" ]
4,850
[ "Constellations listed by Ptolemy", "Andromeda (constellation)", "Constellations", "Northern constellations", "Sky regions" ]
153,391
https://en.wikipedia.org/wiki/Local%20ring
In mathematics, more specifically in ring theory, local rings are certain rings that are comparatively simple, and serve to describe what is called "local behaviour", in the sense of functions defined on algebraic varieties or manifolds, or of algebraic number fields examined at a particular place, or prime. Local algebra is the branch of commutative algebra that studies commutative local rings and their modules. In practice, a commutative local ring often arises as the result of the localization of a ring at a prime ideal. The concept of local rings was introduced by Wolfgang Krull in 1938 under the name Stellenringe. The English term local ring is due to Zariski. Definition and first consequences A ring R is a local ring if it has any one of the following equivalent properties: R has a unique maximal left ideal. R has a unique maximal right ideal. 1 ≠ 0 and the sum of any two non-units in R is a non-unit. 1 ≠ 0 and if x is any element of R, then x or 1 − x is a unit. If a finite sum is a unit, then it has a term that is a unit (this says in particular that the empty sum cannot be a unit, so it implies 1 ≠ 0). If these properties hold, then the unique maximal left ideal coincides with the unique maximal right ideal and with the ring's Jacobson radical. The third of the properties listed above says that the set of non-units in a local ring forms a (proper) ideal, necessarily contained in the Jacobson radical. The fourth property can be paraphrased as follows: a ring R is local if and only if there do not exist two coprime proper (principal) (left) ideals, where two ideals I1, I2 are called coprime if R = I1 + I2. In the case of commutative rings, one does not have to distinguish between left, right and two-sided ideals: a commutative ring is local if and only if it has a unique maximal ideal. Before about 1960 many authors required that a local ring be (left and right) Noetherian, and (possibly non-Noetherian) local rings were called quasi-local rings. In this article this requirement is not imposed. A local ring that is an integral domain is called a local domain. Examples All fields (and skew fields) are local rings, since {0} is the only maximal ideal in these rings. The ring Z/p^nZ is a local ring (p prime, n ≥ 1). The unique maximal ideal consists of all multiples of p. More generally, a nonzero ring in which every element is either a unit or nilpotent is a local ring. An important class of local rings are discrete valuation rings, which are local principal ideal domains that are not fields. The ring of formal power series F[[X]] over a field F, whose elements are infinite series ∑ a_i X^i, where multiplication is given by (∑ a_i X^i)(∑ b_i X^i) = ∑ c_i X^i with c_n = a_0 b_n + a_1 b_{n−1} + ... + a_n b_0, is local. Its unique maximal ideal consists of all elements that are not invertible. In other words, it consists of all elements with constant term zero. More generally, every ring of formal power series over a local ring is local; the maximal ideal consists of those power series with constant term in the maximal ideal of the base ring. Similarly, the algebra of dual numbers over any field is local. More generally, if F is a local ring and n is a positive integer, then the quotient ring F[X]/(X^n) is local with maximal ideal consisting of the classes of polynomials with constant term belonging to the maximal ideal of F, since one can use a geometric series to invert all other polynomials modulo X^n. If F is a field, then elements of F[X]/(X^n) are either nilpotent or invertible. (The dual numbers over F correspond to the case n = 2.) Nonzero quotient rings of local rings are local.
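To make the geometric-series inversion in the last example concrete, here is a small worked instance (added for illustration, taking n = 3 and a polynomial 1 − aX with unit constant term):

```latex
% Inverting 1 - aX in F[X]/(X^3) via a truncated geometric series:
(1 - aX)^{-1} \equiv 1 + aX + a^2 X^2 \pmod{X^3},
\qquad\text{since}\qquad
(1 - aX)(1 + aX + a^2 X^2) = 1 - a^3 X^3 \equiv 1 \pmod{X^3}.
```

Any polynomial whose constant term u is a unit can be written as u(1 − q) with q a multiple of X, so the same truncated series 1 + q + q² + ... + q^{n−1} inverts it modulo X^n.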
The ring of rational numbers with odd denominator is local; its maximal ideal consists of the fractions with even numerator and odd denominator. It is the integers localized at 2. More generally, given any commutative ring R and any prime ideal P of R, the localization of R at P is local; the maximal ideal is the ideal generated by P in this localization; that is, the maximal ideal consists of all elements a/s with a ∈ P and s ∈ R − P. Non-examples The ring of polynomials F[X] over a field F is not local, since X and 1 − X are non-units, but their sum is a unit. The ring of integers is not local, since it has a maximal ideal (p) for every prime p. The ring Z/(pq), where p and q are distinct prime numbers, is not local: both (p) and (q) are maximal ideals here. Ring of germs To motivate the name "local" for these rings, we consider real-valued continuous functions defined on some open interval around 0 of the real line. We are only interested in the behavior of these functions near 0 (their "local behavior") and we will therefore identify two functions if they agree on some (possibly very small) open interval around 0. This identification defines an equivalence relation, and the equivalence classes are what are called the "germs of real-valued continuous functions at 0". These germs can be added and multiplied and form a commutative ring. To see that this ring of germs is local, we need to characterize its invertible elements. A germ f is invertible if and only if f(0) ≠ 0. The reason: if f(0) ≠ 0, then by continuity there is an open interval around 0 where f is non-zero, and we can form the function g = 1/f on this interval. The function g gives rise to a germ, and the product fg is equal to 1. (Conversely, if f is invertible, then there is some g such that fg = 1, hence f(0)g(0) = 1, so f(0) ≠ 0.) With this characterization, it is clear that the sum of any two non-invertible germs is again non-invertible, and we have a commutative local ring. The maximal ideal of this ring consists precisely of those germs f with f(0) = 0. Exactly the same arguments work for the ring of germs of continuous real-valued functions on any topological space at a given point, or the ring of germs of differentiable functions on any differentiable manifold at a given point, or the ring of germs of rational functions on any algebraic variety at a given point. All these rings are therefore local. These examples help to explain why schemes, the generalizations of varieties, are defined as special locally ringed spaces. Valuation theory Local rings play a major role in valuation theory. By definition, a valuation ring of a field K is a subring R such that for every non-zero element x of K, at least one of x and x^−1 is in R. Any such subring will be a local ring. For example, the ring of rational numbers with odd denominator (mentioned above) is a valuation ring in Q. Given a field K, which may or may not be a function field, we may look for local rings in it. If K were indeed the function field of an algebraic variety V, then for each point P of V we could try to define a valuation ring R of functions "defined at" P. In cases where V has dimension 2 or more there is a difficulty that is seen this way: if F and G are rational functions on V with F(P) = G(P) = 0, the function F/G is an indeterminate form at P. Considering a simple example, such as Y/X, approached along a line Y = tX, one sees that the value at P is a concept without a simple definition. It is replaced by using valuations.
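As a worked check of the valuation-ring condition for the example above (an illustration added here, not part of the original text), take R = Z(2), the rationals with odd denominator, inside K = Q, and write a nonzero x ∈ Q with the powers of 2 pulled out:

```latex
% At least one of x, x^{-1} lies in Z_(2):
x = 2^{v}\,\tfrac{a}{b},\quad a, b \text{ odd},\ v \in \mathbb{Z}
\quad\Longrightarrow\quad
\begin{cases}
v \ge 0: & x \in \mathbb{Z}_{(2)},\\
v \le 0: & x^{-1} = 2^{-v}\,\tfrac{b}{a} \in \mathbb{Z}_{(2)}.
\end{cases}
```

So at least one of x and x⁻¹ always lies in R; for instance 4/3 ∈ Z(2), while 3/4 ∉ Z(2) but (3/4)⁻¹ = 4/3 ∈ Z(2).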
Non-commutative Non-commutative local rings arise naturally as endomorphism rings in the study of direct sum decompositions of modules over some other rings. Specifically, if the endomorphism ring of the module M is local, then M is indecomposable; conversely, if the module M has finite length and is indecomposable, then its endomorphism ring is local. If k is a field of characteristic p > 0 and G is a finite p-group, then the group algebra kG is local. Some facts and definitions Commutative case We also write (R, m) for a commutative local ring R with maximal ideal m. Every such ring becomes a topological ring in a natural way if one takes the powers of m as a neighborhood base of 0. This is the m-adic topology on R. If (R, m) is a commutative Noetherian local ring, then the intersection of all powers of m is zero (Krull's intersection theorem), and it follows that R with the m-adic topology is a Hausdorff space. The theorem is a consequence of the Artin–Rees lemma together with Nakayama's lemma, and, as such, the "Noetherian" assumption is crucial. Indeed, let R be the ring of germs of infinitely differentiable functions at 0 in the real line and m be the maximal ideal of germs vanishing at 0. Then a nonzero function such as e^(−1/x²) belongs to m^n for any n, since that function divided by x^n is still smooth. As for any topological ring, one can ask whether (R, m) is complete (as a uniform space); if it is not, one considers its completion, again a local ring. Complete Noetherian local rings are classified by the Cohen structure theorem. In algebraic geometry, especially when R is the local ring of a scheme at some point P, the field R/m is called the residue field of the local ring or residue field of the point P. If (R, m) and (S, n) are local rings, then a local ring homomorphism from R to S is a ring homomorphism f with the property f(m) ⊆ n. These are precisely the ring homomorphisms that are continuous with respect to the given topologies on R and S. General case The Jacobson radical m of a local ring R (which is equal to the unique maximal left ideal and also to the unique maximal right ideal) consists precisely of the non-units of the ring; furthermore, it is the unique maximal two-sided ideal of R. However, in the non-commutative case, having a unique maximal two-sided ideal is not equivalent to being local. For an element x of the local ring R, the following are equivalent: x has a left inverse x has a right inverse x is invertible x is not in m. If (R, m) is local, then the factor ring R/m is a skew field. If J is any proper two-sided ideal in R, then the factor ring R/J is again local, with maximal ideal m/J. A deep theorem by Irving Kaplansky says that any projective module over a local ring is free, though the case where the module is finitely-generated is a simple corollary to Nakayama's lemma. This has an interesting consequence in terms of Morita equivalence. Namely, if P is a finitely generated projective R-module, then P is isomorphic to the free module R^n, and hence the ring of endomorphisms of P is isomorphic to the full ring of n × n matrices over R. Since every ring Morita equivalent to the local ring R is the endomorphism ring of such a P, the conclusion is that the only rings Morita equivalent to a local ring R are (isomorphic to) the matrix rings over R. Notes References See also Discrete valuation ring Semi-local ring Valuation ring Gorenstein local ring External links The philosophy behind local rings Ring theory Localization (mathematics)
Local ring
[ "Mathematics" ]
2,420
[ "Fields of abstract algebra", "Ring theory" ]
153,499
https://en.wikipedia.org/wiki/Support%20group
In a support group, members provide each other with various types of help, usually nonprofessional and nonmaterial, for a particular shared, usually burdensome, characteristic. Members with the same issues can come together for sharing coping strategies, to feel more empowered and for a sense of community. The help may take the form of providing and evaluating relevant information, relating personal experiences, listening to and accepting others' experiences, providing sympathetic understanding and establishing social networks. A support group may also work to inform the public or engage in advocacy. History Formal support groups may appear to be a modern phenomenon, but they supplement traditional fraternal organizations such as Freemasonry in some respects, and may build on supportive functions formerly carried out in extended families. Other types of groups formed to support causes, including causes outside of themselves, are more often called advocacy groups, interest groups, lobby groups, pressure groups or promotional groups. Trade unions and many environmental groups, for example, are interest groups. The term support group in this article refers to peer-to-peer support. Maintaining contact Support groups maintain interpersonal contact among their members in a variety of ways. Traditionally, groups meet in person in sizes that allow conversational interaction. Support groups also maintain contact through printed newsletters, telephone chains, internet forums, and mailing lists. Some support groups are exclusively online (see below). Membership in some support groups is formally controlled, with admission requirements and membership fees. Other groups are "open" and allow anyone to attend an advertised meeting, for example, or to participate in an online forum. Management by peers or professionals A self-help support group is fully organized and managed by its members, who are commonly volunteers and have personal experience in the subject of the group's focus. These groups may also be referred to as fellowships, peer support groups, lay organizations, mutual help groups, or mutual aid self-help groups. Most common are 12-step groups such as Alcoholics Anonymous and self-help groups for mental health. Professionally operated support groups are facilitated by professionals who most often do not share the problem of the members, such as social workers, psychologists, or members of the clergy. The facilitator controls discussions and provides other managerial services. Such professionally operated groups are often found in institutional settings, including hospitals, drug-treatment centers and correctional facilities. These types of support groups may run for a specified period of time, and an attendance fee is sometimes charged. Types In the case of a disease, an identity or a pre-disposition, for example, a support group will provide information, act as a clearing-house for experiences, and may serve as a public relations voice for affected people, other members, and their families. Groups for high-IQ or LGBTQIA+ individuals, for example, differ in their inclusivity, but both connect people on the basis of identity or pre-disposition. For more temporary concerns, such as bereavement or episodic medical conditions, a support group may focus more on helping those involved to overcome or move past their condition or experience.
Some support groups, and conditions for which such groups may be formed, are: Addiction AIDS Alzheimer's Alcoholics Anonymous Anxiety disorders Asperger syndrome Borderline personality disorder Breastfeeding Brain attack or Brain trauma Cancer Circadian rhythm disorders, e.g. DSPD, Non-24 Codependency Diabetes Debtors Anonymous Domestic violence Eating disorders Erythema nodosum Families of addicts & alcoholics Fibromyalgia Gamblers Anonymous Grief Infertility Inflammatory bowel disease Irritable bowel syndrome Mental Health Miscarriage Mood disorders Narcolepsy Parkinson's disease Red Skin Syndrome/Topical Steroid Addiction and Withdrawal Sexual abuse survivors Sleep disorders Stuttering Suicide prevention Ulcerative colitis Online support groups Since at least 1982, the Internet has provided a venue for support groups. Discussing online self-help support groups as the precursor to e-therapy, Martha Ainsworth notes that "the enduring success of these groups has firmly established the potential of computer-mediated communication to enable discussion of sensitive personal issues." In one study of the effectiveness of online support groups among patients with head and neck cancer, longer participation in online support groups was found to result in a better health-related quality of life. Appropriate groups still difficult to find A researcher from University College London says that the lack of quality directories, and the fact that many support groups are not listed by search engines, can make finding an appropriate group difficult. Even so, he does say that the medical community needs "to understand the use of personal experiences rather than an evidence-based approach... these groups also impact on how individuals use information. They can help people learn how to find and use information: for example, users swap Web sites and discuss Web sites." It is not difficult to find an online support group, but it is hard to find a good one. In the article What to Look for in Quality Online Support Groups, John M. Grohol gives tips for evaluating online groups and states: "In good online support groups, members stick around long after they've received the support they were seeking. They stay because they want to give others what they themselves found in the group. Psychologists call this high group cohesion, and it is the pinnacle of group achievement." Benefits and pitfalls Several studies have shown the importance of the Internet in providing social support, particularly to groups with chronic health problems. Especially in cases of uncommon ailments, a sense of community and understanding in spite of great geographical distances can be important, in addition to the sharing of knowledge. Online support groups, online communities for those affected by a common problem, give mutual support and provide information, two often inseparable features. They are, according to Henry Potts of University College London, "an overlooked resource for patients." Many studies have looked at the content of messages, while what matters is the effect that participation in the group has on the individual. Potts complains that research on these groups has lagged behind, particularly on the groups which are set up by the people with the problems, rather than by researchers and healthcare professionals. User-defined groups can share the sort of practical knowledge that healthcare professionals can overlook, and they also impact how individuals find, interpret and use information.
There are many benefits to online support groups that have been found through research studies. Although online support group users are not required to be anonymous, a study conducted by Baym (2010) finds that anonymity is beneficial to those who are lonely or anxious. This does not apply to everyone seeking support groups, because not all are lonely or anxious, but for those who are, online support groups are an outlet where they can feel comfortable expressing themselves honestly, because the other users do not know who they are. A study conducted by Walther and Boyd (2000) found common reasons why people find online support groups appealing. First, the social distance between members online reduced embarrassment, and members appreciated the greater range of expertise offered in the larger online social network. Next, they found that anonymity increased one's confidence in providing support to others and decreased embarrassment. The users of the social support websites were more comfortable being able to reread and edit their comments and discussion forum entries before sending them, and they had access to the website at any time of day. None of these characteristics is offered by an in-person support group. In a study conducted by Gunther Eysenbach, John Powell, Marina Englesakis, Carlos Rizo, and Anita Stern (2004), the researchers found it difficult to draw conclusions about the effectiveness of online peer-to-peer support groups: in online support groups, people must have the desire to support and help each other, and many participants visit the sites in order to get help themselves or are limited to a certain subgroup. An additional benefit of online support groups is that participation is asynchronous. This means that it is not necessary for all participants to be logged into the forum simultaneously in order to communicate. An experience or question can be posted, and others can answer questions or comment on posts whenever they are logged in and have an appropriate response. This characteristic allows for participation and mass communication without time constraints. Additionally, there are 24-hour chat rooms and spaces for focused conversation at all times of the day or night. This allows users to get the support they need whenever they need it, while remaining comfortable and, if they so wish, anonymous. Mental health Although there has been relatively little research on the effectiveness of online support groups in mental health, there is some evidence that online support groups can be beneficial. Some large randomised controlled trials have found positive effects, while others have failed to find them. See also Group psychotherapy Self-help groups for mental health List of Twelve-Step groups References External links Aftermath of war Self-care Types of organization Personal development Grief
Support group
[ "Biology" ]
1,837
[ "Personal development", "Behavior", "Human behavior" ]
153,522
https://en.wikipedia.org/wiki/Plastid
A plastid is a membrane-bound organelle found in the cells of plants, algae, and some other eukaryotic organisms. Plastids are considered to be descended from intracellular endosymbiotic cyanobacteria. Examples of plastids include chloroplasts (used for photosynthesis); chromoplasts (used for synthesis and storage of pigments); leucoplasts (non-pigmented plastids, some of which can differentiate); and apicoplasts (non-photosynthetic plastids of apicomplexa derived from secondary endosymbiosis). A permanent primary endosymbiosis event occurred about 1.5 billion years ago in the Archaeplastida clade – land plants, red algae, green algae and glaucophytes – probably with a cyanobiont, a symbiotic cyanobacterium related to the genus Gloeomargarita. Another primary endosymbiosis event occurred later, between 140 and 90 million years ago, in the photosynthetic Paulinella amoeboids, involving cyanobacteria of the genera Prochlorococcus and Synechococcus, or the "PS-clade". Secondary and tertiary endosymbiosis events have also occurred in a wide variety of organisms; and some organisms developed the capacity to sequester ingested plastids – a process known as kleptoplasty. A. F. W. Schimper was the first to name, describe, and provide a clear definition of plastids, which possess a double-stranded DNA molecule that has long been thought of as circular in shape, like the circular chromosome of prokaryotic cells – though some evidence now suggests a linear shape. Plastids are sites for manufacturing and storing pigments and other important chemical compounds used by the cells of autotrophic eukaryotes. Some contain biological pigments such as those used in photosynthesis or those that determine a cell's color. Plastids in organisms that have lost their photosynthetic properties are highly useful for manufacturing molecules like the isoprenoids. In land plants Chloroplasts, proplastids, and differentiation In land plants, the plastids that contain chlorophyll can perform photosynthesis, thereby creating internal chemical energy from external sunlight energy while capturing carbon from Earth's atmosphere and furnishing the atmosphere with oxygen. These chlorophyll-containing plastids are named chloroplasts. Other plastids can synthesize fatty acids and terpenes, which may be used to produce energy or as raw material to synthesize other molecules. For example, the plastids of epidermal cells manufacture the components of the tissue system known as the plant cuticle, including its epicuticular wax, from palmitic acid – which itself is synthesized in the chloroplasts of the mesophyll tissue. Plastids function to store different components including starches, fats, and proteins. All plastids are derived from proplastids, which are present in the meristematic regions of the plant. Proplastids and young chloroplasts typically divide by binary fission, but more mature chloroplasts also have this capacity. Plant proplastids (undifferentiated plastids) may differentiate into several forms, depending upon which function they perform in the cell. They may develop into any of the following variants: Chloroplasts: typically green plastids that perform photosynthesis. Etioplasts: precursors of chloroplasts. Chromoplasts: coloured plastids that synthesize and store pigments. Gerontoplasts: plastids that control the dismantling of the photosynthetic apparatus during plant senescence. Leucoplasts: colourless plastids that synthesize monoterpenes.
Leucoplasts differentiate into even more specialized plastids, such as: Amyloplasts: storing starch and detecting gravity – for maintaining geotropism. Elaioplasts: storing fats. Proteinoplasts (also called aleuroplasts): storing and modifying protein. Tannosomes: synthesizing and producing tannins and polyphenols. Depending on their morphology and target function, plastids have the ability to differentiate or redifferentiate between these and other forms. Plastomes and chloroplast DNA/RNA; plastid DNA and plastid nucleoids Each plastid creates multiple copies of its own unique genome, or plastome (from 'plastid genome') – which for a chlorophyll plastid (or chloroplast) is equivalent to a 'chloroplast genome', or 'chloroplast DNA'. The number of genome copies produced per plastid is variable, ranging from 1000 or more in rapidly dividing new cells, encompassing only a few plastids, down to 100 or fewer in mature cells, encompassing numerous plastids. A plastome typically encodes transfer ribonucleic acids (tRNAs) and ribosomal ribonucleic acids (rRNAs). It also encodes proteins involved in photosynthesis and plastid gene transcription and translation. But these proteins represent only a small fraction of the total protein set-up necessary to build and maintain any particular type of plastid. Nuclear genes (in the cell nucleus of a plant) encode the vast majority of plastid proteins; and the expression of nuclear and plastid genes is co-regulated to coordinate the development and differentiation of plastids. Many plastids, particularly those responsible for photosynthesis, possess numerous internal membrane layers. Plastid DNA exists as protein-DNA complexes associated as localized regions within the plastid's inner envelope membrane; and these complexes are called 'plastid nucleoids'. Unlike the nucleus of a eukaryotic cell, a plastid nucleoid is not surrounded by a nuclear membrane. The region of each nucleoid may contain more than 10 copies of the plastid DNA. Where the proplastid (undifferentiated plastid) contains a single nucleoid region located near the centre of the proplastid, the developing (or differentiating) plastid has many nucleoids localized at the periphery of the plastid and bound to the inner envelope membrane. During the development/differentiation of proplastids to chloroplasts – and when plastids are differentiating from one type to another – nucleoids change in morphology, size, and location within the organelle. The remodelling of plastid nucleoids is believed to occur by modifications to the abundance and composition of nucleoid proteins. In normal plant cells, long thin protuberances called stromules sometimes form, extending from the plastid body into the cell cytosol while interconnecting several plastids. Proteins and smaller molecules can move around and through the stromules. Comparatively, in the laboratory, most cultured cells – which are large compared to normal plant cells – produce very long and abundant stromules that extend to the cell periphery. In 2014, evidence was found of the possible loss of the plastid genome in Rafflesia lagascae, a non-photosynthetic parasitic flowering plant, and in Polytomella, a genus of non-photosynthetic green algae. Extensive searches for plastid genes in both taxa yielded no results, but concluding that their plastomes are entirely missing is still disputed.
Some scientists argue that plastid genome loss is unlikely, since even non-photosynthetic plastids contain genes necessary to complete various biosynthetic pathways, including heme biosynthesis. Even with any loss of the plastid genome in Rafflesiaceae, the plastids still occur there as "shells" without DNA content, which is reminiscent of the hydrogenosomes found in various organisms.

In algae and protists
Plastid types in algae and protists include:
Chloroplasts: found in green algae (plants) and other organisms that derived their plastids from green algae.
Muroplasts: also known as cyanoplasts or cyanelles, the plastids of glaucophyte algae are similar to plant chloroplasts, except that they have a peptidoglycan cell wall similar to that of bacteria.
Rhodoplasts: the red plastids found in red algae, which allow them to photosynthesize down to marine depths of 268 m. The chloroplasts of plants differ from rhodoplasts in their ability to synthesize starch, which is stored in the form of granules within the plastids. In red algae, floridean starch is synthesized and stored outside the plastids in the cytosol.
Secondary and tertiary plastids: from endosymbiosis of green algae and red algae.
Leucoplasts: in algae, the term is used for all unpigmented plastids. Their function differs from that of the leucoplasts of plants.
Apicoplasts: the non-photosynthetic plastids of Apicomplexa derived from secondary endosymbiosis.
The plastid of photosynthetic Paulinella species is often referred to as the 'cyanelle' or chromatophore, and is used in photosynthesis. It arose from a much more recent endosymbiotic event, in the range of 140–90 million years ago, which is the only other known primary endosymbiosis event of cyanobacteria. Etioplasts, amyloplasts and chromoplasts are plant-specific and do not occur in algae. Plastids in algae and hornworts may also differ from plant plastids in that they contain pyrenoids.

Inheritance
In reproducing, most plants inherit their plastids from only one parent. In general, angiosperms inherit plastids from the female gamete, whereas many gymnosperms inherit plastids from the male pollen. Algae also inherit plastids from just one parent. The plastid DNA of the other parent is thus completely lost.

In normal intraspecific crossings (resulting in normal hybrids of one species), the inheritance of plastid DNA appears to be strictly uniparental, i.e., from the female. In interspecific hybridisations, however, the inheritance is apparently more erratic. Although plastids are inherited mainly from the female in interspecific hybridisations, there are many reports of hybrids of flowering plants containing plastids from the male. Approximately 20% of angiosperms, including alfalfa (Medicago sativa), normally show biparental inheritance of plastids.

DNA damage and repair
The plastid DNA of maize seedlings is subjected to increasing damage as the seedlings develop. The DNA damage is due to oxidative environments created by photo-oxidative reactions and photosynthetic/respiratory electron transfer. Some DNA molecules are repaired, but DNA with unrepaired damage is apparently degraded to non-functional fragments. DNA repair proteins are encoded by the cell's nuclear genome and then translocated to plastids, where they maintain genome stability and integrity by repairing the plastid's DNA. For example, in chloroplasts of the moss Physcomitrella patens, a protein employed in DNA mismatch repair (Msh1) interacts with proteins employed in recombinational repair (RecA and RecG) to maintain plastid genome stability.
Origin
Plastids are thought to be descended from endosymbiotic cyanobacteria. The primary endosymbiotic event of the Archaeplastida is hypothesized to have occurred around 1.5 billion years ago and enabled eukaryotes to carry out oxygenic photosynthesis. Three evolutionary lineages have since emerged in the Archaeplastida, in which the plastids are named differently: chloroplasts in green algae and plants, rhodoplasts in red algae, and muroplasts in the glaucophytes. The plastids differ both in their pigmentation and in their ultrastructure. For example, chloroplasts in plants and green algae have lost all phycobilisomes, the light-harvesting complexes found in cyanobacteria, red algae and glaucophytes, but instead contain stroma and grana thylakoids. The glaucocystophycean plastid—in contrast to chloroplasts and rhodoplasts—is still surrounded by the remains of the cyanobacterial cell wall. All these primary plastids are surrounded by two membranes.

The plastid of photosynthetic Paulinella species is often referred to as the 'cyanelle' or chromatophore, and stems from a much more recent endosymbiotic event about 90–140 million years ago; it is the only known primary endosymbiosis event of cyanobacteria outside of the Archaeplastida. The plastid belongs to the "PS-clade" (of the cyanobacteria genera Prochlorococcus and Synechococcus), which is a distinct sister clade to the plastids belonging to the Archaeplastida.

In contrast to primary plastids derived from primary endosymbiosis of a prokaryotic cyanobacterium, complex plastids originated by secondary endosymbiosis, in which a eukaryotic organism engulfed another eukaryotic organism that contained a primary plastid. When a eukaryote engulfs a red or a green alga and retains the algal plastid, that plastid is typically surrounded by more than two membranes. In some cases these plastids may be reduced in their metabolic and/or photosynthetic capacity. Algae with complex plastids derived by secondary endosymbiosis of a red alga include the heterokonts, haptophytes, cryptomonads, and most dinoflagellates (= rhodoplasts). Those that endosymbiosed a green alga include the euglenids and chlorarachniophytes (= chloroplasts). The Apicomplexa, a phylum of obligate parasitic alveolates including the causative agents of malaria (Plasmodium spp.), toxoplasmosis (Toxoplasma gondii), and many other human or animal diseases, also harbor a complex plastid (although this organelle has been lost in some apicomplexans, such as Cryptosporidium parvum, which causes cryptosporidiosis). The 'apicoplast' is no longer capable of photosynthesis, but is an essential organelle and a promising target for antiparasitic drug development.

Some dinoflagellates and sea slugs, in particular of the genus Elysia, take up algae as food and keep the plastid of the digested alga to profit from its photosynthesis; after a while, the plastids are also digested. This process is known as kleptoplasty, from the Greek kleptes, 'thief'.

Plastid development cycle
In 1977, J. M. Whatley proposed a plastid development cycle, according to which plastid development is not always unidirectional but is instead a complicated cyclic process. Proplastids are the precursors of the more differentiated forms of plastids, as shown in the diagram to the right.
See also Notes References Further reading External links Transplastomic plants for biocontainment (biological confinement of transgenes) — Co-extra research project on coexistence and traceability of GM and non-GM supply chains Tree of Life Eukaryotes Organelles Plant physiology Photosynthesis Endosymbiotic events
Plastid
[ "Chemistry", "Biology" ]
3,503
[ "Plant physiology", "Symbiosis", "Plants", "Endosymbiotic events", "Photosynthesis", "Biochemistry" ]
153,563
https://en.wikipedia.org/wiki/Scilab
Scilab is a free and open-source, cross-platform numerical computational package and a high-level, numerically oriented programming language. It can be used for signal processing, statistical analysis, image enhancement, fluid dynamics simulations, numerical optimization, modeling and simulation of explicit and implicit dynamical systems and (if the corresponding toolbox is installed) symbolic manipulations. Scilab is one of the two major open-source alternatives to MATLAB, the other one being GNU Octave. Scilab puts less emphasis on syntactic compatibility with MATLAB than Octave does, but it is similar enough that some authors suggest that it is easy to transfer skills between the two systems.

Introduction
Scilab is a high-level, numerically oriented programming language. The language provides an interpreted programming environment, with matrices as the main data type. By using matrix-based computation, dynamic typing, and automatic memory management, many numerical problems may be expressed in a reduced number of code lines, as compared to similar solutions using traditional languages, such as Fortran, C, or C++. This allows users to rapidly construct models for a range of mathematical problems. While the language provides simple matrix operations such as multiplication, the Scilab package also provides a library of high-level operations such as correlation and complex multidimensional arithmetic.

Scilab also includes a free package called Xcos for modeling and simulation of explicit and implicit dynamical systems, including both continuous and discrete sub-systems. Xcos is the open source equivalent to Simulink from the MathWorks.

As the syntax of Scilab is similar to MATLAB's, Scilab includes a source code translator for assisting the conversion of code from MATLAB to Scilab. Scilab is available free of cost under an open source license. Due to the open source nature of the software, some user contributions have been integrated into the main program.

Syntax
Scilab syntax is largely based on the MATLAB language. The simplest way to execute Scilab code is to type it in at the prompt, -->, in the graphical command window. In this way, Scilab can be used as an interactive mathematical shell.

Hello World! in Scilab:

disp('Hello World');

Plotting a 3D surface function:

// A simple plot of z = f(x,y)
t=[0:0.3:2*%pi]';
z=sin(t)*cos(t');
plot3d(t,t',z)

Determining the equivalent single index corresponding to a given set of subscript values:

function I=sub2ind(dims,varargin)
  //I = sub2ind(dims,i1,i2,..) returns the linear index equivalent to the
  //row, column, ... subscripts in the arrays i1,i2,.. for a matrix of
  //size dims.
  //I = sub2ind(dims,Mi) returns the linear index
  //equivalent to the n subscripts in the columns of the matrix Mi for a matrix
  //of size dims.
  d=[1;cumprod(matrix(dims(1:$-1),-1,1))]
  for i=1:size(varargin)
    if varargin(i)==[] then I=[],return,end
  end
  if size(varargin)==1 then
    //subindices are the columns of the argument
    I=(varargin(1)-1)*d+1
  else
    //subindices are given as separated arguments
    I=1
    for i=1:size(varargin)
      I=I+(varargin(i)-1)*d(i)
    end
  end
endfunction

Toolboxes
Scilab has many contributed toolboxes for different tasks, such as:
Scilab Image Processing Toolbox (SIP) and its variants (such as SIVP)
Scilab Wavelet Toolbox
Scilab Java and .NET Module
Scilab Remote Access Module
More are available on the ATOMS Portal or the Scilab forge.

History
Scilab was created in 1990 by researchers from INRIA and École nationale des ponts et chaussées (ENPC).
It was initially named Ψlab (Psilab). The Scilab Consortium was formed in May 2003 to broaden contributions and promote Scilab as worldwide reference software in academia and industry. In July 2008, in order to improve the technology transfer, the Scilab Consortium joined the Digiteo Foundation.

Scilab 5.1, the first release compiled for Mac, was available in early 2009, and supported Mac OS X 10.5, a.k.a. Leopard. Thus, OS X 10.4, Tiger, was never supported except by porting from sources. Linux and Windows builds had been released since the beginning, with Solaris support dropped with version 3.1.1, and HP-UX dropped with version 4.1.2 after spotty support.

In June 2010, the Consortium announced the creation of Scilab Enterprises. Scilab Enterprises develops and markets, directly or through an international network of affiliated services providers, a comprehensive set of services for Scilab users. Scilab Enterprises also develops and maintains the Scilab software. The ultimate goal of Scilab Enterprises is to help make the use of Scilab more effective and easy.

In February 2017, Scilab 6.0.0 was released, which leveraged the latest C++ standards and lifted memory allocation limitations.

Since July 2012, Scilab has been developed and published by Scilab Enterprises, and in early 2017 Scilab Enterprises was acquired by virtual prototyping pioneer ESI Group. Since 2019 and Scilab 6.0.2, the University of Technology of Compiègne has provided resources to build and maintain the macOS version.

Since mid-2022, the Scilab team has been part of Dassault Systèmes.

Scilab Cloud App & Scilab Cloud API
Since 2016, Scilab can be embedded in a browser and called via an interface written in Scilab or via an API. This deployment method has the notable advantages of masking code and data as well as providing large computational power. These features have not been included in the open source version of Scilab and are still proprietary developments.

See also SageMath List of numerical-analysis software Comparison of numerical-analysis software SimulationX References Further reading External links Scilab website Array programming languages Dassault Group Free educational software Free mathematics software Free software programmed in Fortran Numerical analysis software for Linux Numerical analysis software for macOS Numerical analysis software for Windows Numerical programming languages Science software that uses GTK
Scilab
[ "Mathematics" ]
1,367
[ "Free mathematics software", "Mathematical software" ]
153,599
https://en.wikipedia.org/wiki/Research%20Consortium%20On%20Nearby%20Stars
The REsearch Consortium On Nearby Stars (RECONS) is an international group of astronomers founded in 1994 to investigate the stars nearest to the Solar System, with a focus on those within 10 parsecs (32.6 light years); as of 2012 the horizon was stretched to 25 parsecs. In part, the project hopes that a more accurate survey of local star systems will give a better picture of the star systems in the Galaxy as a whole.

Notable discoveries
The Consortium claims authorship of the series The Solar Neighborhood in The Astronomical Journal, which began in 1994 and now numbers nearly 40 papers and submissions. The following discoveries are from this series:
GJ 1061 was discovered to be the 20th nearest known star system, at a distance of 11.9 light years.
The first accurate measurement of distance for DENIS 0255-4700. At a distance of 16.2 light years, it is the nearest known class L brown dwarf object to the Solar System.
The discovery of 20 previously unknown star systems within 10 parsecs of the Solar System. These are in addition to 8 new star systems announced between 2000 and 2005.
RECONS is listed explicitly as an author on papers submitted to the Bulletin of the American Astronomical Society since 2004. The RECONS web page includes the frequently referenced "List of the 100 nearest star systems", which is updated as discoveries are made. A list of all RECONS parallaxes is available, as are all papers in the Solar Neighborhood series, along with material illustrating data from the RECONS 25 Parsec Database.

Members
Key astronomers involved in the project include:
Todd J. Henry (GSU) (consortium founder and director)
Wei-Chun Jao (GSU)
John Subasavage (USNO-Flagstaff)
Charlie Finch (USNO-DC)
Adric Riedel (Caltech)
Sergio Dieterich (Carnegie)
Jennifer Winters (H-S CfA)
Phil Ianna (UVA)

See also List of astronomical societies List of nearest stars References External links Astronomy organizations Organizations established in 1994
Research Consortium On Nearby Stars
[ "Astronomy" ]
415
[ "Astronomy organizations" ]
153,625
https://en.wikipedia.org/wiki/IUCN%20Red%20List
The International Union for Conservation of Nature (IUCN) Red List of Threatened Species, also known as the IUCN Red List or Red Data Book, founded in 1964, is an inventory of the global conservation status and extinction risk of biological species. A series of Regional Red Lists, which assess the risk of extinction to species within a political management unit, are also produced by countries and organizations.

The goals of the Red List are to provide scientifically based information on the status of species and subspecies at a global level, to draw attention to the magnitude and importance of threatened biodiversity, to influence national and international policy and decision-making, and to provide information to guide actions to conserve biological diversity.

Major species assessors include BirdLife International, the Institute of Zoology (the research division of the Zoological Society of London), the World Conservation Monitoring Centre, and many Specialist Groups within the IUCN Species Survival Commission (SSC). Collectively, assessments by these organizations and groups account for nearly half the species on the Red List.

The IUCN aims to have the category of every species re-evaluated at least every ten years, and every five years if possible. This is done in a peer-reviewed manner through IUCN Species Survival Commission (SSC) Specialist Groups, which are Red List Authorities (RLAs) responsible for a species, a group of species or a specific geographic area, or, in the case of BirdLife International, an entire class (Aves). The Red List Unit works with staff from the IUCN Global Species Programme as well as current programme partners to recommend new partners or networks to join as new Red List Authorities.

The number of species which have been assessed for the Red List has been increasing over time. As of the most recent update, of the 150,388 species surveyed, 42,108 are considered at risk of extinction because of human activity, in particular overfishing, hunting, and land development.

History
The idea for a Red Data Book was suggested by Peter Scott in 1963.

1966–1977 Red Data Lists
Initially the Red Data Lists were designed for specialists and were issued in a loose-leaf format that could be easily changed. The first two volumes of Red Lists were published in 1966 by conservationist Noel Simon, one for mammals and one for birds. The third volume that appeared covered reptiles and amphibians; it was created by René E. Honegger in 1968. In 1970, the IUCN published its fifth volume in this series. This was the first Red Data List which focused on plants (angiosperms only), compiled by Ronald Melville. The final volume of the Red Data List created in the older, loose-leaf style was volume 4, on freshwater fishes, published in 1979 by Robert Rush Miller.

1969 Red Data Book
The first attempt to create a Red Data Book for a nonspecialist public came in 1969 with The Red Book: Wildlife in Danger. This book covered various groups but was predominantly about mammals and birds, with smaller sections on reptiles, amphibians, fishes, and plants.

2006 release
The 2006 Red List, released on 4 May 2006, evaluated 40,168 species as a whole, plus an additional 2,160 subspecies, varieties, aquatic stocks, and subpopulations.

2007 release
On 12 September 2007, the World Conservation Union (IUCN) released the 2007 IUCN Red List of Threatened Species.
In this release, they raised the classification of both the western lowland gorilla (Gorilla gorilla gorilla) and the Cross River gorilla (Gorilla gorilla diehli) from endangered to critically endangered, which is the last category before extinct in the wild, due to the Ebola virus and poaching, along with other factors. Russ Mittermeier, chief of the Swiss-based IUCN's Primate Specialist Group, stated that 16,306 species are threatened with extinction, 188 more than in 2006 (for a total of 41,415 species on the Red List). The Red List includes the Sumatran orangutan (Pongo abelii) in the Critically Endangered category and the Bornean orangutan (Pongo pygmaeus) in the Endangered category.

2008 release
The 2008 Red List was released on 6 October 2008 at the IUCN World Conservation Congress in Barcelona and "confirmed an extinction crisis, with almost one in four [mammals] at risk of disappearing forever". The study shows that at least 1,141 of the 5,487 mammals on Earth are known to be threatened with extinction, and 836 are listed as Data Deficient.

2012 release
The Red List of 2012 was released on 19 July 2012 at the Rio+20 Earth Summit; nearly 2,000 species were added, with four species added to the extinct list and two to the rediscovered list. The IUCN assessed a total of 63,837 species, which revealed that 19,817 are threatened with extinction. 3,947 were described as "critically endangered" and 5,766 as "endangered", while more than 10,000 species are listed as "vulnerable". At threat are 41% of amphibian species, 33% of reef-building corals, 30% of conifers, 25% of mammals, and 13% of birds. The IUCN Red List has listed 132 species of plants and animals from India as "Critically Endangered".

Categories
Species are classified by the IUCN Red List into nine groups, specified through criteria such as rate of decline, population size, area of geographic distribution, and degree of population and distribution fragmentation. The criteria may be applied even in the absence of high-quality data, drawing on suspicion and inference of potential future threats, "so long as these can reasonably be supported". The nine categories are:
Extinct (EX) – beyond reasonable doubt that the species is no longer extant.
Extinct in the wild (EW) – survives only in captivity, cultivation and/or outside native range, as presumed after exhaustive surveys.
Critically endangered (CR) – in a particularly and extremely critical state.
Endangered (EN) – very high risk of extinction in the wild, meets any of criteria A to E for Endangered.
Vulnerable (VU) – meets one of the five Red List criteria and thus considered to be at high risk of unnatural (human-caused) extinction without further human intervention.
Near Threatened (NT) – close to being endangered in the near future.
Least Concern (LC) – unlikely to become endangered or extinct in the near future.
Data Deficient (DD)
Not Evaluated (NE)
In the IUCN Red List, "threatened" embraces the categories of Critically Endangered, Endangered, and Vulnerable.

1994 categories and 2001 framework
The older 1994 list has only a single "Lower Risk" category, which contained three subcategories:
Conservation Dependent (LR/cd)
Near Threatened (LR/nt)
Least Concern (LR/lc)
In the 2001 framework, Near Threatened and Least Concern became their own categories, while Conservation Dependent was removed and its contents merged into Near Threatened.

Possibly extinct
The tag of "possibly extinct" (PE) is used by BirdLife International, the Red List Authority for birds for the IUCN Red List.
BirdLife International has recommended that PE become an official tag for Critically Endangered species, and this has now been adopted, along with a "Possibly Extinct in the Wild" tag for species with populations surviving in captivity but likely to be extinct in the wild.

Versions
There have been a number of versions, dating from 1991, including:
Version 1.0 (1991)
Version 2.0 (1992)
Version 2.1 (1993)
Version 2.2 (1994)
Version 2.3 (1994)
Version 3.0 (1999)
Version 3.1 (2001)
All new IUCN assessments since 2001 have used version 3.1 of the categories and criteria.

Criticism
In 1997, the IUCN Red List received criticism on the grounds of secrecy (or at least poor documentation) surrounding the sources of its data. These allegations have led to efforts by the IUCN to improve its documentation and data quality, and to include peer reviews of taxa on the Red List. The list is also open to petitions against its classifications, on the basis of documentation or criteria. In the November 2002 issue of Trends in Ecology & Evolution, an article suggested that the IUCN Red List and similar works are prone to misuse by governments and other groups that draw possibly inappropriate conclusions on the state of the environment or seek to affect the exploitation of natural resources.

In the November 2016 issue of Science Advances, a research article claimed there are serious inconsistencies in the way species are classified by the IUCN. The researchers contend that the IUCN's process of categorization is "out-dated, and leaves room for improvement", and further emphasize the importance of readily available and easy-to-include geospatial data, such as satellite and aerial imaging. Their conclusion questioned not only the IUCN's method but also the validity of where certain species fall on the List. They believe that incorporating geographical data can significantly increase the number of species that need to be reclassified to a higher risk category.

See also CITES Conservation status Red List Index Regional Red List Species by IUCN Red List category Wildlife conservation

Citations
General and cited references
Hilton-Taylor, C. A history of the IUCN Red Data Book and Red List. Retrieved 2012-05-11.
IUCN Red List of Threatened Species, 2009. Summary Statistics. Retrieved 2009-12-19.
IUCN. 1994 IUCN Red List Categories and Criteria version 2.3. Retrieved 2009-12-19.
IUCN. 2001 IUCN Red List Categories and Criteria version 3.1. Retrieved 2009-12-19.
Rodrigues, A. S. L., Pilgrim, J. D., Lamoreux, J. F., Hoffmann, M. & Brooks, T. M. 2006. Trends in Ecology & Evolution 21(2): 71–76.
Sharrock, S. and Jones, M. 2009. Conserving Europe's threatened plants – Report on the lack of a European Red List and the creation of a consolidated list of the threatened plants of Europe. Retrieved 2011-03-23.

External links
1964 in the environment Biological databases Biota by conservation status system Red List Lists of biota Species described in 1963
IUCN Red List
[ "Biology" ]
2,048
[ "Lists of biota", "Biota by conservation status", "Bioinformatics", "Biodiversity", "Biota by conservation status system", "Biological databases" ]
153,663
https://en.wikipedia.org/wiki/Cytokine
Cytokines are a broad and loose category of small proteins (~5–25 kDa) important in cell signaling. Due to their size, cytokines cannot cross the lipid bilayer of cells to enter the cytoplasm and therefore typically exert their functions by interacting with specific cytokine receptors on the target cell surface. Cytokines have been shown to be involved in autocrine, paracrine and endocrine signaling as immunomodulating agents.

Cytokines include chemokines, interferons, interleukins, lymphokines, and tumour necrosis factors, but generally not hormones or growth factors (despite some overlap in the terminology). Cytokines are produced by a broad range of cells, including immune cells like macrophages, B lymphocytes, T lymphocytes and mast cells, as well as endothelial cells, fibroblasts, and various stromal cells; a given cytokine may be produced by more than one type of cell. They act through cell surface receptors and are especially important in the immune system; cytokines modulate the balance between humoral and cell-based immune responses, and they regulate the maturation, growth, and responsiveness of particular cell populations. Some cytokines enhance or inhibit the action of other cytokines in complex ways. They are different from hormones, which are also important cell signaling molecules. Hormones circulate in higher concentrations, and tend to be made by specific kinds of cells. Cytokines are important in health and disease, specifically in host immune responses to infection, inflammation, trauma, sepsis, cancer, and reproduction.

The word comes from the ancient Greek language: cyto, from Greek κύτος, kytos, 'cavity, cell' + kines, from Greek κίνησις, kinēsis, 'movement'.

Discovery
Interferon-alpha, an interferon type I, was identified in 1957 as a protein that interfered with viral replication. The activity of interferon-gamma (the sole member of the interferon type II class) was described in 1965; this was the first identified lymphocyte-derived mediator. Macrophage migration inhibitory factor (MIF) was identified simultaneously in 1966 by John David and Barry Bloom. In 1969, Dudley Dumonde proposed the term "lymphokine" to describe proteins secreted from lymphocytes; later, proteins derived from macrophages and monocytes in culture were called "monokines". In 1974, pathologist Stanley Cohen, M.D. (not to be confused with the Nobel laureate named Stanley Cohen, who was a PhD biochemist; nor with the MD geneticist Stanley Norman Cohen) published an article describing the production of MIF in virus-infected allantoic membrane and kidney cells, showing that its production is not limited to immune cells. This led to his proposal of the term cytokine. In 1993, Ogawa described the early acting growth factors, intermediate acting growth factors and late acting growth factors.

Difference from hormones
Classic hormones circulate in aqueous solution in nanomolar (10⁻⁹ M) concentrations that usually vary by less than one order of magnitude. In contrast, some cytokines (such as IL-6) circulate in picomolar (10⁻¹² M) concentrations that can increase up to 1,000-fold during trauma or infection. The widespread distribution of cellular sources for cytokines may be a feature that differentiates them from hormones. Virtually all nucleated cells, but especially endo/epithelial cells and resident macrophages (many near the interface with the external environment), are potent producers of IL-1, IL-6, and TNF-α.
In contrast, classic hormones, such as insulin, are secreted from discrete glands such as the pancreas. The current terminology refers to cytokines as immunomodulating agents.

A contributing factor to the difficulty of distinguishing cytokines from hormones is that some immunomodulating effects of cytokines are systemic (i.e., affecting the whole organism) rather than local. For instance, using hormone terminology accurately, cytokines may be autocrine or paracrine in nature, may mediate chemotaxis and chemokinesis, and may act in an endocrine manner as pyrogens. Essentially, cytokines are not limited to their immunomodulatory status as molecules.

Nomenclature
Cytokines have been classed as lymphokines, interleukins, and chemokines, based on their presumed cell of secretion, function, or target of action. Because cytokines are characterised by considerable redundancy and pleiotropism, such distinctions, allowing for exceptions, are obsolete.

The term interleukin was initially used by researchers for those cytokines whose presumed targets are principally white blood cells (leukocytes). It is now used largely for designation of newer cytokine molecules and bears little relation to their presumed function. The vast majority of these are produced by T-helper cells.
Lymphokines: produced by lymphocytes
Monokines: produced exclusively by monocytes
Interferons: involved in antiviral responses
Colony stimulating factors: support the growth of cells in semisolid media
Chemokines: mediate chemoattraction (chemotaxis) between cells.

Classification
Structural
Structural homogeneity has been able to partially distinguish between cytokines that do not demonstrate a considerable degree of redundancy, so that they can be classified into four types:
The four-α-helix bundle family: member cytokines have three-dimensional structures with a bundle of four α-helices. This family, in turn, is divided into three sub-families:
the IL-2 subfamily. This is the largest family. It contains several non-immunological cytokines including erythropoietin (EPO) and thrombopoietin (TPO). They can be grouped into long-chain and short-chain cytokines by topology. Some members share the common gamma chain as part of their receptor.
the interferon (IFN) subfamily.
the IL-10 subfamily.
The IL-1 family, which primarily includes IL-1 and IL-18.
The cysteine knot cytokines, which include members of the transforming growth factor beta superfamily, including TGF-β1, TGF-β2 and TGF-β3.
The IL-17 family, which has yet to be completely characterized, though member cytokines have a specific effect in promoting proliferation of T-cells that have cytotoxic effects.

Functional
A classification that proves more useful in clinical and experimental practice outside of structural biology divides immunological cytokines into those that enhance cellular immune responses, type 1 (TNF-α, IFN-γ, etc.), and those that enhance antibody responses, type 2 (TGF-β, IL-4, IL-10, IL-13, etc.). A key focus of interest has been that cytokines in one of these two sub-sets tend to inhibit the effects of those in the other. Dysregulation of this tendency is under intensive study for its possible role in the pathogenesis of autoimmune disorders.

Several inflammatory cytokines are induced by oxidative stress. The fact that cytokines themselves trigger the release of other cytokines and also lead to increased oxidative stress makes them important in chronic inflammation, as well as in other immune responses, such as fever and the production of acute phase proteins by the liver (IL-1, IL-6, IL-12, IFN-α).
Cytokines also play a role in anti-inflammatory pathways and are a possible therapeutic treatment for pathological pain from inflammation or peripheral nerve injury. There are both pro-inflammatory and anti-inflammatory cytokines that regulate this pathway.

Receptors
In recent years, the cytokine receptors have come to demand the attention of more investigators than cytokines themselves, partly because of their remarkable characteristics and partly because a deficiency of cytokine receptors has now been directly linked to certain debilitating immunodeficiency states. In this regard, and also because the redundancy and pleomorphism of cytokines are, in fact, a consequence of their homologous receptors, many authorities think that a classification of cytokine receptors would be more clinically and experimentally useful.

A classification of cytokine receptors based on their three-dimensional structure has, therefore, been attempted. Such a classification, though seemingly cumbersome, provides several unique perspectives for attractive pharmacotherapeutic targets.
Immunoglobulin (Ig) superfamily, whose members are ubiquitously present throughout several cells and tissues of the vertebrate body, and share structural homology with immunoglobulins (antibodies), cell adhesion molecules, and even some cytokines. Examples: IL-1 receptor types.
Hemopoietic Growth Factor (type 1) family, whose members have certain conserved motifs in their extracellular amino-acid domain. The IL-2 receptor belongs to this family; deficiency of its γ-chain (common to several other cytokines) is directly responsible for the X-linked form of Severe Combined Immunodeficiency (X-SCID).
Interferon (type 2) family, whose members are receptors for IFN-β and IFN-γ.
Tumor necrosis factor (TNF) (type 3) family, whose members share a cysteine-rich common extracellular binding domain, and which includes several other non-cytokine ligands like CD40, CD27 and CD30, besides the ligands for which the family is named.
Seven transmembrane helix family, the ubiquitous receptor type of the animal kingdom. All G protein-coupled receptors (for hormones and neurotransmitters) belong to this family. Chemokine receptors, two of which act as binding proteins for HIV (CD4 and CCR5), also belong to this family.
Interleukin-17 receptor (IL-17R) family, which shows little homology with any other cytokine receptor family. Structural motifs conserved between members of this family include an extracellular fibronectin III-like domain, a transmembrane domain and a cytoplasmic SEFIR domain. The known members of this family are as follows: IL-17RA, IL-17RB, IL-17RC, IL-17RD and IL-17RE.

Cellular effects
Each cytokine has a matching cell-surface receptor. Subsequent cascades of intracellular signaling then alter cell functions. This may include the upregulation and/or downregulation of several genes and their transcription factors, resulting in the production of other cytokines, an increase in the number of surface receptors for other molecules, or the suppression of their own effect by feedback inhibition.

The effect of a particular cytokine on a given cell depends on the cytokine, its extracellular abundance, the presence and abundance of the complementary receptor on the cell surface, and the downstream signals activated by receptor binding; these last two factors can vary by cell type. Cytokines are characterized by considerable redundancy, in that many cytokines appear to share similar functions.
It may seem paradoxical that cytokines bound to antibodies have a stronger immune effect than the cytokine alone. This may lead to lower therapeutic doses.

It has been shown that inflammatory cytokines cause an IL-10-dependent inhibition of T-cell expansion and function by up-regulating PD-1 levels on monocytes, which leads to IL-10 production by monocytes after binding of PD-1 by PD-L.

Adverse reactions to cytokines are characterized by local inflammation and/or ulceration at the injection sites. Occasionally such reactions are seen with more widespread papular eruptions.

Roles in health and disease
Cytokines are involved in several developmental processes during embryonic development. Cytokines are released from the blastocyst and are also expressed in the endometrium, and they have critical roles in the stages of zona hatching and implantation. Cytokines are crucial for fighting off infections and in other immune responses. However, they can become dysregulated and pathological in inflammation, trauma, sepsis, and hemorrhagic stroke. Dysregulated cytokine secretion in the aged population can lead to inflammaging, and render these individuals more vulnerable to age-related diseases like neurodegenerative diseases and type 2 diabetes.

A 2019 review was inconclusive as to whether cytokines play any definitive role in ME/CFS. A 2024 study found a positive correlation between plasma interleukin IL-2 and fatigue in patients with type 1 narcolepsy.

Adverse effects
Adverse effects of cytokines have been linked to many disease states and conditions, ranging from schizophrenia, major depression and Alzheimer's disease to cancer. T regulatory cells (Tregs) and related cytokines are effectively engaged in the process of tumor immune escape and functionally inhibit the immune response against the tumor. Forkhead box protein 3 (Foxp3), a transcription factor, is an essential molecular marker of Treg cells. Foxp3 polymorphism (rs3761548) might be involved in cancer progression, such as in gastric cancer, through influencing Treg function and the secretion of immunomodulatory cytokines such as IL-10, IL-35, and TGF-β.

Normal tissue integrity is preserved by feedback interactions between diverse cell types mediated by adhesion molecules and secreted cytokines; disruption of normal feedback mechanisms in cancer threatens tissue integrity.

Over-secretion of cytokines can trigger a dangerous cytokine storm syndrome. Cytokine storms may have been the cause of severe adverse events during a clinical trial of TGN1412. Cytokine storms are also suspected to have been the main cause of death in the 1918 "Spanish flu" pandemic. Deaths were weighted more heavily towards people with healthy immune systems, because of their ability to produce stronger immune responses, with dramatic increases in cytokine levels. Another example of cytokine storm is seen in acute pancreatitis: cytokines are integral and implicated at every stage of the cascade resulting in the systemic inflammatory response syndrome and multi-organ failure associated with this intra-abdominal catastrophe. During the COVID-19 pandemic, some deaths from COVID-19 have been attributable to cytokine release storms. Current data suggest cytokine storms may be the source of extensive lung tissue damage and dysfunctional coagulation in COVID-19 infections.

Medical use as drugs
Some cytokines have been developed into protein therapeutics using recombinant DNA technology.
Recombinant cytokines being used as drugs as of 2014 include:
Bone morphogenetic protein (BMP), used to treat bone-related conditions
Erythropoietin (EPO), used to treat anemia
Granulocyte colony-stimulating factor (G-CSF), used to treat neutropenia in cancer patients
Granulocyte macrophage colony-stimulating factor (GM-CSF), used to treat neutropenia and fungal infections in cancer patients
Interferon alfa, used to treat hepatitis C and multiple sclerosis
Interferon beta, used to treat multiple sclerosis
Interleukin 2 (IL-2), used to treat cancer
Interleukin 11 (IL-11), used to treat thrombocytopenia in cancer patients
Interferon gamma, used to treat chronic granulomatous disease and osteopetrosis

See also Adipokines Apoptosis Cytokine redundancy Cytokine release syndrome Cytokine secretion assay ELISA assays Myokine Signal transduction Thymic stromal lymphopoietin Virokine Notes References External links Cytokine Signalling Forum Cytokine Tutorial Cytokine Gene Summary, Ontology, Pathways and More: Immunology Database and Analysis Portal (ImmPort) Immunology
Cytokine
[ "Chemistry", "Biology" ]
3,399
[ "Immune system", "Signal transduction", "Immunology", "Cytokines", "Organ systems" ]
153,681
https://en.wikipedia.org/wiki/Celestial%20equator
The celestial equator is the great circle of the imaginary celestial sphere lying on the same plane as the equator of Earth. By extension, it is also a plane of reference in the equatorial coordinate system. In other words, the celestial equator is an abstract projection of the terrestrial equator into outer space. Due to Earth's axial tilt, the celestial equator is currently inclined by about 23.44° with respect to the ecliptic (the plane of Earth's orbit), but has varied from about 22.0° to 24.5° over the past 5 million years due to perturbations from other planets.

An observer standing on Earth's equator sees the celestial equator as a semicircle passing through the zenith, the point directly overhead. As the observer moves north (or south), the celestial equator tilts towards the opposite horizon. The celestial equator is defined to be infinitely distant (since it is on the celestial sphere); thus, the ends of the semicircle always intersect the horizon due east and due west, regardless of the observer's position on Earth. At the poles, the celestial equator coincides with the astronomical horizon. At all latitudes, the celestial equator appears as a uniform arc or circle because the observer is only finitely far from the plane of the celestial equator, but infinitely far from the celestial equator itself.

Astronomical objects near the celestial equator appear above the horizon from most places on Earth, but they culminate (reach the meridian) highest near the equator: for an observer at latitude φ, an object on the celestial equator culminates at an altitude of 90° − |φ|. The constellations through which the celestial equator currently passes are the most globally visible constellations. Over thousands of years, the orientation of Earth's equator, and thus the constellations the celestial equator passes through, will change due to axial precession.

Celestial bodies other than Earth also have similarly defined celestial equators.

See also Axial precession Celestial pole Declination Rotation around a fixed axis (pole) References Equator Dynamics of the Solar System Technical factors of astrology Circles Planes (geometry)
Celestial equator
[ "Astronomy", "Mathematics" ]
404
[ "Dynamics of the Solar System", "Mathematical objects", "Infinity", "Astronomical coordinate systems", "Coordinate systems", "Planes (geometry)", "Circles", "Pi", "Solar System" ]
153,688
https://en.wikipedia.org/wiki/Well%20dressing
Well dressing, also known as well flowering, is a tradition practised in some parts of rural England in which wells, springs and other water sources are decorated with designs created from flower petals. The custom is most closely associated with the Peak District of Derbyshire and Staffordshire. James Murray Mackinlay, writing in 1893, noted that the tradition was not observed in Scotland; W. S. Cordner, in 1946, similarly noted its absence in Ireland. Both Scotland and Ireland do have a long history of the veneration of wells, however, dating from at least the 6th century. The custom of well dressing in its present form probably began in the late 18th century, and evolved from "the more widespread, but less picturesque" decoration of wells with ribbons and simple floral garlands.

History
The location identified most closely with well dressing is Tissington, Derbyshire, though the origins of the tradition are obscure. It has been speculated that it began as a pagan custom of offering thanks to gods for a reliable water supply; other suggested explanations include villagers celebrating the purity of their water supply after surviving the Black Death in 1348, or alternatively celebrating their water's constancy during a prolonged drought in 1615. The practice of well dressing using clay boards at Tissington is not recorded before 1818, however, and the earliest record of the wells being adorned with simple garlands occurs in 1758.

Well dressing was celebrated in at least 12 villages in Derbyshire by the late 19th century, and was introduced in Buxton in 1840, "to commemorate the beneficence of the Duke of Devonshire who, at his own expense, made arrangements for supplying the Upper Town, which had been much inconvenienced by the distance to St Anne's well on the Wye, with a fountain of excellent water within easy reach of all". Similarly, well dressing was revived at this time in Youlgreave, to celebrate the supplying of water to the village "from a hill at some distance, by means of pipes laid under the stream of an intervening valley." With the arrival of piped water the tradition was adapted to include public taps, although the resulting creations were still described as well dressings. The custom has waxed and waned over the years, but has seen revivals in Derbyshire, Staffordshire, South Yorkshire, Cheshire, Shropshire, Worcestershire and Kent.

Process
Wooden frames are constructed and covered with clay, mixed with water and salt. A design is sketched on paper, often of a religious theme, and this is traced onto the clay. The picture is then filled in with natural materials, predominantly flower petals and mosses, but also beans, seeds and small cones. Each group uses its own technique, with some areas mandating that only natural materials be used while others feel free to use modern materials to simplify production. Wirksworth and Barlow are two of the very few dressings where the strict use of only natural materials is still observed.

In literature
John Brunner's story "In the Season of the Dressing of the Wells" describes the revival of the custom in an English village of the West Country after World War I, and its connection to the Goddess. Jon McGregor's novel Reservoir 13 is set in a village where well dressing is an annual event.
See also Clootie well Osterbrunnen References Footnotes Bibliography External links welldressing.com Listing of dates and sites, with galleries of photos and historical information Official website for the Stoney Middleton Well Dressing Committee Official website of the Buxton Wells Dressing Festival Short history of well dressing Tissington Hall's guide to producing welldressings Well dressings in Wirksworth Derbyshire Community site for Wirksworth Derbyshire Well Dressings in Barlow, Derbyshire. Dressed year on year for at least 150 years A history of well dressing in Wormhill Well dressings in Brackenfield Culture in Derbyshire English folklore Tourist attractions of the Peak District Water wells English traditions
Well dressing
[ "Chemistry", "Engineering", "Environmental_science" ]
799
[ "Hydrology", "Water wells", "Environmental engineering" ]
153,767
https://en.wikipedia.org/wiki/I%20%3D%20PAT
I = PAT is the mathematical notation of a formula put forward to describe the impact of human activity on the environment:

I = P × A × T

The expression equates human impact on the environment (I) to a function of three factors: population (P), affluence (A) and technology (T). It is similar in form to the Kaya identity, which applies specifically to emissions of the greenhouse gas carbon dioxide.

The validity of expressing environmental impact as a simple product of independent factors, and the factors that should be included and their comparative importance, have been the subject of debate among environmentalists. In particular, some have drawn attention to potential inter-relationships among the three factors, and others have wished to stress other factors not included in the formula, such as political and social structures, and the scope for beneficial, as well as harmful, environmental actions.

History
The equation was developed in 1970 during the course of a debate between Barry Commoner, Paul R. Ehrlich and John Holdren. Commoner argued that environmental impacts in the United States were caused primarily by changes in its production technology following World War II, and he focused on present-day deteriorating environmental conditions in the United States. Ehrlich and Holdren argued that all three factors were important, but they emphasized the role of human population growth, focusing on a broader scale and being less specific in space and time.

The equation can aid in understanding some of the factors affecting human impacts on the environment, but it has also been cited as a basis for many of the dire environmental predictions of the 1970s by Paul Ehrlich, George Wald, Denis Hayes, Lester Brown, René Dubos, and Sidney Ripley that did not come to pass. Neal Koblitz classified equations of this type as "mathematical propaganda" and criticized Ehrlich's use of them in the media (e.g. on The Tonight Show) to sway the general public.

The dependent variable: Impact
The variable "I" in the "I=PAT" equation represents environmental impact. The environment may be viewed as a self-regenerating system that can endure a certain level of impact. The maximum endurable impact is called the carrying capacity. As long as "I" is less than the carrying capacity, the associated population, affluence, and technology that make up "I" can be perpetually endured. If "I" exceeds the carrying capacity, then the system is said to be in overshoot, which may only be a temporary state. Overshoot may degrade the ability of the environment to endure impact, therefore reducing the carrying capacity.

Impact may be measured using ecological footprint analysis in units of global hectares (gha). Ecological footprint per capita is a measure of the quantity of Earth's biologically productive surface that is needed to regenerate the resources consumed per capita. Impact is modeled as the product of three terms, giving gha as a result. Population is expressed in human numbers; therefore affluence is measured in units of gha per capita. Technology is a unitless efficiency factor. (A minimal numeric sketch of how these units compose follows below.)

The three factors
Population
In the I=PAT equation, the variable P represents the population of an area, such as the world. Since the rise of industrial societies, human population has been increasing exponentially. This caused Thomas Malthus, Paul Ehrlich and many others to postulate that this growth would continue until checked by widespread hunger and famine (see Malthusian growth model).
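As a minimal numeric sketch of how the identity composes, consider the following short Scilab-style calculation. Only the population figure (the 2019 value cited below) comes from this article; the values for A and T are assumed purely for illustration and do not correspond to any published assessment:

// Hypothetical I = PAT calculation (illustrative values only)
P = 7.7e9;     // population, in people (2019 figure cited in this article)
A = 1.2;       // affluence: ecological footprint per capita, in gha/person (assumed)
T = 0.9;       // technology: unitless efficiency factor (assumed)
I = P * A * T; // impact, in global hectares
disp(I)        // displays roughly 8.3e9 (gha)

Note that halving T (i.e., doubling efficiency) halves I only if P and A are held fixed; the criticism section below discusses why that independence assumption often fails in practice.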
The United Nations projects that world population will increase from 7.7 billion today (2019) to 9.8 billion in 2050 and about 11.2 billion in 2100. These projections take into consideration that population growth has slowed in recent years as women have fewer children, a phenomenon resulting from demographic transition all over the world. Although the UN projects that human population may stabilize at around 11.2 billion in 2100, the I=PAT equation will remain relevant to the increasing human impact on the environment in the short- to mid-term future.

Environmental impacts of population
Increased population increases humans' environmental impact in many ways, which include but are not limited to:
Increased land use – resulting in habitat loss for other species
Increased resource use – resulting in changes in land cover
Increased pollution – which can cause sickness and damage ecosystems
Increased climate change
Increased biodiversity loss

Affluence
The variable A in the I=PAT equation stands for affluence. It represents the average consumption of each person in the population. As the consumption of each person increases, the total environmental impact increases as well. A common proxy for measuring consumption is GDP per capita or GNI per capita. While GDP per capita measures production, it is often assumed that consumption increases when production increases. GDP per capita has been rising steadily over the last few centuries and is driving up human impact in the I=PAT equation.

Environmental impacts of affluence
Increased consumption significantly increases human environmental impact. This is because each product consumed has wide-ranging effects on the environment. For example, the construction of a car has the following environmental impacts: 605,664 gallons of water for parts and tires; 682 lbs of pollution at a mine for the lead battery; and 2,178 lbs of discharge into the water supply for the 22 lbs of copper contained in the car. The more cars per capita, the greater the impact. The ecological impacts of each product are far-reaching; increases in consumption quickly result in large impacts on the environment through direct and indirect sources.

Technology
The T variable in the I=PAT equation represents how resource-intensive the production of affluence is: how much environmental impact is involved in creating, transporting and disposing of the goods, services and amenities used. Improvements in efficiency can reduce resource intensiveness, reducing the T multiplier. Since technology can affect environmental impact in many different ways, the unit for T is often tailored to the situation to which I=PAT is being applied. For example, for a situation where the human impact on climate change is being measured, an appropriate unit for T might be greenhouse gas emissions per unit of GDP.

Environmental impacts of technology
Increases in efficiency from technologies can reduce specific environmental impacts, but because these technologies yield increasing prosperity for the people and businesses that adopt them, they often end up generating greater overall growth in the use of the resources that sustain us.

Criticism
Criticisms of the I=PAT formula include:
It is too simplistic for a complex problem
Interdependencies between variables
General sweeping assumptions about each variable's effect on environmental impact
Cultural differences cause wide variation in impact
Technology cannot properly be expressed in a unit.
Varying the unit will prove to be inaccurate, as the result of the calculation depends on one's view of the situation.

Interdependencies
The I=PAT equation has been criticized for being too simplistic by assuming that P, A, and T are independent of each other. In reality, at least seven interdependencies between P, A, and T could exist, indicating that it is more correct to rewrite the equation as I = f(P,A,T). For example, a doubling of technological efficiency, or equivalently a reduction of the T-factor by 50%, does not necessarily reduce the environmental impact (I) by 50% if efficiency-induced price reductions stimulate additional consumption of the resource that was supposed to be conserved, a phenomenon called the rebound effect or Jevons paradox. (If T falls by 50% but the resulting price drop doubles consumption A, total impact I is unchanged.) As was shown by Alcott, despite significant improvements in the carbon intensity of GDP (i.e., the efficiency in carbon use) since 1980, world fossil energy consumption has increased in line with economic and population growth. Similarly, an extensive historical analysis of technological efficiency improvements has conclusively shown that improvements in the efficiency of energy and material use were almost always outpaced by economic growth, resulting in a net increase in resource use and associated pollution.

Each factor in the I=PAT equation can either increase or decrease the level of environmental impact, and their interactions are non-linear and dynamic. Although environmental impacts are driven by human activities in specific regions, these impacts often manifest elsewhere due to the globalized nature of environmental systems and human activity. For instance, economic activity in one area can lead to resource extraction in another or cause pollution that spreads to different locations.

Neglect of beneficial human impacts
There have also been comments that this model depicts people as being purely detrimental to the environment, ignoring any conservation or restoration efforts that societies have made.

Neglect of political and social contexts
Another major criticism of the I=PAT model is that it ignores the political context and decision-making structures of countries and groups. This means the equation does not account for varying degrees of power, influence, and responsibility of individuals over environmental impact. Also, the P factor does not account for the complexity of social structures or behaviors, resulting in blame being placed on the global poor. I=PAT does not account for sustainable resource use among some poor and indigenous populations, unfairly characterizing those whose cultures support low-impact practices.

However, it has been argued that the latter criticism not only assumes low impacts for indigenous populations, but also misunderstands the I=PAT equation itself. Environmental impact is a function of human numbers, affluence (i.e., resources consumed per capita) and technology. It is assumed that small-scale societies have low environmental impacts due to their practices and orientations alone, but there is little evidence to support this. In fact, the generally low impact of small-scale societies compared to state societies is due to a combination of their small numbers and low-level technology. Thus, the environmental sustainability of these societies is largely an epiphenomenon due to their inability to significantly affect their environment.
That all types of societies are subject to I=PAT was actually made clear in Ehrlich and Holdren's 1972 dialogue with Commoner in The Bulletin of the Atomic Scientists, where they examine the pre-industrial (and indeed prehistoric) impact of human beings on the environment. Their position is further clarified by Holdren's 1993 paper, A Brief History of "IPAT". Policy implications As a result of the interdependencies between P, A, and T and potential rebound effects, policies aimed at decreasing environmental impacts through reductions in P, A, and T may not only be very difficult to implement (e.g., population control and the material sufficiency and degrowth movements have been controversial) but are also likely to be rather ineffective compared to rationing (i.e., quotas) or Pigouvian taxation of resource use or pollution. World3 model and IPAT Equation The IPAT equation serves as a cornerstone for analyzing the drivers of environmental impact. It underpins the entire World3 simulation model, which is the most influential sustainability model ever created, and is essentially an extended application of the IPAT equation. See also Carbon footprint Eco-economic decoupling Ecological indicator Embodied energy Life cycle assessment Sustainability measurement Sustainability metrics and indices Water footprint References External links Human impact on the environment Environmental social science concepts Equations Human geography Technology assessment Population ecology
I = PAT
[ "Mathematics", "Technology", "Environmental_science" ]
2,249
[ "Technology assessment", "Science and technology studies", "Mathematical objects", "Equations", "Environmental social science concepts", "nan", "Environmental social science", "Human geography" ]
153,771
https://en.wikipedia.org/wiki/Blowing%20a%20raspberry
Blowing a raspberry, razzing, or making a Bronx cheer is to make a noise similar to flatulence that may signify derision, real or feigned. It is made by placing the tongue between the lips and blowing. A raspberry (when used with the tongue) is not used in any human language as a building block of words, apart from jocular exceptions such as the name of the comic-book character Joe Btfsplk. However, the vaguely similar bilabial trill (essentially blowing a raspberry with one's lips) is a regular consonant sound in a few dozen languages scattered around the world. Spike Jones and His City Slickers used a "birdaphone" to create this sound on their recording of "Der Fuehrer's Face", repeatedly lambasting Adolf Hitler with: "We'll Heil! (Bronx cheer) Heil! (Bronx cheer) Right in Der Fuehrer's Face!" In the terminology of phonetics, the raspberry has been described as a voiceless linguolabial trill, transcribed in the International Phonetic Alphabet, and as a buccal interdental trill, transcribed in the Extensions to the International Phonetic Alphabet. Name The nomenclature varies by country. In most anglophone countries, it is known as a raspberry, which is attested from at least 1890, and which in the United States had been shortened to razz by 1919. The term originates in rhyming slang, where "raspberry tart" means "fart". In the United States it has also been called a Bronx cheer since at least the early 1920s. See also Golden Raspberry Awards, which are named after the term Linguistic universal The Phantom Raspberry Blower of Old London Town Flatulence humor References Flatulence Onomatopoeia Gestures Articles containing video clips Sounds by type Metaphors referring to food and drink
Blowing a raspberry
[ "Biology" ]
403
[ "Behavior", "Gestures", "Human behavior" ]
153,774
https://en.wikipedia.org/wiki/First%20point%20of%20Aries
The first point of Aries, also known as the cusp of Aries, is the location of the March equinox (the vernal equinox in the northern hemisphere, and the autumnal equinox in the southern), used as a reference point in celestial coordinate systems. In diagrams using such coordinate systems, it is often indicated with the symbol ♈︎. Named for the constellation of Aries, it is one of the two points on the celestial sphere at which the celestial equator crosses the ecliptic, the other being the first point of Libra, located exactly 180° from it. Due to precession of the equinoxes since the positions were originally named in antiquity, the position of the Sun when at the March equinox is now in Pisces; when it is at the September equinox, it is in Virgo (as of J2000). Along its yearly path through the zodiac, the Sun meets the celestial equator as it travels from south to north at the first point of Aries, and from north to south at the first point of Libra. The first point of Aries is considered to be the celestial "prime meridian" from which right ascension is calculated. History The choice of starting position from which to measure the Sun's motion across the celestial sphere is arbitrary. The equinoxes are preferred as an equinox marks the point in time when the Sun has neither northern nor southern declination but is crossing the celestial equator. Of the two possible equinoxes the ancient Greeks chose the March equinox as the starting point. This coincided with the festival of Hilaria, a time of optimism and beginnings where farmers began to sow or observed the first growth and blossoming of trees and summer crops. The naming of Aries came late in the Babylonian zodiac; in the earliest tradition, in the early Middle Bronze Age, the equinox was marked by its actual coincidence with the Pleiades. The time also corresponded to the castration of male calves, mules and donkeys (Sanguia) on the vernal equinox, and marked the start of spring proper. The first point of Aries is so called because, when Hipparchus defined it in 130 BCE, it was located in the western extreme of the constellation of Aries, near its border with Pisces and the star γ Arietis. Due to the Sun's eastward movement across the sky throughout the year, this western end of Aries was the point at which the Sun entered the constellation, hence the name first point of Aries. Definition Due to Earth's axial precession, this point gradually moves westwards at a rate of about one degree every 72 years. This means that, since the time of Hipparchus, it has shifted across the sky by about 30°, and is currently located within Pisces, near its border with Aquarius. The Sun now appears in Aries from late April until mid-May, though the constellation is still associated with the beginning of the northern spring. The first point of Aries is important to the fields of astronomy, nautical navigation and astrology. Navigational ephemeris tables record the geographic position of the first point of Aries as the reference for the positions of navigational stars. Due to the slow precession of the equinoxes, the constellations seen overhead (at the zenith) at a given time of year from a given location have slowly moved west (the accumulated drift is known by comparing solar epochs). The tropical Zodiac is similarly affected and no longer corresponds with the constellations (the Cusp of Libra today is located within Virgo). In sidereal astrology, by contrast, the first point of Aries remains aligned with the Aries constellation.
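The drift quoted above is easy to check with back-of-the-envelope arithmetic. Below is a minimal Python sketch; the 25,772-year precession period is the commonly quoted modern value, and treating the rate as uniform is a simplifying assumption:

```python
# Earth's axial precession completes ~360 degrees in about 25,772 years,
# i.e. roughly one degree every 72 years.
PRECESSION_PERIOD_YEARS = 25_772
DEG_PER_YEAR = 360 / PRECESSION_PERIOD_YEARS

years_per_degree = 1 / DEG_PER_YEAR
print(f"~{years_per_degree:.0f} years per degree")  # ~72

# Accumulated westward shift since Hipparchus defined the point in 130 BCE:
elapsed = 2025 - (-130)  # ignores the missing year zero; fine for an estimate
shift_deg = elapsed * DEG_PER_YEAR
print(f"shift since Hipparchus: ~{shift_deg:.0f} degrees")  # ~30
```

The result, about 30 degrees of westward shift, matches the statement that the point has moved out of Aries and across most of Pisces.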
See also Ras Hammel "the head of the ram" References Astronomy Astronomical coordinate systems Dynamics of the Solar System Time in astronomy Aries (constellation) Technical factors of Western astrology Pisces (constellation) Spring equinox
First point of Aries
[ "Astronomy", "Mathematics" ]
828
[ "Time in astronomy", "Dynamics of the Solar System", "Constellations", "Astronomical coordinate systems", "nan", "Coordinate systems", "Pisces (constellation)", "Aries (constellation)", "Solar System" ]
153,783
https://en.wikipedia.org/wiki/Crystal%20optics
Crystal optics is the branch of optics that describes the behaviour of light in anisotropic media, that is, media (such as crystals) in which light behaves differently depending on which direction the light is propagating. The index of refraction depends on both composition and crystal structure and can be calculated using the Gladstone–Dale relation. Crystals are often naturally anisotropic, and in some media (such as liquid crystals) it is possible to induce anisotropy by applying an external electric field. Isotropic media Typical transparent media such as glasses are isotropic, which means that light behaves the same way no matter which direction it is travelling in the medium. In terms of Maxwell's equations in a dielectric, this gives a relationship between the electric displacement field D and the electric field E: D = ε0E + P, where ε0 is the permittivity of free space and P is the electric polarization (the vector field corresponding to electric dipole moments present in the medium). Physically, the polarization field can be regarded as the response of the medium to the electric field of the light. Electric susceptibility In an isotropic and linear medium, this polarization field P is proportional and parallel to the electric field E: P = ε0χE, where χ is the electric susceptibility of the medium. The relation between D and E is thus: D = ε0E + ε0χE = ε0(1 + χ)E = εE, where ε = ε0(1 + χ) is the dielectric constant of the medium. The value 1+χ is called the relative permittivity of the medium, and is related to the refractive index n, for non-magnetic media, by n = √(1 + χ). Anisotropic media In an anisotropic medium, such as a crystal, the polarisation field P is not necessarily aligned with the electric field of the light E. In a physical picture, this can be thought of as the dipoles induced in the medium by the electric field having certain preferred directions, related to the physical structure of the crystal. This can be written as: P = ε0χE. Here χ is not a number as before but a tensor of rank 2, the electric susceptibility tensor. In terms of components in 3 dimensions: Pi = ε0 Σj χijEj (i, j = x, y, z), or using the summation convention: Pi = ε0χijEj. Since χ is a tensor, P is not necessarily colinear with E. In nonmagnetic and transparent materials, χij = χji, i.e. the χ tensor is real and symmetric. In accordance with the spectral theorem, it is thus possible to diagonalise the tensor by choosing the appropriate set of coordinate axes, zeroing all components of the tensor except χxx, χyy and χzz. This gives the set of relations: Px = ε0χxxEx, Py = ε0χyyEy, Pz = ε0χzzEz. The directions x, y and z are in this case known as the principal axes of the medium. Note that these axes will be orthogonal if all entries in the χ tensor are real, corresponding to a case in which the refractive index is real in all directions. It follows that D and E are also related by a tensor: D = ε0(1 + χ)E = εE. Here ε is known as the relative permittivity tensor or dielectric tensor. Consequently, the refractive index of the medium must also be a tensor. Consider a light wave propagating along the z principal axis polarised such that the electric field of the wave is parallel to the x-axis. The wave experiences a susceptibility χxx and a permittivity εxx. The refractive index is thus: nx = √(1 + χxx). For a wave polarised in the y direction: ny = √(1 + χyy). Thus these waves will see two different refractive indices and travel at different speeds. This phenomenon is known as birefringence and occurs in some common crystals such as calcite and quartz. If χxx = χyy ≠ χzz, the crystal is known as uniaxial. (See Optic axis of a crystal.) If χxx ≠ χyy and χyy ≠ χzz the crystal is called biaxial.
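The relations above are easy to check numerically. Here is a minimal Python sketch (using numpy; the susceptibility values are made up for illustration) that diagonalises a real symmetric susceptibility tensor and classifies the medium from its principal refractive indices:

```python
import numpy as np

def principal_indices(chi):
    """Principal refractive indices of a non-magnetic medium.

    chi: real symmetric 3x3 electric susceptibility tensor.
    Diagonalising chi gives the principal susceptibilities chi_ii,
    and n_i = sqrt(1 + chi_ii) along each principal axis.
    """
    chi = np.asarray(chi, dtype=float)
    assert np.allclose(chi, chi.T), "expected a symmetric tensor"
    eigvals, axes = np.linalg.eigh(chi)  # principal susceptibilities and axes
    return np.sqrt(1 + eigvals), axes

def classify(n, tol=1e-9):
    nx, ny, nz = sorted(n)
    if abs(nx - nz) < tol:
        return "isotropic"
    if abs(nx - ny) < tol or abs(ny - nz) < tol:
        return "uniaxial"
    return "biaxial"

# Illustrative uniaxial example: chi_xx = chi_yy != chi_zz (arbitrary values).
chi = np.diag([1.25, 1.25, 1.50])
n, _ = principal_indices(chi)
print(n, classify(n))  # two equal indices -> "uniaxial"
```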
A uniaxial crystal exhibits two refractive indices, an "ordinary" index (no) for light polarised in the x or y directions, and an "extraordinary" index (ne) for polarisation in the z direction. A uniaxial crystal is "positive" if ne > no and "negative" if ne < no. Light polarised at some angle to the axes will experience a different phase velocity for different polarization components, and cannot be described by a single index of refraction. This is often depicted as an index ellipsoid. Other effects Certain nonlinear optical phenomena such as the electro-optic effect cause a variation of a medium's permittivity tensor when an external electric field is applied, proportional (to lowest order) to the strength of the field. This causes a rotation of the principal axes of the medium and alters the behaviour of light travelling through it; the effect can be used to produce light modulators. In response to a magnetic field, some materials can have a dielectric tensor that is complex-Hermitian; this is called a gyro-magnetic or magneto-optic effect. In this case, the principal axes are complex-valued vectors, corresponding to elliptically polarized light, and time-reversal symmetry can be broken. This can be used to design optical isolators, for example. A dielectric tensor that is not Hermitian gives rise to complex eigenvalues, which corresponds to a material with gain or absorption at a particular frequency. See also Birefringence Index ellipsoid Optical rotation Prism References External links A virtual polarization microscope Condensed matter physics Crystallography Nonlinear optics
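For the uniaxial case described above, the dependence of the extraordinary wave's index on propagation angle follows from the index ellipsoid; a standard form of the relation is 1/n(θ)² = cos²θ/no² + sin²θ/ne², where θ is the angle between the propagation direction and the optic axis. A small sketch follows; the calcite-like values no ≈ 1.658 and ne ≈ 1.486 are used purely for illustration:

```python
import numpy as np

def extraordinary_index(theta, n_o, n_e):
    """Phase index of the extraordinary wave at angle theta (radians)
    from the optic axis, from the index-ellipsoid relation:
        1 / n(theta)^2 = cos^2(theta)/n_o^2 + sin^2(theta)/n_e^2
    """
    inv_n2 = np.cos(theta) ** 2 / n_o**2 + np.sin(theta) ** 2 / n_e**2
    return 1 / np.sqrt(inv_n2)

n_o, n_e = 1.658, 1.486  # roughly calcite, a negative uniaxial crystal (n_e < n_o)
for deg in (0, 45, 90):
    print(deg, round(extraordinary_index(np.radians(deg), n_o, n_e), 4))
# Along the optic axis (0 degrees) the wave sees n_o; perpendicular to it, n_e.
```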
Crystal optics
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,126
[ "Phases of matter", "Materials science", "Crystallography", "Condensed matter physics", "Matter" ]
153,788
https://en.wikipedia.org/wiki/K%C5%91nig%27s%20lemma
Kőnig's lemma or Kőnig's infinity lemma is a theorem in graph theory due to the Hungarian mathematician Dénes Kőnig who published it in 1927. It gives a sufficient condition for an infinite graph to have an infinitely long path. The computability aspects of this theorem have been thoroughly investigated by researchers in mathematical logic, especially in computability theory. This theorem also has important roles in constructive mathematics and proof theory. Statement of the lemma Let G be a connected, locally finite, infinite graph. This means that every two vertices can be connected by a finite path, each vertex is adjacent to only finitely many other vertices, and the graph has infinitely many vertices. Then G contains a ray: a simple path (a path with no repeated vertices) that starts at one vertex and continues from it through infinitely many vertices. A useful special case of the lemma is that every infinite tree contains either a vertex of infinite degree or an infinite simple path. If it is locally finite, it meets the conditions of the lemma and has a ray, and if it is not locally finite then it has an infinite-degree vertex. Construction The construction of a ray, in a graph that meets the conditions of the lemma, can be performed step by step, maintaining at each step a finite path that can be extended to reach infinitely many vertices (not necessarily all along the same path as each other). To begin this process, start with any single vertex v. This vertex can be thought of as a path of length zero, consisting of one vertex and no edges. By the assumptions of the lemma, each of the infinitely many vertices of G can be reached by a simple path that starts from v. Next, as long as the current path ends at some vertex u, consider the infinitely many vertices that can be reached by simple paths that extend the current path, and for each of these vertices construct a simple path to it that extends the current path. There are infinitely many of these extended paths, each of which connects from u to one of its neighbors, but u has only finitely many neighbors. Therefore, it follows by a form of the pigeonhole principle that at least one of these neighbors is used as the next step on infinitely many of these extended paths. Let w be such a neighbor, and extend the current path by one edge, the edge from u to w. This extension preserves the property that infinitely many vertices can be reached by simple paths that extend the current path. Repeating this process for extending the path produces an infinite sequence of finite simple paths, each extending the previous path in the sequence by one more edge. The union of all of these paths is the ray whose existence was promised by the lemma. Computability aspects The computability aspects of Kőnig's lemma have been thoroughly investigated. For this purpose it is convenient to state Kőnig's lemma in the form that any infinite finitely branching subtree of ω^{<ω} has an infinite path. Here ω denotes the set of natural numbers (thought of as an ordinal number) and ω^{<ω} the tree whose nodes are all finite sequences of natural numbers, where the parent of a node is obtained by removing the last element from a sequence. Each finite sequence can be identified with a partial function from ω to itself, and each infinite path can be identified with a total function. This allows for an analysis using the techniques of computability theory.
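The step-by-step construction above can be phrased as a short program. The Python sketch below assumes two things that the lemma's proof provides but that are not computable in general: a `children` function enumerating each vertex's finitely many successors in a tree, and an oracle `is_infinite_below` deciding whether the subtree above a node is infinite (this is exactly the non-constructive ingredient discussed later in the article). Both names are hypothetical:

```python
from itertools import islice

def ray(root, children, is_infinite_below):
    """Lazily generate an infinite path through a finitely branching tree.

    children(v): the finitely many successors of vertex v.
    is_infinite_below(v): oracle answering whether the subtree above v
        is infinite. By the pigeonhole argument of Kőnig's lemma, if it
        holds at v then it must hold at one of v's finitely many children.
    """
    v = root
    while True:
        yield v
        v = next(c for c in children(v) if is_infinite_below(c))

# Toy example: the full binary tree of 0/1 strings is infinite below every node.
children = lambda s: [s + "0", s + "1"]
always = lambda s: True
print(list(islice(ray("", children, always), 5)))  # ['', '0', '00', '000', '0000']
```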
A subtree of ω^{<ω} in which each sequence has only finitely many immediate extensions (that is, the tree has finite degree when viewed as a graph) is called finitely branching. Not every infinite subtree of ω^{<ω} has an infinite path, but Kőnig's lemma shows that any finitely branching infinite subtree must have such a path. For any subtree T of ω^{<ω} the notation Ext(T) denotes the set of nodes of T through which there is an infinite path. Even when T is computable the set Ext(T) may not be computable. Whenever a subtree T of ω^{<ω} has an infinite path, the path is computable from Ext(T), step by step, greedily choosing a successor in Ext(T) at each step. The restriction to Ext(T) ensures that this greedy process cannot get stuck. There exist non-finitely branching computable subtrees of ω^{<ω} that have no arithmetical path, and indeed no hyperarithmetical path. However, every computable subtree of ω^{<ω} with a path must have a path computable from Kleene's O, the canonical Π¹₁-complete set. This is because the set Ext(T) is always Σ¹₁ (for the meaning of this notation, see analytical hierarchy) when T is computable. A finer analysis has been conducted for computably bounded trees. A subtree of ω^{<ω} is called computably bounded or recursively bounded if there is a computable function f from ω to ω such that for every sequence in the tree and every natural number n, the nth element of the sequence is at most f(n). Thus f gives a bound for how "wide" the tree is. The following basis theorems apply to infinite, computably bounded, computable subtrees of ω^{<ω}. Any such tree has a path computable from 0′, the canonical Turing complete set that can decide the halting problem. Any such tree has a path that is low. This is known as the low basis theorem. Any such tree has a path that is hyperimmune free. This means that any function computable from the path is dominated by a computable function. For any noncomputable subset X of ω the tree has a path that does not compute X. A weak form of Kőnig's lemma which states that every infinite binary tree has an infinite branch is used to define the subsystem WKL0 of second-order arithmetic. This subsystem has an important role in reverse mathematics. Here a binary tree is one in which every term of every sequence in the tree is 0 or 1, which is to say the tree is computably bounded via the constant function 2. The full form of Kőnig's lemma is not provable in WKL0, but is equivalent to the stronger subsystem ACA0. Relationship to constructive mathematics and compactness The proof given above is not generally considered to be constructive, because at each step it uses a proof by contradiction to establish that there exists an adjacent vertex from which infinitely many other vertices can be reached, and because of the reliance on a weak form of the axiom of choice. Facts about the computational aspects of the lemma suggest that no proof can be given that would be considered constructive by the main schools of constructive mathematics. The fan theorem of Brouwer is, from a classical point of view, the contrapositive of a form of Kőnig's lemma. A subset S of {0, …, k}^{<ω} is called a bar if any function from ω to the set {0, …, k} has some initial segment in S. A bar is detachable if every sequence is either in the bar or not in the bar (this assumption is required because the theorem is ordinarily considered in situations where the law of the excluded middle is not assumed). A bar is uniform if there is some number N so that any function from ω to {0, …, k} has an initial segment in the bar of length no more than N. Brouwer's fan theorem says that any detachable bar is uniform.
This can be proven in a classical setting by considering the bar as an open covering of the compact topological space {0, …, k}^ω. Each sequence in the bar represents a basic open set of this space, and these basic open sets cover the space by assumption. By compactness, this cover has a finite subcover. The N of the fan theorem can be taken to be the length of the longest sequence whose basic open set is in the finite subcover. This topological proof can be used in classical mathematics to show that the following form of Kőnig's lemma holds: for any natural number k, any infinite subtree of the tree {0, …, k}^{<ω} has an infinite path. Relationship with the axiom of choice Kőnig's lemma may be considered to be a choice principle; the first proof above illustrates the relationship between the lemma and the axiom of dependent choice. At each step of the induction, a vertex with a particular property must be selected. Although it is proved that at least one appropriate vertex exists, if there is more than one suitable vertex there may be no canonical choice. In fact, the full strength of the axiom of dependent choice is not needed; as described below, the axiom of countable choice suffices. If the graph is countable, the vertices are well-ordered and one can canonically choose the smallest suitable vertex. In this case, Kőnig's lemma is provable in second-order arithmetic with arithmetical comprehension, and, a fortiori, in ZF set theory (without choice). Kőnig's lemma is essentially the restriction of the axiom of dependent choice to entire relations R such that for each x there are only finitely many z such that xRz. Although the axiom of choice is, in general, stronger than the principle of dependent choice, this restriction of dependent choice is equivalent to a restriction of the axiom of choice. In particular, when the branching at each node is done on a finite subset of an arbitrary set not assumed to be countable, the form of Kőnig's lemma that says "Every infinite finitely branching tree has an infinite path" is equivalent to the principle that every countable set of finite sets has a choice function, that is to say, the axiom of countable choice for finite sets. This form of the axiom of choice (and hence of Kőnig's lemma) is not provable in ZF set theory. Generalization In the category of sets, the inverse limit of any inverse system of non-empty finite sets is non-empty. This may be seen as a generalization of Kőnig's lemma and can be proved with Tychonoff's theorem, viewing the finite sets as compact discrete spaces, and then using the finite intersection property characterization of compactness. See also Aronszajn tree, for the possible existence of counterexamples when generalizing the lemma to higher cardinalities. PA degree Notes References Further reading External links Stanford Encyclopedia of Philosophy: Constructive Mathematics The Mizar project has completely formalized and automatically checked the proof of a version of Kőnig's lemma in the file TREES_2. Lemmas in graph theory Articles containing proofs Computability theory Wellfoundedness Axiom of choice Infinite graphs Constructivism (mathematics)
Kőnig's lemma
[ "Mathematics" ]
2,149
[ "Lemmas", "Mathematical logic", "Wellfoundedness", "Mathematical objects", "Lemmas in graph theory", "Mathematical axioms", "Infinity", "Infinite graphs", "Axiom of choice", "Axioms of set theory", "Constructivism (mathematics)", "Computability theory", "Articles containing proofs", "Order...
153,797
https://en.wikipedia.org/wiki/Orion%20%28constellation%29
Orion is a prominent set of stars visible during winter in the northern celestial hemisphere. It is one of the 88 modern constellations; it was among the 48 constellations listed by the 2nd-century astronomer Ptolemy. It is named after a hunter in Greek mythology. Orion is most prominent during winter evenings in the Northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Orion's two brightest stars, Rigel (β) and Betelgeuse (α), are both among the brightest stars in the night sky; both are supergiants and slightly variable. There are a further six stars brighter than magnitude 3.0, including three making the short straight line of the Orion's Belt asterism. Orion also hosts the radiant of the annual Orionids, the strongest meteor shower associated with Halley's Comet, and the Orion Nebula, one of the brightest nebulae in the sky. Characteristics Orion is bordered by Taurus to the northwest, Eridanus to the southwest, Lepus to the south, Monoceros to the east, and Gemini to the northeast. Covering 594 square degrees, Orion ranks twenty-sixth of the 88 constellations in size. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 26 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between and . The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "Ori". Orion is most visible in the evening sky from January to April, winter in the Northern Hemisphere, and summer in the Southern Hemisphere. In the tropics (less than about 8° from the equator), the constellation transits at the zenith. In the period May–July (summer in the Northern Hemisphere, winter in the Southern Hemisphere), Orion is in the daytime sky and thus invisible at most latitudes. However, for much of Antarctica in the Southern Hemisphere's winter months, the Sun is below the horizon even at midday. Stars (and thus Orion, but only the brightest stars) are then visible at twilight for a few hours around local noon, just in the brightest section of the sky low in the North where the Sun is just below the horizon. At the same time of day at the South Pole itself (Amundsen–Scott South Pole Station), Rigel is only 8° above the horizon, and the Belt sweeps just along it. In the Southern Hemisphere's summer months, when Orion is normally visible in the night sky, the constellation is actually not visible in Antarctica because the sun does not set at that time of year south of the Antarctic Circle. In countries close to the equator (e.g., Kenya, Indonesia, Colombia, Ecuador), Orion appears overhead in December around midnight and in the February evening sky. Navigational aid Orion is very useful as an aid to locating other stars. By extending the line of the Belt southeastward, Sirius (α CMa) can be found; northwestward, Aldebaran (α Tau). A line eastward across the two shoulders indicates the direction of Procyon (α CMi). A line from Rigel through Betelgeuse points to Castor and Pollux (α Gem and β Gem). Additionally, Rigel is part of the Winter Circle asterism. Sirius and Procyon, which may be located from Orion by following imaginary lines (see map), also are points in both the Winter Triangle and the Circle. Features Orion's seven brightest stars form a distinctive hourglass-shaped asterism, or pattern, in the night sky. 
Four stars—Rigel, Betelgeuse, Bellatrix, and Saiph—form a large roughly rectangular shape, at the center of which lie the three stars of Orion's Belt—Alnitak, Alnilam, and Mintaka. His head is marked by an additional 8th star called Meissa, which appears fairly bright to the observer. Descending from the "belt" is a smaller line of three stars, Orion's Sword (the middle of which is in fact not a star but the Orion Nebula), also known as the hunter's sword. Many of the stars are luminous hot blue supergiants, with the stars of the belt and sword forming the Orion OB1 association. Standing out by its red hue, Betelgeuse may nevertheless be a runaway member of the same group. Bright stars Betelgeuse, also designated Alpha Orionis, is a massive M-type red supergiant star nearing the end of its life. It is the second brightest star in Orion, and is a semiregular variable star. It serves as the "right shoulder" of the hunter (assuming that he is facing the observer). It is generally the eleventh brightest star in the night sky, but its brightness has varied between the tenth brightest and, by the end of 2019, the 23rd brightest. The end of its life is expected to result in a supernova explosion that will be highly visible from Earth, possibly outshining the Earth's moon and being visible during the day. This is most likely to occur within the next 100,000 years. Rigel, also known as Beta Orionis, is a B-type blue supergiant that is the seventh brightest star in the night sky. Similar to Betelgeuse, Rigel is fusing heavy elements in its core and will pass its supergiant stage soon (on an astronomical timescale), either collapsing in the case of a supernova or shedding its outer layers and turning into a white dwarf. It serves as the left foot of the hunter. Bellatrix is designated Gamma Orionis by Johann Bayer. It is the twenty-seventh brightest star in the night sky. Bellatrix is considered a B-type blue giant, though it is too small to explode in a supernova. Bellatrix's luminosity is derived from its high temperature rather than a large radius. Bellatrix marks Orion's left shoulder; its name means the "female warrior", and it is sometimes known colloquially as the "Amazon Star". It is the closest major star in Orion at only 244.6 light years from our solar system. Mintaka is designated Delta Orionis, despite being the faintest of the three stars in Orion's Belt. Its name means "the belt". It is a multiple star system, composed of a large B-type blue giant and a more massive O-type main-sequence star. The Mintaka system constitutes an eclipsing binary variable star, where the eclipse of one star over the other creates a dip in brightness. Mintaka is the westernmost of the three stars of Orion's Belt, as well as the northernmost. Alnilam is designated Epsilon Orionis and is named for the Arabic phrase meaning "string of pearls". It is the middle and brightest of the three stars of Orion's Belt. Alnilam is a B-type blue supergiant; despite being nearly twice as far from the Sun as the other two belt stars, its luminosity makes it nearly equal in magnitude. Alnilam is losing mass quickly, a consequence of its size. It is the farthest major star in Orion at 1,344 light years. Alnitak, meaning "the girdle", is designated Zeta Orionis, and is the easternmost star in Orion's Belt. It is a triple star system, with the primary star being a hot blue supergiant and the brightest class O star in the night sky. Saiph is designated Kappa Orionis by Bayer, and serves as Orion's right foot.
It is of a similar distance and size to Rigel, but appears much fainter. Its name means the "sword of the giant". Meissa is designated Lambda Orionis, forms Orion's head, and is a multiple star with a combined apparent magnitude of 3.33. Its name means the "shining one". Belt Orion's Belt or The Belt of Orion is an asterism within the constellation. It consists of the three bright stars Zeta (Alnitak), Epsilon (Alnilam), and Delta (Mintaka). Alnitak is around 800 light years away from Earth and is 100,000 times more luminous than the Sun and shines with magnitude 1.8; much of its radiation is in the ultraviolet range, which the human eye cannot see. Alnilam is approximately 2,000 light years away from Earth, shines with magnitude 1.70, and with ultraviolet light is 375,000 times more luminous than the Sun. Mintaka is 915 light years away and shines with magnitude 2.21. It is 90,000 times more luminous than the Sun and is a double star: the two orbit each other every 5.73 days. In the Northern Hemisphere, Orion's Belt is best visible in the night sky during the month of January around 9:00 pm, when it is approximately on the local meridian. Just southwest of Alnitak lies Sigma Orionis, a multiple star system composed of five stars with a combined apparent magnitude of 3.7, lying 1150 light years distant. Southwest of Mintaka lies the quadruple star Eta Orionis. Sword Orion's Sword contains the Orion Nebula, the Messier 43 nebula, the Running Man Nebula, and the stars Theta Orionis, Iota Orionis, and 42 Orionis. Head Three stars comprise a small triangle that marks the head. The apex is marked by Meissa (Lambda Orionis), a hot blue giant of spectral type O8 III and apparent magnitude 3.54, which lies some 1100 light years distant. Phi-1 and Phi-2 Orionis make up the base. Also nearby is the very young star FU Orionis. Club Stretching north from Betelgeuse are the stars that make up Orion's club. Mu Orionis marks the elbow, Nu and Xi mark the handle of the club, and Chi1 and Chi2 mark the end of the club. Just east of Chi1 is the Mira-type variable red giant U Orionis. Shield West from Bellatrix lie six stars all designated Pi Orionis (π1 Ori, π2 Ori, π3 Ori, π4 Ori, π5 Ori and π6 Ori) which make up Orion's shield. Meteor showers Around 20 October each year the Orionid meteor shower (Orionids) reaches its peak. Coming from the border with the constellation Gemini as many as 20 meteors per hour can be seen. The shower's parent body is Halley's Comet. Deep-sky objects Hanging from Orion's belt is his sword, consisting of the multiple stars θ1 and θ2 Orionis, called the Trapezium and the Orion Nebula (M42). This is a spectacular object that can be clearly identified with the naked eye as something other than a star. Using binoculars, its clouds of nascent stars, luminous gas, and dust can be observed. The Trapezium cluster has many newborn stars, including several brown dwarfs, all of which are at an approximate distance of 1,500 light-years. Named for the four bright stars that form a trapezoid, it is largely illuminated by the brightest stars, which are only a few hundred thousand years old. Observations by the Chandra X-ray Observatory show both the extreme temperatures of the main stars—up to 60,000 kelvins—and the star forming regions still extant in the surrounding nebula. M78 (NGC 2068) is a nebula in Orion.
With an overall magnitude of 8.0, it is significantly dimmer than the Great Orion Nebula that lies to its south; however, it is at approximately the same distance, at 1600 light-years from Earth. It can easily be mistaken for a comet in the eyepiece of a telescope. M78 is associated with the variable star V351 Orionis, whose magnitude changes are visible in very short periods of time. Another fairly bright nebula in Orion is NGC 1999, also close to the Great Orion Nebula. It has an integrated magnitude of 10.5 and is 1500 light-years from Earth. The variable star V380 Orionis is embedded in NGC 1999. Another famous nebula is IC 434, the Horsehead Nebula, near ζ Orionis. It contains a dark dust cloud whose shape gives the nebula its name. NGC 2174 is an emission nebula located 6400 light-years from Earth. Besides these nebulae, surveying Orion with a small telescope will reveal a wealth of interesting deep-sky objects, including M43, M78, as well as multiple stars including Iota Orionis and Sigma Orionis. A larger telescope may reveal objects such as the Flame Nebula (NGC 2024), as well as fainter and tighter multiple stars and nebulae. Barnard's Loop can be seen on very dark nights or using long-exposure photography. All of these nebulae are part of the larger Orion molecular cloud complex, which is located approximately 1,500 light-years away and is hundreds of light-years across. It is one of the most intense regions of stellar formation visible within our galaxy. History and mythology The distinctive pattern of Orion is recognized in numerous cultures around the world, and many myths are associated with it. Orion is used as a symbol in the modern world. Ancient Near East The Babylonian star catalogues of the Late Bronze Age name Orion , "The Heavenly Shepherd" or "True Shepherd of Anu" – Anu being the chief god of the heavenly realms. The Babylonian constellation is sacred to Papshukal and Ninshubur, both minor gods fulfilling the role of 'messenger to the gods'. Papshukal is closely associated with the figure of a walking bird on Babylonian boundary stones, and on the star map the figure of the Rooster is located below and behind the figure of the True Shepherd—both constellations represent the herald of the gods, in his bird and human forms respectively. In ancient Egypt, the stars of Orion were regarded as a god, called Sah. Because Orion rises before Sirius, the star whose heliacal rising was the basis for the Solar Egyptian calendar, Sah was closely linked with Sopdet, the goddess who personified Sirius. The god Sopdu is said to be the son of Sah and Sopdet. Sah is syncretized with Osiris, while Sopdet is syncretized with Osiris' mythological wife, Isis. In the Pyramid Texts, from the 24th and 23rd centuries BC, Sah is one of many gods whose form the dead pharaoh is said to take in the afterlife. The Armenians identified their legendary patriarch and founder Hayk with Orion. Hayk is also the name of the Orion constellation in the Armenian translation of the Bible. The Bible mentions Orion three times, naming it "Kesil" (כסיל, literally – fool). Though, this name perhaps is etymologically connected with "Kislev", the name for the ninth month of the Hebrew calendar (i.e. November–December), which, in turn, may derive from the Hebrew root K-S-L as in the words "kesel, kisla" (כֵּסֶל, כִּסְלָה, hope, positiveness), i.e. 
hope for winter rains. The three mentions are Job 9:9 ("He is the maker of the Bear and Orion"), Job 38:31 ("Can you loosen Orion's belt?"), and Amos 5:8 ("He who made the Pleiades and Orion"). In ancient Aram, the constellation was known as Nephîlā′; the Nephilim are said to be Orion's descendants. Greco-Roman antiquity In Greek mythology, Orion was a gigantic, supernaturally strong hunter, born to Euryale, a Gorgon, and Poseidon (Neptune), god of the sea. One myth recounts Gaia's rage at Orion, who dared to say that he would kill every animal on Earth. The angry goddess tried to dispatch Orion with a scorpion. This is given as the reason that the constellations of Scorpius and Orion are never in the sky at the same time. However, Ophiuchus, the Serpent Bearer, revived Orion with an antidote. This is said to be the reason that the constellation of Ophiuchus stands midway between the Scorpion and the Hunter in the sky. The constellation is mentioned in Horace's Odes (Ode 3.27.18), Homer's Odyssey (Book 5, line 283) and Iliad, and Virgil's Aeneid (Book 1, line 535). Middle East In medieval Muslim astronomy, Orion was known as al-jabbar, "the giant". Orion's sixth brightest star, Saiph, is named from the Arabic, saif al-jabbar, meaning "sword of the giant". China In China, Orion was one of the 28 lunar mansions Sieu (Xiù) (宿). It is known as Shen (參), literally meaning "three", for the stars of Orion's Belt. (See Chinese constellations) The Chinese character 參 (pinyin shēn) originally meant the constellation Orion; its Shang dynasty version, over three millennia old, contains at the top a representation of the three stars of Orion's belt atop a man's head (the bottom portion representing the sound of the word was added later). India The Rigveda refers to the Orion Constellation as Mriga (The Deer). Nataraja, 'the cosmic dancer', is often interpreted as the representation of Orion. Rudra, the Rigvedic form of Shiva, is the presiding deity of Ardra nakshatra (Betelgeuse) of Hindu astrology. The Jain Symbol carved in Udayagiri and Khandagiri Caves, India in the 1st century BCE has a striking resemblance to Orion. Bugis sailors identified the three stars in Orion's Belt as tanra tellué, meaning "sign of three". European folklore In old Hungarian tradition, Orion is known as "Archer" (Íjász), or "Reaper" (Kaszás). In recently rediscovered myths, he is called Nimrod (Hungarian: Nimród), the greatest hunter, father of the twins Hunor and Magor. The π and o stars (on upper right) form together the reflex bow or the lifted scythe. In other Hungarian traditions, Orion's belt is known as "Judge's stick" (Bírópálca). In Scandinavian tradition, Orion's belt was known as "Frigg's Distaff" (friggerock) or "Freyja's distaff". The Finns call Orion's belt and the stars below it "Väinämöinen's scythe" (Väinämöisen viikate). Another name for the asterism of Alnilam, Alnitak and Mintaka is "Väinämöinen's Belt" (Väinämöisen vyö) and the stars "hanging" from the belt as "Kaleva's sword" (Kalevanmiekka). In Siberia, the Chukchi people see Orion as a hunter; an arrow he has shot is represented by Aldebaran (Alpha Tauri), with the same figure as other Western depictions. There are claims in popular media that the Adorant from the Geißenklösterle cave, an ivory carving estimated to be 35,000 to 40,000 years old, is the first known depiction of the constellation. Scholars dismiss such interpretations, saying that perceived details such as a belt and sword derive from preexisting features in the grain structure of the ivory.
Americas The Seri people of northwestern Mexico call the three stars in the belt of Orion Hapj (a name denoting a hunter) which consists of three stars: Hap (mule deer), Haamoja (pronghorn), and Mojet (bighorn sheep). Hap is in the middle and has been shot by the hunter; its blood has dripped onto Tiburón Island. The same three stars are known in Spain and most of Latin America as "Las tres Marías" (Spanish for "The Three Marys"). In Puerto Rico, the three stars are known as "Los Tres Reyes Magos" (Spanish for The Three Wise Men). The Ojibwa (Chippewa) Native Americans call this constellation Kabibona'kan, the Winter Maker, as its presence in the night sky heralds winter. To the Lakota Native Americans, Tayamnicankhu (Orion's Belt) is the spine of a bison. The great rectangle of Orion is the bison's ribs; the Pleiades star cluster in nearby Taurus is the bison's head; and Sirius in Canis Major, known as Tayamnisinte, is its tail. Another Lakota myth mentions that the bottom half of Orion, the Constellation of the Hand, represented the arm of a chief that was ripped off by the Thunder People as a punishment from the gods for his selfishness. His daughter offered to marry the person who could retrieve his arm from the sky, so the young warrior Fallen Star (whose father was a star and whose mother was human) returned his arm and married his daughter, symbolizing harmony between the gods and humanity with the help of the younger generation. The index finger is represented by Rigel; the Orion Nebula is the thumb; the Belt of Orion is the wrist; and the star Beta Eridani is the pinky finger. Austronesian The seven primary stars of Orion make up the Polynesian constellation Heiheionakeiki which represents a child's string figure similar to a cat's cradle. Several precolonial Filipino groups referred to the belt region in particular as "balatik" (ballista) as it resembles a trap of the same name which fires arrows by itself and is usually used for catching pigs from the bush. Spanish colonization later led to some ethnic groups referring to Orion's belt as "Tres Marias" or "Tatlong Maria." In Māori tradition, the star Rigel (known as Puanga or Puaka) is closely connected with the celebration of Matariki. The rising of Matariki (the Pleiades) and Rigel before sunrise in midwinter marks the start of the Māori year. In Javanese culture, the constellation is often called Lintang Waluku or Bintang Bajak, referring to the shape of a paddy field plow. Contemporary symbolism The imagery of the belt and sword has found its way into popular western culture, for example in the form of the shoulder insignia of the 27th Infantry Division of the United States Army during both World Wars, probably owing to a pun on the name of the division's first commander, Major General John F. O'Ryan. The film distribution company Orion Pictures used the constellation as its logo. Depictions In artistic renderings, the surrounding constellations are sometimes related to Orion: he is depicted standing next to the river Eridanus with his two hunting dogs Canis Major and Canis Minor, fighting Taurus. He is sometimes depicted hunting Lepus the hare. He is also sometimes depicted holding a lion's hide in his hand. There are alternative ways to visualise Orion. From the Southern Hemisphere, Orion is oriented south-upward, and the belt and sword are sometimes called the saucepan or pot in Australia and New Zealand.
Orion's Belt is called Drie Konings (Three Kings) or the Drie Susters (Three Sisters) by Afrikaans speakers in South Africa and is referred to as les Trois Rois (the Three Kings) in Daudet's Lettres de Mon Moulin (1866). The appellation Driekoningen (the Three Kings) is also often found in 17th- and 18th-century Dutch star charts and seaman's guides. The same three stars are known in Spain, Latin America, and the Philippines as "Las Tres Marías" (The Three Marys), and as "Los Tres Reyes Magos" (The Three Wise Men) in Puerto Rico. Even traditional depictions of Orion have varied greatly. Cicero drew Orion in a similar fashion to the modern depiction. The Hunter held an unidentified animal skin aloft in his right hand; his hand was represented by Omicron2 Orionis and the skin was represented by the 5 stars designated Pi Orionis. Kappa and Beta Orionis represented his left and right knees, while Eta and Lambda Leporis were his left and right feet, respectively. As in the modern depiction, Delta, Epsilon, and Zeta represented his belt. His left shoulder was represented by Alpha Orionis, and Mu Orionis made up his left arm. Lambda Orionis was his head and Gamma, his right shoulder. The depiction of Hyginus was similar to that of Cicero, though the two differed in a few important areas. Cicero's animal skin became Hyginus's shield (Omicron and Pi Orionis), and instead of an arm marked out by Mu Orionis, he holds a club (Chi Orionis). His right leg is represented by Theta Orionis and his left leg is represented by Lambda, Mu, and Epsilon Leporis. Further Western European and Arabic depictions have followed these two models. Future Orion is located on the celestial equator, but it will not always be so located due to the effects of precession of the Earth's axis. Orion lies well south of the ecliptic, and it only happens to lie on the celestial equator because the point on the ecliptic that corresponds to the June solstice is close to the border of Gemini and Taurus, to the north of Orion. Precession will eventually carry Orion further south, and by AD 14000, Orion will be far enough south that it will no longer be visible from the latitude of Great Britain. Further in the future, Orion's stars will gradually move away from the constellation due to proper motion. However, Orion's brightest stars all lie at a large distance from the Earth on an astronomical scale—much farther away than Sirius, for example. Orion will still be recognizable long after most of the other constellations—composed of relatively nearby stars—have distorted into new configurations, with the exception of a few of its stars eventually exploding as supernovae, for example Betelgeuse, which is predicted to explode sometime in the next million years. See also EURion constellation Hubble 3D (2010), IMAX film with an elaborate CGI "fly-through" of the Orion Nebula Orion correlation theory Orion (mythology) Orion (Chinese astronomy) Aurvandill Glooscap Heiheionakeiki Julpan Nataraja Osiris Papsukkal Urania Winter Hexagon References Explanatory notes Citations Bibliography Ian Ridpath and Wil Tirion (2007). Stars and Planets Guide, Collins, London, England; Princeton University Press, Princeton, New Jersey.
External links The Deep Photographic Guide to the Constellations: Orion Melbourne Planetarium: Orion Sky Tour Views of Orion from other places in our Galaxy The clickable Orion Ian Ridpath's Star Tales – Orion Deep Widefield image of Orion Beautiful Astrophoto: Zoom Into Orion Warburg Institute Iconographic Database (medieval and early modern images of Orion) Constellations Constellations listed by Ptolemy Equatorial constellations
Orion (constellation)
[ "Astronomy" ]
5,701
[ "Constellations listed by Ptolemy", "Orion (constellation)", "Constellations", "Sky regions", "Equatorial constellations" ]
153,803
https://en.wikipedia.org/wiki/Murray%20Bookchin
Murray Bookchin (January 14, 1921 – July 30, 2006) was an American social theorist, author, orator, historian, and political philosopher. Influenced by G. W. F. Hegel, Karl Marx, and Peter Kropotkin, he was a pioneer in the environmental movement. Bookchin formulated and developed the theory of social ecology and urban planning within anarchist, libertarian socialist, and ecological thought. He was the author of two dozen books covering topics in politics, philosophy, history, urban affairs, and social ecology. Among the most important were Our Synthetic Environment (1962), Post-Scarcity Anarchism (1971), The Ecology of Freedom (1982), and Urbanization Without Cities (1987). In the late 1990s, he became disenchanted with what he saw as an increasingly apolitical "lifestylism" of the contemporary anarchist movement, stopped referring to himself as an anarchist, and founded his own libertarian socialist ideology called "communalism", which seeks to reconcile and expand Marxist, syndicalist, and anarchist thought. Bookchin was a prominent anti-capitalist, anti-fascist and advocate of social decentralization along ecological and democratic lines. His ideas have influenced social movements since the 1960s, including the New Left, the anti-nuclear movement, the anti-globalization movement, Occupy Wall Street, and more recently, the democratic confederalism of the Autonomous Administration of North and East Syria. He was a central figure in the American green movement. An autodidact who never attended college, he is considered one of the last public intellectuals and most important left theorists of the twentieth century. Biography Bookchin was born in New York City to Nathan Bookchin (born Nacham Wisotsky) and his first wife, Rose (Kalusky) Bookchin, Jewish immigrants from the Russian Empire. His father was from Mazyr (now Belarus) and his mother from Vilnius (Lithuania). He was embarrassed by his given name Mortimore and went by his childhood nickname, Murray. His father adopted the name of a relative, Bukczin, and anglicized it to Bookchin. His parents divorced in 1934. He grew up in the Bronx with his mother, uncle Daniel, and maternal grandmother, Zeitel, a Socialist Revolutionary who imbued him with Russian populist ideas. After his grandmother's death in 1930, he joined the Young Pioneers of America, the Communist youth organization (for children 9 to 14) and the Young Communist League (for youths) in 1935. He attended the Workers School near Union Square, where he studied Marxism. In the late 1930s he broke with Stalinism and gravitated toward Trotskyism, joining the Socialist Workers Party (SWP). In the early 1940s, he worked in a foundry in Bayonne, New Jersey, where he was a trade union organizer and shop steward for the United Electrical Workers as well as a recruiter for the SWP. Within the SWP, he adhered to the Goldman-Morrow faction, which broke away after the war ended. He was an auto worker and UAW member at the time of the great General Motors strike of 1945–46. In 1949, while speaking to a Zionist youth organization at City College, Bookchin met a mathematics student, Beatrice Appelstein, whom he married in 1951. They were married for 12 years and lived together for 35, remaining close friends and political allies for the rest of his life. They had two children, Debbie and Joseph. On religious views, Bookchin was an atheist, but was considered to be tolerant of religious views. 
From 1947, Bookchin collaborated with a fellow lapsed Trotskyist, the German expatriate Josef Weber, in New York in the Movement for a Democracy of Content, a group of 20 or so post-Trotskyists who collectively edited the periodical Contemporary Issues – A Magazine for a Democracy of Content. Contemporary Issues embraced utopianism. The periodical provided a forum for the belief that previous attempts to create utopia had foundered on the necessity of toil and drudgery; but now modern technology had obviated the need for human toil, a liberatory development. To achieve this "post-scarcity" society, Bookchin developed a theory of ecological decentralism. The magazine published Bookchin's first articles, including the pathbreaking "The Problem of Chemicals in Food" (1952). In 1958, Bookchin defined himself as an anarchist, seeing parallels between anarchism and environmentalism. His first book, Our Synthetic Environment, was published under the pseudonym Lewis Herber, in 1962, a few months before Rachel Carson's famous Silent Spring. In 1964, Bookchin joined the Congress of Racial Equality (CORE), and protested racism at the 1964 World's Fair. During 1964–1967, while living on Manhattan's Lower East Side, he cofounded and was the principal figure in the New York Federation of Anarchists. His groundbreaking essay "Ecology and Revolutionary Thought" introduced environmentalism and, more specifically, ecology as a concept in radical politics. In 1968, he founded another group that published the influential Anarchos magazine, which published that and other innovative essays on post-scarcity and on sustainable technologies such as solar and wind energy, and on decentralization and miniaturization. Lecturing throughout the United States, he helped popularize the concept of ecology to the counterculture. His widely republished 1969 essay "Listen, Marxist!" warned Students for a Democratic Society (in vain) against an impending takeover by a Marxist group. "Once again the dead are walking in our midst," he wrote, "ironically, draped in the name of Marx, the man who tried to bury the dead of the nineteenth century. So the revolution of our own day can do nothing better than parody, in turn, the October Revolution of 1917 and the civil war of 1918–1920, with its 'class line,' its Bolshevik Party, its 'proletarian dictatorship,' its puritanical morality, and even its slogan, 'Soviet power'". In 1969–1970, he taught at the Alternate U, a counter-cultural radical school based on 14th Street in Manhattan. In 1971, he moved to Burlington, Vermont, with a group of friends, to put into practice his ideas of decentralization. In the fall of 1973, he was hired by Goddard College to lecture on technology; his lectures led to a teaching position and to the creation of the Social Ecology Studies program in 1974 and the Institute for Social Ecology (ISE) soon thereafter, of which he became the director. In 1974, he was hired by Ramapo College in Mahwah, New Jersey, where he quickly became a full professor. The ISE was a hub for experimentation and study of appropriate technology in the 1970s. In 1977–78 he was a member of the Spruce Mountain Affinity Group of the Clamshell Alliance. Also in 1977, he published The Spanish Anarchists, a history of the Spanish anarchist movement up to the revolution of 1936. During this period, Bookchin briefly forged some ties with the nascent libertarian movement, speaking at a Libertarian Party convention and contributing to a newsletter edited by Karl Hess. 
Nevertheless, Bookchin rejected the types of libertarianism that advocated unconstrained individualism. In 1980, Bookchin co-established the New England Anarchist Conference (NEAC) to organize the anarchist movement in the United States. At its first meeting in October 1980, 175 anarchists from the northeastern US and Quebec attended. By the second conference in January 1981 in Somerville, Massachusetts, the NEAC devolved into sectarianism, which moved Bookchin to lose faith in a socialist revolution happening in the US. During the 1980s, Bookchin engaged in occasional critiques of Bernie Sanders' mayorship in Burlington. Bookchin criticized Sanders' politics, claiming he lacked a drive to establish direct democracy, followed a Marxian deprioritization of ecology, and was a “'centralist' who narrowly focused on economic growth." Bookchin and his social ecologist colleagues in the Burlington Greens, which he co-founded with his former wife Bea Bookchin, criticized the Sanders administration for pushing for a luxury condo waterfront redevelopment, which was eventually rejected by Burlington voters. They advocated for a moratorium on growth, a moral economy, and social justice rooted in grassroots democracy. In 1988, Bookchin and Howie Hawkins founded the Left Green Network "as a radical alternative to U.S. Green liberals", based around the principles of social ecology and libertarian municipalism. In 1995, Bookchin lamented the decline of American anarchism into primitivism, anti-technologism, neo-Situationism, individual self-expression, and "ad hoc adventurism," at the expense of forming a social movement. He formally broke with anarchism in 1999, describing himself in 2002 as a "Communalist" in a major essay elaborating his late-life views, called "The Communalist Project". He continued to teach at the ISE until 2004. Bookchin died of congestive heart failure on July 30, 2006, at his home in Burlington, at the age of 85. Thought In addition to his political writings, Bookchin wrote extensively on philosophy, calling his ideas dialectical naturalism. The dialectical writings of Georg Wilhelm Friedrich Hegel, which articulate a developmental philosophy of change and growth, seemed to him to lend themselves to an organic, environmentalist approach. Although Hegel "exercised a considerable influence" on Bookchin, he was not, in any sense, a Hegelian. His philosophical writings emphasize humanism, rationality, and the ideals of the Enlightenment. General sociological and psychological views Bookchin was critical of class-centered analysis of Marxism and simplistic anti-state forms of libertarianism and liberalism and wished to present what he saw as a more complex view of societies. In The Ecology of Freedom: The Emergence and Dissolution of Hierarchy, he says that: My use of the word hierarchy in the subtitle of this work is meant to be provocative. There is a strong theoretical need to contrast hierarchy with the more widespread use of the words class and State; careless use of these terms can produce a dangerous simplification of social reality. To use the words hierarchy, class, and State interchangeably, as many social theorists do, is insidious and obscurantist. This practice, in the name of a "classless" or "libertarian" society, could easily conceal the existence of hierarchical relationships and a hierarchical sensibility, both of which—even in the absence of economic exploitation or political coercion—would serve to perpetuate unfreedom. 
Bookchin also points to an accumulation of hierarchical systems throughout history, persisting into contemporary societies, that tends to shape the collective and individual human psyche: The objective history of the social structure becomes internalized as a subjective history of the psychic structure. Heinous as my view may be to modern Freudians, it is not the discipline of work but the discipline of rule that demands the repression of internal nature. This repression then extends outward to external nature as a mere object of rule and later of exploitation. This mentality permeates our individual psyches in a cumulative form up to the present day—not merely as capitalism but as the vast history of hierarchical society from its inception. Humanity's environmental predicament Bookchin's book about humanity's collision course with the natural world, Our Synthetic Environment, was published six months before Rachel Carson's Silent Spring. Bookchin rejected Barry Commoner's belief that the environmental crisis could be traced to technological choices, Paul Ehrlich's view that it could be traced to overpopulation, and the even more pessimistic view that traces the crisis to human nature. Rather, Bookchin felt that our environmental predicament is the result of the cancerous logic of capitalism, a system aimed at maximizing profit instead of enriching human lives: "By the very logic of its grow-or-die imperative, capitalism may well be producing ecological crises that gravely imperil the integrity of life on this planet." The solution to this crisis, he said, is not a return to hunter-gatherer societies, which Bookchin characterized as xenophobic and warlike. Bookchin likewise opposed "a politics of mere protest, lacking programmatic content, a proposed alternative, and a movement to give people direction and continuity." He claims we need "a constant awareness that a given society's irrationality is deep-seated, that its serious pathologies are not isolated problems that can be cured piecemeal but must be solved by sweeping changes in the often hidden sources of crisis and suffering—that awareness alone is what can hold a movement together, give it continuity, preserve its message and organization beyond a given generation, and expand its ability to deal with new issues and developments." The answer, then, lies in Communalism, a system encompassing a directly democratic political organization anchored in loosely confederated popular assemblies, decentralization of power, the absence of domination of any kind, and the replacement of capitalism with human-centered forms of production. Social ecology Social ecology is a philosophical theory associated with Bookchin, concerned with the relationship between ecological and social issues. It is not a movement but a theory primarily associated with his thought and elaborated over his body of work. He presents a utopian philosophy of human evolution that combines the nature of biology and society into a third "thinking nature" beyond biochemistry and physiology, which he argues is a more complete, conscious, ethical, and rational nature. Humanity, by this line of thought, is the latest development in the long history of organic development on Earth. Bookchin's social ecology proposes ethical principles for replacing a society's propensity for hierarchy and domination with one for democracy and freedom.
It emerged in the mid-1960s, amid the rise of both the global environmental movement and the American civil rights movement, and took on a much more visible role with the movement against nuclear power by the late 1970s. It presents ecological problems as arising mainly from social problems, in particular from different forms of hierarchy and domination, beginning with gerontocracy and patriarchy and extending through various forms of oppression including gender, race, and class status. It seeks to resolve them through the model of a non-hierarchical ecological society based on self-determination at the local level, which opposes the current capitalist system of production and consumption. It aims to set up a moral, decentralized, united society, guided by reason. While Bookchin distanced himself from anarchism later in his life, the philosophical theory of social ecology is often considered to be a form of eco-anarchism. Bookchin wrote about the effects of urbanization on human life in the early 1960s during his participation in the civil rights and related social movements. He then began to pursue the connection between ecological and social issues, culminating with his best-known book, The Ecology of Freedom, which he had developed over a decade. His argument, that human domination and destruction of nature follows from social domination between humans, was a breakthrough position in the growing field of ecology. He writes that life develops from self-organization and evolutionary cooperation (symbiosis). Bookchin wrote of preliterate societies organized around mutual need but ultimately overrun by institutions of hierarchy and domination, such as city-states and capitalist economies, which he attributes uniquely to societies of humans and not communities of animals. He proposes confederation between communities of humans run through democracy rather than through administrative logistics. Bookchin's work, beginning with anarchist writings on the subject in the 1960s, continuously evolved. Towards the end of the 1990s, he increasingly integrated the principle of communalism, with aspirations more inclined towards institutionalized municipal democracy, which distanced him from certain evolutions of anarchism. Bookchin's work draws inspiration from, and expands upon, anarchism (mainly Kropotkin), syndicalism, and Marxism (including the writings of Marx and Engels). Social ecology rejects both the pitfalls of a neo-Malthusian ecology, which erases social relationships by replacing them with "natural forces", and those of a technocratic ecology, which considers that environmental progress must rely on technological breakthroughs and that the state will play an integral role in this technological development. According to Bookchin, these two currents depoliticize ecology and mythologize the past and the future. In May 2016, the first "International Social Ecology Meetings" were organized in Lyon, France, bringing together a hundred radical environmentalists, degrowth advocates, and libertarians, most of whom came from France, Belgium, Spain and Switzerland, but also from the United States, Guatemala and Canada. At the center of the debates were libertarian municipalism as an alternative to the nation state and the need to rethink activism.
Kurdish movement Bookchin's reflections on social ecology and libertarian municipalism also inspired Abdullah Öcalan, the historical leader of the Kurdish movement, to create the concept of democratic confederalism, which aims to bring together the peoples of the Middle East in a confederation of democratic, multicultural and ecological communes. Adopted by the Kurdistan Workers' Party (PKK) since 2005, Öcalan's project represents a major ideological shift away from its previous goal of establishing a Marxist–Leninist state. In addition to the PKK, Öcalan's internationalist project was also well received by its Syrian counterpart, the Democratic Union Party (PYD), which would become the first organization in the world to found a society based on the principles of democratic confederalism. On January 6, 2014, the cantons of Rojava, in Syrian Kurdistan, federated into autonomous municipalities, adopting a social contract which established a decentralized non-hierarchical society, based on principles of direct democracy, feminism, ecology, cultural pluralism, participatory politics and economic cooperativism. Municipalism and communalism Bookchin's vision of an ecological society is based on highly participatory, grassroots politics, in which municipal communities democratically plan and manage their affairs through popular assembly, a program he called communalism. This democratic deliberation purposefully promotes autonomy and self-reliance, as opposed to centralized state politics. While this program retains elements of anarchism, it emphasizes a higher degree of organization (community planning, voting, and institutions) than anarchism in general. In Bookchin's Communalism, these autonomous municipal communities connect with each other via confederations. Starting in the 1970s, Bookchin argued that the arena for libertarian social change should be the municipal level. In 1980, Bookchin used the term "libertarian municipalism" to describe a libertarian socialist system in which institutions of directly democratic assemblies would oppose and replace the state with a confederation of free municipalities. In The Next Revolution, Bookchin stresses the link that libertarian municipalism has with his earlier philosophy of social ecology, and proposes that these institutional forms must take place within differently scaled local areas. In a 2001 interview he summarized his views this way: "Libertarian municipalism intends to create a situation in which the two powers, the municipal confederations and the nation state, cannot coexist." Municipalization as a foundation for an ecological society Bookchin posits that neither privatization nor nationalization can effectively pave the way toward an ecological society. He asserts that both models are deeply embedded in structures of domination, failing to address the root causes of environmental crises. In contrast, Bookchin advocates for municipalization as a core principle in his libertarian municipalist framework. Critique of privatization and nationalization Bookchin critiques private property as a central driver of both social and ecological harm, associating it with exploitation, domination, and the prioritization of profit over community and environmental well-being. According to Bookchin, systems based on private ownership promote competition and individualism, which he argues are incompatible with the cooperation and solidarity needed to build a fair and sustainable society.
Nationalization, often positioned as a remedy to capitalism's excesses, is also seen by Bookchin as inadequate. He contends that nationalization typically shifts control from private companies to centralized bureaucratic entities, merely replacing one form of dominance with another. In this state-centered model, the apparatus of the state, rather than the market, assumes authority over economic activities. This can lead to what Bookchin describes as a "privatized economy in a collectivized form," where workers remain detached from their labor and ecological exploitation persists. Legacy and influence Though Bookchin, by his own admission, failed to win over a substantial body of supporters during his lifetime, his ideas have nonetheless influenced movements and thinkers across the globe. Among these are the Kurdish People's Protection Units (YPG) and the closely aligned Kurdistan Workers' Party (PKK) in Turkey, which have fought the Turkish state since the 1980s to try to secure greater political and cultural rights for the country's Kurds. The PKK is designated as a terrorist organization by the Turkish and United States governments, while the YPG has been considered an ally of the US against ISIS. Though founded on a rigid Marxist–Leninist ideology, the PKK has seen a shift in its thought and aims since the capture and imprisonment of its leader, Abdullah Öcalan, in 1999. Öcalan began reading a variety of post-Marxist political theory while in prison, and found particular interest in Bookchin's works. Öcalan attempted in early 2004 to arrange a meeting with Bookchin through his lawyers, describing himself as Bookchin's "student" eager to adapt his thought to Middle Eastern society. Bookchin was too ill to accept the request. In May 2004, Bookchin conveyed this message: "My hope is that the Kurdish people will one day be able to establish a free, rational society that will allow their brilliance once again to flourish. They are fortunate indeed to have a leader of Mr. Öcalan's talents to guide them". When Bookchin died in 2006, the PKK hailed the American thinker as "one of the greatest social scientists of the 20th century", and vowed to put his theory into practice. "Democratic confederalism", the variation on Communalism developed by Öcalan in his writings and adopted by the PKK, does not outwardly seek Kurdish rights within the context of the formation of an independent state separate from Turkey. Rather, it promulgates the formation of assemblies and organizations, beginning at the grassroots level, to enact its ideals in a non-state framework. The PKK claims that this project is not envisioned as being only for Kurds, but rather for all peoples of the region, regardless of their ethnic, national, or religious background. It also places a particular emphasis on securing and promoting women's rights. The PKK has had some success in implementing its programme, through organizations such as the Democratic Society Congress (DTK), which coordinates political and social activities within Turkey, and the Koma Civakên Kurdistan (KCK), which does so across all countries where Kurds live. Selected works Post-Scarcity Anarchism (1971) The Spanish Anarchists: The Heroic Years (1977) The Ecology of Freedom: The Emergence and Dissolution of Hierarchy (1982) See also Eco-socialism History of the Green Party of the United States Outline of libertarianism Insurrectionary communes in France in 1870–1871 References Bibliography Further reading Tarinski, Yavor, ed.
Enlightenment and ecology: The legacy of Murray Bookchin in the 21st Century. Montréal: Black Rose Books, 2021. Price, Andy, Recovering Bookchin: Social Ecology and the Crises of Our Time, New Compass, 2012. Biehl, Janet, The Murray Bookchin Reader, Cassell, 1997. Biehl, Janet, "Mumford Gutkind Bookchin: The Emergence of Eco-Decentralism", New Compass, 2011. Marshall, P. (1992), "Murray Bookchin and the Ecology of Freedom", pp. 602–622 in Demanding the Impossible, Fontana Press. Selva Varengo, La rivoluzione ecologica. Il pensiero libertario di Murray Bookchin, Milano: Zero in condotta, 2007. E. Castano, Ecologia e potere. Un saggio su Murray Bookchin, Mimesis, Milano, 2011. Damian F. White, Bookchin – A Critical Appraisal, Pluto Press (UK/Europe) and University of Michigan Press. Neither Washington Nor Stowe: Common Sense For The Working Vermonter, by David Van Deusen, Sean West, and the Green Mountain Anarchist Collective (NEFAC Vermont), Catamount Tavern Press, 2004. This libertarian socialist manifesto took many of Bookchin's ideas and articulated them as they would manifest in a revolutionary Vermont. External links Murray Bookchin entry at the Anarchy Archives Murray Bookchin Papers at the Tamiment Library and Robert F. Wagner Archives at New York University TRISE Online Conference 2021: "100 years Murray Bookchin" Institute for Social Ecology (official site) 1921 births 2006 deaths 20th-century American historians 20th-century American male writers 20th-century American philosophers 21st-century American historians 21st-century American male writers American anarchists American anti-capitalists American anti-fascists American anti-war activists American atheists American environmentalists American feminists American libertarians American male non-fiction writers American non-fiction environmental writers American people of Belarusian-Jewish descent American people of Lithuanian-Jewish descent American political philosophers American syndicalists Anarchist theorists American anarchist writers Anarcho-communists Anti-consumerists Degrowth advocates Ecofeminists Environmental philosophers Environmental writers Green anarchists Historians from New York City Historians of anarchism Jewish American atheists Jewish American historians Jewish American non-fiction writers Jewish anarchists Jewish anti-fascists Judaism and environmentalism Jewish feminists Jewish philosophers Jewish socialists Left-libertarians Libertarian socialists New Left New York (state) socialists Vermont socialists Writers from Burlington, Vermont Writers from New York City
Murray Bookchin
[ "Environmental_science" ]
5,369
[]
153,831
https://en.wikipedia.org/wiki/Bacteriostatic%20agent
A bacteriostatic agent or bacteriostat, abbreviated Bstatic, is a biological or chemical agent that stops bacteria from reproducing, while not necessarily killing them. Depending on their application, they can be classified as bacteriostatic antibiotics, disinfectants, antiseptics, or preservatives. When bacteriostatic antimicrobials are used, the duration of therapy must be sufficient to allow host defense mechanisms to eradicate the bacteria. Upon removal of the bacteriostat, the bacteria usually start to grow rapidly. This is in contrast to bactericides, which kill bacteria. Bacteriostats are often used in plastics to prevent growth of bacteria on surfaces. Bacteriostats commonly used in laboratory work include sodium azide (which is acutely toxic) and thiomersal. Bacteriostatic antibiotics Bacteriostatic antibiotics limit the growth of bacteria by interfering with bacterial protein production, DNA replication, or other aspects of bacterial cellular metabolism. They must work together with the immune system to remove the microorganisms from the body. However, there is not always a precise distinction between them and bactericidal antibiotics; high concentrations of some bacteriostatic agents are also bactericidal, whereas low concentrations of some bactericidal agents are bacteriostatic. This group includes, among others, the tetracyclines, macrolides, and sulfonamides. See also List of antibiotics Oligodynamic effect References Antibiotics
Bacteriostatic agent
[ "Biology" ]
302
[ "Antibiotics", "Biocides", "Biotechnology products" ]
153,852
https://en.wikipedia.org/wiki/Chief%20information%20officer
Chief information officer (CIO), chief digital information officer (CDIO) or information technology (IT) director is a job title commonly given to the most senior executive in an enterprise who works with information technology and computer systems in order to support enterprise goals. Normally, the CIO reports directly to the chief executive officer, but may also report to the chief operating officer or chief financial officer. In military organizations, the CIO reports to the commanding officer. The role of chief information officer was first defined in 1981 by William R. Synnott, former senior vice president of the Bank of Boston, and William H. Gruber, a former professor at the Massachusetts Institute of Technology Sloan School of Management. A CIO will sometimes serve as a member of the board of directors. The need for CIOs CIOs and CDIOs play an important role in businesses that use technology and data because they provide a critical interface between business needs, user needs, and the information and communication technology (ICT) used in the work. In recent years it has become increasingly understood that knowledge limited to just business or just IT is not sufficient for success in this role. Instead, CIOs need both kinds of knowledge to manage IT resources and to manage and plan "ICT, including policy and practice development, planning, budgeting, resourcing and training." Also, CIOs are playing an increasingly important role in helping to control costs and increase profits via the use of ICT, and to limit potential organizational damage by setting up appropriate IT controls and planning for IT recovery from possible disasters. These objectives also demand a combination of personal skills. Computer Weekly magazine highlights that "53% of IT leaders report a shortage of [IT managers] with a high level of personal skills, such as communication and leadership" in the workplace. Because information technologies and digital tools evolve so quickly, organizations are sometimes challenged to find staff with the necessary combination of skills in the marketplace, and may look to train existing staff to mitigate skill shortages. CIOs are needed to bridge the gap between IT and non-IT professional roles to support effective working relationships. Roles and responsibilities The chief information officer of an organization is responsible for several business functions. First and most importantly, the CIO must fulfill the role of a business leader. The CIO makes executive decisions regarding matters such as the purchase of IT equipment from suppliers or the creation of new IT systems. Also as a business leader, the CIO is responsible for leading and directing the workforce of their specific organization. A CIO is typically "required to have strong organizational skills." This is particularly relevant for the chief information officer of an organization, who must balance roles and responsibilities in order to gain a competitive advantage whilst keeping the best interests of the organization's employees in mind. CIOs also have the responsibility of recruiting, so it is important that they work proactively to source and nurture the best employees possible. CIOs are required to map out both the ICT strategy and the ICT policy of an organization. The ICT strategy covers future-proofing, procurement, and the external and internal standards laid out by an organization. Similarly, the CIO must develop the ICT policy, which details how ICT is utilized and applied.
Both are needed for the protection of the organization in the short and long term and for the process of strategizing for the future. Paul Burfitt, former CIO of AstraZeneca, also outlines the role of the CIO in IT governance, which he refers to as the "clarifying [of] accountability and the role of committees". In recent years, CIOs have become more closely involved in customer-facing products. With the rising awareness in organizations that their customers expect digital services as part of their relationship with an organization, CIOs have been tasked with more product-oriented responsibilities. Risks involved The CIO faces a rather high risk of error and failure as a result of the challenging nature of the role, along with its large number of responsibilities – such as the provision of finance, the recruitment of professionals, the establishment of data protection, and the development of policy and strategy. The CIO of the U.S. company Target was forced to resign in 2014 after the theft of 40 million credit card details and 70 million customer details by hackers. CIOs that are knowledgeable about their industry are able to adapt and thereby reduce their chances of error. With the introduction of legislation such as the General Data Protection Regulation (GDPR), CIOs have become increasingly focused on how their role is regulated and on how failures can lead to financial and reputational damage to a business. However, regulations such as GDPR have also been advantageous to CIOs, enabling them to have the budget and authority in the organization to make significant changes to the way information is managed. Sabah Khan-Carter of Rupert Murdoch's News Corp described GDPR as "a really big opportunity for most organizations". Educational background and technology skills Many candidates have a Master of Business Administration degree or a Master of Science in Management degree. More recently, CIOs' leadership capabilities, business acumen, and strategic perspectives have taken precedence over technical skills. It is now quite common for CIOs to be appointed from the business side of the organization, especially if they have project management skills. Despite the strategic nature of the role, a 2017 survey of 890 CIOs across 23 countries, conducted by Logicalis, found that 62% of CIOs spend 60% or more of their time on day-to-day IT activities. In 2012, Gartner Executive Programs conducted a global CIO survey and received responses from 2,053 CIOs from 41 countries and 36 industries. Gartner reported that survey results indicated that the top ten technology priorities for CIOs for 2013 were analytics and business intelligence, mobile technologies, cloud computing, collaboration technologies, legacy modernization, IT management, customer relationship management, virtualization, security, and enterprise resource planning. CIO magazine's "State of the CIO 2008" survey asked 558 IT leaders whom they report to, and the results were: CEO (41%), CFO (23%), COO (16%), corporate CIO (7%) and other (13%). Typically, the CIO is involved with driving the analysis and re-engineering of existing business processes, identifying and developing the capability to use new tools, reshaping the enterprise's physical infrastructure and network access, and identifying and exploiting the enterprise's knowledge resources. Many CIOs head the enterprise's efforts to integrate the Internet into both its long-term strategy and its immediate business plans.
CIOs are often tasked with either driving or heading up crucial IT projects that are essential to the strategic and operational objectives of an organization. A good example of this would be the implementation of an enterprise resource planning (ERP) system, which typically has wide-ranging implications for most organizations. Another way that the CIO role has changed is an increasing focus on service management. As SaaS, IaaS, BPO, and other flexible delivery techniques are brought into organizations, the CIO usually manages these third-party services. In essence, a CIO in the modern organization needs business skills and the ability to relate to the organization as a whole, as opposed to being a technological expert with limited functional business expertise. The CIO position is as much about anticipating technology and usage trends in the marketplace as it is about ensuring that the business navigates these trends with expert guidance and strategic planning aligned to the corporate strategy. Distinction between CIO, CDO, and CTO The roles of chief information officer, chief digital officer and chief technology officer are often confused. It has been stated that CTOs are concerned with technology itself, often customer-facing, whereas CIOs are much more concerned with its applications within the business and how they can be managed. More specifically, CIOs oversee a business's IT systems and functions, create and deliver strategies and policies, and focus on internal customers. In contrast to this, CTOs focus on the customers external to the organization and on how technology can make the company more profitable. The traditional definition of the CTO, focused on using technology as an external competitive advantage, now overlaps with that of the CDO, who uses the power of modern technologies, online design and big data to digitize a business. CIO Councils CIO Councils bring together a number of CIOs from different organizations who aim to work together, for example across healthcare or across government. Examples include the UK public sector's CIO Council, the London CIO Council for the healthcare sector, and the Chief Information Officers Council in the USA. Awards and recognition It is not uncommon for CIOs to be recognized and awarded annually, particularly in the technology space. These awards are commonly based on the significance of the recipient's contribution to the industry and generally occur in local markets only. Awards are generally judged by industry peers or senior qualified executives such as the chief executive officer, chief operating officer or chief financial officer. Generally, awards recognize substantial impact on the local technology market. In Australia, the top 50 CIOs are recognized annually under the CIO50 banner. In the United States of America, the United Kingdom and New Zealand, CIOs are recognized under the CIO100 banner. See also Chief information security officer Chief technology officer Chief AI officer Chief digital officer Chief executive officer Chief financial officer Chief operating officer Chief investment officer Chief knowledge officer Chief accessibility officer Public information officer References Information systems Management occupations Business occupations
Chief information officer
[ "Technology" ]
1,913
[ "Information systems", "Information technology" ]
153,859
https://en.wikipedia.org/wiki/Palisade
A palisade, sometimes called a stakewall or a paling, is typically a row of closely placed, high vertical standing tree trunks or wooden or iron stakes used as a fence for enclosure or as a defensive wall. Palisades can form a stockade. Etymology Palisade derives from pale, from the Latin word palus, meaning stake, specifically when used side by side to create a wooden defensive wall (see 'pale', English: Etymology 2 on Wiktionary). Typical construction Typical construction consisted of small or mid-sized tree trunks aligned vertically, with as little free space in between as possible. The trunks were sharpened or pointed at the top, and were driven into the ground and sometimes reinforced with additional construction. The height of a palisade ranged from around one metre to as high as three to four metres. As a defensive structure, palisades were often used in conjunction with earthworks. Palisades were an excellent option for small forts or other hastily constructed fortifications. Since they were made of wood, they could often be quickly and easily built from readily available materials. They proved to be effective protection for short-term conflicts and were an effective deterrent against small forces. However, because they were wooden constructions they were also vulnerable to fire and siege weapons. Often, a palisade would be constructed around a castle as a temporary wall until a permanent stone wall could be erected. Ancient Greece and Rome Both the Greeks and Romans created palisades to protect their military camps. The Roman historian Livy describes the Greek method as being inferior to that of the Romans during the Second Macedonian War. The Greek stakes were too large to be easily carried and were spaced too far apart. This made it easy for enemies to uproot them and create a large enough gap in which to enter. In contrast, the Romans used smaller and easier-to-carry stakes which were placed closer together, making them more difficult to uproot. Precolumbian North America The Iroquoian peoples, who coalesced as tribes around the Great Lakes, often defended their settlements with palisades. Within the palisades the peoples lived in communal groups in numerous longhouses, sometimes in communities as large as 2,000 people. Archeological evidence of such palisades has been found at numerous 15th- and 16th-century sites in both Ontario, Canada, and New York, United States. Many settlements of the native Mississippian culture of the Midwestern United States used palisades. A prominent example is the Cahokia Mounds site in Collinsville, Illinois. A wooden stockade with a series of watchtowers or bastions at regular intervals formed an enclosure around Monk's Mound and the Grand Plaza. Archaeologists found evidence of the stockade during excavation of the area and indications that it was rebuilt several times, in slightly different locations. The stockade seems to have separated Cahokia's main ceremonial precinct from other parts of the city, as well as being a defensive structure. Other examples include the Angel Mounds site in southern Indiana, Aztalan State Park in Wisconsin, the Kincaid site in Illinois, the Parkin site and the Nodena sites in northeastern Arkansas, and the Etowah site in Georgia. Colonial America Palisaded settlements were common in Colonial North America, for protection against indigenous peoples and wild animals.
The English settlements in Jamestown, Virginia (1607), Cupids, Newfoundland (1610) and Plymouth, Massachusetts (1620) were all originally fortifications surrounded by palisades. Such defensive palisades were also frequently used in New France. In addition, colonial architecture used vertical palings as the walls of houses, in what was called poteaux en terre construction. Some 18th-century houses in this style survive in Ste. Genevieve, Missouri, initially settled by French colonists from the Illinois Country to the east of the Mississippi River. Ottoman Empire A "palanka" was a type of wooden fort constructed of palisades, built by the Ottoman Empire in the Balkans during the 16th and 17th centuries. They could be erected for a variety of reasons, such as protecting a strategically valuable area or a town. Some palankas evolved into larger settlements. Half-timber palisade construction In the late nineteenth century, when milled lumber was not available or practical, many Adirondack buildings were built using palisade architecture. The walls were made of vertical half timbers; the outside, rounded half, with its bark still on, faced the Adirondack weather, while the inside half was sanded and varnished for a finished wood look. Typically, the cracks between the vertical logs were filled with moss and sometimes covered with small sticks. Inside, the cracks were covered with narrow wooden battens. This palisade style was much more efficient to build than the traditional horizontal log cabin, since two half logs provided more surface area than one whole log and the vertical alignment meant a stronger structure for supporting loads like upper stories and roofs. It also presented a more finished look inside. Examples of this architectural style can still be found in the Adirondacks, such as around Big Moose Lake. Modern uses In areas with extremely high rates of violent crime and property theft, a common means to prevent crime is for residential houses to be protected by perimeter defenses such as ornamental iron bars, brick walls, steel palisade fences, wooden palisade fences and electrified palisade fences (railings). The City of Johannesburg promotes the use of palisade fencing over opaque, usually brick, walls, as criminals cannot hide as easily behind the fence. Its manual on safety includes guidance such as avoiding vegetation alongside the fence, as this allows criminals to make an unseen breach. See also Palisado crown References Bibliography External links Ancient Roman architectural elements Archaeology of the United States Engineering barrages Fences History of Indigenous peoples of North America Medieval defences Pre-Columbian architecture Timber framing Traditional Native American dwellings
Palisade
[ "Technology", "Engineering" ]
1,202
[ "Structural system", "Military engineering", "Engineering barrages", "Timber framing" ]
153,861
https://en.wikipedia.org/wiki/Moat
A moat is a deep, broad ditch dug around a castle, fortification, building, or town, historically to provide it with a preliminary line of defence. Moats can be dry or filled with water. In some places, moats evolved into more extensive water defences, including natural or artificial lakes, dams and sluices. In older fortifications, such as hillforts, they are usually referred to simply as ditches, although the function is similar. In later periods, moats or water defences may be largely ornamental. They could also act as a sewer. Historical use Ancient Some of the earliest evidence of moats has been uncovered around ancient Egyptian fortresses. One example is at Buhen, a settlement excavated in Nubia. Other evidence of ancient moats is found in the ruins of Babylon, and in reliefs from ancient Egypt, Assyria, and other cultures in the region. Evidence of early moats around settlements has been discovered in many archaeological sites throughout Southeast Asia, including Noen U-Loke, Ban Non Khrua Chut, Ban Makham Thae and Ban Non Wat. The moats could have served either defensive or agricultural purposes. Medieval Moats were excavated around castles and other fortifications as part of the defensive system, as an obstacle immediately outside the walls. In suitable locations, they might be filled with water. A moat made access to the walls difficult for siege weapons such as siege towers and battering rams, which needed to be brought up against a wall to be effective. A water-filled moat made the practice of mining – digging tunnels under the castles in order to effect a collapse of the defences – very difficult as well. Segmented moats have one dry section and one section filled with water. Dry moats that cut across the narrow part of a spur or peninsula are called neck ditches. Moats separating different elements of a castle, such as the inner and outer wards, are cross ditches. The word was adapted in Middle English from the Old French motte ("mound") and was first applied to the central mound on which a castle was erected (see Motte and bailey) and then came to be applied to the excavated ring, a 'dry moat'. The shared derivation implies that the two features were closely related and possibly constructed at the same time. The term moat is also applied to natural formations reminiscent of the artificial structure and to similar modern architectural features. Later western fortification With the introduction of siege artillery, a new style of fortification emerged in the 16th century using low walls and projecting strong points called bastions, which was known as the trace italienne. The walls were further protected from infantry attack by wet or dry moats, sometimes in elaborate systems. When this style of fortification was superseded by lines of polygonal forts in the mid-19th century, moats continued to be used for close protection. Africa The Walls of Benin were a combination of ramparts and moats, called Iya, used as a defence of the capital Benin City in present-day Edo State of Nigeria. It was considered the largest man-made structure lengthwise, second only to the Great Wall of China, and the largest earthwork in the world. Recent work by Patrick Darling has established it as the largest man-made structure in the world, larger than Sungbo's Eredo, also in Nigeria. It enclosed some 6,500 square kilometres of community lands. Its length was over 16,000 kilometres of earth boundaries. It was estimated that the earliest construction began in 800 and continued into the mid-15th century.
The walls are built of a ditch and dike structure; the ditch was dug to form an inner moat, with the excavated earth used to form the exterior rampart. The Benin Walls were ravaged by the British in 1897. Scattered pieces of the walls remain in Edo, with material being used by the locals for building purposes. The walls continue to be torn down for real-estate developments. The Walls of Benin City were the world's largest man-made structure. Fred Pearce wrote in New Scientist: They extend for some 16,000 kilometres in all, in a mosaic of more than 500 interconnected settlement boundaries. They cover 6,500 square kilometres and were all dug by the Edo people. In all, they are four times longer than the Great Wall of China, and consumed a hundred times more material than the Great Pyramid of Cheops. They took an estimated 150 million hours of digging to construct, and are perhaps the largest single archaeological phenomenon on the planet. Asia Japanese castles often have very elaborate moats, with up to three moats laid out in concentric circles around the castle and a host of different patterns engineered around the landscape. The outer moat of a Japanese castle typically protects other support buildings in addition to the castle. As many Japanese castles have historically been a very central part of their cities, the moats have provided a vital waterway to the city. Even in modern times the moat system of the Tokyo Imperial Palace consists of a very active body of water, hosting everything from rental boats and fishing ponds to restaurants. Most modern Japanese castles have moats filled with water, but castles in the feudal period more commonly had 'dry moats' (karabori), i.e. trenches. A tatebori is a dry moat dug into a slope. A unejo tatebori is a series of parallel trenches running up the sides of the excavated mountain, and the earthen wall, also called doi, was an outer wall made of earth dug out from a moat. Even today it is common for mountain Japanese castles to have dry moats. A mizubori is a moat filled with water. Moats were also used in the Forbidden City and Xi'an in China; in Vellore Fort in India; Hsinchu in Taiwan; and in Southeast Asia, such as at Angkor Wat in Cambodia; Mandalay in Myanmar; Chiang Mai in Thailand and Huế in Vietnam. Australia The only moated fort ever built in Australia was Fort Lytton in Brisbane. As Brisbane was much more vulnerable to attack than either Sydney or Melbourne, a series of coastal defences was built throughout Moreton Bay, Fort Lytton being the largest. Built between 1880 and 1881 in response to fear of a Russian invasion, it is a pentagonal fortress concealed behind grassy embankments and surrounded by a water-filled moat. North America Moats were developed independently by North American indigenous people of the Mississippian culture as the outer defence of some fortified villages. The remains of a 16th-century moat are still visible at the Parkin Archeological State Park in eastern Arkansas. The Maya people also used moats, for example in the city of Becan. European colonists in the Americas often built dry ditches surrounding forts built to protect important landmarks, harbours or cities (e.g. Fort Jay on Governors Island in New York Harbor). Modern usage Architectural usage Dry moats were a key element used in French Classicism and Beaux-Arts architecture dwellings, both as decorative designs and to provide discreet access for service.
Excellent examples of these can be found in Newport, Rhode Island, at Miramar and The Elms, as well as at Carolands, outside of San Francisco, California, and at Union Station in Toronto, Ontario, Canada. Additionally, a dry moat can allow light and fresh air to reach basement workspaces, as for example at the James Farley Post Office in New York City. Anti-terrorist moats Whilst moats are no longer a significant tool of warfare, modern architectural building design continues to use them as a defence against certain modern threats, such as terrorist attacks from car bombs and improvised fighting vehicles. For example, the new location of the Embassy of the United States in London, opened in 2018, includes a moat among its security features, the first moat built in England for more than a century. Modern moats may also be used for aesthetic or ergonomic purposes. The Catawba Nuclear Station has a concrete moat around the sides of the plant not bordering a lake. The moat is part of the precautions added to such sites after the September 11, 2001 attacks. Safety moats Moats, rather than fences, separate animals from spectators in many modern zoo installations. Moats were first used in this way by Carl Hagenbeck at his Tierpark in Hamburg, Germany. The structure, with a vertical outer retaining wall rising directly from the moat, is an extended usage of the ha-ha of English landscape gardening. Border defence moats In 2004, plans were suggested for a two-mile moat across the southern border of the Gaza Strip to prevent tunnelling from Egyptian territory to the border town of Rafah. In 2008, city officials in Yuma, Arizona, planned to dig out a two-mile stretch of a wetland known as Hunters Hole to control immigrants coming from Mexico. Pest control moats Researchers studying jumping spiders, which have excellent vision and adaptable tactics, built water-filled miniature moats too wide for the spiders to jump across. Some specimens were rewarded for jumping then swimming, and others for swimming only. Portia fimbriata from Queensland generally succeeded, whichever method they were rewarded for. When specimens from two different populations of Portia labiata were set the same task, members of one population determined which method earned them a reward, whilst members of the other continued to use whichever method they tried first and did not try to adapt. As a basic method of pest control in bonsai, a moat may be used to restrict access of crawling insects to the bonsai. See also Drawbridge Gracht Ha-ha wall Moated settlements Moot hill (sometimes written as Moat Hill) Neck ditch Bullengraben References External links Engineering barrages Castle architecture Masonry Water
Moat
[ "Engineering", "Environmental_science" ]
1,947
[ "Hydrology", "Engineering barrages", "Construction", "Military engineering", "Water", "Masonry" ]
153,863
https://en.wikipedia.org/wiki/JVC
JVC (short for Japan Victor Company) is a Japanese brand owned by JVCKenwood. Founded in 1927 as the Victor Talking Machine Company of Japan, and later known as the Victor Company of Japan, the company was best known for introducing Japan's first televisions and for developing the Video Home System (VHS) video recorder. From 1953 to 2008, the Matsushita Electric Industrial Co. was the majority stockholder in JVC. In 2008, JVC merged with Kenwood Corporation to create JVCKenwood. JVC sold its electronic products in its home market of Japan under the "Victor" name with the His Master's Voice logo. However, the company used the names JVC or Nivico in the past for export; this was due to the differing ownership of the His Master's Voice logo and of the "Victor" name, held by successors of the Victor Talking Machine Company. In 2011, the Victor brand for electronics in Japan was replaced by the global JVC brand. However, the previous "Victor" name and logo are retained by JVCKenwood Victor Entertainment, and are used as JVCKenwood's luxury HiFi marque. History 1927 creation to World War II JVC was founded in 1927 as the Victor Talking Machine Company of Japan, Limited, a subsidiary of the United States' leading phonograph and record company, the Victor Talking Machine Company of Camden, New Jersey. In 1929, the Radio Corporation of America purchased Victor and its foreign subsidiaries, including the Japan operations. In the late 1920s, JVC produced only phonographs and records; following the acquisition by RCA, JVC began producing radios and, in 1939, Japan's first locally made television. In 1943, amidst the hostilities between the United States and Japan during World War II, JVC seceded from RCA Victor, retaining the "Victor" and "His Master's Voice" trademarks for use in Japan only. After the war, JVC resumed distribution of RCA Victor recordings in Japan until RCA established its separate distribution in Japan during the late 1960s. Today, the record company in Japan is known as Victor Entertainment. Post-war In 1953, JVC became majority-owned by Matsushita Electric (later the Panasonic Corporation), which released its ownership in 2007. In the 1960s, JVC established the Nivico (Nippon Victor Corporation) brand for Delmonico's line of console televisions and stereos. In 1970, JVC marketed the Videosphere, a portable cathode-ray tube (CRT) television inside a space-helmet-shaped casing with an alarm clock at the base. It was a commercial success. In 1971, JVC introduced the first discrete system for four-channel quadraphonic sound on vinyl records: CD-4 (Compatible Discrete Four Channel), or Quadradisc, as it was called by the Radio Corporation of America (RCA) in the United States. In 1973, the JVC Cutting Center opened (in the USA) to provide mastering for CD-4 discs. The Mark II 1/2-speed system was used until mid-1975, when it was replaced with the Mark III 1/2-speed system. In 1978, Mobile Fidelity began using the JVC Cutting Center to 1/2-speed master stereo/mono discs. In 1975, JVC introduced the first combined portable battery-operated radio with an inbuilt TV, as the model 3050. The TV was a black-and-white CRT. One year later, JVC expanded the model to add a cassette recorder, as the 3060, creating the world's first boombox with radio, cassette and TV. The first VCR to use VHS was the Victor HR-3300, introduced by the president of JVC at the Okura Hotel in Tokyo on September 9, 1976. JVC started selling the HR-3300 in Akihabara, Tokyo, Japan on October 31, 1976.
Region-specific versions of the JVC HR-3300 were also distributed later on, such as the HR-3300U in the United States and the HR-3300EK in the United Kingdom. 1970s, 1980s and the VHS/Betamax format war In the mid-1970s, JVC developed the VHS format, introducing the first VHS recorders to the consumer market in 1976 for the equivalent of US$1,060. Sony, which had introduced the Betamax home videocassette tape a year earlier, became the main competitor for JVC's VHS format into the 1980s, creating the videotape format war. The Betamax cassette was smaller, with slightly superior picture quality to the VHS cassette, but this resulted in Betamax having less recording time. The two companies competed fiercely to encourage others to adopt their format, but by 1984 forty companies were using JVC's VHS format, while only twelve used Betamax. Sony began producing VHS recorders in 1988. However, Sony stopped making Betamax recorders for the US market in 1993, and it stopped production of the format completely in 2002. One reason for the market penetration of VHS in the UK was the sale of blank tapes by JVC UK Ltd to major Hollywood studios. This launched the nascent home video rental market, which was hardly touched by Sony at the time. The ability to take movies home helped the sale of VHS hardware immensely. Added to this, JVC stated, in a promotional tape presented by BBC TV legend Cliff Michelmore, that "You'll be able to buy the sort of films the BBC and ITV will never show you, for whatever reason". The adult movie industry adopted VHS as its common format, and with a certain level of software availability, hardware sales grew. Other notable achievements In 1979, JVC demonstrated a prototype of its video high density (VHD) disc system. This system was capacitance-based, like the capacitance electronic disc (CED), but the discs were grooveless, with the stylus being guided by servo signals in the disc surface. The VHD discs were initially handled by the operator and played on a machine that looked like an audio LP turntable, but JVC used caddy-housed discs when the system was marketed. Development suffered numerous delays, and the product was launched in 1983 in Japan, followed by the United Kingdom in 1984, to a limited industrial market. In 1981, JVC introduced a line of revolutionary direct-drive cassette decks, topped by the DD-9, that provided previously unattainable levels of speed stability. During the 1980s, JVC briefly marketed portable audio equipment similar to the Sony Walkman, which was on the market at the time. The JVC CQ-F2K, released in 1982, had a detachable radio mounted to the headphones for a compact, wire-free listening experience. JVC had difficulty making the products successful, and a few years later stopped making them. In Japan, JVC marketed the products under the name "Victor". In 1986, JVC released the HC-95, a personal computer with a 3.58 MHz Zilog Z80A processor and 64 KB of RAM, running MSX BASIC 2.0. It included two 3.5" floppy disk drives and conformed to the graphics specification of the MSX-2 standard. However, like the Pioneer PX-7, it also carried a sophisticated hardware interface that handled video superimposition and various interactive video processing features. The JVC HC-95 was first sold in Japan, and then Europe, but sales were disappointing. JVC video recorders were marketed by the Ferguson Radio Corporation in the UK, with just cosmetic changes.
However, Ferguson needed to find another supplier for its camcorders, as JVC produced only the VHS-C format rather than Video8. Ferguson was later acquired by Thomson SA, which ended the relationship. JVC later invented hard-drive camcorders. 21st century In October 2001, the National Academy of Television Arts and Sciences presented JVC with an Emmy Award for "outstanding achievement in technological advancement" for "Pioneering Development of Consumer Camcorders". Annual sponsorships of the world-renowned JVC Tokyo Video Festival and the JVC Jazz Festival have helped attract the attention of more customers. JVC has been a worldwide football (soccer) supporter since 1982, formerly sponsoring Arsenal's kit and continuing its role as an official partner of the 2002 FIFA World Cup Korea/Japan. JVC made headlines as the first-ever corporate partner of the Kennedy Space Center Visitor Complex. JVC has also forged corporate partnerships with ESPN Zone and Foxploration. In 2005, JVC joined HANA, the High-Definition Audio-Video Network Alliance, to help establish standards in consumer-electronics interoperability. In 2005, JVC announced its development of the first DVD-RW DL media (the dual-layer version of the rewritable DVD-RW format). In December 2006, Matsushita entered talks with Kenwood and Cerberus Capital Management to sell its stake in JVC. In 2007, Victor Company of Japan Ltd confirmed a strategic capital alliance with Kenwood and SPARKX Investment, resulting in Matsushita's holding being reduced to approximately 37%. In March 2008, Matsushita (Panasonic) agreed to spin off the company and merge it with Kenwood Electronics, creating JVCKenwood Holdings on October 1, 2008. In April 2008, JVC announced that it was closing its TV plants in East Kilbride (Scotland) and Japan. This left it with one plant in Thailand. It stated it would outsource European production to an OEM. JVC TVs for North America are now manufactured by AmTRAN Video Corporation, which also handles distribution, service, and warranty under license from JVCKenwood. In Europe, Currys plc, owner of Currys, has a similar arrangement with JVCKenwood. In Europe, JVC sells mainly audio accessories, such as headphones, and until recently DIN-type car audio. Also in Europe, JVC is present with camcorders, security cameras, audio systems, projectors, and its emblematic boom boxes. JVC TV sets in Europe are manufactured mainly by the Turkish manufacturer Vestel but are not available in all countries. JVC manufactures original audio equipment for vehicle manufacturers, including Datsun, Nissan, Suzuki, and Honda. Sponsorship JVC is a well-known brand among English football fans due to the firm's sponsorship of Arsenal from 1981 to 1999, when Sega took over as Arsenal's sponsor. JVC's 18-year association with Arsenal is one of the longest club–sponsor associations in professional club football. JVC also sponsored the Scottish football club Aberdeen in the late 1980s and early 1990s, as well as the FIFA World Cup from 1982 to 2002. JVC also sponsors the "away" shirts of the Australian A-League club Sydney FC, and Dutch racing driver Christijan Albers. JVC has also been a sponsor of the massively multiplayer online game Rise: The Vieneo Province since 2003. Brand name JVC is generally known within Japan by the Victor brand, preceded by the His Master's Voice (HMV) logo featuring the dog Nipper. Because of a conflict in trademarks between HMV, RCA, and Victor, HMV and RCA are not allowed to use Nipper in Japan.
At one time, the company used the Nivico name (for "Nippon Victor Company") overseas, before rebranding to JVC, which stands for Japan's Victor Company. As a result, the Victor and JVC-Victor web sites looked quite different. Conversely, the HMV store chain exists in Japan (though no longer owned by HMV Group), but it cannot use the His Master's Voice motto or logo; its logo is a stylized image of a gramophone only. After the Radio Corporation of America (RCA) purchased the Victor Talking Machine Company in 1929 and became RCA Victor, RCA also acquired the use of Nipper and the His Master's Voice logo, but only for the Western Hemisphere. In 2011, JVC decided to phase out the "Victor" brand for electronics in Japan, but retained its use for its premium audio products, the Victor Studio recording studios, and the record label JVCKenwood Victor Entertainment. Subsidiaries JVC KENWOOD Marketing India Gurgaon, Haryana, India JVC America Inc. – Tuscaloosa, Alabama, US JVC Americas Corp – Wayne, New Jersey, US JVC Canada Inc. – Mississauga, Ontario, Canada JVC Asia – Singapore JVC Australia – Australia JVC China – China JVC Europe – United Kingdom JVC Middle-East (and Africa) – Dubai, UAE JVC Latin America, S.A. – Panama JVC do Brasil Ltda. – Brazil JVC International – Austria Victor Entertainment See also List of digital camera brands List of home computers Mitsubishi Electric Taiyo Yuden (partner with JVC) Video D-VHS W-VHS Videotape Video tape recorder Videocassette recorder Wondermega XRCD Notes References External links Audio equipment manufacturers of Japan Electronics companies established in 1927 Electronics companies disestablished in 2011 Companies formerly listed on the Tokyo Stock Exchange Consumer electronics brands Display technology companies Electronics companies of Japan Headphones manufacturers Japanese brands Japanese companies established in 1927 Japanese companies disestablished in 2011 JVCKenwood 2011 mergers and acquisitions Loudspeaker manufacturers Microphone manufacturers Movie camera manufacturers Portable audio player manufacturers VHS Radio manufacturers
JVC
[ "Engineering" ]
2,762
[ "Radio electronics", "Radio manufacturers" ]
153,865
https://en.wikipedia.org/wiki/Pajamas
Pajamas (or pyjamas in Commonwealth English) are several related types of clothing worn as nightwear or while lounging. Pajamas are soft garments derived from the pyjamas, a South Asian Muslim garment for the lower body, which were adopted in the Western world as nightwear. The garments are sometimes colloquially referred to as PJs, jammies, jim-jams or, in South Asia, night suits. Etymology According to the Oxford English Dictionary, the word pajama is a borrowing via Urdu from Persian. Its etymology is: Urdu pāy-jāma, pā-jāma and its etymon Persian pāy-jāma, pā-jāma, singular noun < Persian pāy, pā foot, leg + jāma clothing, garment (see jama n.1) + English -s, plural ending, after drawers. History The worldwide use of pajamas (the word and the clothing) outside the Indian subcontinent is the result of adoption by British colonists in the Indian subcontinent in the 18th and 19th centuries, and the British influence on the wider Western world during the Victorian era. Pajamas had been introduced to England as "lounging attire" as early as the seventeenth century, then known as mogul's breeches (Beaumont and Fletcher), but they soon fell out of fashion. The word pajama (as pai jamahs, Paee-jams and variants) is recorded in English use in the first half of the nineteenth century. Pajamas did not become fashionable in Britain and the Western world as sleeping attire for men until the Victorian period, from about 1870. Hobson-Jobson: A Glossary of Colloquial Anglo-Indian Words and Phrases (1886) summarizes the state of usage at the time (s.v. "pyjammas"): Such a garment is used by various persons in India e.g. by women of various classes, by Sikh men, and most by Mohammedans of both sexes. It was adopted from the Mohammedans by Europeans as an article of dishabille [highly casual clothing] and of night attire, and is synonymous with Long Drawers, Shulwaurs, and Mogul-Breeches [...] It is probable that we English took the habit like a good many others from the Portuguese. Thus Pyrard (c. 1521) says, in speaking of Goa Hospital: "Ils ont force caleçon sans quoy ne couchent iamais les Portugais des Indes" [fr., "They have plenty of the undergarments without which the Portuguese in India never sleep"] [...] The word is now used in London shops. A friend furnishes the following reminiscence: "The late Mr. B—, tailor in Jermyn Street, some 12 years ago, in reply to a question why pyjammas had feet sewn on to them (as was sometimes the case with those furnished by London outfitters) answered: "I believe, Sir, it is because of the White Ants." Types Traditional Traditional pajamas consist of a shirt-and-trousers combination made of soft fabric, such as flannel or lightweight silk. The shirt element usually has a placket front and sleeves with no cuffs. Pajamas are usually worn as nightwear with bare feet and without undergarments. They are often worn for comfort by people in their homes, particularly by children, especially on weekends. Contemporary Contemporary pajamas are derived from traditional pajamas. There are many variations in style, such as short-sleeve pajamas, pajama bottoms of varying length, and pajamas incorporating various non-traditional materials. Often, people of both sexes opt to sleep or lounge in just pajama pants, usually with a t-shirt. For this reason, pajama pants are often sold as separates. Stretch-knit sleep apparel with rib-knit trimmings is common, mostly for young children.
Although pajamas are usually distinguished from one-piece sleeping garments such as nightgowns, in the US, they have sometimes included the latter or a somewhat shorter nightshirt as a top. Some pajamas, especially those designed for infants and toddlers, feature a drop seat (also known as a trap door or butt flap): a buttoned opening in the seat, designed to allow the wearer to conveniently use a toilet. Fire safety In the United States, pajamas for children are required to comply with fire safety regulations. If made of flammable fabric, such as cotton, they must be tight-fitting. Loose-fitting pajamas must be treated with a fire retardant. Regulations in the United Kingdom are less stringent; pajamas which do not comply with fire safety standards may be sold, but must be labelled "KEEP AWAY FROM FIRE". Society and culture Pajamas in the Western world have been regarded as essentially indoor wear, or wear for the home, whether treated as daywear or nightwear. When Bette Davis wore her husband's pajama top as a nightie in the 1943 film Old Acquaintance, it caused a fashion revolution, with I. Magnin selling out of men's sleepwear the morning after the movie opened, and all of it to young women. Since the late 20th century, some people, in particular those in the US and to some extent Britain, Ireland, Australia, and New Zealand, have worn pajamas in public for convenience or as a fashion statement. One reason for the increased wearing of pajamas in public is that people no longer face the same social pressure as in the past. In January 1976, the Gulf emirate of Ras Al Khaimah, UAE, introduced a strict dress code for all local government workers, forbidding them from wearing pajamas to work. In January 2016, the Tesco supermarket in St Mellons, Cardiff, United Kingdom, introduced a ban on customers wearing pajamas. In May 2010, Shanghai discouraged the wearing of pajamas in public during Expo 2010. In January 2012, a local Dublin branch of the Government's Department of Social Protection advised that pajamas were not regarded as appropriate attire for clients attending the office for welfare services. Many school and work dress codes do not allow pajamas. In 2020, due to the COVID-19 pandemic, an Illinois school district set remote-learning guidelines stating that pajamas should not be worn while studying remotely and that students should follow the same dress code as they normally would at school. Schools sometimes designate a "pajama day", when students and staff come to school in their pajamas to boost school spirit. In movies and television, characters are often depicted wearing pajamas in bed, as a more proper alternative to other forms of nightwear. These are commonly pajama pants with a shirt or t-shirt. Gallery See also Bananas in Pyjamas Blanket sleeper Sleepover Nightgown Sleep References External links Indian clothing Iranian clothing Nightwear Suits (clothing) History of clothing (Western fashion) History of fashion History of Asian clothing
Pajamas
[ "Biology" ]
1,427
[ "Behavior", "Sleep", "Nightwear" ]
153,882
https://en.wikipedia.org/wiki/Spandex
Spandex, Lycra, or elastane is a synthetic fiber known for its exceptional elasticity. It is a polyether-polyurea copolymer that was invented in 1958 by chemist Joseph Shivers at DuPont. Name The name spandex, which is an anagram of the word "expands", is the preferred name in North America. In continental Europe, it is referred to by variants of elastane. It is primarily known as Lycra in the UK, Ireland, Portugal, Spain, Latin America, Australia, and New Zealand. Brand names for spandex include Lycra (made by The Lycra Company, previously a division of DuPont Textiles and Interiors), Elaspan (The Lycra Company), Acepora (Taekwang Group), Creora (Hyosung), INVIYA (Indorama Corporation), ROICA and Dorlastan (Asahi Kasei), Linel (Fillattice), and ESPA (Toyobo). Production Unlike many other synthetic fibers, spandex cannot be melt-processed, because the polymer degrades upon melting. Spandex fibers are instead produced by several spinning technologies. Typically, a concentrated solution of the polymer is drawn through spinnerets at temperatures where the solvent evaporates. Chemically, spandex is derived mainly from the reaction of a diol and a diisocyanate. Two classes of spandex are defined by the "macrodiols". One class of macrodiols is the oligomer produced from tetrahydrofuran (i.e. polytetrahydrofuran). Another class of diols, the so-called ester diols, are oligomers derived from condensation of adipic acid and glycols. Spandex produced from the ester diols is more resilient photochemically and more resistant to chlorinated water. Almost always, the diisocyanate is methylenebis(phenyl isocyanate). The key linking reaction is the formation of the urethane (carbamate) linkage between a hydroxyl group and an isocyanate group: R–N=C=O + HO–R' → R–NH–C(=O)–O–R'. The isocyanate-terminated prepolymer is then treated with various diamines, which function as chain extenders and form the urea linkages of the polyurea segments: R–N=C=O + H2N–R' → R–NH–C(=O)–NH–R'. Function The exceptional elasticity of spandex fibers increases the clothing's pressure comfort, enhancing the ease of body movements. Pressure comfort is the response towards clothing by the human body's pressure receptors (mechanoreceptors present in skin sensory cells). The sensation response is affected mainly by the stretch, snug, loose, heavy, lightweight, soft, and stiff structure of the material. The elasticity and strength (stretching up to five times its length) of spandex has been incorporated into a wide range of garments, especially skin-tight garments. Benefits of spandex are its significant strength and elasticity, its ability to return to the original shape after stretching, and faster drying than ordinary fabrics. For clothing, spandex is usually mixed with cotton or polyester, and accounts for a small percentage of the final fabric, which therefore retains most of the look and feel of the other fibers. An estimated 80% of clothing sold in the United States contained spandex in 2010. Gallery History The easy condensation of diols and diisocyanates was recognized in the 1930s as the result of work by Otto Bayer. Fibers suitable for replacing nylon were not created from urethanes, but this theme instead led to a family of specialized elastic fabrics. In the post-World War II era, DuPont's Textile Fibers Department, formed in 1952, became the most profitable division of DuPont, dominating the synthetic fiber market worldwide. At this time, women began to emerge as a significant group of consumers because of their need for underwear and hosiery.
After conducting market research to find out what women wanted from textiles, DuPont began developing fibers to meet such needs—including a better fiber for women's girdles, which were commonly made of rubber at the time. In the early 1950s, chemist Joseph C. Shivers modified Dacron polyester, producing an elastic fiber that could withstand high temperatures. Lycra brand To distinguish its brand of spandex fiber, DuPont chose the trade name Lycra (originally called Fiber K). DuPont launched an extensive publicity campaign for its Lycra brand, taking out full-page advertisements in top women's magazines. Audrey Hepburn helped catapult the brand on and off screen during this time; models and actresses like Joan Collins and Ann-Margret followed Hepburn's aesthetic by posing in Lycra clothing for photo shoots and magazine covers. By the mid-1970s, with the emergence of the women's liberation movement, girdle sales began to drop as girdles came to be associated with anti-independence and emblematic of an era that was quickly passing away. In response, DuPont marketed Lycra as the aerobic fitness movement emerged in the 1970s. The association of Lycra with fitness had been established at the 1968 Winter Olympic Games, when the French ski team wore Lycra garments. The fiber came to be especially popular in the mid-thigh-length shorts worn by cyclists. By the 1980s, the fitness trend had reached its height in popularity, and fashionistas began wearing such shorts on the street. Spandex proved such a popular fiber in the garment industry that, by 1987, DuPont had trouble meeting worldwide demand. In the 1990s, a variety of other items made with spandex proved popular, including a successful line of body-shaping foundation garments sold under the trade name Bodyslimmers. As the decade progressed, shirts, pants, dresses, and even shoes were being made with spandex blends, and mass-market retailers like Banana Republic were even using it for menswear. In 2019, control of the Lycra Company was sold by Koch Industries to Shandong Ruyi. Environmental impact Most clothes containing spandex are difficult to recycle. Even a 5% inclusion of spandex will render a fabric incompatible with most mechanical recycling machines. Notes References External links "What's That Stuff: Spandex" Chemical and Engineering News Products introduced in 1958 1970s fashion 1980s fashion 2000s fashion 2010s fashion Copolymers Elastomers Synthetic fibers Technical fabrics Woven fabrics Polyethers Polyurethanes
Spandex
[ "Chemistry" ]
1,325
[ "Synthetic materials", "Synthetic fibers", "Elastomers" ]
153,911
https://en.wikipedia.org/wiki/Invisibility
Invisibility is the state of an object that cannot be seen. An object in this state is said to be invisible (literally, "not visible"). The phenomenon is studied by physics and perceptual psychology. Since objects can be seen by light from a source reflecting off their surfaces and hitting the viewer's eyes, the most natural form of invisibility (whether real or fictional) is an object that neither reflects nor absorbs light (that is, it allows light to pass through it). This is known as transparency, and is seen in many naturally occurring materials (although no naturally occurring material is 100% transparent). Invisibility perception depends on several optical and visual factors. For example, invisibility depends on the eyes of the observer and/or the instruments used. Thus an object can be classified as "invisible" to a person, animal, instrument, etc. Research on sensory perception has shown that invisibility is perceived in cycles. Invisibility is often considered to be the supreme form of camouflage, as it does not reveal to the viewer any vital signs, visual effects, or frequencies of the electromagnetic spectrum detectable to the human eye, instead making use of radio, infrared or ultraviolet wavelengths. In illusion optics, invisibility is a special case of illusion effects: the illusion of free space. The term is often used in fantasy and science fiction, where objects cannot be seen by means of magic or hypothetical technology. Practical efforts Technology can be used theoretically or practically to render real-world objects invisible. Making use of a real-time image displayed on a wearable display, it is possible to create a see-through effect. This is known as active camouflage. Though stealth technology is often described as being invisible to radar, all officially disclosed applications of the technology can only reduce the size and/or clarity of the signature detected by radar. In 2003, the Chilean-born mathematician Gunther Uhlmann formulated the first mathematical equations for creating invisible materials. In 2006, a joint team of researchers from Britain and the US announced the development of a real cloak of invisibility, an artificially made metamaterial that is invisible to the microwave spectrum, though the work was only in its first stages. In filmmaking, people, objects, or backgrounds can be made to look invisible on camera through a process known as chroma keying. Engineers and scientists have performed various kinds of research to investigate the possibility of finding ways to create real optical invisibility (cloaks) for objects. Methods are typically based on implementing the theoretical techniques of transformation optics, which have given rise to several theories of cloaking. Currently, a practical cloaking device does not exist. A 2006 theoretical work predicted that the imperfections would be minor and that metamaterials might make real-life "cloaking devices" practical. The technique was predicted to be applicable to radio waves within five years, with the distortion of visible light an eventual possibility. The idea that light waves can be manipulated in the same way as radio waves is now widely held among scientists. The cloaked object can be compared to a stone in a river, around which the water flows; a short distance downstream the water leaves no trace of the stone. Comparing light waves to the water, and whatever object is being "cloaked" to the stone, the goal is to have light waves pass around that object, leaving no visible aspect of it, possibly not even a shadow.
This is the technique depicted in the 2000 television portrayal of The Invisible Man. Two teams of scientists worked separately to create two "invisibility cloaks" from metamaterials engineered at the nanoscale. They demonstrated for the first time the possibility of cloaking three-dimensional (3-D) objects with artificially engineered materials that redirect radar, light or other waves around an object. While one used a type of fishnet of metal layers to reverse the direction of light, the other used tiny silver wires. Xiang Zhang of the University of California, Berkeley, said: "In the case of invisibility cloaks or shields, the material would need to curve light waves completely around the object like a river flowing around a rock. An observer looking at the cloaked object would then see light from behind it, making it seem to disappear." UC Berkeley researcher Jason Valentine's team made a material that affects light near the visible spectrum, in a region used in fibre optics: "Instead of the fish appearing to be slightly ahead of where it is in the water, it would actually appear to be above the water's surface. For a metamaterial to produce negative refraction, it must have a structural array smaller than the wavelength of the electromagnetic radiation being used." Valentine's team created their "fishnet" material by stacking silver and metal dielectric layers on top of each other and then punching holes through them. The other team used an oxide template and grew silver nanowires inside porous aluminum oxide at tiny distances apart, smaller than the wavelength of visible light. This material refracts visible light. The Imperial College London research team achieved results with microwaves. An invisibility cloak layout around a copper cylinder was produced in May 2008 by the physicist Professor Sir John Pendry; scientists working with him at Duke University in the US put the idea into practice. Pendry, who theorized the invisibility cloak "as a joke" to illustrate the potential of metamaterials, said in an interview in August 2011 that grand, theatrical manifestations of his idea are probably overblown: "I think it's pretty sure that any cloak that Harry Potter would recognize is not on the table. You could dream up some theory, but the very practicality of making it would be so impossible. But can you hide things from light? Yes. Can you hide things which are a few centimeters across? Yes. Is the cloak really flexible and flappy? No. Will it ever be? No. So you can do quite a lot of things, but there are limitations. There are going to be some disappointed kids around, but there might be a few people in industry who are very grateful for it." In 2009, researchers at Bilkent University's nanotechnology research center in Turkey published work in the New Journal of Physics reporting that they had achieved practical invisibility: a nanotechnological material that rendered an object invisible, casting no shadow against an almost perfectly transparent scene, and that could in principle be produced as a suit anyone could wear. In 2019, Hyperstealth Biotechnology patented the technology behind a material that bends light to make people and objects near invisible to the naked eye. The material, called Quantum Stealth, is still in the prototyping stage, but was developed by the company's CEO Guy Cramer primarily for military purposes, to conceal agents and equipment such as tanks and jets in the field. Unlike traditional camouflage materials, which are limited to specific conditions such as forests or deserts, according to Cramer this "invisibility cloak" works in any environment or season, at any time of day, although its actual application requires artificial backgrounds made up of horizontal lines.
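The negative refraction Valentine describes above can be read straight off Snell's law, n1·sin(θ1) = n2·sin(θ2): if the second index is negative, the refracted ray emerges on the same side of the surface normal. Here is a minimal numerical sketch in Python; the indices and the 30° angle are illustrative values, not data from the experiments described above:

```python
import math

def refraction_angle(theta1_deg: float, n1: float, n2: float) -> float:
    """Refracted angle from Snell's law: n1*sin(t1) = n2*sin(t2)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s))

print(refraction_angle(30, 1.0, 1.33))   # ordinary water:  ~ +22.1 degrees
print(refraction_angle(30, 1.0, -1.33))  # negative index:  ~ -22.1 degrees
# The sign flip means the ray bends to the same side of the normal,
# reversing apparent positions (the "fish above the water" effect).
```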
Psychological A person can be described as invisible if others refuse to see them or routinely overlook them. The term was used in this manner in the title of the book Invisible Man by Ralph Ellison, in reference to the protagonist, likely modeled after the author, being overlooked on account of his status as an African American. This is supported by the quote taken from the Prologue, "I am invisible, understand, simply because people refuse to see me." (Prologue.1) Fictional use In fiction, people or objects can be rendered completely invisible by several means: Magical objects such as rings, cloaks and amulets can be worn to grant the wearer permanent invisibility (or temporary invisibility until the object is taken off). Magical potions can be consumed to grant temporary or permanent invisibility. Magic spells can be cast on people or objects, usually giving temporary invisibility. Some mythical creatures can make themselves invisible at will, such as in some tales in which leprechauns or Chinese dragons can shrink so much that humans cannot see them. In science fiction, invisibility is often achieved with a hypothetical "cloaking device". In some works, the power of magic creates an effective means of invisibility by distracting anyone who might notice the character. But since the character is not truly invisible, the effect could be betrayed by mirrors or other reflective surfaces. Where magical invisibility is concerned, the issue may arise of whether the clothing worn by and any items carried by the invisible being are also rendered invisible. In general they are also regarded as being invisible, but in some instances clothing remains visible and must be removed for the full invisibility effect. See also Ambiguity Covert operation Social invisibility Visibility References External links The Digital Chameleon Principle: Computing Invisibility by Rendering Transparency Physics World special issue on invisibility science - July 2011 Light Fantastic: Flirting With Invisibility - The New York Times Invisibility in the real world Interesting picture of a test tube's bottom half invisible in cooking oil. Brief piece on why visible light is visible - Straight Dope CNN.com - Science reveals secrets of invisibility - Aug 9, 2006 Next to perfect invisibility achieved using nanotechnologic material in Turkey - July 2009 Optics
Invisibility
[ "Physics" ]
1,928
[ "Optical phenomena", "Physical phenomena", "Optical illusions", "Invisibility" ]
153,914
https://en.wikipedia.org/wiki/Cassette%20deck
A cassette deck is a type of tape machine for playing and recording audio cassettes that does not have a built-in power amplifier or speakers, and serves primarily as a transport. It can be a part of an automotive entertainment system, a part of a portable audio system or a part of a home component system. In the latter case, it is also called a component cassette deck or just a component deck. History Roots The first consumer tape recorder to employ a tape reel permanently housed in a small removable cartridge was the RCA tape cartridge, which appeared in 1958 as a predecessor to the cassette format. At that time, reel-to-reel recorders and players were commonly used by enthusiasts but required large individual reels and tapes which had to be threaded by hand, making them less accessible to the casual consumer. Both RCA and Bell Sound attempted to commercialize the cartridge format, but a few factors stalled adoption, including lower-than-advertised availability of selections in the prerecorded media catalog, delays in production setup, and a stand-alone design that was not considered by audiophiles to be truly hi-fi. The compact cassette (a Philips trademark) was introduced by the Philips Corporation at the Internationale Funkausstellung Berlin in 1963 and marketed as a format purely intended for portable speech-only dictation machines. The tape width was nominally ⅛ inch (actually 0.15 inch, 3.81 mm) and the tape speed was 1⅞ inches (4.76 cm) per second, giving a decidedly non-Hi-Fi frequency response and quite high noise levels. Early cassette decks Early recorders were intended for dictation and journalists, and were typically hand-held battery-powered devices with built-in microphones and automatic gain control on recording. Tape recorder audio quality had improved by the mid-1970s, and a cassette deck with manual level controls and VU meters became a standard component of home high-fidelity systems. Eventually the reel-to-reel recorder was completely displaced, in part because of the usage constraints presented by its large size, expense, and the inconvenience of threading and rewinding the tape reels; cassettes are more portable and can be stopped and immediately removed in the middle of playback without rewinding. Cassettes became extremely popular for automotive and other portable music applications. Although pre-recorded cassettes were widely available, many users would combine (dub) songs from their vinyl records or cassettes to make a new custom mixtape cassette. In 1970, the Advent Corporation combined the Dolby B noise reduction system with chromium dioxide (CrO2) tape to create the Advent Model 200, the first high-fidelity cassette deck. Dolby B uses volume companding of high frequencies to boost low-level treble information by up to 9 dB, reducing it (and the hiss) on playback. CrO2 used different bias and equalization settings to reduce the overall noise level and extend the high-frequency response. Together these allowed a usefully flat frequency response beyond 15 kHz for the first time. This deck was based on a top-loading mechanism by Nakamichi; it was soon replaced by the Model 201, based on a more reliable transport made by Wollensak, a division of 3M, which was commonly used in audio/visual applications. Both featured an unusual single VU meter which could be switched to read either channel or both.
The Model 200 featured piano-key-style transport controls, while the Model 201 used the distinctive combination of a separate lever for rewind/fast-forward and the large play and stop buttons found on the maker's commercial reel-to-reel machines of the era. Most manufacturers adopted a standard top-loading format with piano-key controls, dual VU meters, and slider level controls. There was a variety of configurations leading to the next standard format in the late 1970s, which settled on front-loading (see main picture) with the cassette well on one side, dual VU meters on the other, and later dual-cassette decks with meters in the middle. Mechanical controls were replaced with electronic push buttons controlling solenoid-driven mechanical actuators, though low-cost models would retain mechanical controls. Some models could search and count gaps between songs. Widespread use Cassette decks soon came into widespread use and were designed variously for professional applications, home audio systems, and for mobile use in cars, as well as portable recorders. From the mid-1970s to the late 1990s the cassette deck was the preferred music source for the automobile. Like an 8-track cartridge, it was relatively insensitive to vehicle motion, but it had reduced tape flutter, as well as the obvious advantages of smaller physical size and fast-forward/rewind capability. A major boost to the cassette's popularity came with the release of the Sony Walkman personal cassette player in 1979, designed specifically as a headphone-only, ultra-compact, wearable music source. Although the vast majority of such players eventually sold were not Sony products, the name Walkman has become synonymous with this type of device. Cassette decks were eventually manufactured by almost every well-known brand in home audio, and many in professional audio, with each company offering models of very high quality. Performance improvements and additional features Cassette decks reached their pinnacle of performance and complexity by the mid-1980s. Cassette decks from companies such as Nakamichi, Revox, and Tandberg incorporated advanced features such as multiple tape heads and dual-capstan drive with separate reel motors. Auto-reversing decks became popular and were standard on most factory-installed automobile decks. Integrated noise reduction systems - Dolby B, C, and S The Dolby B noise reduction system was key to realizing low-noise performance on cassette tapes, which are relatively slow and narrow compared with reel-to-reel technology. It works by boosting the high frequencies on recording, especially low-level high-frequency sounds, with a corresponding high-frequency reduction on playback. This lowers the high-frequency noise (hiss) by approximately 9 dB. Enhanced versions included the Dolby C (from 1980) and Dolby S types. Of the three, however, only Dolby B became common on automobile decks. Three heads for realtime monitoring of recordings and improved sound quality Three-head technology uses separate heads for recording and playback (the third of the three heads being the erase head). This allows different record and playback head gaps to be used. A narrower head gap is better for playback than for recording, so the head-gap width of any combined record/playback head must necessarily be a compromise. Separate record and playback heads also allow off-the-tape monitoring during recording, permitting immediate verification of the recording quality.
(Such machines can be identified by the presence of a monitor switch with positions for tape and source, or similar.) Three-head systems were common on reel-to-reel decks, but were more difficult to implement for cassettes, which do not provide separate openings for record and play heads. Some models squeezed a monitor head into the capstan area, and others combined separate record and playback gaps into a single headshell. Auto reverse for automated sequential playback of both cassette sides In later years, an auto-reverse feature appeared that allowed the deck to play (and, in some decks, record) on both sides of the cassette without the operator having to manually remove, flip, and re-insert the cassette. Most auto-reverse machines use a four-channel head (similar to those on multitrack recorders), with only two channels connected to the electronics at one time, one pair for each direction. Auto-reverse decks employ a capstan and pinch roller for each side. Since these use the same opening in the cassette shell normally used for the erase head, such decks must fit the erase head (or two, one for each direction) into the center opening in the shell along with the record/play head. In later auto-reverse machines, the mechanism uses an ordinary two-track, quarter-width head, but operates by mechanically rotating the head 180 degrees so that the two head gaps access the other tracks of the tape. There is usually an azimuth adjustment screw for each position. Nevertheless, due to the repeated movement, the alignment (in particular, the azimuth) deviates with usage. Even in a machine with a four-channel head, slight asymmetries in the cassette shell make it difficult to align the head perfectly for both directions. In one machine, the Dragon, Nakamichi addressed the issue with a motor-driven automatic head-alignment mechanism. This proved effective but very expensive. Later Nakamichi auto-reverse models, the RX series, were essentially single-directional decks, but with an added mechanism that physically removed the cassette from the transport, flipped it over, and re-inserted it. Akai made a similar machine, but with the mechanism and cassette laid out horizontally instead of upright. This permitted the convenience of auto-reverse with little compromise in record or playback quality. Integration of digital electronics, from the 1980s As a part of the Digital Revolution, the ongoing development of electronics technology decreased the cost of digital circuitry to the point that the technology could be applied to consumer electronics. The application of such digital electronics to cassette decks provides an early example of mechatronic design, which aims to enhance mechanical systems with electronic components in order to improve performance, increase system flexibility, or reduce cost. The inclusion of logic circuitry and solenoids in the transport and control mechanisms of cassette decks, often referred to as logic control, contrasts with earlier piano-key transport controls and mechanical linkages. One goal of using logic circuitry in cassette decks or recorders was to minimize equipment damage upon incorrect user input by including fail-safes in the transport and control mechanism. Such fail-safe behavior was described in a review by Julian Hirsch of a particular cassette deck featuring logic control. Examples of such fail-safes include a mechanism designed to protect internal components from damage when the tape or motor is locked, and a mechanism designed to prevent the tape from being wound improperly.
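To make that fail-safe behavior concrete, here is a minimal sketch in Python of a transport interlock. It is a simplified model under assumed rules, not any particular manufacturer's design: the state names and the rule that every change between two moving modes passes through stop are illustrative assumptions. Real decks implemented this in discrete logic or microcontroller firmware driving solenoids.

```python
# A minimal model of a logic-controlled transport interlock. Any change
# between two moving modes is forced through STOP, so the mechanism is
# fully disengaged before the next mode is selected, protecting the
# tape and the motor from an invalid command sequence.
STOP, PLAY, FFWD, REW = "stop", "play", "fast-forward", "rewind"

ALLOWED = {          # transitions permitted without an intervening stop
    STOP: {PLAY, FFWD, REW},
    PLAY: {STOP},
    FFWD: {STOP},
    REW:  {STOP},
}

class Transport:
    def __init__(self) -> None:
        self.state = STOP

    def request(self, target: str) -> None:
        if target == self.state:
            return                  # ignore redundant commands
        if target not in ALLOWED[self.state]:
            self._actuate(STOP)     # fail-safe: disengage everything first
        self._actuate(target)

    def _actuate(self, state: str) -> None:
        # In a real deck this step would drive solenoids and motors.
        print(f"{self.state} -> {state}")
        self.state = state

deck = Transport()
deck.request(FFWD)   # stop -> fast-forward
deck.request(PLAY)   # fast-forward -> stop -> play (interlock inserted)
```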
Some logic-control decks were designed to incorporate light-touch buttons or remote control, among other features marketed as conveniences. In the car stereo industry, full logic control was developed with the aim of miniaturization, so that the cassette deck would take up less dashboard space. Dolby HX Pro for higher recording levels on the same tape material Bang & Olufsen developed the HX Pro headroom extension system in conjunction with Dolby Laboratories in 1982. This was used in many higher-end decks. HX Pro reduces the high-frequency bias during recording when the signal being recorded has a high level of high-frequency content. Such a signal is self-biasing. Reducing the level of the bias signal permits the desired signal to be recorded at a higher level without saturating the tape, thus increasing headroom, or maximum recording level. Some decks incorporated microprocessor programs to adjust tape bias and record-level calibration automatically. Advances in tape materials New tape formulations were introduced. Chromium dioxide (referred to as CrO2 or Type II) was the first tape designed for extended high-frequency response, but it required higher bias. Later, as the IEC Type II standard was defined, a different equalization setting was also mandated to reduce hiss, thus giving up some extension at the high end of the audio spectrum. Better-quality cassette recorders soon appeared with a switch for the tape type. Later decks incorporated coded holes in the shell to autodetect the tape type. Chromium dioxide tape was thought to cause increased wear on the heads, so TDK and Maxell adapted cobalt-doped ferric formulations to mimic CrO2. Sony briefly tried FerriChrome (Type III), which was claimed to combine the best of both; some people, however, stated that the reverse was true, because the Cr top layer seemed to wear off quickly, reducing this type to Fe in practice. Most recent decks produce the best response and dynamic headroom with metal tapes (IEC Type IV), which require still higher bias for recording, though they will play back correctly at the Type II setting, since the equalization is the same. Effects achieved by the technological developments With all of these improvements, the best units could record and play the full audible spectrum from 20 Hz to over 20 kHz (although this was commonly quoted at -10, -20 or even -30 dB, not at full output level), with wow and flutter less than 0.05% and very low noise. A high-quality recording on cassette could rival the sound of an average commercial CD, though the quality of pre-recorded cassettes has been regarded by the general public as lower than could be achieved in a quality home recording. In 1981 a call for better sound quality came, surprisingly, from the head of Tower Records, Russ Solomon. At a meeting of the National Association of Recording Merchandisers (NARM) Retail Advisory Committee in Carlsbad, California, Solomon played two recordings of a Santana track: one he had recorded himself, and the pre-recorded cassette release from Columbia Records.
He used this technique to demonstrate what he called "the tunnel effect" in the audio range of pre-recorded cassettes, and commented to the reporter Sam Sutherland, who wrote a news article printed in Billboard magazine: "The buyer who is aware of sound quality is making his own." "They won't be satisfied with the 'tunnel effect' of prerecorded tape. And home tape deck users don't use prerecorded tapes at all." Yet, contended Solomon, while Tower's own stores show strong blank tape sales gains, its prerecorded sales have increased by only 2% to 3%. With an estimated 15% of the chain's total tape business now generated by the sales of blanks, "it would appear our added tape sales are going to TDK, Maxell and Sony, not you," he concluded. - Billboard, Vol. 93, No. 38, 26 September 1981. Noise reduction and fidelity A variety of noise reduction and other schemes are used to increase fidelity, with Dolby B being almost universal for both prerecorded tapes and home recording. Dolby B was designed to address the high-frequency noise inherent in cassette tapes, and along with improvements in tape formulation it helped the cassette win acceptance as a high-fidelity medium. At the same time, Dolby B provided acceptable performance when played back on decks that lacked Dolby circuitry, meaning there was little reason not to use it if it was available. The main alternative to Dolby was the dbx noise reduction system, which achieved a high signal-to-noise ratio but was essentially unlistenable when played back on decks that lacked the dbx decoding circuitry. Philips developed an alternative noise reduction system known as Dynamic Noise Limiter (DNL), which did not require the tapes to be processed during recording; this was also the basis of the later DNR noise reduction. Dolby later introduced Dolby C and Dolby S noise reduction, which achieved higher levels of noise reduction; Dolby C became common on high-fidelity decks, but Dolby S, released when cassette sales had begun to decline, never achieved widespread use. It was only licensed for use on higher-end tape decks that included dual motors, triple heads, and other refinements. Dolby HX Pro headroom extension provided better high-frequency response by adjusting the inaudible tape bias during the recording of strong high-frequency sounds, which had a bias effect of their own. Developed by Bang & Olufsen, it did not require a decoder to play back. Since B&O held patent rights and required paying license fees, many other manufacturers refrained from using it. Other refinements to improve cassette performance included Tandberg's DYNEQ, Toshiba's and Telefunken's High Com, and, on some high-end decks such as the Tandberg TCD-330 and TCD-340A, automatic recording bias, fine pitch adjustment and (sometimes) head azimuth adjustment. By the late 1980s, thanks to such improvements in the electronics, the tape material and manufacturing techniques, as well as dramatic improvements to the precision of the cassette shell, tape heads and transport mechanics, sound fidelity on equipment from the top manufacturers far surpassed the levels originally expected of the medium. On suitable audio equipment, cassettes could produce a very pleasant listening experience. High-end cassette decks could achieve a 15 Hz–22 kHz ±3 dB frequency response, with wow and flutter below 0.022% and a signal-to-noise ratio of up to 61 dB (for Type IV tape, without noise reduction).
With noise reduction, typical signal-to-noise figures of 70–76 dB with Dolby C, 80–86 dB with Dolby S, and 85–90 dB with dbx could be achieved. Many casual listeners could not tell the difference between compact cassette and compact disc. From the early 1980s, the fidelity of prerecorded cassettes began to improve dramatically. Whereas Dolby B was already in widespread use in the 1970s, prerecorded cassettes were duplicated onto rather poor-quality tape stock, often at high speed, and did not compare in fidelity to high-grade LPs. However, systems such as XDR, along with the adoption of higher-grade tape (such as chromium dioxide, but typically recorded in such a way as to play back at the normal 120 μs position) and the frequent use of Dolby HX Pro, meant that cassettes became a viable high-fidelity option, one that was more portable and required less maintenance than records. In addition, cover art, which had previously been generally restricted to a single image of the LP cover along with a minimum of text, began to be tailored to cassettes as well, with fold-out lyric sheets or librettos and fold-out sleeves becoming commonplace. Some companies, such as Mobile Fidelity, produced audiophile cassettes in the 1980s, which were recorded on high-grade tape and duplicated on premium equipment in real time from a digital master. Unlike audiophile LPs, which continue to attract a following, these became moot after the compact disc became widespread. Almost all cassette decks have an MPX filter to improve the sound quality and the tracking of the noise reduction system when recording from an FM stereo broadcast. However, in many decks, especially cheaper ones, this filter cannot be disabled, and because of that the record/playback frequency response in those decks is typically limited to 16 kHz. In other decks, the MPX filter can be switched off or on independently of the Dolby switch. On yet other decks, the filter is off by default, and an option to switch it on or off is only provided when Dolby is activated; this prevents the MPX filter from being used when it is not required. In-car entertainment systems A key element of the cassette's success was its use in in-car entertainment systems, where the small size of the tape was significantly more convenient than the competing 8-track cartridge system. Cassette players in cars and for home use were often integrated with a radio receiver. In-car cassette players were the first to adopt automatic reverse ("auto-reverse") of the tape direction at each end, allowing a cassette to be played endlessly without manual intervention. Home cassette decks soon added the feature. Cassette tape adaptors have been developed which allow newer media players to be played through existing cassette decks, in particular those in cars, which generally do not have input jacks. These units do not suffer from the reception problems of FM-transmitter-based systems, which play media players back through the FM radio, although choosing FM transmitter frequencies that are not used by commercial broadcasters in a given region (e.g. any frequency below 88.1 MHz in the US) somewhat mitigates those problems. Maintenance Cassette equipment needs regular maintenance, as cassette tape is a magnetic medium that is in physical contact with the tape head and other metallic parts of the recorder/player mechanism. Without such maintenance, the high-frequency response of the cassette equipment will suffer.
One problem occurs when iron oxide (or similar) particles from the tape itself become lodged in the playback head. As a result, the tape heads require occasional cleaning to remove such particles. The metal capstan and the rubber pinch roller can also become coated with these particles, leading them to pull the tape less precisely over the head; this in turn leads to misalignment of the tape against the head azimuth, producing noticeably unclear high tones, just as if the head itself were out of alignment. Isopropyl alcohol and denatured alcohol are both suitable head-cleaning fluids. The heads and other metallic components in the tape path (such as spindles and capstans) may become magnetized with use, and require demagnetizing (see Cassette demagnetizer). Decline in popularity Analog cassette deck sales were expected to decline rapidly with the advent of the compact disc and other digital recording technologies such as digital audio tape (DAT), MiniDisc, and CD-R recorder drives. Philips responded with the Digital Compact Cassette, a system which was backward-compatible with existing analog cassette recordings for playback. However, it failed to garner a significant market share and was withdrawn from the market. One reason proposed for the lack of acceptance of digital recording formats such as DAT was a fear by content providers that the ability to make very high-quality copies would hurt sales of copyrighted recordings. The expected rapid transition was not realized, and CDs and cassettes successfully co-existed for nearly 20 years. A contributing factor may have been the inability of early CD players to reliably read discs with surface damage or to offer anti-skipping features for applications where external vibration would be present, such as automotive and recreation environments. Early CD playback equipment also tended to be expensive compared to cassette equipment of similar quality and did not offer recording capability. Many home and portable entertainment systems supported both formats and commonly allowed CD playback to be recorded on cassette tape.
Cassette tapes can also be recorded multiple times (though some solid-state digital recorders are now offering that function). Today, cassette decks are not considered by most people to be either the most versatile or highest fidelity sound recording devices available, as even very inexpensive CD or digital audio players can reproduce a wide frequency range with no speed variations. Many current budget-oriented cassette decks lack a tape selector to set proper bias and equalization settings to take best advantage of the extended high end of Type II [High Bias] and Type IV [Metal Bias] tapes. Cassettes remain popular for audio-visual applications. Some CD recorders, particularly those intended for business use, incorporate a cassette deck to allow both formats for recording meetings, church sermons, and books on tape. References External links Audio Asylum Tape Trail – A discussion forum of interest to those involved in cassette technology. Vintage Cassette Decks - A collection of Vintage cassette decks of all brands. Audio players Recording devices Tape recording 1963 in technology Audiovisual introductions in 1963 Products introduced in 1963
Cassette deck
[ "Technology" ]
4,967
[ "Recording devices", "Tape recording" ]
153,961
https://en.wikipedia.org/wiki/Heliacal%20rising
The heliacal rising of a star or a planet occurs annually when it first becomes visible above the eastern horizon at dawn just before sunrise (thus becoming "the morning star"). A heliacal rising marks the time when a star or planet becomes visible for the first time again in the night sky after having set together with the Sun at the western horizon in a previous sunset (its heliacal setting), having since been in the sky only during daytime, obscured by sunlight. Historically, the most important such rising is that of Sirius, which was an important feature of the Egyptian calendar and astronomical development. The rising of the Pleiades heralded the start of the Ancient Greek sailing season, using celestial navigation, as well as the farming season (attested by Hesiod in his Works and Days). Heliacal rising is one of several types of risings and settings; these are mostly grouped into morning and evening risings and settings of objects in the sky. Evening and morning culminations of an object are separated by half a year, whereas evening and morning risings and settings are separated by half a year only at the equator. Cause and significance Relative to the stars, the Sun appears to drift eastward about one degree per day along a path called the ecliptic, because a complete revolution (circle) spans 360 degrees and one revolution of the Earth around the Sun takes about 365 days. Any given "distant" star in the belt of the ecliptic is visible in the night sky for only half of the year; during the other half it is above the horizon only in daytime, when it cannot be seen because the sunlight is too bright. The star's heliacal rising occurs when the Earth has moved to a point in its orbit where the star appears on the eastern horizon at dawn. Each day after the heliacal rising, the star will rise slightly earlier and remain visible for longer before the light from the rising sun overwhelms it. Over the following days the star will move further and further westward (about one degree per day) relative to the Sun, until eventually it is no longer visible in the sky at sunrise because it has already set below the western horizon. This is called the acronycal setting. The same star will reappear in the eastern sky at dawn approximately one year after its previous heliacal rising. For stars near the ecliptic, the small difference between the solar and sidereal years due to axial precession will cause their heliacal rising to recur about one sidereal year (about 365.2564 days) later, though this depends on the star's proper motion. For stars far from the ecliptic, the period is somewhat different and varies slowly, but in any case the heliacal rising will move all the way through the zodiac in about 26,000 years due to precession of the equinoxes. Because the heliacal rising depends on the observation of the object, its exact timing can be dependent on weather conditions. Heliacal phenomena and their use throughout history have made them useful points of reference in archeoastronomy. Non-application to circumpolar stars Some stars, when viewed from latitudes not at the equator, do not rise or set. These are circumpolar stars, which are either always in the sky or never. For example, the North Star (Polaris) is not visible in Australia and the Southern Cross is not seen in Europe, because they always stay below the respective horizons. The term circumpolar is also somewhat location-dependent: between the Tropic of Cancer and the Equator, the southern polar constellations have a brief spell of annual visibility (and thus a "heliacal" rising and a "cosmic" setting), and the same applies to the northern polar constellations for observers between the Equator and the Tropic of Capricorn.
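Because the Sun drifts eastward about one degree per day while the stars keep sidereal time, a star gains roughly four minutes on the Sun each day after its heliacal rising. Here is a minimal back-of-the-envelope sketch in Python, assuming a flat horizon and ignoring latitude, refraction, and atmospheric extinction (all of which matter for the true date of first visibility):

```python
# Daily gain of a star on the Sun: the difference between the mean
# solar day (24 h) and the mean sidereal day (~23 h 56 m 04 s).
SOLAR_DAY_MIN = 24 * 60
SIDEREAL_DAY_MIN = 23 * 60 + 56 + 4 / 60

daily_gain = SOLAR_DAY_MIN - SIDEREAL_DAY_MIN   # ~3.93 minutes per day

def days_until_lead(lead_minutes: float) -> float:
    """Days after heliacal rising until the star rises lead_minutes
    before the Sun, on the simplifying assumptions above."""
    return lead_minutes / daily_gain

print(f"A star rises about {daily_gain:.2f} minutes earlier each day.")
print(f"~{days_until_lead(60):.0f} days until it rises an hour before the Sun.")
```

On these assumptions, roughly two weeks pass between a star's heliacal rising and the point where it rises a full hour ahead of the Sun, matching the qualitative description above.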
History Constellations containing stars that rise and set were incorporated into early calendars or zodiacs. The Sumerians, Babylonians, Egyptians, and Greeks all used the heliacal risings of various stars for the timing of agricultural activities. Because of its position about 40° off the ecliptic, the heliacal risings of the bright star Sirius in Ancient Egypt recurred not over a period of exactly one sidereal year but over a period called the "Sothic year" (from "Sothis", the name for the star Sirius). The Sothic year was about a minute longer than a Julian year of 365.25 days. Since the development of civilization, this rising has occurred at Cairo on approximately July 19 of the Julian calendar. Its returns also roughly corresponded to the onset of the annual flooding of the Nile, although the flooding follows the tropical year and so would occur about three quarters of a day earlier per century relative to the Julian or Sothic year. (July 19, 1000 BC in the Julian calendar is July 10 in the proleptic Gregorian calendar. At that time, the sun would be somewhere near Regulus in Leo, where it is around August 21 in the 2020s.) The ancient Egyptians appear to have constructed their 365-day civil calendar at a time when Wep Renpet, its New Year, corresponded with Sirius's return to the night sky. Although this calendar's lack of leap years caused the event to shift one day every four years or so, astronomical records of this displacement led to the discovery of the Sothic cycle: with a 365-day civil year against a Sothic year of about 365.25 days, the New Year drifted all the way around the seasons and realigned with the rising of Sirius after about 1,460 Julian years (1,461 civil years). These records later aided the establishment of the more accurate Julian and Alexandrian calendars. The Egyptians also devised a method of telling the time at night based on the heliacal risings of 36 decan stars, one for each 10° segment of the 360° circle of the zodiac, corresponding to the ten-day "weeks" of their civil calendar. To the Māori of New Zealand, the Pleiades are called Matariki, and their heliacal rising signifies the beginning of the new year (around June). The Mapuche of South America call the Pleiades Ngauponi; around the time of we tripantu (the Mapuche new year) they disappear in the west, over lafkenmapu or ngulumapu, and reappear at dawn in the east a few days before the birth of new life in nature. The heliacal rising of Ngauponi, i.e. the appearance of the Pleiades above the horizon more than an hour before the sun approximately 12 days before the winter solstice, announced we tripantu. When a planet has a heliacal rising, there is a conjunction with the sun beforehand. Depending on the type of conjunction, there may be a syzygy, eclipse, transit, or occultation of the sun. Acronycal and cosmic(al) The rising of a planet above the eastern horizon at sunset is called its acronycal rising, which for a superior planet signifies an opposition, another type of syzygy. When the Moon rises acronycally, it is near full moon, and two or three times a year this coincides with a noticeable lunar eclipse. Cosmic(al) can refer to rising with sunrise or setting at sunset, or to the first setting at morning twilight. Risings and settings are furthermore differentiated between apparent (those discussed above) and actual or true risings and settings. Overview The use of the terms cosmical and acronycal is not consistent.
The following table gives an overview of the different applications of the terms to rising and setting instances. See also Dog days Steering star Notes References Observational astronomy Stellar astronomy Time in astronomy Technical factors of astrology Egyptian calendar
Heliacal rising
[ "Astronomy" ]
1,581
[ "Time in astronomy", "Observational astronomy", "Astronomical sub-disciplines", "Stellar astronomy" ]
153,962
https://en.wikipedia.org/wiki/Spam%20Prevention%20Early%20Warning%20System
The Spam Prevention Early Warning System (SPEWS) was an anonymous service that maintained a list of IP address ranges belonging to internet service providers (ISPs) that hosted spammers and showed little action to prevent their abuse of other networks' resources. It could be used by Internet sites as an additional source of information about the senders of unsolicited bulk email, better known as spam. The SPEWS database has not been updated since August 24, 2006; dnsbl.com lists its status as dead. A successor, the Anonymous Postmaster Early Warning System (APEWS), appeared in January 2007, using similar listing criteria and a nearly identical web page. See also News.admin.net-abuse.email (NANAE) References External links (Last good archive. Domain is now owned by another entity.) APEWS.org Spamming Early warning systems Internet properties disestablished in 2007 History of the Internet
Spam Prevention Early Warning System
[ "Technology" ]
194
[ "Warning systems", "Early warning systems" ]
153,968
https://en.wikipedia.org/wiki/Poop%20deck
In naval architecture, a poop deck is a deck that forms the roof of a cabin built in the rear, or "aft", part of the superstructure of a ship. The name originates from the French word for stern, la poupe, from the Latin puppis. Thus the poop deck is technically a stern deck, which in sailing ships was usually elevated as the roof of the stern or "after" cabin, also known as the "poop cabin" (or simply the poop). On sailing ships, the helmsman would steer the craft from the quarterdeck, immediately in front of the poop deck. At the stern, the poop deck provides an elevated position ideal for observation. While the main purpose of the poop is to add buoyancy aft, on a sailing ship the cabin also served as accommodation for the shipmaster and officers. On modern, motorized warships, the ship functions once carried out on the poop deck have been moved to the bridge, usually located in a superstructure. See also Common names for decks Taffrail, the handrail around the poop deck Quarter gallery, a projecting area at the stern Puppis, a constellation References Sources Sailing ship components Shipbuilding Nautical terminology
Poop deck
[ "Engineering" ]
248
[ "Shipbuilding", "Marine engineering" ]
153,977
https://en.wikipedia.org/wiki/Domain%20Name%20System%20blocklist
A Domain Name System blocklist, Domain Name System-based blackhole list, Domain Name System blacklist (DNSBL) or real-time blackhole list (RBL) is a service that lets mail servers check, via a Domain Name System (DNS) query, whether a sending host's IP address is blacklisted for email spam. Most mail server software can be configured to check such lists, typically rejecting or flagging messages from such sites. A DNSBL is a software mechanism, rather than a specific list or policy. Dozens of DNSBLs exist. They use a wide array of criteria for listing and delisting addresses. These may include listing the addresses of zombie computers or other machines being used to send spam, of Internet service providers (ISPs) who willingly host spammers, or of machines which have sent spam to a honeypot system. Since the creation of the first DNSBL in 1998, the operation and policies of these lists have frequently been controversial, both in Internet advocacy circles and occasionally in lawsuits. Many email system operators and users consider DNSBLs a valuable tool to share information about sources of spam, but others including some prominent Internet activists have objected to them as a form of censorship. In addition, a small number of DNSBL operators have been the target of lawsuits filed by spammers seeking to have the lists shut down. History The first DNSBL was the Real-time Blackhole List (RBL), created in 1997, at first as a Border Gateway Protocol (BGP) feed by Paul Vixie, and then as a DNSBL by Eric Ziegast as part of Vixie's Mail Abuse Prevention System (MAPS); Dave Rand at Abovenet was its first subscriber. The very first version of the RBL was not published as a DNSBL, but rather as a list of networks transmitted via BGP to routers owned by subscribers so that network operators could drop all TCP/IP traffic for machines used to send spam or host spam-supporting services, such as a website. The inventor of the technique later commonly called a DNSBL was Eric Ziegast while employed at Vixie Enterprises. The term "blackhole" refers to a networking black hole, an expression for a link on a network that drops incoming traffic instead of forwarding it normally. The intent of the RBL was that sites using it would refuse traffic from sites which supported spam — whether by actively sending spam, or in other ways. Before an address would be listed on the RBL, volunteers and MAPS staff would attempt repeatedly to contact the persons responsible for it and get its problems corrected. Such effort was considered very important before black-holing all network traffic, but it also meant that spammers and spam-supporting ISPs could delay being put on the RBL for long periods while such discussions went on. Later, the RBL was also released in a DNSBL form and Paul Vixie encouraged the authors of sendmail and other mail software to implement RBL support in their clients. These allowed the mail software to query the RBL and reject mail from listed sites on a per-mail-server basis instead of black-holing all traffic. Soon after the advent of the RBL, others started developing their own lists with different policies. One of the first was Alan Brown's Open Relay Behavior-modification System (ORBS). This used automated testing to discover and list mail servers running as open mail relays—exploitable by spammers to carry their spam. ORBS was controversial at the time because many people felt running an open relay was acceptable, and that scanning the Internet for open mail servers could be abusive. 
In 2003, a number of DNSBLs came under denial-of-service (DoS) attacks. Since no party has admitted to these attacks or been identified as responsible, their purpose is a matter of speculation. However, many observers believe the attacks were perpetrated by spammers in order to interfere with the DNSBLs' operation or hound them into shutting down. In August 2003, the firm Osirusoft, an operator of several DNSBLs including one based on the SPEWS data set, shut down its lists after suffering weeks of near-continuous attack. Technical specifications for DNSBLs came relatively late, in RFC 5782. URI DNSBLs A Uniform Resource Identifier (URI) DNSBL is a DNSBL that lists the domain names and sometimes also IP addresses which are found in the "clickable" links contained in the body of spam messages, but generally not found inside legitimate messages. URI DNSBLs were created when it was determined that much spam made it past spam filters during the short time frame between the first use of a spam-sending IP address and the point where that sending IP address was first listed on major sending-IP-based DNSBLs. In many cases, such elusive spam contains in its links domain names or IP addresses (collectively referred to as URIs) where that URI was already spotted in previously caught spam and where that URI is not found in non-spam e-mail. Therefore, when a spam filter extracts all URIs from a message and checks them against a URI DNSBL, the spam can be blocked even if the sending IP for that spam has not yet been listed on any sending-IP DNSBL. Of the three major URI DNSBLs, the oldest and most popular is SURBL. After SURBL was created, some of the volunteers for SURBL started the second major URI DNSBL, URIBL. In 2008, another long-time SURBL volunteer started another URI DNSBL, ivmURI. The Spamhaus Project provides the Spamhaus Domain Block List (DBL), which it describes as a list of domains "found in spam messages". The DBL is intended as both a URIBL and an RHSBL, to be checked against both domains in a message's envelope and headers and domains in URLs in message bodies. Unlike other URIBLs, the DBL only lists domain names, not IP addresses, since Spamhaus provides other lists of IP addresses. URI DNSBLs are often confused with RHSBLs (Right Hand Side BLs), but they are different. A URI DNSBL lists domain names and IPs found in the body of the message. An RHSBL lists the domain names used in the "from" or "reply-to" e-mail address. RHSBLs are of debatable effectiveness since many spams either use forged "from" addresses or use "from" addresses containing popular freemail domain names, such as @gmail.com, @yahoo.com, or @hotmail.com. URI DNSBLs are more widely used than RHSBLs, are very effective, and are used by the majority of spam filters. Principle To operate a DNSBL requires three things: a domain to host it under, a nameserver for that domain, and a list of addresses to publish. It is possible to serve a DNSBL using any general-purpose DNS server software. However, this is typically inefficient for zones containing large numbers of addresses, particularly DNSBLs which list entire Classless Inter-Domain Routing netblocks. Because of the large resource consumption of general-purpose DNS server software in this role, there are purpose-built software applications designed specifically for serving DNS blacklists. The hard part of operating a DNSBL is populating it with addresses. 
DNSBLs intended for public use usually have specific, published policies as to what a listing means, and must be operated accordingly to attain or sustain public confidence. DNSBL queries When a mail server receives a connection from a client, and wishes to check that client against a DNSBL (let's say, dnsbl.example.net), it does more or less the following: Take the client's IP address—say, 192.168.42.23—and reverse the order of octets, yielding 23.42.168.192. Append the DNSBL's domain name: 23.42.168.192.dnsbl.example.net. Look up this name in the DNS as a domain name ("A" record). This will return either an address, indicating that the client is listed; or an "NXDOMAIN" ("No such domain") code, indicating that the client is not. Optionally, if the client is listed, look up the name as a text record ("TXT" record). Most DNSBLs publish information about why a client is listed as TXT records. Looking up an address in a DNSBL is thus similar to looking it up in reverse-DNS. The differences are that a DNSBL lookup uses the "A" rather than "PTR" record type, and uses a forward domain (such as dnsbl.example.net above) rather than the special reverse domain in-addr.arpa. There is an informal protocol for the addresses returned by DNSBL queries which match. Most DNSBLs return an address in the 127.0.0.0/8 IP loopback network. The address 127.0.0.2 indicates a generic listing. Other addresses in this block may indicate something specific about the listing—that it indicates an open relay, proxy, spammer-owned host, etc. For details see RFC 5782. URI DNSBL A URI DNSBL query (and an RHSBL query) is fairly straightforward. The domain name to query is prepended to the DNS list host as follows: example.net.dnslist.example.com where dnslist.example.com is the DNS list host and example.net is the queried domain. Generally if an A record is returned the name is listed. DNSBL policies Different DNSBLs have different policies. DNSBL policies differ from one another on three fronts: Goals. What does the DNSBL seek to list? Is it a list of open-relay mail servers or open proxies—or of IP addresses known to send spam—or perhaps of IP addresses belonging to ISPs that harbor spammers? Nomination. How does the DNSBL discover addresses to list? Does it use nominations submitted by users? Spam-trap addresses or honeypots? Listing lifetime. How long does a listing last? Are they automatically expired, or only removed manually? What can the operator of a listed host do to have it delisted? Types In addition to the different types of listed entities (IP addresses for traditional DNSBLs, host and domain names for RHSBLs, URIs for URIBLs) there is a wide range of semantic variations between lists as to what a listing means. List maintainers themselves have been divided on the issues of whether their listings should be seen as statements of objective fact or subjective opinion and on how their lists should best be used. As a result, there is no definitive taxonomy for DNSBLs. Some names defined here (e.g. "Yellow" and "NoBL") are varieties that are not in widespread use and so the names themselves are not in widespread use, but should be recognized by many spam control specialists. 
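Before moving on to the individual list types, the lookup procedure described above lends itself to a concrete illustration. The following is a minimal sketch, not production code: it uses only the Python standard library, and the zone names and sample address are the placeholders from the examples above rather than real lists.

```python
import socket

def dnsbl_listed(client_ip: str, zone: str = "dnsbl.example.net") -> bool:
    """Check an IPv4 address against a DNSBL zone; True means listed."""
    # Reverse the octets: 192.168.42.23 -> 23.42.168.192
    reversed_octets = ".".join(reversed(client_ip.split(".")))
    query_name = f"{reversed_octets}.{zone}"
    try:
        # A successful A-record lookup (by convention an address in
        # 127.0.0.0/8, per RFC 5782) means the client is listed.
        socket.gethostbyname(query_name)
        return True
    except socket.gaierror:
        # NXDOMAIN (or any resolution failure) means "not listed".
        return False

def uri_dnsbl_listed(domain: str, zone: str = "dnslist.example.com") -> bool:
    """Check a domain against a URI DNSBL; the name is simply prepended."""
    try:
        socket.gethostbyname(f"{domain}.{zone}")
        return True
    except socket.gaierror:
        return False
```

A real MTA would additionally issue the optional TXT query to record why a client is listed, and would distinguish resolver timeouts from a genuine "no such domain" answer; socket.gethostbyname conflates the two, so this sketch errs on the side of "not listed". Returning to list semantics, the varieties named above are as follows.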
Whitelist / Allowlist: A listing is an affirmative indication of essentially absolute trust. Blacklist / Blocklist: A listing is a negative indication of essentially absolute distrust. Grey list: Most frequently seen as one word (greylist or greylisting), this does not involve DNSBLs directly, but uses temporary deferral of mail from unfamiliar sources to allow for the development of a public reputation (such as DNSBL listings) or to discourage speed-focused spamming. Occasionally used to refer to actual DNSBLs on which listings denote distinct non-absolute levels and forms of trust or distrust. Yellow list: A listing indicates that the source is known to produce a mixture of spam and non-spam to a degree that makes checking other DNSBLs of any sort useless. NoBL list: A listing indicates that the source is believed to send no spam and should not be subjected to blacklist testing, but is not quite as trusted as a whitelisted source. Usage Most message transfer agents (MTAs) can be configured to absolutely block or (less commonly) to accept email based on a DNSBL listing. This is the oldest usage form of DNSBLs. Depending on the specific MTA, there can be subtle distinctions in configuration that make list types such as Yellow and NoBL useful or pointless because of how the MTA handles multiple DNSBLs. A drawback of using the direct DNSBL support in most MTAs is that sources not on any list require checking all of the DNSBLs being used, with relatively little benefit from caching the negative results. In some cases this can cause a significant slowdown in mail delivery. Using White, Yellow, and NoBL lists to avoid some lookups can alleviate this in some MTAs. DNSBLs can be used in rule-based spam analysis software like SpamAssassin, where each DNSBL has its own rule. Each rule has a specific positive or negative weight which is combined with other types of rules to score each message. This allows for the use of rules that act (by whatever criteria are available in the specific software) to "whitelist" mail that would otherwise be rejected due to a DNSBL listing or due to other rules. This can also have the problem of heavy DNS lookup load for no useful results, but it may not delay mail as much because scoring makes it possible for lookups to be done in parallel and asynchronously while the filter is checking the message against the other rules. It is possible with some toolsets to blend the binary testing and weighted rule approaches. One way to do this is to first check white lists and accept the message if the source is on a white list, bypassing all other testing mechanisms. A technique developed by Junk Email Filter uses Yellow Lists and NoBL lists to mitigate the false positives that occur routinely when using black lists that are not carefully maintained to avoid them. Some DNSBLs have been created not for filtering email for spam, but for demonstration, informational, rhetorical, and testing-control purposes. Examples include the "No False Negatives List," "Lucky Sevens List," "Fibonacci's List," various lists encoding GeoIP information, and random selection lists scaled to match coverage of another list, useful as a control for determining whether that list's effects are distinguishable from random rejections. Criticism Some end-users and organizations have concerns regarding the concept of DNSBLs or the specifics of how they are created and used. Some of the criticisms include: Legitimate emails blocked along with spam from shared mailservers. 
When an ISP's shared mailserver has one or more compromised machines sending spam, it can become listed on a DNSBL. End-users assigned to that same shared mailserver may find their emails blocked by receiving mailservers using such a DNSBL. In May 2016, the SORBS system was blocking the SMTP servers of Telstra Australia, Australia's largest internet service provider. This was unsurprising, as at any one time thousands of computers connected to this mail server were infected by zombie-type viruses and sending spam. The effect was to cut off all the legitimate emails from the users of the Telstra Australia system. Lists of dynamic IP addresses. This type of DNSBL lists IP addresses submitted by ISPs as dynamic and therefore presumably unsuitable to send email directly; the end-user is supposed to use the ISP's mailserver for all sending of email. But these lists can also accidentally include static addresses, which may be legitimately used by small-business owners or other end-users to host small email servers. Lists that include "spam-support operations", such as MAPS RBL. A spam-support operation is a site that may not directly send spam, but provides commercial services for spammers, such as hosting of Web sites that are advertised in spam. Refusal to accept mail from spam-support operations is intended as a boycott to encourage such sites to cease doing business with spammers, at the expense of inconveniencing non-spammers who use the same site as spammers. Some lists have unclear listing criteria and delisting may not happen automatically nor quickly. A few DNSBL operators will request payment (e.g. uceprotect.net) or a donation (e.g. SORBS). Some of the many listing/delisting policies can be found in the Comparison of DNS blacklists article. Because lists have varying methods for adding IP addresses and/or URIs, it can be difficult for senders to configure their systems appropriately to avoid becoming listed on a DNSBL. For example, the UCEProtect DNSBL seems to list IP addresses merely once they have validated a recipient address or established a TCP connection, even if no spam message is ever delivered. Despite the criticisms, few people object to the principle that mail-receiving sites should be able to reject undesired mail systematically. One person who does is John Gilmore, who deliberately operates an open mail relay. Gilmore accuses DNSBL operators of violating antitrust law. A number of parties, such as the Electronic Frontier Foundation and Peacefire, have raised concerns about some use of DNSBLs by ISPs. One joint statement issued by a group including EFF and Peacefire addressed "stealth blocking", in which ISPs use DNSBLs or other spam-blocking techniques without informing their clients. Lawsuits Spammers have pursued lawsuits against DNSBL operators on similar grounds: In 2003, EMarketersAmerica.org filed a lawsuit against a number of DNSBL operators in a Florida court. Backed by spammer Eddy Marin, the company claimed to be a trade organization for email marketers and that DNSBL operators Spamhaus and SPEWS were engaged in restraint of trade. The suit was eventually dismissed for lack of standing. In 2006, a U.S. court ordered Spamhaus to pay $11.7 million in damages to the spammer e360 Insight LLC. The order was a default judgment, as Spamhaus, which is based in the UK, had refused to recognize the court's jurisdiction and did not defend itself in the e360 lawsuit. In 2011, this decision was overturned by the United States Court of Appeals for the Seventh Circuit. 
Notable examples Dynablock See also Internet filter Email spam Notes References External links Blacklist Monitor - Weekly statistics of success and failure rates for specific blacklists How to Create a DNSBL - Tutorial on how to create a DNSBL (DNS Black List) Spamming Anti-spam Internet terminology
Domain Name System blocklist
[ "Technology" ]
4,001
[ "Computing terminology", "Internet terminology" ]
8,577,896
https://en.wikipedia.org/wiki/Methane%20reformer
A methane reformer is a device based on steam reforming, autothermal reforming or partial oxidation; these are types of chemical synthesis which can produce pure hydrogen gas from methane using a catalyst. There are multiple types of reformers in development, but the most common in industry are autothermal reforming (ATR) and steam methane reforming (SMR). Most methods work by exposing methane to a catalyst (usually nickel) at high temperature and pressure. Steam reforming Steam reforming (SR), sometimes referred to as steam methane reforming (SMR), uses an external source of hot gas to heat tubes in which a catalytic reaction takes place that converts steam and lighter hydrocarbons such as methane, biogas or refinery feedstock into hydrogen and carbon monoxide (syngas). Syngas reacts further to give more hydrogen and carbon dioxide in the reactor. The carbon oxides are removed before use by means of pressure swing adsorption (PSA) with molecular sieves for the final purification. The PSA works by adsorbing impurities from the syngas stream to leave a pure hydrogen gas. CH4 + H2O (steam) → CO + 3 H2 (endothermic) CO + H2O (steam) → CO2 + H2 (exothermic) Autothermal reforming Autothermal reforming (ATR) uses oxygen and carbon dioxide or steam in a reaction with methane to form syngas. The reaction takes place in a single chamber where the methane is partially oxidized. The reaction is exothermic due to the oxidation. When the ATR uses carbon dioxide, the H2:CO ratio produced is 1:1; when the ATR uses steam, the H2:CO ratio produced is 2.5:1. The reactions can be described in the following equations, using CO2: 2 CH4 + O2 + CO2 → 3 H2 + 3 CO + H2O And using steam: 4 CH4 + O2 + 2 H2O → 10 H2 + 4 CO The outlet temperature of the syngas is between 950 and 1100 °C and the outlet pressure can be as high as 100 bar. The main difference between SMR and ATR is that SMR uses oxygen only via air for combustion, as a heat source to create steam, while ATR directly combusts oxygen. The advantage of ATR is that the H2:CO ratio can be varied; this is particularly useful for producing certain second-generation biofuels, such as DME, which requires a 1:1 H2:CO ratio. Partial oxidation Partial oxidation (POX) is a type of chemical reaction. It occurs when a substoichiometric fuel-air mixture is partially combusted in a reformer, creating a hydrogen-rich syngas which can then be put to further use. Advantages and disadvantages The capital cost of steam reforming plants is prohibitive for small to medium size applications because the technology does not scale down well. Conventional steam reforming plants operate at pressures between 200 and 600 psi with outlet temperatures in the range of 815 to 925 °C. However, analyses have shown that even though it is more costly to construct, a well-designed SMR can produce hydrogen more cost-effectively than an ATR for smaller applications. See also Catalytic reforming Industrial gas Reformed methanol fuel cell PROX Partial oxidation Chemical looping reforming and gasification References External links Harvest Energy Technology, Inc. an Air Products and Chemicals Incorporated company Hydrogen production Fuel cells Chemical equipment Industrial gases
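The H2:CO ratios quoted above can be checked directly from the stoichiometry. The following is a worked sketch in LaTeX, assuming complete conversion as written (real reformers reach equilibrium-limited conversions, so actual ratios differ):

```latex
% Overall SMR plus water-gas shift (sum of the two steps above):
%   CH4 + H2O -> CO + 3 H2   and   CO + H2O -> CO2 + H2
\mathrm{CH_4 + 2\,H_2O \rightarrow CO_2 + 4\,H_2}

% ATR with CO2:   H2:CO = 3:3 = 1:1
\mathrm{2\,CH_4 + O_2 + CO_2 \rightarrow 3\,H_2 + 3\,CO + H_2O}

% ATR with steam: H2:CO = 10:4 = 2.5:1
\mathrm{4\,CH_4 + O_2 + 2\,H_2O \rightarrow 10\,H_2 + 4\,CO}
```

Each equation balances in C, H, and O, and the two ATR lines reproduce the 1:1 and 2.5:1 ratios stated in the text.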
Methane reformer
[ "Chemistry", "Engineering" ]
703
[ "Chemical process engineering", "Chemical equipment", "Industrial gases", "nan" ]
8,578,085
https://en.wikipedia.org/wiki/Electron%20affinity%20%28data%20page%29
This page deals with the electron affinity as a property of isolated atoms or molecules (i.e. in the gas phase). Solid state electron affinities are not listed here. Elements Electron affinity can be defined in two equivalent ways. First, as the energy that is released by adding an electron to an isolated gaseous atom. The second (reverse) definition is that electron affinity is the energy required to remove an electron from a singly charged gaseous negative ion. The latter can be regarded as the ionization energy of the –1 ion, or the zeroth ionization energy. Either convention can be used. Negative electron affinities can be used in those cases where electron capture requires energy, i.e. when capture can occur only if the impinging electron has a kinetic energy large enough to excite a resonance of the atom-plus-electron system. Conversely, electron removal from the anion formed in this way releases energy, which is carried off by the freed electron as kinetic energy. Negative ions formed in these cases are always unstable. They may have lifetimes of the order of microseconds to milliseconds, and invariably autodetach after some time. Molecules The electron affinities Eea of some molecules are given in the table below, from the lightest to the heaviest. Many more have been listed by . The electron affinities of the radicals OH and SH are the most precisely known of all molecular electron affinities. Second and third electron affinity Bibliography Updated values can be found in the NIST chemistry webbook for around three dozen elements and close to 400 compounds. Specific molecules References See also Atomic physics Chemical properties Chemical element data pages
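The two equivalent definitions given above can be written compactly. A minimal sketch in LaTeX, using the sign convention that a positive Eea corresponds to energy released on attachment:

```latex
% Electron affinity of a gaseous species X (both definitions agree):
%   X(g) + e^- -> X^-(g)  releases  E_ea
E_{\mathrm{ea}}(X) = E(X) - E(X^{-})

% Equivalently, E_ea is the ionization energy of the anion
% (the "zeroth ionization energy" mentioned in the text):
E_{\mathrm{ea}}(X) = IE(X^{-})
```

If the anion lies lower in energy than the neutral, Eea is positive and attachment releases energy; a negative Eea corresponds to the resonance-capture case described above.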
Electron affinity (data page)
[ "Physics", "Chemistry" ]
343
[ "Chemical data pages", "Quantum mechanics", "Chemical element data pages", "Atomic physics", "Atomic, molecular, and optical physics" ]
8,579,903
https://en.wikipedia.org/wiki/Black%20holes%20in%20fiction
Black holes, objects whose gravity is so strong that nothing—including light—can escape them, have been depicted in fiction since at least the pulp era of science fiction, before the term black hole was coined. A common portrayal at the time was of black holes as hazards to spacefarers, a motif that has also recurred in later works. The concept of black holes became popular in science and fiction alike in the 1960s. Authors quickly seized upon the relativistic effect of gravitational time dilation, whereby time passes more slowly closer to a black hole due to its immense gravitational field. Black holes also became a popular means of space travel in science fiction, especially when the notion of wormholes emerged as a relatively plausible way to achieve faster-than-light travel. In this concept, a black hole is connected to its theoretical opposite, a so-called white hole, and as such acts as a gateway to another point in space which might be very distant from the point of entry. More exotically, the point of emergence is occasionally portrayed as another point in time—thus enabling time travel—or even an entirely different universe. More fanciful depictions of black holes that do not correspond to their known or predicted properties also appear. As nothing inside the event horizon—the distance away from the black hole where the escape velocity exceeds the speed of light—can be observed from the outside, authors have been free to employ artistic license when depicting the interiors of black holes. A small number of works also portray black holes as being sentient. Besides stellar-mass black holes, supermassive and especially micro black holes also make occasional appearances. Supermassive black holes are a common feature of modern space opera. Recurring themes in stories depicting micro black holes include spaceship propulsion, threatening or causing the destruction of the Earth, and serving as a source of gravity in outer-space settlements. Early depictions The general concept of black holes, objects whose gravity is so strong that nothing—including light—can escape them, was first proposed by John Michell in 1783 and developed further in the framework of Albert Einstein's theory of general relativity by Karl Schwarzschild in 1916. Serious scientific attention remained relatively limited until the 1960s, the same decade the term black hole was coined, though objects with the overall characteristics of black holes had made appearances in fiction decades earlier during the pulp era of science fiction. Examples of this include E. E. Smith's 1928 novel The Skylark of Space with its "black sun", 's 1935 short story "Starship Invincible" with its "Hole in Space", and Nat Schachner's 1938 short story "Negative Space"—all of which portray the black holes as hazards to spacefarers. Later works that still predate the adoption of the current terminology include Fred Saberhagen's 1965 short story "Masque of the Red Shift" with its "hypermass" and the 1967 Star Trek episode "Tomorrow Is Yesterday" with its "black star". Time dilation Once black holes gained mainstream popularity, many of the early works featuring black holes focused on the concept of gravitational time dilation, whereby time passes more slowly closer to a black hole due to the effects of general relativity. One consequence of this is that the process of crossing the event horizon—the distance away from the black hole where the escape velocity exceeds the speed of light—appears to an outside observer to take an infinite amount of time. 
In Poul Anderson's 1968 short story "Kyrie", a telepathic scream from a being falling into a black hole thus becomes drawn out for eternity. Similarly, a spaceship appears forever immovable at the event horizon in Brian Aldiss's 1976 short story "The Dark Soul of the Night". In Frederik Pohl's 1977 novel Gateway, an astronaut is wracked with survivor's guilt over the deaths of his companions during an encounter with a black hole, compounded by the process appearing to still be ongoing. Later sequels in Pohl's Heechee Saga, from the 1980 novel Beyond the Blue Event Horizon onward, portray time dilation being exploited by aliens who reside near a black hole to experience the passage of time more slowly than the rest of the universe; other aliens do likewise in David Brin's 1984 short story "The Crystal Spheres" while waiting for the universe to be more filled with life. In Alastair Reynolds's 2000 novel Revelation Space, aliens use the relativistic effect to hide. In Bill Johnson's 1982 short story "Meet Me at Apogee", travel to various levels of time dilation is commercialized and used by people with incurable diseases, among others. In the 2014 film Interstellar, a planet orbits a black hole so closely that it experiences extreme time dilation, with time passing approximately 60,000 times slower than on Earth. Space travel Black holes have also been portrayed as ways to travel through space. In particular, they often serve as a means to achieve faster-than-light travel. The proposed mechanism involves travelling through the singularity at the center of a black hole and emerging at some other, perhaps very distant, place in the universe. More exotically, the point of emergence is occasionally portrayed as another point in time—thus enabling time travel—or even an entirely different universe. To explain why the immense gravitational field of the black hole does not crush the travellers and their vessels, the special theorized properties of rotating black holes are sometimes invoked by authors; astrophysicists Steven D. Bloom and Andrew May argue that the strong tidal forces would nevertheless invariably be fatal, May pointing specifically to spaghettification. According to The Encyclopedia of Science Fiction, early stories employing black holes for this purpose tended to use alternative terminology to obfuscate the underlying issues. Thus, Joe Haldeman's 1974 fix-up novel The Forever War, where a network of black holes is used for interstellar warfare, calls them "collapsars", while George R. R. Martin's 1972 short story "The Second Kind of Loneliness" has a "nullspace vortex". Speculation that black holes might be connected to their hypothetical opposites, white holes, followed in the 1970s—the resulting arrangement being known as a wormhole. Wormholes were appealing to writers due to their relative theoretical plausibility as a means of faster-than-light travel, and they were further popularized by speculative works of non-fiction such as Adrian Berry's 1977 book The Iron Sun: Crossing the Universe Through Black Holes. Black holes and associated wormholes thus quickly became commonplace in fiction; according to science fiction scholar Brian Stableford, writing in the 2006 work Science Fact and Science Fiction: An Encyclopedia, "wormholes became the most fashionable mode of interstellar travel in the last decades of the twentieth century". Ian Wallace's 1979 novel Heller's Leap is a murder mystery involving a journey through a black hole. Joan D. 
Vinge's 1980 novel The Snow Queen is set on a circumbinary planet where a black hole between the binary stars serves as the gateway between the system and the outside world, while Paul Preuss's 1980 novel The Gates of Heaven and its 1981 follow-up Re-Entry feature black holes that are used for travel through both space and time. In the 1989 anime film Garaga, human colonization of the cosmos is enabled by interstellar gateways associated with black holes. The entire Earth is transported through a wormhole in Roger MacBride Allen's 1990 novel The Ring of Charon. Travel between universes is depicted in Pohl and Jack Williamson's 1991 novel The Singers of Time, the concept having earlier made a more fanciful appearance in the 1975 film The Giant Spider Invasion, where the spiders of the title arrive at Earth through a black hole. In the 2009 film Star Trek, a black hole created to neutralize a supernova threat has the side-effect of transporting two nearby spaceships into the past, where they end up altering the course of history. In Bolivian science fiction writer Giovanna Rivero's 2012 novel Helena 2022: La vera crónica de un naufragio en el tiempo, a spaceship ends up in 1630s Italy as a result of an accidental encounter with a black hole. Small and large Black holes need not necessarily be stellar-mass; the decisive factor is whether sufficient mass is contained within a small enough space—the Schwarzschild radius. The principal mechanism of black hole formation is the gravitational collapse of a massive star, but other origins have been hypothesized, including so-called primordial black holes forming shortly after the Big Bang. Primordial black holes could theoretically be of virtually any conceivable size, though the smallest ones would by now have evaporated into nothing due to the quantum mechanical effect known as Hawking radiation. The concept of micro black holes was first theorized scientifically in the 1970s, and quickly became popular in science fiction. In Larry Niven's 1974 short story "The Hole Man", a microscopic black hole is used as a murder weapon by exploiting the tidal effects at short range, and in Niven's 1975 short story "The Borderland of Sol", one is used by space pirates to capture spaceships. Small black holes are used to power spaceship propulsion in Arthur C. Clarke's 1975 novel Imperial Earth, Charles Sheffield's 1978 short story "Killing Vector", and the 1997 film Event Horizon. Artificial black holes that are created unintentionally at nuclear facilities appear in Michael McCollum's 1979 short story "Scoop" and Martin Caidin's 1980 novel Star Bright. In David Langford's 1982 novel The Space Eater, a small black hole is used as a weapon against a rebellious planet. Earth is endangered by miniature black holes in Gregory Benford's 1985 novel Artifact, Thomas Thurston Thomas's 1986 novel The Doomsday Effect, and Brin's 1990 novel Earth, and the planet's destruction in this way forms part of the backstory in Dan Simmons's 1989 novel Hyperion, while the Moon's destruction by a small black hole is depicted in Paul J. McAuley's 1990 short story "How We Lost the Moon" and is suspected to have occurred in Neal Stephenson's 2015 novel Seveneves. Small black holes are used as a way to provide an artificial gravity of sorts by placing them inside inhabited structures or settled asteroids in Sheffield's 1989 novel Proteus Unbound, Reynolds's 2008 novel House of Suns, and Iain M. Banks's 2010 novel Surface Detail. 
The titular material in Wil McCarthy's 2000 novel The Collapsium is made up of a lattice of micro black holes and makes teleportation possible. At the opposite end of the spectrum, black holes can have masses comparable to that of an entire galaxy. Supermassive black holes, with masses that can be in excess of billions of times the mass of the Sun, are thought to exist in the center of most galaxies. Sufficiently large and massive black holes would have a low average density and could theoretically contain intact stars and planets within their event horizons. An enormous low-density black hole of this kind appears in Barry N. Malzberg's 1975 novel Galaxies. In Benford's Galactic Center Saga, starting with the 1977 novel In the Ocean of Night, the vicinity of the supermassive black hole at the Galactic Center of the Milky Way makes an attractive destination for spacefaring civilizations due to the high concentration of stars that can serve as sources of energy in the region; a similar use is found for a regular-sized black hole in Benford's 1986 short story "As Big as the Ritz", where its accretion disk provides ample solar energy for a space habitat. McAuley's 1991 novel Eternal Light involves a journey to the central supermassive black hole to investigate a hypervelocity star on a trajectory towards the Solar System. According to The Encyclopedia of Science Fiction, "the immense black hole at the galactic core has become almost a cliché of contemporary space opera" such as Greg Egan's 2008 novel Incandescence. Hazards to spacefarers The pulp-era motif of black holes posing danger to spacefarers resurfaced decades later, following the popularization of black holes in fiction. In the 1975 Space: 1999 episode "Black Sun", one threatens to destroy the Moon as it travels through space; the episode was one of those included in Edwin Charles Tubb's 1975 novelization Breakaway. In Isaac Asimov's 1976 short story "Old-fashioned", astronauts surmise that an unseen object keeping them in orbit must be a modestly-sized black hole, having wreaked havoc with their spaceship through tidal forces. In Edward Bryant's 1976 novel Cinnabar, a computer self-destructs by intentionally entering a black hole. In Mildred Downey Broxon's 1978 short story "Singularity", scientists study a civilization on a planet that will shortly be destroyed by an approaching black hole. John Varley's 1978 short story "The Black Hole Passes" depicts an outpost in the Oort cloud being imperilled by a small black hole. In Stephen Baxter's 1993 short story "Pilot", a spaceship extracts energy from a rotating black hole's ergosphere to widen its event horizon and cause a pursuer to fall into it. Black holes also appear as obstacles in the 2007 video game Super Mario Galaxy. Interior Because what lies beyond the event horizon is unknown and by definition unobservable from outside, authors have been free to employ artistic license when depicting the interiors of black holes. The 1979 film The Black Hole, noted for its inaccurate portrayal of the known properties of black holes,, depicts the inside as an otherworldly place bearing the hallmarks of Christian conceptions of the afterlife. In Benford's 1990 novel Beyond the Fall of Night, a sequel to Clarke's 1948 novel Against the Fall of Night, the inside of a black hole is used as a prison, a role it also serves in Alan Moore and Dave Gibbons's 1985 Superman comic book story "For the Man Who Has Everything". 
Alien lifeforms inhabit the interior of a black hole in McCarthy's 1995 novel Flies from the Amber. Expeditions into black holes to explore the interior are depicted in Geoffrey A. Landis's 1998 short story "Approaching Perimelasma" and Egan's 1998 short story "The Planck Dive". Sentient In much the same way as stars—and, to a lesser extent, planets—have been anthropomorphized as living and thinking beings, so have black holes. An intelligent, talking black hole appears in Varley's 1977 short story "Lollipop and the Tar Baby". In Sheffield's Proteus Unbound, microscopic black holes are determined to contain intelligence through signals emanating from them. In Benford's 2000 novel Eater, a black hole that is sentient as a result of electromagnetic interactions in its accretion disk seeks to devour the Solar System. See also Neutron stars in fiction Stars in fiction Supernovae in fiction Notes References Further reading Fiction
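The claim in the "Small and large" section that sufficiently massive black holes have a low average density follows directly from the Schwarzschild radius. A short worked relation in LaTeX (standard general relativity, not specific to any work discussed here):

```latex
% Schwarzschild radius, and the mean density enclosed within it:
r_s = \frac{2GM}{c^2}, \qquad
\bar{\rho} = \frac{M}{\tfrac{4}{3}\pi r_s^{3}}
           = \frac{3c^6}{32\pi G^3 M^2} \propto \frac{1}{M^2}
```

Since the mean density within the event horizon falls as 1/M^2, a supermassive black hole of order 10^8 solar masses is, on average, no denser than water, which is why such objects could in principle contain intact stars and planets.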
Black holes in fiction
[ "Physics", "Astronomy" ]
3,122
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Density", "Fiction about black holes", "Stellar phenomena", "Astronomical objects" ]
8,580,596
https://en.wikipedia.org/wiki/Phone%20cloning
Phone cloning is the copying of identity from one cellular device to another. AMPS cloning Analogue mobile telephones were notorious for their lack of security. Casual listeners easily heard conversations as plain narrowband FM; eavesdroppers with specialized equipment readily intercepted handset Electronic Serial Numbers (ESN) and Mobile Directory Numbers (MDN or CTN, the Cellular Telephone Number) over the air. The intercepted ESN/MDN pairs would be cloned onto another handset and used in other regions for making calls. Due to widespread fraud, some carriers required a PIN before making calls or used a system of radio fingerprinting to detect the clones. CDMA cloning Code-Division Multiple Access (CDMA) mobile telephone cloning involves gaining access to the device's embedded file system /nvm/num directory via specialized software or placing a modified EEPROM into the target mobile telephone, allowing the Electronic Serial Number (ESN) and/or Mobile Equipment Identifier (MEID) of the mobile phone to be changed. A phone's MEID can typically be displayed by entering *#06# into the dialler. The ESN or MEID is typically transmitted to the cellular company's Mobile Telephone Switching Office (MTSO) in order to authenticate a device onto the mobile network. Modifying these, as well as the phone's Preferred Roaming List (PRL) and the mobile identification number, or MIN, can pave the way for fraudulent calls, as the target telephone is now a clone of the telephone from which the original ESN and MIN data were obtained. GSM cloning GSM cloning occurs by copying a secret key from the victim SIM card, typically not requiring any internal data from the handset (the phone itself). GSM handsets do not have an ESN or MIN, only an International Mobile Equipment Identity (IMEI) number. There are various methods used to obtain the IMEI. The most common method is to eavesdrop on a cellular network. Older GSM SIM cards can be cloned by performing a cryptographic attack against the COMP128 authentication algorithm used by these older SIM cards. By connecting the SIM card to a computer, the authentication procedure can be repeated many times in order to slowly leak information about the secret key. If this procedure is repeated enough times, it is possible to derive the Ki key. Later GSM SIMs have various mitigations built in, either by limiting the number of authentications performed in a power-on session, or by the manufacturer choosing resistant Ki keys. However, if it is known that a resistant key was used, it is possible to speed up the attack by eliminating weak Ki keys from the pool of possible keys. Effectiveness and legislation Phone cloning is outlawed in the United States by the Wireless Telephone Protection Act of 1998, which prohibits "knowingly using, producing, trafficking in, having control or custody of, or possessing hardware or software knowing that it has been configured to insert or modify telecommunication identifying information associated with or contained in a telecommunications instrument so that such instrument may be used to obtain telecommunications service without authorization." The effectiveness of phone cloning is limited. Every mobile phone contains a radio fingerprint in its transmission signal which remains unique to that mobile despite changes to the phone's ESN, IMEI, or MIN. Thus, cellular companies are often able to catch cloned phones when there are discrepancies between the fingerprint and the ESN, IMEI, or MIN. 
See also Dual SIM International Mobile Equipment Identity Subscriber identity module References Fraud Mobile technology
Phone cloning
[ "Technology" ]
740
[]
8,580,973
https://en.wikipedia.org/wiki/Jeremy%20S.%20Heyl
Jeremy Samuel Heyl is an astronomer and a professor at the University of British Columbia's Department of Physics and Astronomy, in Vancouver, British Columbia. He holds a Canada Research Chair in black holes and neutron stars. In the past he was a Goldwater Scholar, a Marshall Scholar and a Chandra Fellow. Heyl is best known for his work in the physics of neutron stars, especially the importance of quantum electrodynamics in radiative transfer, non-radial oscillations during Type-I X-ray bursts, and the cooling of magnetars. He has also made important contributions to our understanding of galaxy formation, evolution and mergers. References External links http://www.phas.ubc.ca/~heyl Canada Research Chair profile Living people 21st-century American astronomers 21st-century Canadian astronomers Canada Research Chairs Academic staff of the University of British Columbia Marshall Scholars Year of birth missing (living people) Alumni of the University of Cambridge
Jeremy S. Heyl
[ "Astronomy" ]
194
[ "Astronomers", "Astronomer stubs", "Astronomy stubs" ]
8,581,435
https://en.wikipedia.org/wiki/Space%20Fellowship
The Space Fellowship is an international news and information network dedicated to the development of the space industry. The organisation reports and communicates space news and information to its community, and works alongside leading space organisations with the goal of bringing space to the general public. Its online news service provides visitors with the latest news and updates from both inside and outside the space community. History In its early days, the Space Fellowship comprised the official X PRIZE Foundation web forum and a separate X PRIZE blog on Google's Blogspot. On 12 July 2004 the X PRIZE Foundation web forum and the X PRIZE blog joined to form the X PRIZE News. On 18 October 2005 the X PRIZE News was renamed the International Space Fellowship. Members Aerospace companies / Teams and prizes having their official forums listed on the Space Fellowship are: Armadillo Aerospace JP Aerospace Micro-Space Masten Space Systems Interorbital Systems Microlaunchers Cambridge University Spaceflight Epsilon vee Team Prometheus N-Prize See also Space advocacy References External links (archived in June 2019; spacefellowship.com now offline with message: "You can visit https://www.nasaspaceflight.com/ for recent Space News") Private spaceflight Commercial spaceflight Space access Space colonization Space organizations Space advocacy organizations Space tourism British news websites Organizations established in 2003
Space Fellowship
[ "Astronomy" ]
279
[ "Space advocacy organizations", "Astronomy organizations", "Space organizations" ]
8,582,679
https://en.wikipedia.org/wiki/PowerColor
PowerColor is a Taiwanese graphics card brand established in 1997 by TUL Corporation (撼訊科技), based in New Taipei, Taiwan. PowerColor maintains office locations in a number of countries, including Taiwan, the Netherlands and the United States. The United States branch is located in City of Industry, California and serves the North and Latin American markets. TUL also has another brand, VTX3D, which serves the European market and some Asian markets. Products PowerColor is a licensed producer of AMD Radeon video cards. The majority of PowerColor cards are manufactured by Foxconn. PowerColor's AMD video cards range from affordable cards appropriate for low-end workstations, to cards for high-end gaming machines, thus catering to a wide range of the market. PowerColor's manufacturing arrangement with Foxconn has given it the ability to change the specifications of cards, allowing them to announce products with higher specifications—overclocked by default—than AMD or its main competitor, Sapphire Technology. PowerColor products have been widely reviewed and have gained a number of awards at computer hardware review sites. Support PowerColor provides a two-year warranty on its products. To return a video card, the end-user must sign in and register their card. The return process is available only to end users in North America, with the customer liable for shipping. See also Diamond Multimedia – for North and South American markets References 1997 establishments in Taiwan Computer companies of Taiwan Computer hardware companies Electronics companies of Taiwan Graphics hardware companies Electronics companies established in 1997 Taiwanese brands Manufacturing companies based in New Taipei
PowerColor
[ "Technology" ]
327
[ "Computer hardware companies", "Computers" ]
8,582,684
https://en.wikipedia.org/wiki/Reward%20system
The reward system (the mesocorticolimbic circuit) is a group of neural structures responsible for incentive salience (i.e., "wanting"; desire or craving for a reward and motivation), associative learning (primarily positive reinforcement and classical conditioning), and positively-valenced emotions, particularly ones involving pleasure as a core component (e.g., joy, euphoria and ecstasy). Reward is the attractive and motivational property of a stimulus that induces appetitive behavior, also known as approach behavior, and consummatory behavior. A rewarding stimulus has been described as "any stimulus, object, event, activity, or situation that has the potential to make us approach and consume it is by definition a reward". In operant conditioning, rewarding stimuli function as positive reinforcers; however, the converse statement also holds true: positive reinforcers are rewarding. The reward system motivates animals to approach stimuli or engage in behaviour that increases fitness (sex, energy-dense foods, etc.). Survival for most animal species depends upon maximizing contact with beneficial stimuli and minimizing contact with harmful stimuli. Reward cognition serves to increase the likelihood of survival and reproduction by causing associative learning, eliciting approach and consummatory behavior, and triggering positively-valenced emotions. Thus, reward is a mechanism that evolved to help increase the adaptive fitness of animals. In drug addiction, certain substances over-activate the reward circuit, leading to compulsive substance-seeking behavior resulting from synaptic plasticity in the circuit. Primary rewards are a class of rewarding stimuli which facilitate the survival of one's self and offspring, and they include homeostatic (e.g., palatable food) and reproductive (e.g., sexual contact and parental investment) rewards. Intrinsic rewards are unconditioned rewards that are attractive and motivate behavior because they are inherently pleasurable. Extrinsic rewards (e.g., money or seeing one's favorite sports team winning a game) are conditioned rewards that are attractive and motivate behavior but are not inherently pleasurable. Extrinsic rewards derive their motivational value as a result of a learned association (i.e., conditioning) with intrinsic rewards. Extrinsic rewards may also elicit pleasure (e.g., euphoria from winning a lot of money in a lottery) after being classically conditioned with intrinsic rewards. Definition In neuroscience, the reward system is a collection of brain structures and neural pathways that are responsible for reward-related cognition, including associative learning (primarily classical conditioning and operant reinforcement), incentive salience (i.e., motivation and "wanting", desire, or craving for a reward), and positively-valenced emotions, particularly emotions that involve pleasure (i.e., hedonic "liking"). Reward-related activities, such as feeding, exercise, sex, substance use, and social interaction, elevate levels of dopamine, ultimately altering the central nervous system (CNS). Dopamine is a chemical messenger that plays a role in regulating mood, motivation, reward, and pleasure. Terms that are commonly used to describe behavior related to the "wanting" or desire component of reward include appetitive behavior, approach behavior, preparatory behavior, instrumental behavior, anticipatory behavior, and seeking. 
Terms that are commonly used to describe behavior related to the "liking" or pleasure component of reward include consummatory behavior and taking behavior. The three primary functions of rewards are their capacity to: produce associative learning (i.e., classical conditioning and operant reinforcement); affect decision-making and induce approach behavior (via the assignment of motivational salience to rewarding stimuli); elicit positively-valenced emotions, particularly pleasure. Neuroanatomy Overview The brain structures that compose the reward system are located primarily within the cortico-basal ganglia-thalamo-cortical loop; the basal ganglia portion of the loop drives activity within the reward system. Most of the pathways that connect structures within the reward system are glutamatergic interneurons, GABAergic medium spiny neurons (MSNs), and dopaminergic projection neurons, although other types of projection neurons contribute (e.g., orexinergic projection neurons). The reward system includes the ventral tegmental area, ventral striatum (i.e., the nucleus accumbens and olfactory tubercle), dorsal striatum (i.e., the caudate nucleus and putamen), substantia nigra (i.e., the pars compacta and pars reticulata), prefrontal cortex, anterior cingulate cortex, insular cortex, hippocampus, hypothalamus (particularly, the orexinergic nucleus in the lateral hypothalamus), thalamus (multiple nuclei), subthalamic nucleus, globus pallidus (both external and internal), ventral pallidum, parabrachial nucleus, amygdala, and the remainder of the extended amygdala. The dorsal raphe nucleus and cerebellum appear to modulate some forms of reward-related cognition (i.e., associative learning, motivational salience, and positive emotions) and behaviors as well. The laterodorsal tegmental nucleus (LDT), pedunculopontine nucleus (PPTg), and lateral habenula (LHb) (both directly and indirectly via the rostromedial tegmental nucleus (RMTg)) are also capable of inducing aversive salience and incentive salience through their projections to the ventral tegmental area (VTA). The LDT and PPTg both send glutamatergic projections to the VTA that synapse on dopaminergic neurons, both of which can produce incentive salience. The LHb sends glutamatergic projections, the majority of which synapse on GABAergic RMTg neurons that in turn drive inhibition of dopaminergic VTA neurons, although some LHb projections terminate on VTA interneurons. These LHb projections are activated both by aversive stimuli and by the absence of an expected reward, and excitation of the LHb can induce aversion. Most of the dopamine pathways (i.e., neurons that use the neurotransmitter dopamine to communicate with other neurons) that project out of the ventral tegmental area are part of the reward system; in these pathways, dopamine acts on D1-like receptors or D2-like receptors to either stimulate (D1-like) or inhibit (D2-like) the production of cAMP. The GABAergic medium spiny neurons of the striatum are components of the reward system as well. The glutamatergic projection nuclei in the subthalamic nucleus, prefrontal cortex, hippocampus, thalamus, and amygdala connect to other parts of the reward system via glutamate pathways. The medial forebrain bundle, which is a set of many neural pathways that mediate brain stimulation reward (i.e., reward derived from direct electrochemical stimulation of the lateral hypothalamus), is also a component of the reward system. 
Two theories exist with regard to the activity of the nucleus accumbens and the generation of liking and wanting. The inhibition (or hyperpolarization) hypothesis proposes that the nucleus accumbens exerts tonic inhibitory effects on downstream structures such as the ventral pallidum, hypothalamus or ventral tegmental area, and that when the nucleus accumbens (NAcc) is itself inhibited, these structures are excited, "releasing" reward-related behavior. While GABA receptor agonists are capable of eliciting both "liking" and "wanting" reactions in the nucleus accumbens, glutamatergic inputs from the basolateral amygdala, ventral hippocampus, and medial prefrontal cortex can drive incentive salience. Furthermore, while most studies find that NAcc neurons reduce firing in response to reward, a number of studies find the opposite response. This has led to the proposal of the disinhibition (or depolarization) hypothesis, which proposes that excitation of NAcc neurons, or at least certain subsets, drives reward-related behavior. After nearly 50 years of research on brain-stimulation reward, experts have established that dozens of sites in the brain will maintain intracranial self-stimulation. Regions include the lateral hypothalamus and medial forebrain bundles, which are especially effective. Stimulation there activates fibers that form the ascending pathways; the ascending pathways include the mesolimbic dopamine pathway, which projects from the ventral tegmental area to the nucleus accumbens. There are several explanations as to why the mesolimbic dopamine pathway is central to circuits mediating reward. First, there is a marked increase in dopamine release from the mesolimbic pathway when animals engage in intracranial self-stimulation. Second, experiments consistently indicate that brain-stimulation reward stimulates the reinforcement of pathways that are normally activated by natural rewards, and drug reward or intracranial self-stimulation can exert more powerful activation of central reward mechanisms because they activate the reward center directly rather than through the peripheral nerves. Third, when animals are administered addictive drugs or engage in naturally rewarding behaviors, such as feeding or sexual activity, there is a marked release of dopamine within the nucleus accumbens. However, dopamine is not the only reward compound in the brain. Key pathway Ventral tegmental area The ventral tegmental area (VTA) is important in responding to stimuli and cues that indicate a reward is present. Rewarding stimuli (and all addictive drugs) act on the circuit by triggering the VTA to release dopamine signals to the nucleus accumbens, either directly or indirectly. The VTA has two important pathways: the mesolimbic pathway, projecting to limbic (striatal) regions and underpinning motivational behaviors and processes, and the mesocortical pathway, projecting to the prefrontal cortex and underpinning cognitive functions, such as learning external cues. Dopaminergic neurons in this region convert the amino acid tyrosine into DOPA using the enzyme tyrosine hydroxylase, which is then converted to dopamine using the enzyme DOPA decarboxylase. Striatum (Nucleus Accumbens) The striatum is broadly involved in acquiring and eliciting learned behaviors in response to a rewarding cue. The VTA projects to the striatum, and activates the GABAergic medium spiny neurons via D1 and D2 receptors within the ventral (nucleus accumbens) and dorsal striatum. 
The ventral striatum (the nucleus accumbens) is broadly involved in acquiring behavior when fed into by the VTA, and in eliciting behavior when fed into by the PFC. The NAc shell projects to the pallidum and the VTA, regulating limbic and autonomic functions. This modulates the reinforcing properties of stimuli and the short-term aspects of reward. The NAc core projects to the substantia nigra and is involved in the development of reward-seeking behaviors and their expression. It is involved in spatial learning, conditional response, and impulsive choice: the long-term elements of reward. The dorsal striatum is involved in learning, the dorsal medial striatum in goal-directed learning, and the dorsal lateral striatum in stimulus-response learning foundational to the Pavlovian response. On repeated activation by a stimulus, the nucleus accumbens can activate the dorsal striatum via an intrastriatal loop. The transition of signals from the NAc to the DS allows reward-associated cues to activate the DS without the reward itself being present. This can activate cravings and reward-seeking behaviors (and is responsible for triggering relapse during abstinence in addiction). Prefrontal Cortex The VTA dopaminergic neurons project to the PFC, activating glutamatergic neurons that project to multiple other regions, including the dorsal striatum and NAc, ultimately allowing the PFC to mediate salience and conditional behaviors in response to stimuli. Notably, abstinence from addictive drugs activates the PFC glutamatergic projection to the NAc, which leads to strong cravings and modulates reinstatement of addiction behaviors resulting from abstinence. The PFC also interacts with the VTA through the mesocortical pathway, and helps associate environmental cues with the reward. There are several parts of the brain related to the prefrontal cortex that help with decision-making in different ways. The dACC (dorsal anterior cingulate cortex) tracks effort, conflict, and mistakes. The vmPFC (ventromedial prefrontal cortex) focuses on what feels rewarding and helps make choices based on personal preferences. The OFC (orbitofrontal cortex) evaluates options and predicts their outcomes to guide decisions. Together, they work with dopamine signals to process rewards and actions. Hippocampus The hippocampus has multiple functions, including the creation and storage of memories. In the reward circuit, it serves to store contextual memories and associated cues. It ultimately underpins the reinstatement of reward-seeking behaviors via cues and contextual triggers. Amygdala The amygdala (AMY) receives input from the VTA and outputs to the NAc. The amygdala is important in creating powerful emotional flashbulb memories, and likely underpins the creation of strong cue-associated memories. It is also important in mediating the anxiety effects of withdrawal, and increased drug intake in addiction. Pleasure centers Pleasure is a component of reward, but not all rewards are pleasurable (e.g., money does not elicit pleasure unless this response is conditioned). Stimuli that are naturally pleasurable, and therefore attractive, are known as intrinsic rewards, whereas stimuli that are attractive and motivate approach behavior, but are not inherently pleasurable, are termed extrinsic rewards. Extrinsic rewards (e.g., money) are rewarding as a result of a learned association with an intrinsic reward. In other words, extrinsic rewards function as motivational magnets that elicit "wanting", but not "liking", reactions once they have been acquired.
The reward system contains hedonic hotspots – i.e., brain structures that mediate pleasure or "liking" reactions from intrinsic rewards. Hedonic hotspots have been identified in subcompartments within the nucleus accumbens shell, ventral pallidum, parabrachial nucleus, orbitofrontal cortex (OFC), and insular cortex. The hotspot within the nucleus accumbens shell is located in the rostrodorsal quadrant of the medial shell, while the hedonic coldspot is located in a more posterior region. The posterior ventral pallidum also contains a hedonic hotspot, while the anterior ventral pallidum contains a hedonic coldspot. In rats, microinjections of opioids, endocannabinoids, and orexin are capable of enhancing liking reactions in these hotspots. The hedonic hotspots located in the anterior OFC and posterior insula have been demonstrated to respond to orexin and opioids in rats, as has the overlapping hedonic coldspot in the anterior insula and posterior OFC. On the other hand, the parabrachial nucleus hotspot has only been demonstrated to respond to benzodiazepine receptor agonists. Hedonic hotspots are functionally linked, in that activation of one hotspot results in the recruitment of the others, as indexed by the induced expression of c-Fos, an immediate early gene. Furthermore, inhibition of one hotspot results in the blunting of the effects of activating another hotspot. Therefore, the simultaneous activation of every hedonic hotspot within the reward system is believed to be necessary for generating the sensation of an intense euphoria. Wanting and liking Incentive salience is the "wanting" or "desire" attribute, which includes a motivational component, that is assigned to a rewarding stimulus by the nucleus accumbens shell (NAcc shell). The degree of dopamine neurotransmission into the NAcc shell from the mesolimbic pathway is highly correlated with the magnitude of incentive salience for rewarding stimuli. Activation of the dorsorostral region of the nucleus accumbens correlates with increases in wanting without concurrent increases in liking. However, dopaminergic neurotransmission into the nucleus accumbens shell is responsible not only for appetitive motivational salience (i.e., incentive salience) towards rewarding stimuli, but also for aversive motivational salience, which directs behavior away from undesirable stimuli. In the dorsal striatum, activation of D1-expressing MSNs produces appetitive incentive salience, while activation of D2-expressing MSNs produces aversion. In the NAcc, such a dichotomy is not as clear-cut, and activation of both D1 and D2 MSNs is sufficient to enhance motivation, likely via disinhibiting the VTA through inhibiting the ventral pallidum. Robinson and Berridge's 1993 incentive-sensitization theory proposed that reward contains separable psychological components: wanting (incentive) and liking (pleasure). To explain increasing contact with a certain stimulus such as chocolate, there are two independent factors at work – our desire to have the chocolate (wanting) and the pleasure effect of the chocolate (liking). According to Robinson and Berridge, wanting and liking are two aspects of the same process, so rewards are usually wanted and liked to the same degree. However, wanting and liking also change independently under certain circumstances. For example, rats that do not eat after their dopamine systems have been disrupted (experiencing a loss of desire for food) act as though they still like food.
In another example, activated self-stimulation electrodes in the lateral hypothalamus of rats increase appetite, but also cause more adverse reactions to tastes such as sugar and salt; apparently, the stimulation increases wanting but not liking. Such results demonstrate that the reward system of rats includes independent processes of wanting and liking. The wanting component is thought to be controlled by dopaminergic pathways, whereas the liking component is thought to be controlled by opiate-GABA-endocannabinoid systems. Anti-reward system Koob and Le Moal proposed that there exists a separate circuit responsible for the attenuation of reward-pursuing behavior, which they termed the anti-reward circuit. This component acts as a brake on the reward circuit, thus preventing the excessive pursuit of food, sex, etc. This circuit involves multiple parts of the amygdala (the bed nucleus of the stria terminalis, the central nucleus), the nucleus accumbens, and signal molecules including norepinephrine, corticotropin-releasing factor, and dynorphin. This circuit is also hypothesized to mediate the unpleasant components of stress, and is thus thought to be involved in addiction and withdrawal. While the reward circuit mediates the initial positive reinforcement involved in the development of addiction, it is the anti-reward circuit that later dominates, via negative reinforcement, in motivating the pursuit of the rewarding stimuli. Learning Rewarding stimuli can drive learning in the form of both classical conditioning (Pavlovian conditioning) and operant conditioning (instrumental conditioning). In classical conditioning, a reward can act as an unconditioned stimulus that, when associated with the conditioned stimulus, causes the conditioned stimulus to elicit both musculoskeletal (in the form of simple approach and avoidance behaviors) and vegetative responses. In operant conditioning, a reward may act as a reinforcer in that it increases or supports actions that lead to itself. Learned behaviors may or may not be sensitive to the value of the outcomes they lead to; behaviors that are sensitive to the contingency of an outcome on the performance of an action as well as the outcome value are goal-directed, while elicited actions that are insensitive to contingency or value are called habits. This distinction is thought to reflect two forms of learning, model-free and model-based; a minimal model-free update is sketched after this passage. Model-free learning involves the simple caching and updating of values. In contrast, model-based learning involves the storage and construction of an internal model of events that allows inference and flexible prediction. Although Pavlovian conditioning is generally assumed to be model-free, the incentive salience assigned to a conditioned stimulus is flexible with regard to changes in internal motivational states. Distinct neural systems are responsible for learning associations between stimuli and outcomes, actions and outcomes, and stimuli and responses. Although classical conditioning is not limited to the reward system, the enhancement of instrumental performance by stimuli (i.e., Pavlovian-instrumental transfer) requires the nucleus accumbens. Habitual and goal-directed instrumental learning are dependent upon the lateral striatum and the medial striatum, respectively. During instrumental learning, opposing changes in the ratio of AMPA to NMDA receptors and phosphorylated ERK occur in the D1-type and D2-type MSNs that constitute the direct and indirect pathways, respectively.
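To make the model-free idea concrete, the sketch below implements a bare-bones temporal-difference value update, the textbook example of "simple caching and updating of values". It is purely illustrative: the state names, parameters, and reward schedule are hypothetical teaching devices, not drawn from the article or from any specific neural data.

```python
# Minimal, illustrative model-free (temporal-difference) value update.
# All names and numbers here are hypothetical.

alpha = 0.1   # learning rate: how fast cached values are revised
gamma = 0.9   # discount factor: weight given to the value of the next state

values = {"cue": 0.0, "reward_state": 0.0}  # cached value per state

def td_update(state, next_state, reward):
    """Nudge the cached value of `state` toward reward + discounted next value."""
    target = reward + gamma * values[next_state]
    prediction_error = target - values[state]  # mismatch between expectation and outcome
    values[state] += alpha * prediction_error
    return prediction_error

# A cue is repeatedly followed by a rewarded state; only cached numbers are
# stored, with no internal model of the task structure (the model-free signature).
for _ in range(100):
    td_update("cue", "reward_state", reward=0.0)   # the cue itself pays nothing
    td_update("reward_state", "cue", reward=1.0)   # the outcome state is rewarded

print(values)  # the cue has acquired value purely through cached updates
```

A model-based learner, by contrast, would store the transition structure itself and recompute values by simulation, which is what gives it the flexible prediction described above.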
Such changes in synaptic plasticity, and the accompanying learning, are dependent upon activation of striatal D1 and NMDA receptors. The intracellular cascade activated by D1 receptors involves the recruitment of protein kinase A and, through the resulting phosphorylation of DARPP-32, the inhibition of phosphatases that deactivate ERK. NMDA receptors activate ERK through a different but interrelated Ras-Raf-MEK-ERK pathway. Alone, NMDA-mediated activation of ERK is self-limited, as NMDA activation also inhibits the PKA-mediated inhibition of ERK-deactivating phosphatases. However, when the D1 and NMDA cascades are co-activated, they work synergistically, and the resultant activation of ERK regulates synaptic plasticity in the form of spine restructuring, transport of AMPA receptors, regulation of CREB, and increased cellular excitability via inhibition of Kv4.2. Disorders Addiction ΔFosB (DeltaFosB) – a gene transcription factor – overexpression in the D1-type medium spiny neurons of the nucleus accumbens is the crucial common factor among virtually all forms of addiction (i.e., behavioral addictions and drug addictions) that induces addiction-related behavior and neural plasticity. In particular, ΔFosB promotes self-administration, reward sensitization, and reward cross-sensitization effects among specific addictive drugs and behaviors. Certain epigenetic modifications of histone protein tails (i.e., histone modifications) in specific regions of the brain are also known to play a crucial role in the molecular basis of addictions. Addictive drugs and behaviors are rewarding and reinforcing (i.e., are addictive) due to their effects on the dopamine reward pathway. The lateral hypothalamus and medial forebrain bundle have been the most frequently studied brain-stimulation reward sites, particularly in studies of the effects of drugs on brain stimulation reward. The neurotransmitter system that has been most clearly identified with the habit-forming actions of drugs of abuse is the mesolimbic dopamine system, with its efferent targets in the nucleus accumbens and its local GABAergic afferents. The reward-relevant actions of amphetamine and cocaine are in the dopaminergic synapses of the nucleus accumbens and perhaps the medial prefrontal cortex. Rats also learn to lever-press for cocaine injections into the medial prefrontal cortex, which works by increasing dopamine turnover in the nucleus accumbens. Nicotine infused directly into the nucleus accumbens also enhances local dopamine release, presumably by a presynaptic action on the dopaminergic terminals of this region. Nicotinic receptors localize to dopaminergic cell bodies, and local nicotine injections increase dopaminergic cell firing that is critical for nicotinic reward. Some additional habit-forming drugs are also likely to decrease the output of medium spiny neurons as a consequence, despite activating dopaminergic projections. For opiates, the lowest-threshold site for reward effects involves actions on GABAergic neurons in the ventral tegmental area, with a secondary site of opiate-rewarding actions on the medium spiny output neurons of the nucleus accumbens. Thus the following form the core of the currently characterised drug-reward circuitry: GABAergic afferents to the mesolimbic dopamine neurons (the primary substrate of opiate reward), the mesolimbic dopamine neurons themselves (the primary substrate of psychomotor stimulant reward), and GABAergic efferents to the mesolimbic dopamine neurons (a secondary site of opiate reward).
Motivation Dysfunctional motivational salience appears in a number of psychiatric symptoms and disorders. Anhedonia, traditionally defined as a reduced capacity to feel pleasure, has been re-examined as reflecting blunted incentive salience, as most anhedonic populations exhibit intact "liking". On the other end of the spectrum, heightened incentive salience that is narrowed to specific stimuli is characteristic of behavioral and drug addictions. In the case of fear or paranoia, dysfunction may lie in elevated aversive salience. In the modern literature, anhedonia is associated with two proposed forms of pleasure, "anticipatory" and "consummatory". Neuroimaging studies across diagnoses associated with anhedonia have reported reduced activity in the OFC and ventral striatum. One meta-analysis reported that anhedonia was associated with reduced neural response to reward anticipation in the caudate nucleus, putamen, nucleus accumbens and medial prefrontal cortex (mPFC). Mood disorders Certain types of depression are associated with reduced motivation, as assessed by willingness to expend effort for reward. These abnormalities have been tentatively linked to reduced activity in areas of the striatum, and while dopaminergic abnormalities are hypothesized to play a role, most studies probing dopamine function in depression have reported inconsistent results. Although postmortem and neuroimaging studies have found abnormalities in numerous regions of the reward system, few findings are consistently replicated. Some studies have reported reduced NAcc, hippocampus, medial prefrontal cortex (mPFC), and orbitofrontal cortex (OFC) activity, as well as elevated basolateral amygdala and subgenual cingulate cortex (sgACC) activity during tasks related to reward or positive stimuli. These neuroimaging abnormalities are complemented by only a small body of postmortem research, but what little has been done suggests reduced excitatory synapses in the mPFC. Reduced activity in the mPFC during reward-related tasks appears to be localized to more dorsal regions (i.e., the pregenual cingulate cortex), while the more ventral sgACC is hyperactive in depression. Attempts to investigate the underlying neural circuitry in animal models have also yielded conflicting results. Two paradigms are commonly used to simulate depression, chronic social defeat stress (CSDS) and chronic mild stress (CMS), although many others exist. CSDS produces reduced preference for sucrose, reduced social interactions, and increased immobility in the forced swim test. CMS similarly reduces sucrose preference, and induces behavioral despair as assessed by the tail suspension and forced swim tests. Animals susceptible to CSDS exhibit increased phasic VTA firing, and inhibition of VTA-NAcc projections attenuates the behavioral deficits induced by CSDS. However, inhibition of VTA-mPFC projections exacerbates social withdrawal. On the other hand, CMS-associated reductions in sucrose preference and immobility were attenuated and exacerbated by VTA excitation and inhibition, respectively. Although these differences may be attributable to different stimulation protocols or poor translational paradigms, the variable results may also lie in the heterogeneous functionality of reward-related regions. Optogenetic stimulation of the mPFC as a whole produces antidepressant effects. This effect appears localized to the rodent homologue of the pgACC (the prelimbic cortex), as stimulation of the rodent homologue of the sgACC (the infralimbic cortex) produces no behavioral effects.
Furthermore, deep brain stimulation of the infralimbic cortex, which is thought to have an inhibitory effect, also produces an antidepressant effect. This finding is congruent with the observation that pharmacological inhibition of the infralimbic cortex attenuates depressive behaviors. Schizophrenia Schizophrenia is associated with deficits in motivation, commonly grouped under other negative symptoms such as reduced spontaneous speech. The experience of "liking" is frequently reported to be intact, both behaviorally and neurally, although results may be specific to certain stimuli, such as monetary rewards. Furthermore, implicit learning and simple reward-related tasks are also intact in schizophrenia. Rather, deficits in the reward system are apparent during reward-related tasks that are cognitively complex. These deficits are associated with both abnormal striatal and OFC activity, as well as with abnormalities in regions associated with cognitive functions, such as the dorsolateral prefrontal cortex (DLPFC). Attention deficit hyperactivity disorder In those with ADHD, core aspects of the reward system are underactive, making it challenging to derive reward from regular activities. Those with the disorder experience a boost of motivation after a high-stimulation behaviour triggers a release of dopamine. In the aftermath of that boost and reward, the return to baseline levels results in an immediate drop in motivation, and there is a higher risk of a noticeable subsequent decline. People with more ADHD-related behaviors show weaker brain responses to reward anticipation (not reward delivery), especially in the nucleus accumbens. Research shows that in those who have ADHD, monetary rewards trigger the strongest brain activity, while verbal feedback triggers the least. Impairments of dopaminergic and noradrenergic function are considered key factors in ADHD; these impairments can lead to executive dysfunction, such as dysregulation of reward processing, and to motivational dysfunction, including anhedonia. History The first clue to the presence of a reward system in the brain came with an accidental discovery by James Olds and Peter Milner in 1954. While trying to teach rats to solve problems and run mazes, they discovered that the rats would perform behaviors, such as pressing a bar, to administer a brief burst of electrical stimulation to specific sites in their brains; stimulation of those regions seemed to give pleasure to the animals. This phenomenon is called intracranial self-stimulation or brain stimulation reward. Typically, rats will press a lever hundreds or thousands of times per hour to obtain this brain stimulation, stopping only when they are exhausted. The explanation for why animals engage in a behavior that has no value to the survival of either themselves or their species is that the brain stimulation activates the system underlying reward. In this fundamental work, Olds and Milner found that low-voltage electrical stimulation of certain regions of the rat brain acted as a reward in teaching the animals to run mazes and solve problems; it seemed that stimulation of those parts of the brain gave the animals pleasure, and in later work humans similarly reported pleasurable sensations from such stimulation.
When rats were tested in Skinner boxes where they could stimulate the reward system by pressing a lever, the rats pressed for hours. Research over the next two decades established that dopamine is one of the main chemicals aiding neural signaling in these regions, and dopamine was suggested to be the brain's "pleasure chemical". Ivan Pavlov was a physiologist who used the reward system to study classical conditioning. Pavlov used the reward system by rewarding dogs with food after they had heard a bell or another stimulus, so that the dogs associated the food, the reward, with the bell, the stimulus. Edward L. Thorndike used the reward system to study operant conditioning. He began by putting cats in a puzzle box and placing food outside of the box, so that the cats wanted to escape. The cats worked to get out of the puzzle box to get to the food. Although the cats ate the food after they escaped the box, Thorndike learned that the cats also attempted to escape the box without the reward of food. Thorndike used the rewards of food and freedom to stimulate the reward system of the cats, and used this to study how the cats learned to escape the box. More recently, Ivan De Araujo and colleagues used nutrients inside the gut to stimulate the reward system via the vagus nerve. Other species Animals quickly learn to press a bar to obtain an injection of opiates directly into the midbrain tegmentum or the nucleus accumbens. The same animals do not work to obtain the opiates if the dopaminergic neurons of the mesolimbic pathway are inactivated. From this perspective, animals, like humans, engage in behaviors that increase dopamine release. Kent Berridge, a researcher in affective neuroscience, found that sweet (liked) and bitter (disliked) tastes produced distinct orofacial expressions, and that these expressions were similarly displayed by human newborns, orangutans, and rats. This was evidence that pleasure (specifically, liking) has objective features and is essentially the same across various animal species. Most neuroscience studies have shown that the more dopamine released by the reward, the more effective the reward is. This is called the hedonic impact, which can be changed by the effort for the reward and the reward itself. Berridge discovered that blocking dopamine systems did not seem to change the positive reaction to something sweet (as measured by facial expression); in other words, the hedonic impact did not change based on the amount of sugar. This discounted the conventional assumption that dopamine mediates pleasure, and even with more-intense dopamine alterations, the data seemed to remain constant. However, a clinical study from January 2019 that assessed the effect of a dopamine precursor (levodopa), an antagonist (risperidone), and a placebo on reward responses to music – including the degree of pleasure experienced during musical chills, as measured by changes in electrodermal activity as well as subjective ratings – found that the manipulation of dopamine neurotransmission bidirectionally regulates pleasure cognition (specifically, the hedonic impact of music) in human subjects. This research demonstrated that increased dopamine neurotransmission acts as a sine qua non condition for pleasurable hedonic reactions to music in humans. Berridge developed the incentive salience hypothesis to address the wanting aspect of rewards.
It explains the compulsive use of drugs by drug addicts even when the drug no longer produces euphoria, and the cravings experienced even after the individual has finished going through withdrawal. Some addicts respond strongly to certain drug-related stimuli because of neural changes caused by the drugs; this sensitization heightens "wanting" reactions even when "liking" reactions do not increase. Because reward systems are so prominent, human and animal brains and behaviors undergo similar changes with respect to them. See also References External links Scholarpedia Reward Scholarpedia Reward signals Addiction Cognitive neuroscience Behavioral neuroscience Behaviorism Behavior modification Dopamine Motivation Neuroanatomy Neuropsychology
Reward system
[ "Biology" ]
7,636
[ "Behavior", "Behavioral neuroscience", "Motivation", "Behavior modification", "Behavioural sciences", "Behaviorism", "Ethology", "Human behavior" ]
8,584,125
https://en.wikipedia.org/wiki/Anthropic%20rock
Anthropic rock is rock that is made, modified and moved by humans. Concrete is the most widely known example of this. The new category has been proposed to recognise that human-made rocks are likely to last for long periods of Earth's future geological time, and will be important in humanity's long-term future. History Historically, anthropogenic lithogenesis is a new event or process on Earth. For millennia humans dug and built only with natural rock. In 1998, archaeologists reported that artificial rock was made in ancient Mesopotamia. The ancient Romans developed and widely used concrete, much of which is intact today. British Victorians were very familiar with the durable mock-rock surface formations used in public parks, constructed of Pulhamite and Coade stone. Concrete, as we know it today, dates from the development of modern cement in 1756. Classification and theory The US geologist James Ross Underwood Jr. advocated a fourth class of rocks to be added to Earth and planetary materials studies, supplementing geology's long-identified igneous, sedimentary and metamorphic groups. His practical proposal for an "anthropic rocks" category recognizes the pervading spread of humankind and its industrial products. Future NASA and others have offered many settlement proposals that entail the use by astronauts of in-situ resources of the Moon and Mars. The relatively inert nature of rocks has been exploited in many methods to immobilize chemical and/or radioactive wastes; the Australian researcher A.E. Ringwood developed a titanate ceramic called Synroc, his acronym for "synthetic rock". D.J. Sheppard proposed that Sun-orbiting space colonies and interplanetary and interstellar spaceships ought to be manufactured of concrete. There have also been proposals for deep-diving submarines constructed of concrete, and for concrete ships. Alan Weisman in The World Without Us (2007) noted that anthropic rocks of all kinds, among other artifacts, will exist far into our planet's future even should our species disappear "tomorrow". Environmental impact Climate experts at COP27 called for a reduction of greenhouse gas (GHG) emissions from the three construction-sector industries, including the concrete industry, because concrete is responsible for over seven percent of the world's carbon emissions. It is estimated that one ton of cement produces one ton of carbon dioxide, although modernized factories have found ways to reduce these emissions. The journal Nature estimated that the concrete industry was responsible for nine percent of all industrial water withdrawals, and that by 2025, most of the water withdrawals for concrete production will be in geographical areas that already face water stress. The rapid urbanisation of the past century has resulted in drastic biodiversity loss, as animals, plants and fungi have found themselves and their ecosystems smothered under tonnes of concrete. As much as 80 percent of urban space is covered by pavement or buildings, leaving little land for green spaces. See also Anthropocene References Rocks Building materials Artificial stone
Anthropic rock
[ "Physics", "Engineering" ]
607
[ "Building engineering", "Architecture", "Construction", "Materials", "Physical objects", "Rocks", "Matter", "Building materials" ]
8,585,407
https://en.wikipedia.org/wiki/Flood-meadow
A flood-meadow (or floodmeadow) is an area of grassland or pasture beside a river, subject to seasonal flooding. Flood-meadows are distinct from water-meadows in that the latter are artificially created and maintained, with flooding controlled on a seasonal and even daily basis. Examples Austria: Hohenau an der March Bosnia and Herzegovina: List of karst polje in Bosnia and Herzegovina Estonia: Emajõe flood-meadow Kasari, Matsalu National Park Finland: Mattholmsfladan, Pargas Levänluhta, Isokyrö Ireland: Shannon Callows United Kingdom: Angel & Greyhound Meadow, Oxford Christchurch Meadows, Reading Christ Church Meadow, Oxford Mill Meadows, Henley-on-Thames Port Meadow, Oxford Mottey Meadows, Staffordshire Riverside Park, St Neots, Cambridgeshire References See also Coastal plain Field Flooded grasslands and savannas Plain Prairie Riparian zone Wet meadow Floodplain Berm Riparian zone Meadows Rivers Environmental terminology Water and the environment
Flood-meadow
[ "Environmental_science" ]
200
[ "Riparian zone", "Hydrology", "Hydrology stubs" ]
8,586,502
https://en.wikipedia.org/wiki/Enrolled%20actuary
An enrolled actuary is an actuary enrolled by the Joint Board for the Enrollment of Actuaries under the Employee Retirement Income Security Act of 1974 (ERISA). Enrolled actuaries, under regulations of the Department of the Treasury and the Department of Labor, perform a variety of tasks with respect to pension plans in the United States under ERISA. As of August, 2024, there were approximately 3,400 enrolled actuaries. Qualifications The Joint Board for the Enrollment of Actuaries administers two examinations to prospective enrolled actuaries. Once the two examinations have been passed, and an individual has also obtained sufficient relevant professional experience, that individual becomes an enrolled actuary. The first exam (EA-1) tests basic knowledge of the mathematics of compound interest, the mathematics of life contingencies, and practical demographic analysis. The second (EA-2) examination consists of two segments, which are offered during separate exam sittings in either the fall or the spring. Segment F covers the selection of actuarial assumptions, actuarial cost methods, and the calculation of minimum (required) and maximum (tax-deductible) contributions to pension plans. Segment L tests knowledge of relevant federal pension laws (in particular, the provisions of ERISA) as they affect pension actuarial practice. Employers Enrolled actuaries generally work for human resource consulting firms, investment and insurance brokers, accounting firms, government organizations, and law firms. Some firms that employ enrolled actuaries combine two or more of these practice specialties. Organizations Many enrolled actuaries belong to one or more of the following organizations: the Society of Actuaries, the American Academy of Actuaries. the Conference of Consulting Actuaries or the American Society of Pension Professionals & Actuaries. Notes and references External links Joint Board for the Enrollment of Actuaries Actuarial science Actuary Employee Retirement Income Security Act of 1974
Enrolled actuary
[ "Mathematics" ]
379
[ "Applied mathematics", "Actuarial science" ]
8,586,700
https://en.wikipedia.org/wiki/Joint%20Board%20for%20the%20Enrollment%20of%20Actuaries
The Joint Board for the Enrollment of Actuaries licenses actuaries to perform a variety of actuarial tasks required of pension plans in the United States by the Employee Retirement Income Security Act of 1974 (ERISA). The Joint Board consists of five members – three appointed by the Secretary of the Treasury and two by the Secretary of Labor – as well as a sixth non-voting member representing the Pension Benefit Guaranty Corporation. The Joint Board administers two examinations to prospective Enrolled Actuaries. After an individual passes the two exams and completes sufficient relevant professional experience, she or he becomes an Enrolled Actuary. See also Title 20 of the Code of Federal Regulations Sources Joint Board for the Enrollment of Actuaries Employee Retirement Income Security Act of 1974 Pension Benefit Guaranty Corporation United States Department of the Treasury United States Department of Labor Actuarial science
Joint Board for the Enrollment of Actuaries
[ "Mathematics" ]
170
[ "Applied mathematics", "Actuarial science" ]
8,587,112
https://en.wikipedia.org/wiki/David%20Lary
David J. Lary (born 7 December 1965) is a British-American atmospheric scientist interested in applying computational and information systems to facilitate discovery and decision support in Earth system science. His main contributions have been to highlight the role of carbonaceous aerosols in atmospheric chemistry and of heterogeneous bromine reactions, to employ chemical data assimilation for satellite validation, and to use machine learning for remote sensing applications. He is the author of AutoChem, NASA release software that constitutes an automatic computer code generator and documentor for chemically reactive systems. It was designed primarily for modeling atmospheric chemistry and, in particular, for chemical data assimilation. He is the author of more than 200 publications that have received more than 6,000 citations. AutoChem has won five NASA awards and has been used to perform long-term chemical data assimilation of atmospheric chemistry and in the validation of observations from the NASA Aura satellite. It has been used in numerous peer-reviewed articles. Lary completed his education in the United Kingdom. He received a first-class double honors BSc in physics and chemistry from King's College London (1987) with the Sambrooke Exhibition Prize in Natural Science, and a PhD in atmospheric chemistry from the University of Cambridge, Department of Chemistry, while at Churchill College (1991). His thesis described the first chemical scheme for the ECMWF numerical weather prediction model. He then held post-doctoral research assistant and associate positions at the University of Cambridge until receiving a Royal Society research fellowship in 1996 (also at Cambridge). From 1998 to 2000 he held a joint position at Cambridge and the University of Tel-Aviv as a senior lecturer and Alon fellow. In 2001 he joined NASA/UMBC/GEST as the first distinguished Goddard fellow in earth science. Between 2001 and 2010 he was part of various branches at NASA Goddard Space Flight Center, including the Global Modeling and Assimilation Office, the Atmospheric Chemistry and Dynamics Branch, the Software Integration and Visualization Office, and the Goddard Earth Sciences (GES) Data and Information Services Center (DISC). In 2010 he moved to the William B. Hanson Center for Space Sciences as a professor of physics at the University of Texas at Dallas, where he has focused on the health effects of atmospheric particulates and on developing a fleet of unmanned aerial vehicles for a variety of agricultural, environmental, and meteorological applications. He is also adjunct professor in data science and machine learning at Southern Methodist University, adjunct professor at the Baylor University Center for Astrophysics, Space Physics & Engineering Research, a scholar of the Institute for Integrative Health, adjunct professor at the School of Public Health, University of North Texas Health Science Center, and in the departments of electrical engineering, geographic information systems, and bioengineering at the University of Texas at Dallas, and a United States Special Operations Command Fellow at SOFWERX by J5, the Futures Mission Directorate. In 2021 Lary was appointed adjunct professor of military/emergency medicine at the Uniformed Services University of the Health Sciences, a UT Dallas Center for Brain Health investigator, and a research scholar at the U.S. Department of Veterans Affairs' Complex Exposure Threats Center Network (CETC), part of the War Related Illness and Injury Study Center.
References External links Google Scholar page ISI publication list Mendeley publication list UTD faculty page Personal page UTD Mints (His consortium) Build UAS (Website run by his students) 1965 births Living people Alumni of King's College London Alumni of Churchill College, Cambridge American atmospheric scientists 21st-century American chemists British chemists 21st-century American physicists British physicists University of Texas at Dallas faculty Place of birth missing (living people) Atmospheric chemists Computational chemists British atmospheric scientists
David Lary
[ "Chemistry" ]
744
[ "Computational chemistry", "Theoretical chemists", "Computational chemists" ]
8,587,280
https://en.wikipedia.org/wiki/Wet%20meadow
A wet meadow is a type of wetland with soils that are saturated for part or all of the growing season, which prevents the growth of trees and brush. Debate exists over whether a wet meadow is a type of marsh or a completely separate type of wetland. Wet prairies and wet savannas are hydrologically similar. Hydrology and ecology Wet meadows may occur because of restricted drainage or the receipt of large amounts of water from rain or melted snow. They may also occur in riparian zones and around the shores of large lakes. Unlike a marsh or swamp, a wet meadow does not have standing water present except for brief to moderate periods during the growing season. Instead, the ground in a wet meadow fluctuates between brief periods of inundation and longer periods of saturation. Wet meadows often have large numbers of wetland plant species, which frequently survive as buried seeds during dry periods, and then regenerate after flooding. Wet meadows therefore do not usually support aquatic life such as fish. They typically have a high diversity of plant species, and may attract large numbers of birds, small mammals and insects including butterflies. Vegetation in a wet meadow usually includes a wide variety of herbaceous species including sedges, rushes, grasses and a wide diversity of other plant species. A few of many possible examples include species of Rhexia, Parnassia, Lobelia, many species of wild orchids (e.g. Calopogon and Spiranthes), and carnivorous plants such as Sarracenia and Drosera. Woody plants, if present, account for a minority of the total area cover. High water levels are one of the important factors that prevent invasion by woody plants; in other cases, fire is important. In areas with low frequencies of fire, or reduced water level fluctuations, or higher fertility, plant diversity will decline. Conservation Wet meadows were once a common wetland type around the world. They remain an important community type in wet savannas and flatwoods. They also survive along rivers and lakeshores where water levels are allowed to change within and among years. But their area has been dramatically reduced. In some areas, wet meadows are partially drained and farmed and therefore lack the biodiversity described here. In other cases, the construction of dams has interfered with the natural fluctuation of water levels that generates wet meadows. The most important factors in creating and maintaining wet meadows are therefore natural water level fluctuations and recurring fire. In some cases, small areas of wet meadow are artificially created. Due to concern about the damage that excessive stormwater runoff can cause to nearby lakes and streams, artificial wetlands can be created to capture stormwater. Often this produces marshes, but in some cases wet meadows may be produced. The idea is to capture and store rainwater onsite and use it as a resource to grow attractive native plants that thrive in such conditions. The Buhr Park Children's Wet Meadow is one such project. It is a group of wet meadow ecosystems in Ann Arbor, Michigan designed as an educational opportunity for school-age children. In Europe, wet meadows are sometimes managed by hay-cutting and grazing. Intensified agricultural practices (too frequent mowing, use of mineral fertilizers, manure and insecticides) may lead to declines in the abundance of organisms and species diversity.
See also Coastal plain Coastal prairie Flooded grasslands and savannas Flood-meadow Water-meadow Bog References External links How to create a wet meadow garden Selected species for wet meadow gardens Children's Wet Meadow CDHabitatWetPrairieFen Illinois Department of Natural Resources Wetlands Meadows Grasslands Fluvial landforms
Wet meadow
[ "Biology", "Environmental_science" ]
732
[ "Hydrology", "Grasslands", "Ecosystems", "Wetlands" ]
8,587,640
https://en.wikipedia.org/wiki/Health%20%28game%20terminology%29
Health is a video game or tabletop game quality that determines the maximum amount of damage or fatigue something takes before leaving the main game. In role-playing games, this typically takes the form of hit points (HP), a numerical attribute representing the health of a character or object. The game character can be a player character, a boss, or a mob. Health can also be attributed to destructible elements of the game environment or inanimate objects such as vehicles and their individual parts. In video games, health is often represented by visual elements such as a numerical fraction, a health bar or a series of small icons, though it may also be represented acoustically, such as through a character's heartbeat. Mechanics In video games, as in tabletop role-playing games, an object usually loses health as a result of being attacked. Protection points or armor help them to reduce the damage taken. Characters acting as tanks usually have more health and armor. In many games, particularly role-playing video games, the player starts with a small number of health and defense points, but can increase them by gaining the required number of experience points and raising the character's level. In game design, it is considered important to clearly show that the player's character (or other object that they control) is losing health. In his book Level Up!: The Guide to Great Video Game Design, game designer Scott Rogers wrote that "health should deplete in an obvious manner, because with every hit, a player is closer to losing their life". As examples of visualizing health loss, Rogers cited Arthur of Ghosts 'n Goblins, who loses a piece of armor with each sustained hit, as well as the cars in the Grand Theft Auto series, in which smoke begins to flow from the hood after the car takes a significant amount of damage. The use of health points simplifies the game development process (since developers do not need to create complex damage systems), allows computers to simplify calculations associated with the game, and makes it easier for the player to understand the game. However, more complex and realistic damage systems are used in a number of games. In Dwarf Fortress, instead of health points, dwarves have separate body parts, each of which can be damaged. The Fallout games use health points, but allow characters to inflict damage to different parts of the enemy's body, which affects gameplay. For example, if a leg is injured, the character can get a fracture, which will reduce their movement speed, and if their arm is injured, the character can drop their weapon. Health can also serve as a plot element. In Assassin's Creed, if the protagonist takes too much damage, thus departing from the "correct" route, the game ends and returns the player to the nearest checkpoint. In some games such as The Legend of Zelda and Monster Hunter, only the player's health points are visible. This is done so that the player does not know how many blows still need to be delivered, which makes the game less predictable. Contrariwise, other games such as the Street Fighter series have both the player's and the opponent's health meters clearly visible, which allows the player to understand how successful their combat strategy is and how many remaining blows need to be inflicted on the enemy. Restoration Players can often restore a character's health by using various items such as potions, food or first-aid kits. 
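The basic mechanics described so far (a pool of hit points, armor that mitigates hits, level-ups that raise the maximum, and items that restore health) can be summarized in a short sketch. It is a generic illustration rather than the system of any particular game; all names and numbers are invented.

```python
# Generic sketch of hit points, armor mitigation, healing, and leveling.
# Hypothetical names and numbers; not taken from any specific game.

class Character:
    def __init__(self, max_hp, armor):
        self.max_hp = max_hp
        self.hp = max_hp       # characters start at full health
        self.armor = armor     # flat damage reduction from protection points

    def take_damage(self, amount):
        dealt = max(1, amount - self.armor)  # armor absorbs part of each hit
        self.hp = max(0, self.hp - dealt)
        return self.hp == 0                  # True when the character is defeated

    def heal(self, amount):
        # Potions, food, or first-aid kits restore health, capped at the maximum.
        self.hp = min(self.max_hp, self.hp + amount)

    def level_up(self):
        # Gaining a level raises maximum health, as in many role-playing games.
        self.max_hp += 10
        self.hp = self.max_hp

tank = Character(max_hp=150, armor=5)  # a "tank" has extra health and armor
tank.take_damage(20)                   # only 15 points of the hit get through
tank.heal(50)                          # healing is clamped back to max_hp
```

Routing every change through the same two methods also makes it easy for the interface to show health depleting "in an obvious manner", since the display only ever has to track one value.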
In role-playing video games, the player often can also restore a character's health by visiting a doctor or resting at an inn. A number of games incorporate a mechanic known as "life steal" or "life leech", which allows a character to restore health by siphoning it from an enemy. Methods for replenishing health differ from each other and are dependent on the game's genre. In more dynamic action games, it is important to quickly restore a character's health, while role-playing games feature slower-paced methods of health restoration to achieve realism. A number of games incorporate a regeneration system that automatically replenishes health if the character does not take damage. This makes the game easier to play by giving the player the opportunity to restore the character's health after a difficult battle. This system may allow the player to safely run through dangerous parts of the game without consequence. Tag team games often regenerate part of the health of a resting character. Armor class In some role-playing games, armor class (abbreviated AC; also known as defense) is a derived statistic that indicates how difficult it is to land a successful blow on a character with an attack; it can also indicate damage reduction to a character's health. AC is typically a representation of a character's physical defenses such as their ability to dodge attacks and their protective equipment. Armor class is a mechanic that can be used as part of health and combat game balancing. AC "is roughly equivalent to defensive dodging in war games". Presentation The health indicator can be represented in various ways. The most basic forms are fractions and health bars, as well as various icons such as hearts or shields. More recent games can use a nonlinear health bar, where earlier hits take off more damage than later ones, in order to make the game appear more exciting. The indicator can be combined with other elements of the game interface. Doom uses a character portrait located at the bottom of the screen as such an indicator, in addition to a numerical health percentage display. If the hero takes damage, his face will appear increasingly pained and blood-covered. The health point indicator can also be part of the character. In Dead Space, it is located on the main character's costume. In Trespasser, it is represented as a tattoo on the main character's chest. In Half-Life: Alyx, a VR game, the indicator is located on the back of the player's non-dominant hand, requiring the player to physically look at their tracked hand to check their health. The character's condition can be conveyed through sound. In Dungeons of Daggorath, the frequency of the player character's audible heartbeat is dependent on how much damage has been received. Silent Hill uses a similar system, but transmits the heartbeat via vibrations from the DualShock controller. The player character's health point indicator often occupies a significant position in the game's heads-up display. In The Legend of Zelda, it occupies one third of the HUD. However, a number of games do without such an indicator. In the Super Mario series, the player character initially only has one health point, and the character's appearance is used to signify the number of health points; if the character collects a Super Mushroom, they grow in size and gain an additional health point. In a number of first-person shooters, such as Call of Duty or Halo, the numerical value of the character's health points is hidden from the player. 
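Two of the mechanics just described lend themselves to equally small sketches: delayed regeneration, where health refills only after the character has avoided damage for some interval, and a d20-style armor class check, together with a nonlinear mapping for the health bar display. The thresholds and formulas are illustrative assumptions, not those of any named game.

```python
# Illustrative sketches of delayed regeneration, a d20-style armor class
# check, and a nonlinear health bar. All constants are invented examples.

import random

REGEN_DELAY = 5.0   # seconds without damage before regeneration begins
REGEN_RATE = 2.0    # health points restored per second after the delay

def regenerate(hp, max_hp, seconds_since_hit, dt):
    """Refill health toward the maximum only after a damage-free interval."""
    if seconds_since_hit >= REGEN_DELAY:
        hp = min(max_hp, hp + REGEN_RATE * dt)
    return hp

def attack_hits(attack_bonus, armor_class):
    """Roll a twenty-sided die; the blow lands only if roll + bonus meets the AC."""
    return random.randint(1, 20) + attack_bonus >= armor_class

def displayed_fraction(hp, max_hp, exponent=1.5):
    """Nonlinear health bar: with exponent > 1 the bar drops fastest on early
    hits, making a fight look dramatic while later hits drain it more slowly."""
    return (hp / max_hp) ** exponent
```

In games that hide the raw number entirely, the same underlying state is instead conveyed through effects such as the screen treatment described next.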
However, when the player character receives a large amount of damage, the game screen (or the part of the screen to which damage was dealt) is painted red, often including drops of blood, which simulates the effect of real-life injury. As health is restored, these effects gradually disappear. History Hit points The term "hit points" was coined by Dungeons & Dragons co-creator Dave Arneson. While developing the tabletop role-playing game Dungeons & Dragons with Gary Gygax based on the latter's previous game Chainmail, Arneson felt that it was more interesting for players to manage small squads than a large army. This also allowed them to act out the role of each squad member. However, this approach had one drawback: according to the rules of Chainmail, the player rolls the dice during each battle, and depending on the number rolled, the character either kills the enemy or is killed. Because players did not want to lose the characters they had become accustomed to, Arneson created a "hit point" system based on similar mechanics previously used in the wargames Don't Give Up the Ship and Ironclads. According to this system, each character has a certain number of hit points, which decreases with each blow dealt to them. This allows the character to survive several hits from an enemy. Some of the first home computer games to use hit points are Rogue (1980), in which health is represented by a fraction, and Dungeons of Daggorath (1982), which includes an audible heartbeat influenced by the player character's condition. Action games also began moving away from one-hit deaths to health systems allowing players to take multiple hits, such as SNK's arcade shoot 'em up game Ozma Wars (1979) numerically representing an energy supply that depletes when taking hits and Mattel's Intellivision game Tron: Deadly Discs (1982) allowing players to take multiple hits at the cost of reducing maneuverability. Health meter Before the introduction of health meters, action video games typically used a lives system in which the player could only take damage once, but could continue the game at the expense of a life. The introduction of health meters granted players the right to make mistakes and allowed game developers to influence a game's difficulty by adjusting the damage an enemy character inflicts. Data East's Flash Boy (1981) for the arcade DECO Cassette System, a scrolling action game based on the manga and anime series Astro Boy (1952–1968), has an energy bar that gradually depletes over time and some of which can be sacrificed for temporary invincibility. Punch-Out!! (1983), an arcade boxing game developed by Nintendo, has a stamina meter that replenishes every time the player successfully strikes the opponent and decreases if the player fails to dodge the opponent's blow; if the meter is fully depleted, the player character loses consciousness. Yie Ar Kung-Fu (1984), an arcade fighting game developed by Konami, replaced the point-scoring system of Karate Champ (1984) with a health meter system. Each fighter has a health meter, which depletes as they take hits; once a fighter's health meter is fully depleted, it leads to a knockout. Yie Ar Kung-Fu established health meters as a standard feature in fighting games. Kung-Fu Master (1984), an arcade beat 'em up developed by Irem, uses a health meter to represent player health, with the bar depleting when taking damage. 
In addition to the player character having a health meter, the bosses also have health meters, which leads to the game temporarily becoming a one-on-one fighting game during boss battles. Kung-Fu Master established health meters as a standard feature in side-scrolling action games such as beat 'em ups. Health meters also began being used to represent hit points in role-playing video games, starting with The Black Onyx (1984), developed by Bullet-Proof Software. This inspired the use of a health bar in Hydlide (1984), an action role-playing game by T&E Soft, which took it a step further with a regenerating health bar. Namco's arcade action role-playing title Dragon Buster (1984) further popularized the use of a health bar in role-playing games. Regeneration The 1982 Apple II platform game Crisis Mountain displays health as a number from 3 (full) to 0 (dead), and health gradually regenerates over time. In Hydlide (1984) and the Ys series, the character's health (represented as both hit points and a health meter) is restored when the character does not move. Halo: Combat Evolved (2001) is credited with popularizing the use of regeneration in first-person shooters. However, according to GamesRadar+'s Jeff Dunn, regeneration in its current form was introduced in The Getaway (2002), as Halo: Combat Evolved only used shield regeneration. Defense Arneson is also credited with the term "armor class", which was used in Chainmail and then Dungeons & Dragons; "although armor class might have been inspired by the rules in Don't Give Up the Ship!, there is not an explicit attribute with that name in the game's rules. [...] It seems more likely that Arneson's house rules for armor class never made it into the final published version of the wargame". However, many role-playing games that followed Dungeons & Dragons moved away from the term "armor class" and simply replaced the term with "defense". See also Magic (game terminology) Experience point Medical state, a real-world indicator of health status for hospital patients References Bibliography External links Role-playing game terminology Video game terminology
Health (game terminology)
[ "Technology" ]
2,606
[ "Computing terminology", "Video game terminology" ]
6,918,898
https://en.wikipedia.org/wiki/Hindeodus
Hindeodus is an extinct genus of conodonts in the family Anchignathodontidae. The generic name Hindeodus is a tribute to George Jennings Hinde, a British geologist and paleontologist of the 1800s and early 1900s. The suffix -odus typically describes the animal's teeth, essentially making Hindeodus mean Hinde-teeth. Conodonts such as Hindeodus were typically small, elongate, marine animals similar in appearance to today's eels. Hindeodus existed from the early Carboniferous through the early Triassic, during which they inhabited a wide variety of different environments in the Paleozoic and Triassic seas. Their body consisted entirely of soft tissues, except for an assortment of phosphatic elements believed to be their feeding apparatus. Despite years of controversy regarding their phylogenetic position, conodonts such as Hindeodus are now considered to be vertebrates. They are slightly more derived than the early vertebrates called Cyclostomata, and are part of a large clade of "complex conodonts" called Prioniodontida in the order Ozarkodinina. Hindeodus fossils are distributed worldwide due to the diversity of environments they inhabited. Species of Hindeodus are differentiated by slight variations in the elements of their feeding apparatus. A species of Hindeodus called Hindeodus parvus is particularly well studied because it is used as an index fossil defining the Permian-Triassic boundary. History and discovery Hindeodus was first described by Rexroad and Furnish in 1964 during the Illinois State Geological Survey's study of Mississippian stratigraphy. The specimen was found in the Pella Formation of south-central Iowa, which is known for excellent preservation of conodonts. However, species of Hindeodus were among the least abundant fossils (less than 0.25 specimens per kilogram of sample). They initially believed that Hindeodus might be a species of Trichondella or Elsonella but determined that Hindeodus is not morphologically or phylogenetically comparable to either and thus must be a new genus. While faunal diversity plummeted drastically during the end-Permian extinction event (251 million years ago), Hindeodus survived into the early Triassic. A possible explanation for this is the versatility of certain Hindeodus species in terms of the environments they are able to survive in. Additionally, there is evidence that Hindeodus was able to migrate during the Permian-Triassic transitional period, which led to its wide worldwide distribution during this time. Description and paleobiology Hindeodus elements Hindeodus was primarily soft-bodied; the only mineralized tissues of Hindeodus (and all other conodonts) are their "elements", tooth-like structures arranged in particular positions and thought to have served as a feeding apparatus functioning to grasp and take in prey. Because conodont elements are essentially the only basis for conodont taxonomy, elements are extensively studied and debated. Therefore, there is a specific categorization of elements based on their shape and position. The elements are divided into S, M and P elements. In Hindeodus, S elements are ramiform (branch-like), M elements are makellate (pick-shaped) and P elements are pectiniform (comb-shaped). The H.parvus apparatus in particular consists of six kinds of elements arranged in 13 different positions: nine S elements (an unpaired S0 and paired S1, S2, S3, S4), two M elements, and one pair of P elements (P1). The S0 element is unpaired and has a long sharp cusp but lacks a posterior process.
S1 and S2 elements are differentiated by being laterally compressed and having a long sharp cusp with two lateral processes. S3 and S4 elements have a long sharp cusp and an anterior process that is shorter than the posterior process. The M element is the typical makellate (pick-shaped) structure and the P1 element is pectiniform. There are several different hypotheses for the functions of the Hindeodus apparatus. One hypothesis is that the elements were used as support structures for filamentous soft tissue used for suspension feeding. However, upon further analysis it was determined that the S, M and P elements would not provide enough surface area to support the ciliated tissue needed for suspension feeding. The more accepted hypothesis is that the conodont elements were used for predation. It is predicted that the S and M elements open, allowing prey to be captured in the oral cavity of the animal. The cusps of these elements aid in food intake by firmly gripping the prey while the blade-like P elements slice like a pair of scissors. This hypothesis is supported by the presence of lingual cartilage in conodonts that resembles that found in extant cyclostomes (hagfish and lampreys), which are also predators. Morphology of conodont teeth varies widely, but the 15-element dentition of conodonts and the relative positions of the elements were stable from the Ordovician to the Triassic. The typical 15-element apparatus consisted of four P elements, nine S elements, and two M elements. However, Triassic conodonts (such as Hindeodus parvus) had only a 13-element apparatus (nine S elements, two M elements and two P elements). It was previously believed that the two missing elements were due to a failure to preserve S1 elements. This is not highly plausible, because every other element was exceptionally preserved on the same bedding plane, so it was unlikely that apparatuses were preserved incompletely. An alternative hypothesis was that Hindeodus lost two S elements, which implies changes in the capture of prey (as the primary function of the S and M array is to trap prey in the animal's mouth). A final hypothesis is that Hindeodus lost two P elements, which implies changes in food-processing ability, possibly due to a change to a diet of food that requires less slicing or crushing to ingest. Analysis of conodont history makes it evident that while P elements vary among conodonts, the S-M array was essentially conserved for over 250 million years. This suggests that evolutionary constraints on the number of S and M elements are stronger than those on the P elements, which are thus the more likely to change. The loss of P elements is likely linked to the Permian-Triassic extinction event, during which many environmental changes occurred that may have impacted the availability of Hindeodus prey, resulting in a change of diet and a new food-processing mechanism. Classification Hindeodus is characterized by a P element with a large cusp and denticles that increase in width anteriorly (toward the head), except for the anterior-most denticle, and generally decrease in height anteriorly, except for the three posterior-most denticles (the ones furthest back), which are of equal height. Their cusps are much higher than their denticles, and they possess S elements with short lateral processes that are slightly upturned laterally, with denticles of variable size. 
Hindeodus is differentiated from other conodonts by having P elements with large fixed cusps located at the anterior end of the blade, which usually grow primarily by adding new denticles only to the posterior end of the element. Other conodonts vary in growth pattern and in the location of their cusps. For example, Ozarkodina have cusps located within the blade, and growth can occur both anteriorly and posteriorly. In Hindeodus, the P element is crucial for identifying the genus, and it had a stable morphology from the Carboniferous into the Triassic with only one minor morphological change. However, in the late Permian and the early Triassic there was rapid evolutionary change, especially in the P element. The cause of this rapid change in morphology is not certain, but it may be related to environmental changes leading to different availability of food sources and thus to changes in feeding mechanism. Species of Hindeodus are divided into two groups based on the morphology of the posterior portion of the elements. Species such as H. parvus and H. eurypyge grow posteriorly and look rectangular from a lateral perspective. Elements grow by the addition of new denticles to the posterior margin. After one denticle fully grows, a bulge begins to form on the lower posterior margin of the element and gradually grows upward until the denticle fully develops. The cycle then repeats with a new bulge. These elements tend to grow evenly, yielding a rectangular shape. In contrast, other species such as H. typicalis or H. latidentatus have a sloped lateral profile because the posterior section slopes downward. New denticles form near the posterobasal corner and grow gradually upward but also to the side. These elements tend to grow preferentially on the posterior portion of the element, leading to a more sloped shape. Hindeodus is part of the large clade Prioniodontida (otherwise known as "complex conodonts"), which comprises two major orders of conodonts, Prioniodinina and Ozarkodinina. Hindeodus is part of Ozarkodinina, in the family Anchignathodontidae. The synapomorphy that defines the clade Prioniodontida is the presence of P elements with an inner lateral process and peg-like denticles. The synapomorphies of Ozarkodinida are not as clear, but may be the presence of inner and outer lateral processes on the S elements. Species relationships within the genus Hindeodus are also complex, and there is no established and accepted phylogenetic association between species of Hindeodus, but certain relationships may be inferred. H. parvus is likely derived from H. latidentatus, based on the location of the fossils along with similarities among their elements. H. parvus and its forerunner H. latidentatus are both easily identified by the P element of their apparatus and by their S elements. However, H. parvus is differentiated by the presence of cusps that are twice as long as the surrounding denticles. There are also transitional forms that have apparatus features of both H. parvus and H. latidentatus, which provide evidence of H. parvus being derived from H. latidentatus. There is also evidence to suggest that most species of Hindeodus likely evolved from H. typicalis and an unnamed species, H. n. sp. B, that were alive in the early Changhsingian. Additionally, the genus Isarcicella likely evolved from Hindeodus (H. parvus) in the Early Triassic. Paleoenvironmental and geological information The paleoecology of Hindeodus has frequently been debated. 
Clark (1974) proposed that Hindeodus was most abundant in nutrient-deficient deep waters of normal salinity, but that some may have lived in shallow water as well. Behnken (1975) proposed that Hindeodus lived in abnormal salinities. Wardlaw and Collinson (1984) proposed that Hindeodus dominated in lagoonal facies. Orchard (1996) considered Hindeodus to be dominant in shallow, nearshore and warm regions. The general consensus now is that Hindeodus lived in a wide range of marine depositional environments: nearshore, shallower, and warmer environments as well as deep-water and offshore environments. Kozur (1996) pointed out that the presence of Hindeodus in a certain area seems not to be controlled by the depth of the water or the distance to the shore, but to be more dependent on the presence of competitor species (such as gondolellids) that are better adapted to survive in that environment. For example, at Meishan, gondolellids were dominant in deep warm-water environments before the ecological stress that occurred in the Late Permian (possibly short-lived cooling at low latitudes due to the presence of aerosols). Gondolellids and many other Permian species in the area disappeared, but the ecologically tolerant Hindeodus survived and dominated the area. A similar situation occurred in Iran, where gondolellids were abruptly replaced by Hindeodus in the deep-water areas. There is evidence that Hindeodus was able to migrate during the Permian-Triassic transitional period, which led to its wide distribution worldwide during this time. They were able to survive and evolve in warm-water or cold-water and shallow-water or deep-water environments despite widespread anoxia during the Permian-Triassic transitional period. This is one of the reasons Hindeodus is an ideal index fossil for defining the Permian-Triassic boundary. However, not all species of Hindeodus were able to survive and thrive in a variety of different environments. Species such as H. julfensis, H. changxingensis, and H. altudaensis, among others, were ecologically restricted to deeper but warm-water environments. They are never found in shallow-water facies, or in deep water that was presumably home to cold-water fauna. In contrast, more common species such as H. typicalis and H. parvus were more ecologically tolerant and could live in environments not tolerated by other conodonts. H. parvus in particular was exceptionally versatile with regard to the environments it inhabited: it was found in both shallow-water deposits and pelagic deposits, and it occurs in Japan, North America, the Boreal realm (Greenland), and the entire Tethys. Although Hindeodus is globally widespread, the Meishan section in Changxing County, Zhejiang Province, South China, is one of the more notable locations where Hindeodus fossils have been found. The Meishan section is used as the GSSP (global boundary stratotype section and point) for the Permian-Triassic boundary, defined by the first appearance of H. parvus. It is a continuous, pelagic sedimentary record across the Permian-Triassic boundary without any stratigraphic gaps, and is essentially thermally unaltered (CAI = 1–1.5). The section consists of 7 quarries on the southern slope of the Meishan hill, 70 to 400 m away from each other. The beds of these quarries are nearly identical, as they have the same thickness, facies, and fossil content. 
Quarry D is the best studied because it exposes the entire Changxing Limestone, whereas the other quarries expose only the middle and upper parts of the Changxing Limestone. Biostratigraphic significance The species Hindeodus parvus is an index fossil whose first appearance in the fossil beds at Meishan, Changxing County, Zhejiang marks the base of the Triassic, and thus the boundary between the Triassic and Permian. Some 96% of the fauna of the late Permian disappeared at the Permian-Triassic boundary. Most of the groups that disappeared reappeared in the late Olenekian (Early Triassic). It is hypothesized that the extinction was caused by dense aerosols from strong volcanic activity in areas such as the Siberian Traps. These dense aerosols then caused short-lived rapid cooling at low latitudes, similar to a nuclear winter. Widespread anoxic conditions in the lower Triassic prevented the fauna from recovering. Despite the rapid decrease in fauna, the exact Permian-Triassic boundary was still undetermined. It was initially defined by the first appearance of the ammonoid (cephalopod) Otoceras. Later, the base of the Triassic was defined by the appearance of Isarcicella isarcica. The location of Isarcicella isarcica is nearly identical to the base of H. parvus, but there are several advantages to using H. parvus to define the biostratigraphic Permian-Triassic boundary. Firstly, Hindeodus is the first globally distributed species that appears immediately after (5 cm above) the minimum in fossil diversity indicated by the carbon-13 minimum at the Meishan section. Hindeodus is also not environmentally restricted and can be found in both shallow-water and deep-water deposits. It is also thermally tolerant and is found in cool-water environments and mild environments as well as tropical warm water. Additionally, the derivation of H. parvus from its forerunner H. latidentatus is clear because transitional forms are found between them. Despite its close proximity to similar fossils, Hindeodus is easily determinable and readily separable by its large cusp. The wide distribution, clear derivation, and easy identifiability of Hindeodus make it the ideal index fossil, which is why the International Commission on Stratigraphy (ICS) has assigned the First Appearance Datum of Hindeodus parvus as the defining biological marker for the start of the Induan, 252.2 ± 0.5 million years ago, the first stage of the Triassic. Notes References Index fossils Ozarkodinida genera Triassic animals of Asia Permian animals of Asia Fossils of India Fossils of Japan Fossils of Oman Fossils of Thailand Fossils of Turkey Triassic animals of Europe Fossils of Austria Fossils of Hungary Fossils of Italy Permian animals of North America Carboniferous animals of North America Fossils of Greenland Fossils of Mexico Fossils of New Zealand Fossil taxa described in 1964 Permian conodonts Triassic conodonts Conodont genera Conodont taxonomy Extinction events Fossils of China Conodont families Prioniodontida Fossils of Serbia
Hindeodus
[ "Biology" ]
3,576
[ "Evolution of the biosphere", "Extinction events" ]
6,919,200
https://en.wikipedia.org/wiki/Dorothy%20Metcalf-Lindenburger
Dorothy Marie "Dottie" Metcalf-Lindenburger (born May 2, 1975) is a retired American astronaut. She was a science teacher at Hudson's Bay High School in Vancouver, Washington when she was selected in 2004 as an educator mission specialist. She was the first Space Camp alumna to become an astronaut. Early life and career Dorothy Metcalf was born in Colorado Springs, Colorado to Joyce and Keith Metcalf. Metcalf graduated from Fort Collins High School, Fort Collins, Colorado. She went on to attend Whitman College in Walla Walla, Washington, where she studied geology. As an undergrad, she undertook research with the KECK Consortium for two summers. In 1995, she worked in Wyoming mapping the last glaciations of Russell Creek, and in 1996 she mapped and determined the petrology of the rocks in the Wet Mountain region of Colorado. She graduated from Whitman College in 1997. She received her teaching certification at Central Washington University, Ellensburg, Washington in 1999. Metcalf spent five years teaching earth science and astronomy at Hudson's Bay High School in Vancouver, Washington. She spent three years coaching cross-country at the high school level, and two years coaching Science Olympiad. In 2000, she married Jason Lindenburger, a fellow Whitman College graduate and educator, from Pendleton, Oregon. They have one daughter together. In 2016, Metcalf-Lindenburger earned her master's degree in geology from the University of Washington. She is a member of the organizations International Order of the Rainbow for Girls, Phi Beta Kappa, the National Education Association, and The Mars Generation. Honors 2007 Space Camp Hall of Fame Inaugural Inductee 1999 Outstanding Teacher Preparation Candidate at Central Washington University. 1997 Whitman College Leed's Geology Award 1997 Whitman College Order of the Waiilatpu 1996 GSA Field Camp Award 1995–1996 NAIA Academic All-American in Cross Country and Track 1996 NAIA Conference Champion in the 10K. NASA career Metcalf-Lindenburger was selected by NASA in May 2004 as an astronaut candidate. Astronaut candidate training includes orientation briefings and tours, numerous scientific and technical briefings, intensive instruction in Shuttle and International Space Station systems, physiological training, T-38 flight training, and water and wilderness survival training. Successful completion of this training in February 2006 qualified her as a NASA Astronaut. She served as a mission specialist on STS-131, an April 2010 Space Shuttle mission to the International Space Station. The mission's primary payload was the Multi-Purpose Logistics Module. On July 20, 2009, Metcalf-Lindenburger sang the national anthem at the Houston Astros game against the St. Louis Cardinals in celebration of the 40th anniversary of the Apollo 11 Moon landing. She has been a long-time lead singer with the all-astronaut rock band, "Max Q". On April 16, 2012, NASA announced that Metcalf-Lindenburger would command the NEEMO 16 undersea exploration mission aboard the Aquarius underwater laboratory, scheduled to begin on June 11, 2012, and last twelve days. The NEEMO 16 crew successfully "splashed down" at 11:05 am on June 11. On the morning of June 12, Metcalf-Lindenburger and her crewmates officially became aquanauts, having spent over 24 hours underwater. The crew safely returned to the surface on June 22. Metcalf-Lindenburger retired from NASA on June 13, 2014, to live and work in the Seattle area. 
Spaceflights STS-131 Discovery (April 5 to 20, 2010), a resupply mission to the International Space Station, was launched at night from the Kennedy Space Center, Florida. On arrival at the station, Discovery's crew dropped off more than 27,000 pounds of hardware, supplies and equipment, including a tank full of ammonia coolant that required three spacewalks to hook up, new crew sleeping quarters and three experiment racks. On the return journey, Leonardo, the Multi-Purpose Logistics Module (MPLM) inside Discovery's payload bay, was packed with more than 6,000 pounds of hardware, science results and trash. The STS-131 mission was accomplished in 15 days, 2 hours, 47 minutes and 10 seconds and traveled 6,232,235 statute miles in 238 Earth orbits. References External links Spacefacts biography of Dorothy Metcalf-Lindenburger The Mars Generation TEDx Talk 1975 births Living people 21st-century American women educators American women astronauts Aquanauts Educator astronauts NASA civilian astronauts People from Colorado Springs, Colorado People from Vancouver, Washington Schoolteachers from Washington (state) American science teachers Space Shuttle program astronauts Whitman College alumni
Dorothy Metcalf-Lindenburger
[ "Astronomy" ]
950
[ "Educator astronauts", "Astronomy education" ]
6,919,308
https://en.wikipedia.org/wiki/Nicole%20Stott
Nicole Marie Passonno Stott (born November 19, 1962) is an American engineer and a retired NASA astronaut. She served as a flight engineer on ISS Expedition 20 and Expedition 21 and was a mission specialist on STS-128 and STS-133. After she had worked at NASA for 27 years, the agency announced her retirement, effective June 1, 2015. She is married to Christopher Stott, a Manx-born American space entrepreneur. Early life and education Stott was born in Albany, New York, and resides in St. Petersburg, Florida. She attended St. Petersburg College, studying aviation administration, graduated with a B.S. degree in aeronautical engineering from Embry-Riddle Aeronautical University in 1987, and received her M.S. degree in engineering management from the University of Central Florida in 1992. Stott began her career in 1987 as a structural design engineer with Pratt & Whitney Government Engines in West Palm Beach, Florida. She spent a year with the Advanced Engines Group performing structural analyses of advanced jet engine component designs. Stott is an instrument-rated private pilot. NASA career In 1988, Stott joined NASA at the Kennedy Space Center (KSC), Florida as an Operations Engineer in the Orbiter Processing Facility (OPF). After six months, she was detailed to the Director of Shuttle Processing as part of a two-person team tasked with assessing the overall efficiency of Shuttle processing flows and implementing tools for measuring the effectiveness of improvements. She was the NASA KSC lead for a joint Ames/KSC software project to develop intelligent scheduling tools. The Ground Processing Scheduling System (GPSS) was developed as the technology demonstrator for this project. GPSS was a success at KSC, and also a commercial success, becoming part of the PeopleSoft suite of software products. During her time at KSC, Stott also held a variety of positions within NASA Shuttle Processing, including Vehicle Operations Engineer; NASA Convoy Commander; assistant to the Flow Director for Space Shuttle Endeavour; and Orbiter Project Engineer for Columbia. During her last two years at KSC, she was a member of the Space Station Hardware Integration Office and relocated to Huntington Beach, California, where she served as the NASA Project Lead for the ISS truss elements under construction at the Boeing Space Station facility. In 1998, she joined the Johnson Space Center (JSC) team in Houston, Texas as a member of the NASA Aircraft Operations Division, where she served as a Flight Simulation Engineer (FSE) on the Shuttle Training Aircraft (STA). Selected as a mission specialist by NASA in July 2000, Stott reported for astronaut candidate training in August 2000. Following the completion of two years of training and evaluation, she was assigned technical duties in the Astronaut Office Station Operations Branch, where she performed crew evaluations of station payloads. She also worked as a support astronaut and CAPCOM for the ISS Expedition 10 crew. In April 2006, she was a crew member on the NEEMO 9 mission (NASA Extreme Environment Mission Operations), where she lived and worked with a six-person crew for 18 days on the Aquarius undersea research habitat. Stott was assigned to Expedition 20 and Expedition 21. She was launched to the International Space Station with the crew of STS-128, participating in the first spacewalk of that mission, and returned on STS-129, thus becoming the last Expedition crew member to return to Earth via the Space Shuttle. 
Stott completed her second spaceflight on STS-133, the third-to-last (antepenultimate) flight of the Space Shuttle. First live tweet-up from space On October 21, 2009, Stott and her Expedition 21 crewmate Jeff Williams participated in the first NASA Tweetup from the station, with members of the public gathered at NASA Headquarters in Washington, D.C. This involved the first live Twitter connection for the astronauts. Previously, astronauts on board the Space Shuttle or ISS had sent the messages they wished to post down to Mission Control, which then posted them to Twitter via the Internet. Post NASA Stott was featured in a Super Bowl LIV commercial promoting Girls Who Code. Stott has also written Back to Earth, described as "What Life in Space Taught Me About Our Home Planet and Our Mission to Protect It". She is also an artist and brought a small watercolor kit on ISS Expedition 21, where she became the first person to paint with watercolors in space. Her works often relate to astronomy, including her Earth Observation collection and Spacecraft collection. In 2022, she provided the narration for a piece performed by the Schenectady Symphony Orchestra, Glen Cortese's "Voyager: A Journey to the Stars." References External links Nicole Stott – Spacefacts biography Nicole Stott – Video-opinion (4:19) (NYT; April 26, 2020) 1962 births Living people American people of German descent American people of Italian descent American astronauts Aquanauts Clearwater High School alumni University of Central Florida alumni Crew members of the International Space Station American women astronauts Space art Space artists Space Shuttle program astronauts Spacewalkers
Nicole Stott
[ "Astronomy" ]
1,031
[ "Space artists", "Space art", "Outer space" ]
6,920,233
https://en.wikipedia.org/wiki/Interspecific%20competition
Interspecific competition, in ecology, is a form of competition in which individuals of different species compete for the same resources in an ecosystem (e.g. food or living space). This can be contrasted with mutualism, a type of symbiosis. Competition between members of the same species is called intraspecific competition. If a tree species in a dense forest grows taller than the surrounding tree species, it is able to absorb more of the incoming sunlight. Less sunlight is then available for the trees that are shaded by the taller tree, an instance of interspecific competition. Leopards and lions can also be in interspecific competition, since both species feed on the same prey and can be negatively impacted by the presence of the other because they will have less food. Competition is only one of many interacting biotic and abiotic factors that affect community structure, and it is not always a straightforward, direct interaction. Interspecific competition may occur when individuals of two separate species share a limiting resource in the same area. If the resource cannot support both populations, then lowered fecundity, growth, or survival may result in at least one species. Interspecific competition has the potential to alter populations, communities and the evolution of interacting species. At the level of the individual organism, competition can occur as interference or exploitative competition. Types All of the types described here can also apply to intraspecific competition, that is, competition among individuals within a species. Also, any specific example of interspecific competition can be described in terms of both a mechanism (e.g., resource or interference) and an outcome (symmetric or asymmetric). Based on mechanism Exploitative competition, also referred to as resource competition, is a form of competition in which one species consumes and either reduces or more efficiently uses a shared limiting resource, thereby depleting its availability for the other species. It is thus an indirect interaction, because the competing species interact via a shared resource. Interference competition is a form of competition in which individuals of one species interact directly with individuals of another species via antagonistic displays or more aggressive behavior. In a review and synthesis of experimental evidence regarding interspecific competition, Schoener described six specific mechanisms by which competition occurs: consumptive, preemptive, overgrowth, chemical, territorial, and encounter competition. Consumptive competition is always resource competition, but the others cannot always be regarded as exclusively exploitative or interference. Separating the effect of resource use from that of interference is not easy. A good example of exploitative competition is found in aphid species competing over the sap in plant phloem. Each aphid species that feeds on host plant sap uses some of the resource, leaving less for competing species. In one study, Fordinae geoica was observed to out-compete F. formicaria to the extent that the latter species exhibited an 84% reduction in survival. Another example is competition for calling space in amphibians, where the calling activity of one species prevents another from calling over an area as wide as it would in allopatry. 
A final example is the displacement of bisexual rock lizards of the genus Darevskia from their natural habitats by a daughter unisexual form; interference competition can be ruled out in this case, because parthenogenetic forms of the lizards never demonstrate aggressive behavior. This type of competition can also be observed in forests, where large trees dominate the canopy and thus allow little light to reach smaller competitors living below. These interactions have important implications for the population dynamics and distribution of both species. Based on outcome Scramble and contest competition refer to the relative success of competitors. Scramble competition is said to occur when each competitor is equally suppressed, through reduction in survival or birth rates. Contest competition is said to occur when one or a few competitors are unaffected by competition, but all others suffer greatly, through reduction in survival or birth rates. Sometimes these types of competition are referred to as symmetric (scramble) vs. asymmetric (contest) competition. Scramble and contest competition are two ends of a spectrum, ranging from completely equal to completely unequal effects. Apparent competition Apparent competition is actually an example of predation that alters the relative abundances of prey on the same trophic level. It occurs when two or more species in a habitat affect shared natural enemies in a higher trophic level. If two species share a common predator, for example, apparent competition can exist between the two prey items, in which the presence of each prey species increases the abundance of the shared enemy and thereby suppresses one or both prey species. This mechanism gets its name from experiments in which one prey species is removed and the second prey species increases in abundance. Investigators sometimes mistakenly take the increase in abundance of the second species as evidence for resource competition between prey species. It is "apparently" competition, but is in fact due to a shared predator, parasitoid, parasite, or pathogen. Notably, species competing for resources may often also share predators in nature. Interactions via resource competition and shared predation may thus often influence one another, making it difficult to study and predict their outcomes by studying only one of them. Consequences Many studies, including those cited previously, have shown major impacts on both individuals and populations from interspecific competition. Documentation of these impacts has been found in species from every major group of organisms. The effects of interspecific competition can also reach communities and can even influence the evolution of species as they adapt to avoid competition. This evolution may result in the exclusion of a species from a habitat, niche separation, or local extinction. The changes of these species over time can also change communities as other species must adapt. Competitive exclusion The competitive exclusion principle, also called "Gause's law", which arose from mathematical analysis and simple competition models, states that two species that use the same limiting resource in the same way, in the same space and at the same time, cannot coexist; to coexist, they must diverge from each other over time. One species will often exhibit an advantage in resource use. This superior competitor will out-compete the other with more efficient use of the limiting resource. 
As a result, the inferior competitor will suffer a decline in population over time. It will be excluded from the area and replaced by the superior competitor. A well-documented example of competitive exclusion was observed between two trout-like charr in Japan, the Dolly Varden charr (Salvelinus malma) and the white-spotted charr (S. leucomaenis). Both species were morphologically similar, but the former was found primarily at higher elevations than the latter. Although there was a zone of overlap, each species excluded the other from its dominant region by becoming better adapted to its habitat over time. In some such cases, each species gets displaced into an exclusive segment of the original habitat. Because each species suffers from competition, natural selection favors the avoidance of competition in such a way. Niche differentiation is a process by which competitive exclusion leads to differences in resource use. In some cases, niche differentiation results in spatial displacement, where species avoid direct competition by occupying different areas. However, niche differentiation can also cause other changes, such as altered behaviors or ecological roles, that help species avoid competition. If competition avoidance is possible, species may specialize in different areas of the niche, minimizing overlap and resource competition (Watts & Holekamp, 2008). For example, spotted hyenas (Crocuta crocuta) and lions (Panthera leo) in Africa share similar habitats and prey but have different hunting strategies. Hyenas use stamina to chase prey over long distances, while lions rely on ambush hunting. This difference in hunting strategies helps reduce direct competition for food (Hayward & Slotow, 2009). Another example of niche differentiation comes from birds, where species with similar ecological requirements shift their behavior to avoid competition. In the Galapagos Islands, finch species have been observed to change their feeding habits within a few generations, adapting to new dietary resources to minimize competition. This adaptation allowed different finch species to coexist despite overlapping habitats and food sources (Kruuk, 1972). Similarly, hyenas and lions may alter their roles in the ecosystem through spatial and behavioral differentiation, helping them avoid direct conflict and share resources (Groenewald et al., 2009). In some ecosystems, niche differentiation is influenced by third-party species or predators. For example, a keystone predator can significantly alter the behavior of competing species. Hyenas, by preying on lions or scavenging their kills, can reduce the lions' ability to dominate a territory. This helps other predators and scavengers, like cheetahs, access resources they might otherwise be excluded from (Hayward & Slotow, 2009). Additionally, in bacterial ecosystems, phage parasites have been shown to mediate coexistence between competing bacterial species by reducing the dominance of one species. This kind of interaction helps maintain biodiversity in microbial communities, which can have important implications for both medical research and ecological theory (Groenewald et al., 2009). Local extinction Although local extinction of one or more competitors has been less well documented than niche separation or competitive exclusion, it does occur. In an experiment involving zooplankton in artificial rock pools, local extinction rates were significantly higher in areas of interspecific competition. 
In these cases, therefore, the negative effects occur not only at the population level but also in the species richness of communities. Impacts on communities As mentioned previously, interspecific competition has great impact on community composition and structure. Niche separation of species, local extinction and competitive exclusion are only some of the possible effects. In addition to these, interspecific competition can be the source of a cascade of effects that build on each other. An example of such an effect is the introduction of an invasive species to the United States, purple loosestrife. When introduced to wetland communities, this plant often outcompetes much of the native flora, decreasing species richness and the food and shelter available to many other species at higher trophic levels. In this way, one species can influence the populations of many other species through a myriad of interactions. Because of the complicated web of interactions that make up every ecosystem and habitat, the results of interspecific competition are complex and site-specific. Competitive Lotka–Volterra model The impacts of interspecific competition on populations have been formalized in a mathematical model called the competitive Lotka–Volterra equations, which create a theoretical prediction of the interactions. The model combines the effects of each species on the other. These effects are calculated separately for the first and second population respectively:

\frac{dN_1}{dt} = \frac{r_1 N_1 \left(K_1 - N_1 - \alpha N_2\right)}{K_1}

\frac{dN_2}{dt} = \frac{r_2 N_2 \left(K_2 - N_2 - \beta N_1\right)}{K_2}

In these formulae, N is the population size, t is time, K is the carrying capacity, r is the intrinsic rate of increase, and \alpha and \beta are the relative competition coefficients. The results show the effect that the other species has on the species being calculated. The results can be graphed to show a trend and a possible prediction for the future of the species. One problem with this model is that certain assumptions must be made for the calculation to work. These include the lack of migration and the constancy of the carrying capacities and competition coefficients of both species. The complex nature of ecology determines that these assumptions are rarely true in the field, but the model provides a basis for improved understanding of these important concepts (a numerical sketch of these equations follows this article). An equivalent formulation of these models is:

\frac{dN_1}{dt} = r_1 N_1 \left(1 - \alpha_{11} N_1 - \alpha_{12} N_2\right)

\frac{dN_2}{dt} = r_2 N_2 \left(1 - \alpha_{22} N_2 - \alpha_{21} N_1\right)

In these formulae, \alpha_{11} is the effect that an individual of species 1 has on its own population growth rate. Similarly, \alpha_{12} is the effect that an individual of species 2 has on the population growth rate of species 1. One can also read this as the effect on species 1 of species 2. In comparing this formulation to the one above, we note that \alpha_{11} = 1/K_1 and \alpha_{12} = \alpha/K_1. Coexistence between competitors occurs when \alpha_{11} > \alpha_{21} and \alpha_{22} > \alpha_{12}. We can translate this as: coexistence occurs when the effect of each species on itself is greater than the effect of the competitor. There are other mathematical representations that model species competition, such as those using non-polynomial functions. Interspecific competition in macroevolution Interspecific competition is a major factor in macroevolution. Darwin assumed that interspecific competition limits the number of species on Earth, as formulated in his wedge metaphor: "Nature may be compared to a surface covered with ten-thousand sharp wedges ... representing different species, all packed closely together and driven in by incessant blows, . . . sometimes a wedge of one form and sometimes another being struck; the one driven deeply in forcing out others; with the jar and shock often transmitted very far to other wedges in many lines of direction." 
(From Natural Selection - the "big book" from which Darwin abstracted the Origin). The question of whether interspecific competition limits global biodiversity is disputed today, but analytical studies of the global Phanerozoic fossil record are in accordance with the existence of global (although not constant) carrying capacities for marine biodiversity. Interspecific competition is also the basis for Van Valen's Red Queen hypothesis, and it may underlie the positive correlation between origination and extinction rates that is seen in almost all major taxa. In the previous examples, the macroevolutionary role of interspecific competition is that of a limiting factor of biodiversity, but interspecific competition also promotes niche differentiation and thus speciation and diversification. The impact of interspecific competition may therefore change during phases of diversity build-up, from an initial phase where positive feedback mechanisms dominate to a later phase when niche preemption limits further increase in the number of species; a possible example of this situation is the re-diversification of marine faunas after the end-Permian mass extinction event. See also Minimum viable population Symbiosis Mutualism (biology) Macroevolution Red Queen hypothesis References Further reading Begon, M., C.R. Townsend and J.L. Harper. 2006. Ecology: From Individuals to Ecosystems. Blackwell Publishing, Malden, MA. Giller, P. S. 1984. Community Structure and the Niche. Chapman & Hall, London. Holekamp, K.E. 2006. Interspecific competition and anti-predator behavior. National Science Foundation. https://www.nsf.gov/ Solomon, E. P., Berg, L. R., & Martin, D. W. 2002. Biology, sixth edition. (N. Rose, Ed.). Stamford, CT: Thomson Learning. Weiner, J. 1994. The Beak of the Finch. Cambridge University Press, New York. External links Competition for Territory: The Levins Model for Two Species Wolfram Demonstrations Project — requires CDF player (free) Biological interactions Competition
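As a minimal numerical illustration of the competitive Lotka–Volterra model described in the article above, the following Python sketch integrates the two equations with forward Euler steps. All parameter values are arbitrary choices for demonstration, not data from any cited study, and the function name is hypothetical.

```python
# Minimal forward-Euler sketch of the competitive Lotka-Volterra model.
# All parameter values are illustrative assumptions, not empirical data.

def simulate(n1, n2, r1=0.9, r2=0.7, k1=100.0, k2=80.0,
             alpha=0.8, beta=0.6, dt=0.01, steps=5000):
    """Integrate dN1/dt = r1*N1*(K1 - N1 - alpha*N2)/K1 and the
    symmetric equation for N2; returns the full trajectory."""
    trajectory = [(n1, n2)]
    for _ in range(steps):
        dn1 = r1 * n1 * (k1 - n1 - alpha * n2) / k1
        dn2 = r2 * n2 * (k2 - n2 - beta * n1) / k2
        n1 += dn1 * dt
        n2 += dn2 * dt
        trajectory.append((n1, n2))
    return trajectory

if __name__ == "__main__":
    final_n1, final_n2 = simulate(10.0, 10.0)[-1]
    # With these coefficients both isocline conditions for coexistence
    # hold (K1/alpha > K2 and K2/beta > K1), so the populations settle
    # toward a stable coexistence equilibrium (about 69 and 38 here)
    # rather than competitive exclusion.
    print(f"final sizes: N1 = {final_n1:.1f}, N2 = {final_n2:.1f}")
```

Raising alpha or beta until one of the isocline conditions fails reproduces competitive exclusion in the same sketch: the inferior competitor declines toward zero, matching the qualitative outcomes the article describes.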
Interspecific competition
[ "Biology" ]
3,047
[ "Biological interactions", "Ethology", "Behavior", "nan" ]
6,920,598
https://en.wikipedia.org/wiki/Xtracycle
Xtracycle is the name of a company, and the name commonly used for the variety of load-carrying bicycle that results from use of the company's products: the FreeRadical kit. Web forums and blogs often use the shorthand Xtrabike, Xtra, or simply X to refer to either the FreeRadical extension or the entire extended bicycle. History and spin-offs The FreeRadical was conceived by Ross Evans at Stanford University and developed during his work in the mid-1990s managing a "Bikes Not Bombs" project in Nicaragua, where having a bicycle enhances a person's employment opportunities. In 1998 Evans and his friend Kipchoge Spencer created Xtracycle Inc to manufacture and market the invention, as well as a nonprofit organization, Worldbike, devoted to encouraging a bicycle-centric lifestyle and culture. Although the FreeRadical is itself an aftermarket bike accessory, its growing acceptance has sparked an Xtracycle aftermarket not formally connected with Xtracycle Inc: varieties of specialized kickstands, electric-assist motors, and even bike-mounted blenders have come to market, even though each requires the prior purchase of a FreeRadical or other Xtracycle-compatible frame to function properly. Big Dummy Xtracycle Inc has worked with various bicycle manufacturers to build purpose-built extended bicycles compatible with their accessories. The first to produce and market an integrated Xtracycle frameset was Surly Bikes with the Big Dummy. Xtracycle Inc continues to form similar agreements with manufacturers in all price ranges, with the goal of making the Xtracycle less of a niche product and more mainstream. The company is also working on FreeRadical attachments sized for children's and youths' bicycles. Other applications for the FreeRadical have included linking two Xtracycles to support a mobile stage for use in parades and street fairs, and a computerized chalk-powder printer device mounted on an Xtracycle that leaves a dot-matrix trail of messages on the street. Open source In 2008, Xtracycle put their longtail bike frame specifications online as part of a project to open source their longtail frame design. The company created a Longtail Standard and logo to allow vendors to design their products to fit the Xtracycle FreeRadical ecosystem. The "Longtail Technology" logo can be used on bikes, accessories or packaging. The open sourcing of the patented technology was meant to stimulate the cargo bike movement while developing a standard for "longtail" frames and accessories. Several frame and accessory makers have adopted the standards, while others have developed competing and incompatible long-frame cargo bike designs. However, the documents are no longer freely available, and now require an agreement with Xtracycle first. Products FreeRadical The FreeRadical is a frame extender for a bicycle. Radish The Radish, launched by Xtracycle in 2009, is a production long-tail bicycle with a low-standover-height frame and matching FreeRadical. EdgeRunner The EdgeRunner, launched in 2013, is a second-generation cargo bicycle with a 20-inch rear wheel. The EdgeRunner has been called the "Best longtail ever. No contest." CargoJoe The CargoJoe, launched in 2013, is a folding cargo bike developed in a partnership between Xtracycle and Tern. Sidecar In 2011 Xtracycle created a sidecar for cargo transport that can carry up to 250 pounds. 
See also Bicycle Luggage carrier Electric bicycle Cargo bike Outline of cycling Modular design Open-hardware vehicle References External links Official site links to cargo-carrying bikes Bike Hugger Bettie, a Sport Utility Bike project using the Xtracycle Surly's page describing the Big Dummy frameset Xtracycle Gallery—Gallery of Xtracycle pictures Cycle types Modular design
Xtracycle
[ "Engineering" ]
781
[ "Systems engineering", "Design", "Modular design" ]
6,920,898
https://en.wikipedia.org/wiki/Ernst%20G.%20Straus
Ernst Gabor Straus (February 25, 1922 – July 12, 1983) was a German-American mathematician of Jewish origin who helped found Euclidean Ramsey theory and the theory of the arithmetic properties of analytic functions. His extensive list of co-authors includes Albert Einstein, Paul Erdős, Richard Bellman, Béla Bollobás, Sarvadaman Chowla, Ronald Graham, Lee Albert Rubel, Mathukumalli V. Subbarao, László Lovász, Carl Pomerance, Moshe Goldberg, and George Szekeres. Biography Straus was born in Munich on February 25, 1922, the youngest of five children (Isa, Hana, Peter, Gabriella) of a prominent Zionist attorney, Elias (Eli) Straus, and his wife Rahel Straus, a medical doctor and feminist. Ernst Gabor Straus was recognized as a mathematical prodigy from a very young age. Following the death of his father, the family fled the Nazi regime for Palestine in 1933, and Straus was educated at the Hebrew University in Jerusalem. Although he never received an undergraduate degree, Straus began graduate studies at Columbia University in New York, earning a PhD in 1948 under F. J. Murray. Two years later, he became the assistant of Albert Einstein. After a three-year stint at the Institute for Advanced Study, Straus took a position at the University of California, Los Angeles, which he kept for the rest of his life. Straus died on July 12, 1983, of heart failure. Straus's interests ranged widely over his career, beginning with his early work on relativity with Einstein and continuing with deep work in analytic number theory, extremal graph theory, and combinatorics. One of his best-known contributions in popular mathematics is the Erdős–Straus conjecture: that for every integer n ≥ 2, the fraction 4/n can be written as a three-term Egyptian fraction, that is, as a sum of three unit fractions 4/n = 1/x + 1/y + 1/z with positive integers x, y, z. See also Illumination problem Notes References Goldberg, Moshe. "Ernst G. Straus (1922–1983)". Linear Algebra and Its Applications 64 (1985), 1–19. Obituary with List of works 1922 births 1983 deaths Combinatorialists Institute for Advanced Study visiting scholars Scientists from Munich Israeli emigrants to the United States German people of Hungarian-Jewish descent Jewish emigrants from Nazi Germany to Mandatory Palestine 20th-century American mathematicians 20th-century German mathematicians
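As a minimal illustration of the Erdős–Straus conjecture mentioned in the article above, the following Python sketch brute-forces a three-term unit-fraction expansion of 4/n using exact rational arithmetic. The function name and the derivation of the search bounds are choices made here for demonstration; they are not part of any standard library or of Straus's own work.

```python
# Brute-force search for 4/n = 1/x + 1/y + 1/z (Erdős–Straus conjecture).
# Exact arithmetic via fractions.Fraction avoids floating-point error.
from fractions import Fraction
from math import floor

def egyptian_three(n):
    """Return (x, y, z) with 1/x + 1/y + 1/z == 4/n and x <= y <= z,
    or None if no such triple exists.  The bounds on x and y below are
    exhaustive for ordered solutions, so the search is complete."""
    target = Fraction(4, n)
    # With x <= y <= z: 1/x < target and 1/x >= target/3.
    for x in range(floor(1 / target) + 1, floor(3 / target) + 1):
        r = target - Fraction(1, x)
        if r <= 0:
            continue
        # With y <= z: 1/y < r and 1/y >= r/2.
        for y in range(max(x, floor(1 / r) + 1), floor(2 / r) + 1):
            s = r - Fraction(1, y)
            if s > 0 and s.numerator == 1:   # s must be a unit fraction
                return x, y, s.denominator
    return None

# The conjecture asserts a solution exists for every integer n >= 2.
for n in range(2, 101):
    assert egyptian_three(n) is not None, n
print(egyptian_three(5))   # (2, 4, 20): 1/2 + 1/4 + 1/20 == 4/5
```

For example, egyptian_three(5) returns (2, 4, 20), since 1/2 + 1/4 + 1/20 = 4/5; the conjecture itself, that the search never returns None for any n ≥ 2, remains open.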
Ernst G. Straus
[ "Mathematics" ]
477
[ "Combinatorialists", "Combinatorics" ]
6,921,014
https://en.wikipedia.org/wiki/Noyes%20Laboratory%20of%20Chemistry
The William Albert Noyes Laboratory of Chemistry, located on the campus of the University of Illinois Urbana-Champaign at 505 S. Mathews Avenue in Urbana, Illinois, United States, was built in 1902 as the "New Chemical Laboratory", and was designed by Nelson Strong Spencer in the Richardsonian Romanesque style. Founded in 1867, the Chemistry Department was the first department of the university to move into its own building in 1878. When the department outgrew that building, department head Arthur W. Palmer convinced the state legislature to build a new lab, with 77,884 square feet of usable space, at a cost of under $100,000. Ten years later, when more space was needed, the east wing, with 86,396 square feet of additional space, was built in 1915–16 at a cost of $250,000. The building then housed the largest chemistry department in the United States at the time. At various times, the buildings also housed the departments of Biochemistry, Chemical Engineering and Bacteriology, as well as the Illinois Water Survey. In 1939 the building was dedicated in honor of the influential UI chemist William A. Noyes. It was designated a National Historic Chemical Landmark by the American Chemical Society in 2002, in recognition of the many contributions to the chemical sciences that have been made there over the last 100 years. In 1930, James McLaren White's Chemistry Annex Building was completed and connected to the Noyes Lab Building underground. It added 39,000 square feet at a cost of $335,000, and in 1951 the East Chemistry Annex was added to the complex at a cost of $5.9 million. References Notes External links Educational institutions established in 1902 Buildings and structures of the University of Illinois Urbana-Champaign University and college laboratories in the United States Chemistry laboratories 1902 establishments in Illinois
Noyes Laboratory of Chemistry
[ "Chemistry" ]
367
[ "Chemistry laboratories", "Chemistry organization stubs" ]
6,921,017
https://en.wikipedia.org/wiki/Algebraic%20Geometry%20%28book%29
Algebraic Geometry is an algebraic geometry textbook written by Robin Hartshorne and published by Springer-Verlag in 1977. Importance It was the first extended treatment of scheme theory written as a text intended to be accessible to graduate students. Contents The first chapter, titled "Varieties", deals with the classical algebraic geometry of varieties over algebraically closed fields. This chapter uses many classical results in commutative algebra, including Hilbert's Nullstellensatz, with the books by Atiyah–Macdonald, Matsumura, and Zariski–Samuel as usual references. The second and third chapters, "Schemes" and "Cohomology", form the technical heart of the book. The last two chapters, "Curves" and "Surfaces", respectively explore the geometry of 1- and 2-dimensional objects, using the tools developed in chapters 2 and 3. Notes References Graduate Texts in Mathematics 1977 non-fiction books Algebraic geometry Mathematics textbooks Monographs
Algebraic Geometry (book)
[ "Mathematics" ]
195
[ "Fields of abstract algebra", "Algebraic geometry" ]
6,921,352
https://en.wikipedia.org/wiki/John%20Monteith
John Lennox Monteith DSc, FRS (3 September 1929 – 20 July 2012) was a British scientist who pioneered the application of physics to biology. He was an authority in the related fields of water management for agricultural production, soil physics, micrometeorology, transpiration, and the influence of the natural environment on field crops, horticultural crops, forestry, and animal production. Research His pioneering work with Howard Penman on evapotranspiration is applied worldwide as the Penman–Monteith equation. It predicts evapotranspiration and is recommended by the Food and Agriculture Organization for calculating irrigation quantities. Monteith's research on the role of the environment in agriculture, the physics of crop microclimate, the physiology of crop growth and yield, radiation climatology, heat balance in animals, and instrumentation for measuring physical and physiological variables in agriculture has been published in journals throughout the world. He was President of the Royal Meteorological Society from 1978 to 1980. In his presidential address in 1980 he advised colleagues that unless they could understand how crop yields were determined by weather events, they would have little hope of predicting how crop yields would vary as a result of global warming and elevated carbon dioxide levels. When he retired in 1992, a conference on resource capture by crops was organised, and a further conference was held in 2008. The American Society of Agronomy also organised a symposium in his honour in 1996. An obituary by researchers at Nottingham noted that it was "impossible to quantify" the impact of his research, but that his influence was major, judging by the large number of researchers he supervised who went on to hold senior positions in organisations around the world. Career 1954 Rothamsted Experimental Station, Harpenden, Herts, UK 1967 University of Nottingham, School of Agriculture, UK 1987 International Crops Research Institute for the Semi-Arid Tropics, Hyderabad, India Senior visiting fellow of NERC Awards and honours Elected Fellow of the Royal Society, 1971 Elected Fellow of the Royal Society of Edinburgh, 1972 Awarded Rank Prize for Human and Animal Nutrition and Crop Husbandry, 1989, for "his elucidation of the physical control of determining crop growth." Honorary Doctor of Science, University of Edinburgh, 1989 Symons Gold Medal of the Royal Meteorological Society, 1994 References 1929 births British scientists Environmental scientists Fellows of the Royal Society of Edinburgh Fellows of the Royal Society Presidents of the Royal Meteorological Society British hydrologists Academics of the University of Nottingham 2012 deaths
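For reference, a commonly quoted "big-leaf" form of the Penman–Monteith equation mentioned in the article above is sketched in LaTeX below. This form and the symbol names follow widespread usage, recalled from memory; they should be checked against a standard source such as the FAO-56 guidelines rather than treated as Monteith's exact formulation.

```latex
% A commonly quoted "big-leaf" form of the Penman-Monteith equation
% (recalled from common usage; verify against FAO-56 before relying on it).
\[
  \lambda E =
  \frac{\Delta (R_n - G) + \rho_a c_p \, (e_s - e_a)/r_a}
       {\Delta + \gamma \left( 1 + r_s / r_a \right)}
\]
% \lambda E   : latent heat flux (energy equivalent of evapotranspiration)
% \Delta      : slope of the saturation vapour-pressure curve
% R_n, G      : net radiation and soil heat flux
% \rho_a, c_p : air density and specific heat of air
% e_s - e_a   : vapour-pressure deficit of the air
% r_a, r_s    : aerodynamic and surface resistances
% \gamma      : psychrometric constant
```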
John Monteith
[ "Environmental_science" ]
501
[ "Environmental scientists", "British environmental scientists" ]
6,921,893
https://en.wikipedia.org/wiki/Tight%20closure
In mathematics, in the area of commutative algebra, tight closure is an operation defined on ideals in positive characteristic. It was introduced by Melvin Hochster and Craig Huneke. Let R be a commutative Noetherian ring containing a field of characteristic p > 0. Hence p is a prime number. Let I be an ideal of R. The tight closure of I, denoted by I*, is another ideal of R containing I. The ideal I* is defined as follows. An element z belongs to I* if and only if there exists a c in R, where c is not contained in any minimal prime ideal of R, such that c z^q belongs to I^{[q]} for all sufficiently large q = p^e. If R is reduced, then one can instead consider all q = p^e. Here I^{[q]} is used to denote the ideal of R generated by the q-th powers of elements of I, called the q-th Frobenius power of I. An ideal is called tightly closed if I* = I. A ring in which all ideals are tightly closed is called weakly F-regular (F for Frobenius). A previous major open question in tight closure was whether the operation of tight closure commutes with localization, and so there is the additional notion of F-regular, which says that all ideals of the ring are still tightly closed in localizations of the ring. Brenner and Monsky found a counterexample to the localization property of tight closure. However, there is still an open question of whether every weakly F-regular ring is F-regular. That is, if every ideal in a ring is tightly closed, is it true that every ideal in every localization of that ring is also tightly closed? References Commutative algebra Ideals (ring theory)
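As a small worked illustration of the definitions above, the following LaTeX sketch spells out a Frobenius power for a two-generator ideal and records the classical Fermat-cubic example of a nontrivial tight closure. The example is quoted from memory from the standard literature on tight closure (Hochster–Huneke and later expositions) and should be verified there.

```latex
% Frobenius powers: for a two-generator ideal and q = p^e,
% I^{[q]} is generated by the q-th powers of the generators:
\[
  I = (x, y) \subseteq R
  \quad\Longrightarrow\quad
  I^{[q]} = (x^q, y^q), \qquad q = p^e,
\]
% which is in general much smaller than the ordinary power I^q.
%
% Classical example (quoted from memory; verify in the literature):
% in the Fermat cubic ring of characteristic p \neq 3,
\[
  R = \mathbb{F}_p[x, y, z]/(x^3 + y^3 + z^3),
  \qquad
  z^2 \in (x, y)^* \quad\text{but}\quad z^2 \notin (x, y),
\]
% so the ideal (x, y) is not tightly closed in R.
```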
Tight closure
[ "Mathematics" ]
299
[ "Fields of abstract algebra", "Commutative algebra" ]
6,922,667
https://en.wikipedia.org/wiki/Model-based%20definition
Model-based definition (MBD), sometimes called digital product definition (DPD), is the practice of using 3D models (such as solid models, 3D PMI and associated metadata) within 3D CAD software to define (provide specifications for) individual components and product assemblies. The types of information included are geometric dimensioning and tolerancing (GD&T), component-level materials, assembly-level bills of materials, engineering configurations, design intent, etc. By contrast, other methodologies have historically required the accompanying use of 2D engineering drawings to provide such details. Use of the 3D digital data set Modern 3D CAD applications allow for the insertion of engineering information such as dimensions, GD&T, notes and other product details within the 3D digital data set for components and assemblies. MBD uses such capabilities to establish the 3D digital data set as the source of these specifications and the design authority for the product. The 3D digital data set may contain enough information to manufacture and inspect a product without the need for engineering drawings, which have traditionally contained such information. In many instances, use of some information from the 3D digital data set (e.g., the solid model) allows for rapid prototyping of the product via various processes, such as 3D printing. A manufacturer may be able to feed 3D digital data directly to manufacturing devices such as CNC machines to manufacture the final product. Limited Dimension Drawing A Limited Dimension Drawing (LDD), sometimes called a Reduced Dimension Drawing, is a 2D drawing that contains only critical information, with a note stating that all missing information is to be taken from an associated 3D model. For companies in transition to MBD from traditional 2D documentation, a Limited Dimension Drawing allows for referencing 3D geometry while retaining a 2D drawing that can be used in existing corporate procedures. Only limited information is placed on the 2D drawing, and a note is then added to notify manufacturers that they must build from the 3D model for any dimensions not found on the 2D drawing. Standardization In 2003, ASME published ASME Y14.41 Digital Product Definition Data Practices, which was revised in 2012 and again in 2019. The standard provides for the use of many MBD aspects, such as GD&T display and other annotation behaviors, within a 3D modelling environment. ISO 16792 standardizes MBD within the ISO standards, sharing many similarities with the ASME standard. Other standards, such as ISO 1101 and AS9100, also make use of MBD. In 2013, the United States Department of Defense released MIL-STD-31000 Revision A to codify the use of MBD as a requirement for technical data packages (TDP). See also ASME Y14.41 CAD standards References External links Model-centric Design, Design World, 2008 Computer-aided design Computer-aided engineering Product lifecycle management Computer-aided manufacturing software Management cybernetics
Model-based definition
[ "Engineering" ]
573
[ "Computer-aided design", "Design engineering", "Industrial engineering", "Computer-aided engineering", "Construction" ]
6,922,834
https://en.wikipedia.org/wiki/Ogiri
Ogiri, also called Ogiri Ijebu, is a flavoring made of fermented oil seeds, such as sesame seeds or egusi seeds. The process and product are similar to iru or douchi. Its smell is similar to that of cheese, miso, or stinky tofu. Ogiri is best known in West Africa and was first made by the Yoruba people, among whom it remains popular. Ogiri among the Igbo people of Nigeria is different, being more similar to iru pete. Ogiri made in the traditional West African way contains egusi seeds, sesame seeds, salt, and water. References Fermented foods African cuisine
Ogiri
[ "Biology" ]
140
[ "Fermented foods", "Biotechnology products" ]
6,922,963
https://en.wikipedia.org/wiki/Mad%20honey
Mad honey is honey that contains grayanotoxins. The dark, reddish honey is produced from the nectar and pollen of genus Rhododendron and has moderately toxic and narcotic effects. Mad honey is produced principally in Nepal and Turkey, where it is used both as a traditional medicine and a recreational drug. In the Hindu Kush Himalayan range, it is produced by Himalayan giant honey bees (Apis laboriosa). Honey hunting in Nepal has been traditionally performed by the Gurung people. The honey can also rarely be found in the eastern United States. Historical accounts of mad honey are found in Ancient Greek texts. The Greek military leader Xenophon wrote in his Anabasis about the effects of mad honey on soldiers in 401 BCE. In 65 BCE, during the Third Mithridatic War, King Mithridates used mad honey as a biological weapon against Roman soldiers under General Pompey. During the 18th century, mad honey was imported to Europe, where it was added to alcoholic beverages. Historical accounts Historical accounts of mad honey stretch back over two millennia. Early accounts by Ancient Greek historians noted the properties of the honey and its floral origins. There are a few accounts of its use as a biological weapon, usually as experienced by foraging soldiers. The 6th-century BCE Homeric Hymn to Hermes, part of the Homeric Hymns, may indirectly allude to the use of mad honey. The text refers to the melissai (bee-oracles) of Delphi's Mount Parnassus who could prophesy only after ingesting meli chloron (green honey), and may have been referring to Pythia, the Oracle of Delphi. The Greek military leader and historian Xenophon wrote an account of a 401 BCE incident involving mad honey in his work Anabasis about the expedition of the Ten Thousand. In his account, he describes how Greek soldiers traveling near Trabzon (now part of Turkey) on the Black Sea ate mad honey and then became disoriented, suffering vomiting and diarrhea and becoming unable to stand. The soldiers recovered the following day. Roman and Greek authorities believed mad honey could cure insanity. Aristotle noted that "at Trapezus honey from boxwood has a heavy scent, and they say that healthy men go mad, but that epileptics are cured by it immediately". Roman naturalist Pliny the Elder referred to mad honey as meli mænomenon and was among the first to recognize that the toxicity was linked to oleander, azalea, and Rhododendron species. Historians also noted that mad honey's potency or intoxicating effects varied seasonally or cyclically. Pliny noted that the honey was most hazardous after wet springs, while Greek physician Pedanius Dioscorides noted that the honey was only dangerous in certain seasons. Mad honey was used as an early biological weapon in the Black Sea region. In 65 BCE, during the Third Mithridatic War, King Mithridates staged a strategic withdrawal from Roman soldiers under General Pompey. Possibly under the counsel of Greek botanist Krateuas, Mithridates had the withdrawing soldiers place combs of mad honey on their path. The Roman soldiers who ate the honey succumbed to mad honey intoxication and were slain. The Greek geographer Strabo described the incident as having wiped out three maniples of Romans, which could mean anywhere from 480 to 1,800 soldiers. Other incidents of honey poisonings may have been caused by mad honey. In 946, allies of Queen Olga of Kiev sent several tons of fermented honey to her Russian foes; 5,000 Russians were massacred as they lay in a stupor. 
Later, in 1489, in the same region, Tatars consumed casks of mead made using mad honey that had been left in an abandoned camp; 10,000 of the Tatars were slaughtered by Russians. During the 18th century, around 25 tons of mad honey were exported from the Black Sea Region to Europe every year. It was known then in France as miel fou (crazy honey) and was added to beer and other alcoholic drinks to give them extra potency. American botanist Benjamin Smith Barton observed that beekeepers in Pennsylvania became intoxicated by mad honey. They added the honey to liquor and sold the concoction in New Jersey as an elixir they named 'metheglin' (mead). Barton noted that the inebriation began pleasantly, but could suddenly turn "ferocious". Former Confederate surgeon J. Grammer described in 1875, in Gleanings in Bee Culture, several incidents of mad honey intoxication involving soldiers from the South. The chemical compound andromedotoxin (grayanotoxin I) was isolated from Trabzon honey by German scientist P. C. Plugge in 1891. The 1929 edition of the Encyclopædia Britannica dismissed the notion of poison honey as described in Greek and Roman texts, concluding that "in all likelihood the symptoms described by these old writers were due to overeating" or that the honey had been eaten on empty stomachs. Prevalence and harvesting Rhododendron species and other plants in the family Ericaceae produce grayanotoxins. Honey made from the nectar contains pollen from these plants as well as the grayanotoxins. Mad honey is darker and redder than other honeys, and has a slightly bitter taste. Due to its reddish color, it is sometimes called rose of the forest honey. Mad honey is produced in specific world regions, notably the Black Sea Region of Turkey and Nepal. Small-scale producers of mad honey typically harvest honey from a small area or single hive, producing a honey containing a significant concentration of grayanotoxins. In contrast, large-scale honey production often mixes honey gathered from different locations, diluting the concentration of any contaminated honey. A Caucasus beekeeper noted in a 1929 article in Bee World that the potency of the honey could vary across a single honeycomb and that the most dangerous mad honey was produced at high elevations during dry spells. In Turkey In Turkey, mad honey is known as deli bal and is used as a recreational drug and traditional medicine. It is most commonly made from the nectar of Rhododendron luteum and Rhododendron ponticum in the Caucasus region. Beekeepers in the Kaçkar Mountains have produced mad honey for centuries. In the Hindu Kush Himalayan region Mad honey is produced in the foothills of the Himalayas by Himalayan giant honey bees (Apis laboriosa). In southern Asia, Apis laboriosa nests are found mostly in the Hindu Kush Himalayan region. The bees produce mad honey in the spring when plants from the family Ericaceae, such as rhododendrons, are in bloom. Apis laboriosa nests consist of single, open combs with large bases reaching . The hives are built on tree limbs or steep, southeast or southwest-facing rocky cliffsides, at elevations of , often situated underneath overhanging ledges where they are protected from the elements. Honey gathering In central Nepal and northern India, the Gurung people have traditionally gathered the honey for centuries, scaling cliffsides to reach the hives. Residents collect the honey twice a year, once in late spring and once in the late fall. 
The honey hunters use rope ladders with wooden rungs to access the nests and set fires underneath to smoke out the bees. Apis laboriosa populations in Nepal have experienced dramatic declines due to overharvesting, hydroelectric dam and road construction, and the loss of water sources. Population decline is also attributed to deforestation and landslides. In Nepal, there has been an annual 70% decline in honeybee populations in Himalayan cliffs. A specialist with the International Centre for Integrated Mountain Development reported in 2022 that there had been a decrease both in the number of cliffs that host bees and in the number of colonies each cliff supports. Recommendations for sustainable honey harvesting include leaving half of the newly formed combs undisturbed and only harvesting portions of the combs. In other regions United States Mad honey is rarely produced in the United States. According to Texas A&M professor Vaughn Bryant, an expert on honey, mad honey is produced in the Appalachian Mountains in the Eastern U.S. when a late cold snap kills most flowers but not rhododendrons. Honeys produced from mountain laurel (Kalmia latifolia) and sheep laurel (Kalmia angustifolia) also contain grayanotoxins and are potentially deadly if large quantities are eaten. Europe In Europe, honey containing grayanotoxins is produced from Rhododendron ferrugineum, which occurs in the Alps and Pyrenees. However, no grayanane intoxication cases have been reported for honeys from the European Union. Physiological effects Consumption of mad honey can cause a poisonous reaction called grayanotoxin poisoning, mad honey disease, honey intoxication, or rhododendron poisoning. The honey is the most common cause of grayanotoxin poisoning. Bees are not affected by grayanotoxins. In humans and some other animals, grayanotoxins act on the central nervous system, binding to sodium ion channels and preventing them from closing. This results in low blood pressure (hypotension) and reduced heart rates (bradycardia). Corresponding effects include lightheadedness, blurred vision, dizziness, and respiratory difficulty. In some cases, blood pressure may be reduced to potentially dangerous levels, causing nausea, fainting, seizures, arrhythmia, atrioventricular blocks, muscle paralysis, and loss of consciousness. The degree of mad honey intoxication depends on the quantity consumed as well as the concentration of grayanotoxins. It may act as a hypnotic, with milder symptoms including tingling sensations, numbness, dizziness, swooning, and giddiness. With stronger doses, the effects may include delirium, vertigo, nausea, psychedelic optical effects such as tunnel vision and whirling lights, hallucinations, and impaired speech where syllables and words are spoken out of sequence. The recovery time ranges from hours to days, but most symptoms typically subside after 12 hours. A 2015 systematic review of 1199 cases of mad honey intoxication found no reported deaths. Treatments for mad honey poisoning include atropine, adrenaline, and saline infusions. Usage Mad honey is most frequently produced and consumed in regions of Turkey and Nepal as a traditional medicine or recreational drug. It is used as a traditional medicine to treat sore throat, arthritis, diabetes, and hypertension. In the Turkish Black Sea Region it is used to treat indigestion, abdominal pain, gastritis, peptic ulcers, and the flu. 
For centuries, in the Caucasus, small amounts of Pontic azalea honey have been added to alcoholic drinks to amplify the intoxicating effect. In Turkey, a spoonful of mad honey is traditionally added to milk as a tonic. Mad honey was banned in South Korea in 2005. Mad honey is also thought to help with erectile dysfunction and increase sexual performance. Most cases of mad honey poisoning are experienced by middle-aged men. See also Bees and toxic chemicals A Haunting in Venice References Biological agents Deliriants Foodborne illnesses Honey Psychoactive drugs Traditional medicine Mithridates VI Eupator
Mad honey
[ "Chemistry", "Biology", "Environmental_science" ]
2,348
[ "Toxicology", "Biological agents", "Biological warfare", "Psychoactive drugs", "Neurochemistry" ]
6,923,669
https://en.wikipedia.org/wiki/Hammick%20reaction
The Hammick reaction, named after Dalziel Hammick, is a chemical reaction in which the thermal decarboxylation of α-picolinic (or related) acids in the presence of carbonyl compounds forms 2-pyridyl-carbinols. Using p-cymene as solvent has been shown to increase yields. Reaction mechanism Upon heating, α-picolinic acid will spontaneously decarboxylate, forming the so-called 'Hammick Intermediate' (3). This was initially thought to be an aromatic ylide, but is now believed to be a carbene. In the presence of a strong electrophile, such as an aldehyde or ketone, this species will undergo nucleophilic attack faster than proton transfer. After nucleophilic attack, intramolecular proton transfer yields the desired carbinol (6). The scope of the reaction is effectively limited to decarboxylating acids where the carboxyl group is α to the nitrogen (reactivity has been reported when the acids are located elsewhere on the molecule, but with low yields); thus suitable substrates are limited to the derivatives of α-picolinic acid, including the α-carboxylic acids of quinoline and isoquinoline. See also Quinonoid zwitterion References Addition reactions Carbon-carbon bond forming reactions Name reactions
Hammick reaction
[ "Chemistry" ]
286
[ "Coupling reactions", "Name reactions", "Carbon-carbon bond forming reactions", "Organic reactions" ]
6,924,337
https://en.wikipedia.org/wiki/Ebullioscopic%20constant
In thermodynamics, the ebullioscopic constant K_b relates molality to boiling point elevation. It is the ratio of the latter to the former: K_b = ΔT_b / (i·m), where ΔT_b is the boiling point elevation, i is the van 't Hoff factor, the number of particles the solute splits into or forms when dissolved, and m is the molality of the solution. A formula to compute the ebullioscopic constant is: K_b = R·M·T_b² / ΔH_vap, where R is the ideal gas constant, M is the molar mass of the solvent, T_b is the boiling point of the pure solvent in kelvin, and ΔH_vap is the molar enthalpy of vaporization of the solvent. Through the procedure called ebullioscopy, a known constant can be used to calculate an unknown molar mass. The term ebullioscopy means "boiling measurement" in Latin. This is related to cryoscopy, which determines the same value from the cryoscopic constant (of freezing point depression). This property of elevation of boiling point is a colligative property. It means that the property, in this case ΔT_b, depends on the number of particles dissolved into the solvent and not the nature of those particles. Values for some solvents See also Ebullioscope List of boiling and freezing information of solvents Boiling-point elevation Colligative properties References External links Ebullioscopic constant calculator AD Phase transitions
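As a worked example of the two relations above, the short Python sketch below estimates K_b for water from standard literature values (R = 8.314 J/(mol·K), M = 0.018015 kg/mol, T_b = 373.15 K, ΔH_vap ≈ 40,660 J/mol) and then uses it to predict a boiling point elevation; the 0.5 mol/kg NaCl case is an illustrative assumption.

```python
# Estimate the ebullioscopic constant of water, K_b = R * M * T_b**2 / dH_vap,
# then apply the boiling point elevation relation dT_b = i * K_b * m.
R = 8.314         # ideal gas constant, J/(mol*K)
M = 0.018015      # molar mass of water, kg/mol
T_b = 373.15      # boiling point of pure water, K
dH_vap = 40660.0  # molar enthalpy of vaporization of water, J/mol

K_b = R * M * T_b**2 / dH_vap
print(f"K_b(water) = {K_b:.3f} K*kg/mol")  # about 0.51, matching tabulated values

# Example: 0.5 mol/kg NaCl, which dissociates into two particles (i = 2)
i, m = 2, 0.5
dT_b = i * K_b * m
print(f"boiling point elevation = {dT_b:.2f} K")
```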
Ebullioscopic constant
[ "Physics", "Chemistry" ]
274
[ "Thermodynamics stubs", "Physical phenomena", "Phase transitions", "Phases of matter", "Critical phenomena", "Thermodynamics", "Statistical mechanics", "Physical chemistry stubs", "Matter" ]
6,924,404
https://en.wikipedia.org/wiki/Lake%20Natoma
Lake Natoma is a small lake in the Western United States, along the lower American River, between Folsom and Nimbus Dams in Sacramento County, California. The lake is located within the Folsom Lake State Recreation Area, which maintains the facilities and bike trails surrounding the lake. Lake Natoma is located east of Sacramento and has 500 surface acres of water. The total length of Lake Natoma is . Lake Natoma is a recreational lake for rowing, kayaking, and swimming; powerboats are permitted with a "no wake" restriction. It is home to the Sacramento State Aquatic Center, and regularly hosts the West Coast College Rowing Championships, the Pac-12 Conference rowing championships, and, every four years, the Intercollegiate Rowing Association Championships. The Lake Natoma Four Bridges Half Marathon is held each October at the lake. Lake Natoma includes the historic Black Miners Bar area in Folsom, the site of a gold rush era African-American mining camp. The Folsom Powerhouse State Historic Park overlooks Lake Natoma in the city of Folsom. It is a California State Historical site, preserving an early hydroelectric power station. A paved cycling and jogging trail encircles the lake, along with unpaved equestrian trails. The Folsom South Canal Trail also begins at the lake. Several parking lots and boat launching ramps are located around the lake. History This was the site of many gold mining operations in the 1800s. In the 1950s, after Folsom Dam was constructed as part of the Central Valley Project, work began on Nimbus Dam, which would manage water released from Folsom Dam. In addition to maintaining water flow, the lake provides water to irrigation canals and generates hydroelectric power. After Lake Natoma and Folsom Lake were built, the Bureau of Reclamation was given control of operation for both reservoirs and dams. Around 1956, the Bureau of Reclamation and the California Department of Parks and Recreation, also known as State Parks, formed an agreement that State Parks would be responsible for recreational activities on Lake Natoma, as well as Folsom Lake. In 1979, the general plan for the State Recreational Area at Folsom Lake, including Lake Natoma, was amended three times before it was approved. As part of the 1979 General Plan amendment, the Folsom Powerhouse State Historic Park became a separate unit, not a part of the Folsom Unit. In 2002, multiple meetings were held for public input and for interested stakeholders to plan and prepare recreational capacity adequate for the growing population; the population had increased 62% since the General Plan was accepted in 1979. Physical features Surrounding this narrow lake are foothills, plateaus, cliffs and river canyons. A dense riparian ecosystem encircles the lake. The Lake Natoma Bluffs stand and line the lake from the Black Miners Bar to the Mississippi Bar. Recreation The area is accessible via US Highway 50. Since these reservoirs are located in the metropolitan area, the State Recreational Area (SRA) tries to create a habitat suitable for the wildlife already living there, to have both "recreation and nature." In 2000, there were over 1.5 million visitors to the Folsom Lake State Recreation Area, including Lake Natoma and Folsom Lake. Water and land uses Lake Natoma was first ready for the public in 1958. The common water activities are kayaking, rowing, canoeing, swimming, water skiing, sailing, and fishing. Land activities include hiking, biking, picnicking, jogging, and horseback riding. 
Educational activities are also available, including information about historical sites near Lake Natoma and Folsom Lake, such as the common fish that spawn in the American River, the history of the California Gold Rush, and Native American life before the Europeans' arrival. Major facilities Boat launch The lake has three boat launches for powerboats, jetskis, and sailboats: two hard-surface launches and one gravel launch, each with enough room to turn around and with parking areas. Fishermen like to use the Black Miners Bar launch, as many fishing tournament events occur nearby. California State University Sacramento (CSUS) holds rowing classes here. Campgrounds There is only one campground open to the public on Lake Natoma: the Black Miners Bar Group Campground. Willow Creek Located at the Willow Creek inlet to Lake Natoma, this area is used for both land and water activities, including picnicking, birdwatching, fishing, and canoeing. Lake Natoma Trail is nearby. Aquatic Center for CSUS Where the Nimbus Dam ends, there is an Aquatic Center right on Lake Natoma that belongs to CSUS. This is where CSUS holds some of its aquatic classes, such as skiing and wind surfing. Bike path A bike path follows along the East Trail and West Trail from Nimbus Dam, lining Lake Natoma. Dirt trails Six miles of dirt trail lead to Nimbus Flat and Willow Creek. There are also dirt trails on both sides of Lake Natoma; one is six miles and the other is nine miles. Nimbus Fish Hatchery Under Nimbus Dam and Lake Natoma lies the Nimbus Fish Hatchery, which was built in 1955 by the U.S. Bureau of Reclamation and is operated by the California Department of Fish and Game, which also runs a visitor center here. The hatchery replicates spawning environments with a fish ladder that guides salmon and steelhead to spawn. For recreational fishing in northern and central California bodies of water, the hatchery produces 4 million Chinook salmon and over 400,000 steelhead trout per year. Folsom Powerhouse State Historic Park On the south bank of Lake Natoma lies the old Folsom Powerhouse State Historic Park, located in the City of Folsom at the intersection of Riley and Scott Streets. In 1895, Folsom Powerhouse became the first powerhouse to generate electricity for the city of Sacramento. The facility, which included the powerhouse and a dam, operated until 1952, when construction of the modern Folsom Dam hydroelectric facility was completed. The new Folsom Dam rendered the old Folsom Powerhouse obsolete, and the old dam that had been used in conjunction with the historic powerhouse was removed during construction of the modern facility. During its operation (1895–1952) the powerhouse delivered 11,000 volts of electricity over 22 miles to Sacramento. The historic site is listed in the National Register of Historic Places. There is a visitor center where the public can learn more about the history of the park. Animals Native animals known to live here are the mule deer, coyote, bobcat, mountain lion, quail, bald and golden eagle, heron, egret, western pond turtle, and California horned lizard. A few common fish in Lake Natoma are catfish and carp. Aquatic animals in Lake Natoma are usually tolerant of warm water and water with low oxygen levels. Other fish found here are bass, bluegill, and green sunfish. 
Right under Nimbus Dam are steelhead, Chinook salmon, American shad, and Pacific lampreys. Bald eagles and golden eagles can be found around Folsom Lake and Lake Natoma for nesting; about six bald eagles and two golden eagles are observed annually. Both are protected under the Federal Bald Eagle Protection Act of 1940. Environmental issues Mercury Fish at Lake Natoma were found to have high levels of mercury in their tissue. A sample of 22 fish showed that mercury levels had approached or exceeded the guideline set by the U.S. Environmental Protection Agency of 3 micrograms Hg/g wet weight. Since there were too few samples, it is not known for sure if mercury is randomly distributed throughout Lake Natoma. The concentration of mercury increases as fish size increases, usually because of bioaccumulation; factors include length, weight, and age. For example, predators at the top of the food chain, such as largemouth bass, spotted bass, and white catfish, usually have higher concentrations. The California Office of Environmental Health Hazard Assessment (OEHHA) has developed a safe eating advisory for Lake Natoma, based on levels of mercury found in fish caught here. See also List of dams and reservoirs in California List of lakes in California External links Nimbus Dam fact sheet - United States Bureau of Reclamation References Further reading Saiki, M.K. (2004). Summary of total mercury concentrations in fillets of selected sport fishes collected during 2000-2003 from Lake Natoma, Sacramento County, California [Data Series 103]. Reston, VA: U.S. Department of the Interior, U.S. Geological Survey. Natoma Natoma American River (California) Central Valley Project Parks in the San Joaquin Valley Rowing venues in the United States Natoma
Lake Natoma
[ "Engineering" ]
1,809
[ "Irrigation projects", "Central Valley Project" ]
6,924,623
https://en.wikipedia.org/wiki/Heterogeneous%20ribonucleoprotein%20particle
Heterogeneous nuclear ribonucleoproteins (hnRNPs) are complexes of RNA and protein present in the cell nucleus during gene transcription and subsequent post-transcriptional modification of the newly synthesized RNA (pre-mRNA). The presence of the proteins bound to a pre-mRNA molecule serves as a signal that the pre-mRNA is not yet fully processed and therefore not ready for export to the cytoplasm. Since most mature RNA is exported from the nucleus relatively quickly, most RNA-binding proteins in the nucleus exist as heterogeneous ribonucleoprotein particles. After splicing has occurred, the proteins remain bound to spliced introns and target them for degradation. hnRNPs are also integral to the 40S subunit of the ribosome and therefore important for the translation of mRNA in the cytoplasm. However, hnRNPs also have their own nuclear localization sequences (NLS) and are therefore found mainly in the nucleus. Though it is known that a few hnRNPs shuttle between the cytoplasm and nucleus, immunofluorescence microscopy with hnRNP-specific antibodies shows nucleoplasmic localization of these proteins with little staining in the nucleolus or cytoplasm. This is likely because of their major role in binding to newly transcribed RNAs. High-resolution immunoelectron microscopy has shown that hnRNPs localize predominantly to the border regions of chromatin, where they have access to these nascent RNAs. The proteins involved in the hnRNP complexes are collectively known as heterogeneous ribonucleoproteins. They include protein K and polypyrimidine tract-binding protein (PTB), which is regulated by phosphorylation catalyzed by protein kinase A and is responsible for suppressing RNA splicing at a particular exon by blocking access of the spliceosome to the polypyrimidine tract. hnRNPs are also responsible for strengthening and inhibiting splice sites by making such sites more or less accessible to the spliceosome. Cooperative interactions between attached hnRNPs may encourage certain splicing combinations while inhibiting others. Role in cell cycle and DNA damage hnRNPs affect several aspects of the cell cycle by recruiting, splicing, and co-regulating certain cell cycle control proteins. Much of the hnRNPs' importance to cell cycle control is evidenced by their role as oncogenes, in which a loss of their functions results in various common cancers. Often, misregulation by hnRNPs is due to splicing errors, but some hnRNPs are also responsible for recruiting and guiding the proteins themselves, rather than just addressing nascent RNAs. BRCA1 hnRNP C is a key regulator of the BRCA1 and BRCA2 genes. In response to ionizing radiation, hnRNP C partially localizes to the site of DNA damage, and when depleted, S-phase progression of the cell is impaired. Additionally, BRCA1 and BRCA2 levels fall when hnRNP C is lost. BRCA1 and BRCA2 are crucial tumor-suppressor genes which are strongly implicated in breast cancers when mutated. BRCA1 in particular causes G2/M cell cycle arrest in response to DNA damage via the CHEK1 signaling cascade. hnRNP C is important for the proper expression of other tumor suppressor genes, including RAD51 and BRIP1, as well. Through these genes, hnRNP C is necessary to induce cell-cycle arrest in response to DNA damage by ionizing radiation. HER2 HER2 is overexpressed in 20-30% of breast cancers and is commonly associated with poor prognosis. It is therefore an oncogene whose differently spliced variants have been shown to have different functions. 
Knocking down hnRNP H1 was shown to increase the amount of the oncogenic variant Δ16HER2. HER2 is an upstream regulator of cyclin D1 and p27, and its overexpression leads to the deregulation of the G1/S checkpoint. p53 hnRNPs also play a role in the DNA damage response in coordination with p53. hnRNP K is rapidly induced after DNA damage by ionizing radiation. It cooperates with p53 to induce the activation of p53 target genes, thus activating cell-cycle checkpoints. p53 itself is an important tumor-suppressor gene sometimes known by the epithet "the guardian of the genome." hnRNP K's close association with p53 demonstrates its importance in DNA damage control. p53 regulates a large group of RNAs that are not translated into protein, called large intergenic noncoding RNAs (lincRNAs). p53 suppression of genes is often carried out by a number of these lincRNAs, which in turn have been shown to act through hnRNP K. Through physical interactions with these molecules, hnRNP K is targeted to genes and transmits p53 regulation, thus acting as a key repressor within the p53-dependent transcriptional pathway. Functions hnRNPs serve a variety of processes in the cell, some of which include: Preventing the folding of pre-mRNA into secondary structures that may inhibit its interactions with other proteins. Possible association with the splicing apparatus. Transport of mRNA out of the nucleus. The association of a pre-mRNA molecule with an hnRNP particle prevents the formation of short secondary structures dependent on base pairing of complementary regions, thereby making the pre-mRNA accessible for interactions with other proteins. CD44 Regulation hnRNP has been shown to regulate CD44, a cell-surface glycoprotein, through splicing mechanisms. CD44 is involved in cell-cell interactions and has roles in cell adhesion and migration. Splicing of CD44 and the functions of the resulting isoforms are different in breast cancer cells, and knockdown of hnRNP reduced both cell viability and invasiveness. Telomeres Several hnRNPs interact with telomeres, which protect the ends of chromosomes from deterioration and are often associated with cell longevity. hnRNP D associates with the G-rich repeat region of the telomeres, possibly stabilizing the region against secondary structures which would inhibit telomere replication. hnRNPs have also been shown to interact with telomerase, the protein responsible for elongating telomeres and preventing their degradation. hnRNPs C1 and C2 associate with the RNA component of telomerase, which improves its ability to access the telomere. Examples Human genes encoding heterogeneous nuclear ribonucleoproteins include: HNRNPA0, HNRNPA1, HNRNPA1L1, HNRNPA1L2, HNRNPA3, HNRNPA2B1 HNRNPAB HNRNPB1 HNRNPC, HNRNPCL1 HNRNPD (AUF1), HNRPDL HNRNPF HNRNPG (RBMX) HNRNPH1, HNRNPH2, HNRNPH3 HNRNPI (PTB) HNRNPK HNRNPL, HNRPLL HNRNPM HNRNPP2 (FUS/TLS) HNRNPR HNRNPQ (SYNCRIP) HNRNPU, HNRNPUL1, HNRNPUL2, HNRNPUL3 FMR1 See also Messenger RNP: complex between mRNA and protein(s) present in nucleus References Further reading Gene expression Ribonucleoproteins
Heterogeneous ribonucleoprotein particle
[ "Chemistry", "Biology" ]
1,606
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
6,924,749
https://en.wikipedia.org/wiki/Capping%20enzyme
A capping enzyme (CE) is an enzyme that catalyzes the attachment of the 5' cap to messenger RNA molecules that are in the process of being synthesized in the cell nucleus during the first stages of gene expression. The addition of the cap occurs co-transcriptionally, after the growing RNA molecule contains as little as 25 nucleotides. The enzymatic reaction is catalyzed specifically by the phosphorylated carboxyl-terminal domain (CTD) of RNA polymerase II. The 5' cap is therefore specific to RNAs synthesized by this polymerase rather than those synthesized by RNA polymerase I or RNA polymerase III. Pre-mRNA undergoes a series of modifications (5' capping, splicing, and 3' polyadenylation) before becoming mature mRNA that exits the nucleus to be translated into functional proteins; capping of the 5' end is the first of these modifications. Three enzymes, RNA triphosphatase, guanylyltransferase (or CE), and methyltransferase, are involved in the addition of the methylated 5' cap to the mRNA. Formation of the cap Capping is a three-step process that utilizes the enzymes RNA triphosphatase, guanylyltransferase, and methyltransferase. Through a series of three steps, the cap is added to the first nucleotide's 5' hydroxyl group of the growing mRNA strand while transcription is still occurring. First, RNA 5' triphosphatase hydrolyzes the 5' triphosphate group to make diphosphate-RNA. Then, the addition of GMP by guanylyltransferase produces the guanosine cap. Last, RNA methyltransferase transfers a methyl group to the guanosine cap to yield the 7-methylguanosine cap that is attached to the 5' end of the transcript. These three enzymes, collectively called the capping enzymes, are only able to catalyze their respective reactions when attached to RNA polymerase II, an enzyme necessary for the transcription of DNA into pre-mRNA. When this complex of RNA polymerase II and the capping enzymes is achieved, the capping enzymes are able to add the cap to the mRNA while it is produced by RNA polymerase II. Function Eukaryotic RNA must undergo a series of modifications in order to be exported from the nucleus and successfully translated into functional proteins, many of which are dependent on mRNA capping, the first mRNA modification to take place. 5' capping is essential for mRNA stability, enhancing mRNA processing, mRNA export and translation. After successful capping, an additional phosphorylation event initiates the recruitment of machinery necessary for RNA splicing, a process by which introns are removed to produce a mature mRNA. The addition of the cap onto mRNA protects the transcript from exonucleases that degrade unprotected RNA and assists in the nuclear export process so that the mRNA can be translated to form proteins. The function of the 5' cap is essential to the ultimate expression of the RNA. Structure The capping enzyme is part of the covalent nucleotidyl transferases superfamily, which also includes DNA ligases and RNA ligases. The enzymes of this superfamily share the following similarities: Conserved regions known as motifs I, II, III, IIIa, IV, V and VI, which are arranged in the same order and with similar spacing A lysine-containing motif KxDG (motif I) A covalent lysyl-NMP intermediate The capping enzyme is composed of two domains, a nucleotidyl transferase (NTase) domain and a C-terminal oligonucleotide binding (OB) domain. The NTase domain, conserved in capping enzymes, DNA and RNA ligases, is made up of 5 motifs: I, III, IIIa, IV and V. 
Motif I, or KxDG, is the active site where the covalent (lysyl)-N-GMP intermediate is formed. Both the NTase and OB domains undergo conformational changes that assist in the capping reaction. Capping enzymes are found in the nucleus of eukaryotic cells. Depending on the organism, the capping enzyme is either a monofunctional or bifunctional polypeptide. The guanylyltransferase (Ceg1) of Saccharomyces cerevisiae is encoded by the CEG1 gene and is composed of 459 amino acids (53 kD). The RNA triphosphatase (Cet1) is a separate 549 amino acid polypeptide (80 kD), encoded by the CET1 gene. The human capping enzyme is an example of a bifunctional polypeptide, which has both triphosphatase (N-terminal) and guanylyltransferase (C-terminal) domains. The human mRNA guanylyltransferase domain of the capping enzyme is composed of seven helices and fifteen β strands that are grouped into three, five and seven strands, arranged as antiparallel β sheets. The enzyme structure has three sub-domains, referred to as the hinge, base and lid. The GTP binding site is located between the hinge and base domains. The lid domain determines the conformation of the active site cleft, which consists of the GTP binding site, the phosphoamide-linking lysine and surrounding residues. The guanylyltransferase domain is linked to the triphosphatase domain via a 25 amino acid flexible loop structure. Impact of the enzyme's activity Splicing is dependent on the presence of the 7-methylguanosine cap. A defect in splicing can occur as a result of mutation(s) in the guanylyltransferase, which can inhibit enzyme activity, preventing the formation of the cap. However, the severity of the effect is dependent on the guanylyltransferase mutation. Furthermore, the guanylyltransferase relieves transcriptional repression mediated by NELF. NELF, together with DSIF, prevents transcription elongation. Thus, mutations in the enzyme can affect transcription elongation. See also RNA splicing mRNA (guanine-N7-)-methyltransferase Post-transcriptional modification Translation (biology) Ribosome Transcription RNA Polymerase II Eukaryotic transcription References Gene expression Enzymes Molecular evolution
Capping enzyme
[ "Chemistry", "Biology" ]
1,361
[ "Evolutionary processes", "Molecular evolution", "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
6,925,199
https://en.wikipedia.org/wiki/PABPII
PABPII, or polyadenine binding protein II, is a protein involved in the assembly of the polyadenine tail added to newly synthesized pre-messenger RNA (pre-mRNA) molecules during the process of gene transcription. It is a regulatory protein that controls the rate at which polyadenine polymerase (PAP) adds adenine nucleotides to the 3' end of the growing tail within the nucleus of the cell. In the absence of PABPII, PAP adds adenines slowly, typically only about 12. PABPII then binds to the short polyadenine tail and induces an acceleration in the rate of addition by PAP until the tail has grown to about 200 adenines long. The mechanism by which PABPII signals the termination of the polymerization reaction once the tail has reached its required length is not clearly understood. PABPII is distinct from the related protein PABPI in being localized to the cell nucleus rather than the cytoplasm. See also PABPN1 References Lodish H, Berk A, Matsudaira P, Kaiser CA, Krieger M, Scott MP, Zipursky SL, Darnell J. (2004). Molecular Cell Biology. WH Freeman: New York, NY. 5th ed. Gene expression
PABPII
[ "Chemistry", "Biology" ]
269
[ "Protein stubs", "Gene expression", "Biochemistry stubs", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
6,925,359
https://en.wikipedia.org/wiki/Tree%20well
A tree well, also known as a spruce trap, is the space around a tree under its branches that does not get the same amount of snow as the surrounding open space. This creates a void or area of loose snow below the branches and around the trunk that is dangerous to any hikers, snowshoers, skiers, snowboarders, and snowmobilers who fall into it. If someone lands in such a well, often as a result of a fall, it can be too deep for them to climb out through the surrounding loose snow before they are buried. Making the situation more dangerous, victims often fall into the well head-first, as the result of an accident that may leave them injured or unconscious. Formation A tree's branches shelter the area around its trunk from snowfall. If the snow is deep enough, there is a significant void or area of loose snow underneath the branches around the trunk. Such wells have been observed as deep as . Similar "wells" can also occur near rocks and along streams. Tree wells occur outside of groomed trails and represent a significant risk to those who ski or snowboard off-piste, in backcountry areas, but can also be found on the boundaries between groomed and ungroomed areas. The risk of encountering one is greatest during and immediately following a heavy snowstorm. Hazard Victims become trapped in tree wells and are unable to free themselves. In two experiments conducted in North America, 90% of volunteers temporarily placed in tree wells were unable to rescue themselves. If the snow is deep enough, the surrounding snow banks can collapse over them, depriving them of air. "If a partner is not there for immediate rescue, the skier or rider may die very quickly from suffocation – in many cases, he or she can die as quickly as someone can drown in water", according to the Tree Well and Snow Immersion Suffocation (SIS) Information website. Frequently victims fall into wells head-first, complicating recovery efforts. Often they are injured in the process, suffering joint dislocation or concussion. When fatal, this type of incident is termed a non-avalanche related snow immersion death (NARSID). In the United States, on average several skiers or snowboarders die each year from non-avalanche related snow immersion. References External links Deep Snow Safety video showing tree wells Skiing Snow Snowboarding Weather hazards Hazards of outdoor recreation
Tree well
[ "Physics" ]
494
[ "Weather", "Physical phenomena", "Weather hazards" ]
6,925,376
https://en.wikipedia.org/wiki/Cleavage%20stimulation%20factor
Cleavage stimulatory factor or cleavage stimulation factor (CstF or CStF) is a heterotrimeric protein, made up of the proteins CSTF1 (55 kDa), CSTF2 (64 kDa) and CSTF3 (77 kDa), totalling about 200 kDa. It is involved in the cleavage of the 3' signaling region from a newly synthesized pre-messenger RNA (pre-mRNA) molecule. CstF is recruited by cleavage and polyadenylation specificity factor (CPSF) and assembles into a protein complex on the 3' end to promote the synthesis of a functional polyadenine tail, which results in a mature mRNA molecule ready to be exported from the cell nucleus to the cytosol for translation. The amount of CstF in a cell is dependent on the phase of the cell cycle, increasing significantly during the transition from G0 phase to S phase in mouse fibroblasts and human splenic B cells. Genes CSTF1, CSTF2 or CSTF2T, CSTF3 References Further reading Lodish H, Berk A, Matsudaira P, Kaiser CA, Krieger M, Scott MP, Zipursky SL, Darnell J. (2004). Molecular Cell Biology. WH Freeman: New York, NY. 5th ed. External links Protein complexes Gene expression RNA-binding proteins
Cleavage stimulation factor
[ "Chemistry", "Biology" ]
284
[ "Protein stubs", "Gene expression", "Biochemistry stubs", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
6,925,469
https://en.wikipedia.org/wiki/Cleavage%20and%20polyadenylation%20specificity%20factor
Cleavage and polyadenylation specificity factor (CPSF) is involved in the cleavage of the 3' signaling region from a newly synthesized pre-messenger RNA (pre-mRNA) molecule in the process of gene transcription. In eukaryotes, messenger RNA precursors (pre-mRNA) are transcribed in the nucleus from DNA by the enzyme RNA polymerase II. The pre-mRNA must undergo post-transcriptional modifications, forming mature RNA (mRNA), before it can be transported into the cytoplasm for translation into proteins. The post-transcriptional modifications are: the addition of a 5' m7G cap, splicing of intronic sequences, and 3' cleavage and polyadenylation. According to Schönemann et al., "CPSF recognizes the polyadenylation signal (PAS), providing sequence specificity in pre-mRNA cleavage and polyadenylation, and catalyzes pre-mRNA cleavage." It is required to induce RNA polymerase pausing once it recognizes a functional PAS. It is the first protein to bind to the signaling region near the cleavage site of the pre-mRNA, to which the poly(A) tail will be added by polynucleotide adenylyltransferase. The polyadenylation signal (PAS), the signaling region located 10-30 nucleotides upstream of the cleavage site, has the canonical nucleotide sequence AAUAAA, which is highly conserved across the vast majority of pre-mRNAs. The cleavage site itself is usually defined by a cytosine/adenine (CA) dinucleotide, the preferred sequence, immediately 5' to the site of the endonucleolytic cleavage. A second downstream signaling region, located approximately 40 nucleotides downstream from the cleavage site on the portion of the pre-mRNA that is cleaved before polyadenylation, consists of a U/GU-rich region required for efficient processing. This downstream fragment is degraded. The mature RNAs are transported into the cytoplasm, where they are translated into proteins. Protein Structure & Interactions In mammals, CPSF is a protein complex, consisting of six subunits: CPSF-160 (CPSF1), CPSF-100 (CPSF2), CPSF-73 (CPSF3), CPSF-30 (CPSF4), WDR33 and Fip1 (FIP1L1). The subunits form two components: the mammalian polyadenylation specificity factor (mPSF) and the mammalian cleavage factor (mCF). The mPSF is made up of CPSF-160, WDR33, CPSF-30, and Fip1. It is necessary for PAS recognition and polyadenylation. The mCF is made up of CPSF-73, CPSF-100, and symplekin. It catalyzes the cleavage reaction by recognizing the histone mRNA 3' processing site. CPSF-73 is a zinc-dependent hydrolase which cleaves the mRNA precursor at a CA dinucleotide just downstream of the polyadenylation signal sequence AAUAAA. CPSF-100 contributes to the endonuclease activity of CPSF-73. CPSF-160 (160 kDa) is the largest subunit of CPSF and directly binds to the AAUAAA polyadenylation signal. CPSF-160 has three β-propeller domains and a C-terminal domain. CPSF-30 (30 kDa) has five Cys-Cys-Cys-His (CCCH) zinc-finger motifs near the N terminus and a CCCH zinc knuckle at the C terminus. Two isoforms of CPSF-30 exist and can be found in CPSF complexes. The RNA binding activity of CPSF-30 is mediated by its zinc-fingers 2 and 3. WD repeat domain 33 (WDR33, 146 kDa) has a WD40 domain near the N terminus. The WD40 domain interacts with RNA. WDR33 and CPSF-30 recognize the polyadenylation signal (PAS) in pre-mRNA, which aids in defining the position of RNA cleavage. CPSF-30 recognizes the AU-rich hexamer region by a cooperative, metal-dependent binding mechanism. 
Although CPSF-160 is the largest subunit of CPSF, a study conducted by Schönemann et al. argues that WDR33, and not CPSF-160 as previously believed, is responsible for recognizing the PAS. The study concluded that CPSF-160 had been credited with recognizing the PAS because the WDR33 subunit had not yet been discovered at the time of the claim. Fip1 binds to U-rich RNAs by its arginine-rich C-terminus. It binds to RNA sequences upstream of the AAUAAA hexamer region in vitro. Fip1 and CPSF-160 recruit poly(A) polymerase (PAP) to the 3' processing site. PAP is stimulated by poly(A) binding protein nuclear 1 to add the poly(A) tail, a stretch of non-templated adenosine residues, at the cleavage site. Only CPSF-160, CPSF-30, Fip1, and WDR33 are necessary and sufficient to form an active CPSF subcomplex in AAUAAA-dependent polyadenylation; CPSF-73 and CPSF-100 are dispensable. CPSF recruits proteins to the 3' region. Identified proteins that are coordinated by CPSF activity include cleavage stimulatory factor and the two poorly understood cleavage factors. The binding of the polynucleotide adenylyltransferase responsible for actually synthesizing the tail is a necessary prerequisite for cleavage, thus ensuring that cleavage and polyadenylation are tightly coupled processes. Genes CPSF1, CPSF2, CPSF3, CPSF4, NUDT21, CPSF6, CPSF7, FIP1L1 Alternative Polyadenylation (APA) Alternative polyadenylation (APA) is a regulatory mechanism that forms multiple alternative 3' ends on mRNAs. APA isoforms from the same gene can encode different proteins and/or contain different 3' untranslated regions (UTRs). Deregulation of APA has been associated with a number of human diseases. Since longer UTRs have more binding sites for microRNAs and/or RNA-binding proteins in comparison to shorter UTRs, APA isoforms can differ in stability, translation efficiency, and/or intracellular localization. Mammalian PASs have a number of key cis elements: the A(A/U)AAA hexamer, the U/GU-rich downstream element (DSE), U-rich upstream auxiliary elements (USEs), and upstream sequences conforming to the consensus UGUA. PAS sequences are variable, and many PASs lack one or more cis elements. PAS recognition is accomplished by protein-RNA interactions. CPSF synergistically binds to the AAUAAA hexamer and CstF synergistically binds to the downstream element (DSE). The CFI complex binds to the UGUA motifs. CPSF, CstF, and CFI bind directly to RNA. They also recruit other proteins such as CFII, symplekin, and the poly(A) polymerase (PAP) to assemble the mRNA 3' processing complex, also known as the cleavage and polyadenylation complex. The assembly of these factors is facilitated by the C-terminal domain (CTD) of the RNA polymerase II (RNAP II) large subunit. The CTD provides a landing pad for mRNA processing factors. Other Protein Complexes in the Cleavage and Polyadenylation Complex Symplekin (SYMPK) is a scaffolding protein that mediates the interaction between CPSF and CstF. In mammals, both cleavage factor I (CFIm) and cleavage and polyadenylation specificity factor (CPSF) are required for cleavage and polyadenylation, whereas cleavage stimulation factor (CstF) is only essential for the cleavage step. CPSF and CstF travel along with RNA polymerase II (RNAP II) during nascent gene transcription in search of the PAS. Cleavage factor I (CFIm) is made of 25 (CPSF5), 59 (CPSF7), and 68 (CPSF6) kDa proteins. Cleavage factor II (CFIIm) is made of Pcf11 and Clp1. 
CFIIm binds to the RNAP II C-terminal domain and other cleavage and polyadenylation factors. Cleavage stimulation factor (CstF) has three subunits: CstF77 (CstF3), CstF50 (CstF1), and CstF64 (CstF2 and CstF2T). CstF recognizes a signaling region about 20 nucleotides downstream of the cleavage site, a GU-rich sequence motif followed by U-rich sequences. CstF contributes to the selection of the cleavage site, as well as to alternative polyadenylation. Coupled Processes Coupling to RNA polymerase II (pol II) transcription can influence processing reactions in three ways: localization, which positions mRNA processing factors at the elongation complex and raises their local concentration in the vicinity of the nascent transcript; kinetic coupling, in which the rate of transcription can have profound effects on RNA folding and the assembly of RNA-protein complexes; and allosteric effects, in which contact between the pol II elongation complex and mRNA processing factors can allosterically inhibit or activate those factors. References Further reading External links Protein complexes Gene expression
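To make the sequence logic above concrete, the toy Python sketch below scans an RNA string for the canonical AAUAAA hexamer and checks for a U/GU-rich stretch roughly 40 nucleotides downstream. The window size, offset, and GU-content threshold are illustrative assumptions, and the sketch models only the sequence features described here, not how CPSF or CstF physically bind RNA.

```python
# Toy PAS scan: find AAUAAA hexamers, then test a downstream window for
# GU-richness (a stand-in for the U/GU-rich downstream element).
def find_pas_candidates(rna, window=20, downstream_offset=40, gu_threshold=0.6):
    rna = rna.upper().replace("T", "U")
    hits = []
    for i in range(len(rna) - 5):
        if rna[i:i + 6] == "AAUAAA":
            start = i + 6 + downstream_offset
            region = rna[start:start + window]
            if region:
                gu_frac = sum(base in "GU" for base in region) / len(region)
                if gu_frac >= gu_threshold:
                    hits.append((i, round(gu_frac, 2)))
    return hits

# Made-up example sequence: a hexamer followed, ~40 nt later, by a GU-rich run
example = ("GCCAAUAAAGCUAGCGAUCGAUCGAUCGAUCGAUCGAUCGAUCGAUCG"
           "UGUGUUGUUUGUGUUGUUGUAAAGC")
print(find_pas_candidates(example))  # [(3, 0.95)]
```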
Cleavage and polyadenylation specificity factor
[ "Chemistry", "Biology" ]
2,000
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
6,925,576
https://en.wikipedia.org/wiki/Cleavage%20factor
Cleavage factors are two closely associated protein complexes involved in the cleavage of the 3' untranslated region of a newly synthesized pre-messenger RNA (pre-mRNA) molecule in the process of gene transcription. The cleavage is the first step in adding a polyadenine tail to the pre-mRNA, which is one of the post-transcriptional modifications necessary for producing a mature mRNA molecule. In mammals, the two cleavage factors are known as CFIm and CFIIm. The proteins that constitute these complexes are recruited to the cleavage site by cleavage and polyadenylation specificity factor and cleavage stimulatory factor, and form a larger complex that also includes polyadenine polymerase, which performs the polyadenylation reaction. The CFIm complex Involved in the earliest step of the formation of the active cleavage complex, the CFIm complex is formed by three proteins of 25, 59 and 68 kDa, respectively: CFIm25 (or CPSF5/NUDT21) CFIm59 (or CPSF7) CFIm68 (or CPSF6) CFIm25 and CFIm68 are sufficient for the activity of the complex, consistent with the expected redundancy of CFIm68 and CFIm59, which share great sequence similarity. The CFIIm complex The CFIIm complex is responsible for transcription termination and triggering the disassembly of the elongation complex. It is composed of only two proteins: PCF11 CLP1 References Further reading Lodish H, Berk A, Matsudaira P, Kaiser CA, Krieger M, Scott MP, Zipursky SL, Darnell J. (2004). Molecular Cell Biology. WH Freeman: New York, NY. 5th ed. Protein complexes Gene expression
Cleavage factor
[ "Chemistry", "Biology" ]
364
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
6,925,638
https://en.wikipedia.org/wiki/Elizabeth%20Fulhame
Elizabeth Fulhame (fl. 1794) was an early British chemist who invented the concept of catalysis and discovered photoreduction. She was described as 'the first solo woman researcher of modern chemistry'. Although she only published one text, she describes catalysis as a process at length in her 1794 book An Essay On Combustion with a View to a New Art of Dying and Painting, wherein the Phlogistic and Antiphlogistic Hypotheses are Proved Erroneous. The book relates in painstaking detail her experiments with oxidation-reduction reactions, and the conclusions she draws regarding phlogiston theory, in which she disagrees with both the Phlogistians and Antiphlogistians. In 1798, the book was translated into German by Augustin Gottfried Ludwig Lentin as Versuche über die Wiederherstellung der Metalle durch Wasserstoffgas. In 1810, it was published in the United States, to much critical acclaim. That same year, Fulhame was made an honorary member of the Philadelphia Chemical Society. Thomas P. Smith applauded her work, stating that "Mrs. Fulhame has now laid such bold claims to chemistry that we can no longer deny the sex the privilege of participating in this science also." Personal life Elizabeth Fulhame published under her married name, as Mrs. Fulhame. She was married to Thomas Fulhame, an Irish-born physician who had attended the University of Edinburgh and studied puerperal fever as a student of Andrew Duncan (1744–1828). Dr Thomas Fulhame was listed in Edinburgh directories between 1784 and 1800 (Bristo Square in 1784, Bristo Street in 1794, at 9 Society 1799, in Brown's Square 1800). Sir Benjamin Thompson, Count Rumford, referred to her as "the ingenious and lively Mrs. Fulhame"; however, this opinion may reflect the style of her book. Work Mrs. Fulhame's work began with her interest in finding a way of staining cloth with heavy metals under the influence of light. She originally considered calling her work An Essay on the Art of making Cloths of Gold, Silver, and other Metals, by chymical processes, but considering the "imperfect state of the art", decided to select a title reflecting the broader implications of her experiments. She was apparently encouraged to publish an account of her 14 years of research as a result of meeting Joseph Priestley in 1793. Fulhame studied the experimental reduction of metallic salts in a variety of states (aqueous solution, dry state, and sometimes an ether or alcohol solution) by exposing them to the action of various reducing agents. The metal salts she examined included gold, silver, platinum, mercury, copper, and tin. As reducing agents, she experimented with hydrogen gas, phosphorus, potassium sulfide, hydrogen sulfide, phosphine, charcoal, and light. She discovered a number of chemical reactions by which metal salts could be reduced to pure metals. Rayner-Canham considers her most important contribution to chemistry to be the discovery that metals could be processed through aqueous chemical reduction at room temperature, as an alternative to smelting at high temperatures. Her theoretical work on catalysis was "a major step in the history of chemistry", predating both Jöns Jakob Berzelius and Eduard Buchner. She proposed, and demonstrated through experiment, that many oxidation reactions occur only in the presence of water, that they directly involve water, and that water is regenerated and is detectable at the end of the reaction. Further, she proposed "recognisably modern mechanisms" for those reactions, and may have been the first scientist to do so. 
The role of oxygen, as she describes it, differs significantly from other theories of the time. Based on her experiments, she disagreed with some of the conclusions of Antoine Lavoisier as well as with the phlogiston theorists that he critiqued. Her research could be seen as a precursor to the work of Jöns Jakob Berzelius; however, Fulhame focused specifically on water rather than heavy metals. Further, Eder (in 1905) and Schaaf consider her work on silver chemistry to be a landmark in the birth and early history of photography. Fulhame's work on the role of light-sensitive chemicals (silver salts) on fabric predates Thomas Wedgwood's more famous photogram trials of 1801. Fulhame did not, however, attempt to make "images" or representational shadow prints in the way Wedgwood did, but she did engage in photoreduction using light. Reception In addition to her book being republished in Germany and America, Fulhame's experiments were reviewed in a French journal and several British magazines, and were positively commented on by Sir Benjamin Thompson, Count Rumford, and Sir John Herschel. According to the introduction by her American editor in 1810, her work was less known than it could or should have been; he added that "the pride of science, revolted at the idea of being taught by a female". Fulhame says as much in her own preface to the work. Such a reaction, she says, was particularly acute amongst some who held esteemed positions, whom she described as having a 'dictatorship in science'. Fulhame published her experiments on the reduction of metals using water in a book in the first place in order not to be "plagiarized." She also describes her book as possibly serving as "a beacon to future mariners" (e.g. women) taking up scientific inquiries. Antoine Lavoisier was executed six months before the publication of her book and thus could not respond to her theory. Irish chemist William Higgins complained that she had ignored his work on the involvement of water in the rusting of iron, but magnanimously concluded "I read her book with great pleasure, and heartily wish that her laudible example may be followed by the rest of her sex." Fulhame's work was largely forgotten by the end of the 19th century, but it was rediscovered by J. W. Mellor. In the 20th century, she was noted in Physics Today as being the first to 'systematically' vary 'her reaction conditions' and to 'generalise a whole class of reactions.... the reduction of metals', and the first to suggest an explanation for the situations where 'water dissociated into its ionic form, facilitated the intermediate reaction steps, and was regenerated by the end of the metal reduction.' See also Timeline of women in science References External links Scottish women chemists Scottish chemists 18th-century Scottish writers Year of birth missing Year of death missing 18th-century British chemists 18th-century Scottish scientists 18th-century Scottish women scientists Catalysis
Elizabeth Fulhame
[ "Chemistry" ]
1,388
[ "Catalysis", "Chemical kinetics" ]
6,925,818
https://en.wikipedia.org/wiki/Egyptian%20Atomic%20Energy%20Authority
The Egyptian Atomic Energy Authority (EAEA) was established in 1955. It leads national research and development in basic and applied peaceful nuclear research. Egypt was the second country on the African continent, after South Africa, to build a nuclear reactor. The first research reactor (ET-RR-1), commissioned in 1961, is a Van de Graf type 4 MW reactor engineered and built by Russia. Another research reactor (ET-RR-2) is a 22 MW open-pool Multipurpose Reactor (MPR) located at Inshas, 60 km from Cairo, engineered and built by INVAP of Argentina. The EAEA employs scientists educated at leading universities and research institutes. It is organized into four research centers: Nuclear Research Center (NRC) Hot Laboratory and Waste Management Center (HLWMC) National Centre for Radiation Research and Technology (NCRRT) National Centre for Nuclear Safety and Radiation Control (NCNSRC) These centres are further subdivided into major research divisions. The EAEA is a member of the International Atomic Energy Agency and other regional and international organizations. See also Energy in Egypt References External links Egyptian Atomic Energy Authority – official website Federation of American Scientists 1955 establishments in Egypt Nuclear organizations Atomic Energy Authority Nuclear power in Egypt Nuclear program of Egypt
Egyptian Atomic Energy Authority
[ "Engineering" ]
258
[ "Nuclear organizations", "Energy organizations" ]
6,925,876
https://en.wikipedia.org/wiki/Captorhinida
Captorhinida (older name: Cotylosauria) is a doubly paraphyletic grouping of early reptiles. Robert L. Carroll (1988) ranked it as an order in the subclass Anapsida, composed of the following suborders: A paraphyletic Captorhinomorpha, containing the families Protorothyrididae, Captorhinidae, Bolosauridae, Acleistorhinidae and possibly also Batropetidae Procolophonia, containing families Nyctiphruretidae, Procolophonidae and Sclerosauridae Pareiasauroidea, with families Rhipaeosauridae and Pareiasauridae Millerosauroidea, with a single family Millerettidae. While they all share primitive features and resemble the ancestors of all modern reptiles, some of these families are more closely related to (or belong to) the clade Parareptilia, while others are further along the line leading to diapsids. For this reason, the group is only used informally, if at all, by most modern paleontologists. All members of this group are thought to be extinct. References Prehistoric reptile taxonomy Prehistoric tetrapod orders Paraphyletic groups
Captorhinida
[ "Biology" ]
256
[ "Phylogenetics", "Paraphyletic groups" ]
6,926,084
https://en.wikipedia.org/wiki/Methoxypropane
Methoxypropane, or methyl propyl ether, is an ether once used as a general anaesthetic. It is a clear colorless flammable liquid with a boiling point of 38.8 °C. Marketed under the trade names Metopryl and Neothyl, methoxypropane was used as an alternative to diethyl ether because of its greater potency. Its use as an anaesthetic has since been supplanted by modern halogenated ethers which are much less flammable. References Dialkyl ethers General anesthetics GABAA receptor positive allosteric modulators
Methoxypropane
[ "Chemistry" ]
133
[]
6,926,718
https://en.wikipedia.org/wiki/Batch%20distillation
Batch distillation refers to the use of distillation in batches, meaning that a mixture is distilled to separate it into its component fractions before the distillation still is again charged with more mixture and the process is repeated. This is in contrast with continuous distillation, where the feedstock is added and the distillate drawn off without interruption. Batch distillation has always been an important part of the production of seasonal, low-capacity, and high-purity chemicals. It is a very frequent separation process in the pharmaceutical industry. Batch rectifier The simplest and most frequently used batch distillation configuration is the batch rectifier, including the alembic and pot still. The batch rectifier consists of a pot (or reboiler), a rectifying column, a condenser, some means of splitting off a portion of the condensed vapour (distillate) as reflux, and one or more receivers. The pot is filled with liquid mixture and heated. Vapour flows upwards in the rectifying column and condenses at the top. Usually, the entire condensate is initially returned to the column as reflux. This contacting of vapour and liquid considerably improves the separation. This step is generally called start-up. The first condensate is the head, and it contains undesirable components. The last condensate is the feints, which is also undesirable, although it adds flavor. In between is the heart, which forms the desired product. The head and feints may be thrown out, refluxed, or added to the next batch of mash/juice, according to the practice of the distiller. After some time, a part of the overhead condensate is withdrawn continuously as distillate and accumulated in the receivers, while the other part is recycled into the column as reflux. Owing to the differing volatilities of the components, the composition of the overhead distillate changes with time: early in the batch, the distillate contains a high concentration of the component with the higher relative volatility. Because the supply of material is limited and the lighter components are removed first, the relative fraction of heavier components increases as the distillation progresses. Batch stripper The other simple batch distillation configuration is the batch stripper. The batch stripper consists of the same parts as the batch rectifier. However, in this case, the charge pot is located above the stripping column. During operation (after charging the pot and starting up the system) the high-boiling constituents are primarily separated from the charge mixture. The liquid in the pot is depleted in the high-boiling constituents and enriched in low-boiling ones. The high-boiling product is routed into the bottom product receivers. The residual low-boiling product is withdrawn from the charge pot. This mode of batch distillation is very seldom applied in industrial processes. Middle vessel column A third feasible batch column configuration is the middle vessel column. The middle vessel column consists of both a rectifying and a stripping section, and the charge pot is located at the middle of the column.
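The composition drift described above for the batch rectifier can be made concrete with the Rayleigh equation, which relates the amount of charge remaining in the pot to the change in its liquid composition. The following minimal Python sketch is illustrative only and is not drawn from the sources cited in this article: it assumes ideal vapour-liquid equilibrium with a constant relative volatility, and the function names and the numbers in the example are invented for the demonstration.

```python
import numpy as np

def equilibrium_y(x, alpha):
    # Vapour mole fraction of the light component in equilibrium with
    # liquid mole fraction x, assuming constant relative volatility alpha.
    return alpha * x / (1.0 + (alpha - 1.0) * x)

def rayleigh_fraction_remaining(x0, x_end, alpha, n=2000):
    # Rayleigh equation for simple batch distillation:
    #   ln(W / W0) = integral from x0 to x_end of dx / (y(x) - x)
    # where W is the moles of liquid left in the pot and x is its
    # light-component mole fraction. Integrated with the trapezoidal rule.
    x = np.linspace(x0, x_end, n)
    f = 1.0 / (equilibrium_y(x, alpha) - x)
    integral = np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0
    return np.exp(integral)

# Hypothetical example: an equimolar charge (x0 = 0.5) with alpha = 2.5
# is boiled until the pot liquid is depleted to x = 0.2 in the light
# component. Early distillate is rich in the light component, and the
# pot grows steadily heavier, as described in the text above.
w = rayleigh_fraction_remaining(0.5, 0.2, alpha=2.5)
print(f"fraction of the original charge still in the pot: {w:.3f}")
```

The explicit trapezoid sum is used here simply to keep the sketch independent of any particular NumPy integration helper; any quadrature routine would serve equally well.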
Feasibility studies Generally, the feasibility studies of batch distillation are based on analyses of the following maps: Residue curve map still path map distillate path map different column profile maps During the feasibility studies, the following basic simplifying assumptions are made: infinite number of equilibrium stages infinite reflux ratio negligible tray hold-up in the two column sections quasi-steady state in the column constant molar overflow Bernot et al. used the batch distillation regions to determine the sequence of the fractions. According to Ewell and Welch, a batch distillation region gives the same fractions upon rectification of any mixture lying within it. Bernot et al. examined the still and distillate paths for the determination of the region boundaries under high number of stages and high reflux ratio, named maximal separation. Pham and Doherty in pioneering work described the structure and properties of residue curve maps for ternary heterogeneous azeotropic mixtures. In their model, the possibility of the phase separation of the vapour condensed is not taken into consideration yet. The singular points of the residue curve maps determined by this method were used to assign batch distillation regions by Rodriguez-Donis et al. and Skouras et al. Modla et al. pointed out that this method may give misleading results for the minimal amount of entrainer. Lang and Modla extended the method of Pham and Doherty and suggested a new, general method for the calculation of residue curves and for the determination of batch distillation regions of heteroazeotropic distillation. Lelkes et al. published a feasibility method for the separation of minimum boiling point azeotropes by continuously entrainer feeding batch distillation. This method has been applied for the use of a light entrainer in the batch rectifier and stripper by Lang et al. (1999) and it applied for maximum azeotropes by Lang et al. Modla et al. extended this method for batch heteroazeotropic distillation under continuous entrainer feeding. See also Azeotropic distillation Extractive distillation Fractional distillation Heteroazeotrope Steam distillation Vacuum distillation Theoretical plate References Further reading Hilmen Eva-Katrine, Separation of Azeotropic Mixtures:Tools for Analysis and Studies on Batch Distillation Operation, Thesis, Norwegian University of Science and Technology Department of Chemical Engineering, (2000). External links Batch distillation program online Batch distillation of the hydrocarbon compounds. Distillation
Batch distillation
[ "Chemistry" ]
1,204
[ "Distillation", "Separation processes" ]
6,927,432
https://en.wikipedia.org/wiki/Heteroazeotrope
A heteroazeotrope is an azeotrope where the vapour phase coexists with two liquid phases. [Figure: sketch of a T-x/y equilibrium curve of a typical heteroazeotropic mixture.] Examples of heteroazeotropes Benzene - Water NBP 69.2 °C Dichloromethane - Water NBP 38.5 °C n-Butanol - Water NBP 93.5 °C Toluene - Water NBP 82 °C Continuous heteroazeotropic distillation Heterogeneous distillation means that two immiscible liquid phases are present during the distillation. In this case two liquid phases can be present on the plates, and the top vapour condensate splits into two liquid phases, which can be separated in a decanter. The simplest case of continuous heteroazeotropic distillation is the separation of a binary heterogeneous azeotropic mixture. In this case the system contains two columns and a decanter. The fresh feed (A-B) is added to the first column. (The feed may also be added into the decanter directly or into the second column, depending on the composition of the mixture.) From the decanter the A-rich phase is withdrawn as reflux into the first column, while the B-rich phase is withdrawn as reflux into the second column. This means the first column produces "A" and the second column produces "B" as a bottoms product. In industry the butanol-water mixture is separated with this technique. In the previous case the binary system already forms a heterogeneous azeotrope. The other application of heteroazeotropic distillation is the separation of a binary system (A-B) forming a homogeneous azeotrope. In this case an entrainer or solvent is added to the mixture in order to form a heteroazeotrope with one or both of the components, helping the separation of the original A-B mixture. Batch heteroazeotropic distillation Batch heteroazeotropic distillation is an efficient method for the separation of azeotropic and low relative volatility (low α) mixtures. A third component (entrainer, E) is added to the binary A-B mixture, which makes the separation of A and B possible. The entrainer forms a heteroazeotrope with at least one (and preferably with only one (selective entrainer)) of the original components. The main parts of the conventional batch distillation column are the following: - pot (including reboiler) - column - condenser to condense the top vapour - product receivers - (entrainer feed) In the case of heteroazeotropic distillation the equipment is completed with a decanter, where the two liquid phases are split. Three different cases are possible for the addition of the entrainer: 1. Batch Addition of the Entrainer: The total quantity of the entrainer is added to the charge before the start of the procedure. 2. Continuous Entrainer Feeding: The total quantity of the entrainer is introduced continuously to the column. 3. Mixed Addition of the Entrainer: The combination of the batch addition and continuous feeding of the entrainer: one part of the entrainer is added to the charge before the start of the distillation, and the other part is fed continuously during the distillation. In recent years batch heteroazeotropic distillation has come into prominence, and several studies have been published. Batch heteroazeotropic distillation has been investigated by feasibility studies, rigorous simulation calculations and laboratory experiments. Feasibility analysis is conducted in Modla et al. and Rodriguez-Donis et al. for the separation of low-relative-volatility and azeotropic mixtures by heterogeneous batch distillation in a batch rectifier. Rodriguez-Donis et al.
were the first to provide the entrainer selection rules. The feasibility method was extended and modified by Rodriguez-Donis et al., Rodriguez-Donis et al. (2005), Skouras et al., and Lang and Modla. Varga applied these feasibility studies in her thesis. Experimental results were published by Rodriguez-Donis et al., Xu and Wang, Van Kaam and others. References See also Azeotrope Batch distillation Distillation Steam distillation Phase transitions
Heteroazeotrope
[ "Physics", "Chemistry" ]
960
[ "Physical phenomena", "Phase transitions", "Phases of matter", "Critical phenomena", "Statistical mechanics", "Matter" ]
6,927,446
https://en.wikipedia.org/wiki/AMAX
AMAX is a certification program for AM radio broadcasting standards, created in the United States beginning in 1991 by the Electronic Industries Association (EIA) and the National Association of Broadcasters (NAB). It was developed with the intention of helping AM stations, especially ones with musical formats, become more competitive with FM broadcasters. The standards cover both consumer radio receivers and broadcasting station transmission chains. Although the Federal Communications Commission (FCC) endorsed the AMAX proposal, the agency never made it into a formal requirement, leaving its adoption as voluntary. Ultimately few receiver manufacturers and radio stations adhered to the standard, thus it has done little to stem the continued decline in AM station listenership. Standards Receiver AMAX radio receivers are divided into three categories: home, automotive and portable. Receiver certification requirements include: Wide audio bandwidth, with a minimum of 7,500 hertz for home and automotive radios, and 6,500 hertz for portables. Bandwidth control, either manual or automatic, including at least two settings, such as "narrow" and "wide". Meet receiver standards for low total harmonic distortion and proper NRSC-1 audio de-emphasis curve. Attenuation of the 10,000 hertz "whistle" heterodyne. (In the U.S., 10 kHz is the standard separation of adjacent transmitting frequencies.) Provision for connecting an external AM antenna. Ability to receive stations broadcasting on the 1610 to 1700 kHz expanded AM band frequencies. Effective noise blanking, for home and automotive receivers. Transmission For AM broadcasting stations, the AMAX qualifications specified "a unified standard for pre–emphasis and distortion" for broadcasting station transmission chains. Implementation From a technical standpoint, the AMAX standards met with approval, with one reviewer noting that "The AMAX standard is a last-ditch effort by broadcasters and radio makers to save AM by reviving a long-forgotten tactic: quality" and "With a good station (often hard to find), their AM sections sound so good you could easily be fooled into thinking they were FM." A review of an AMAX-certified mono portable receiver, the GE Superadio III, reported that the radio "reproduces sound with the clarity and dynamics of FM. Its audio response is more than two octaves greater than a standard AM radio." However, absent a mandate by the FCC, few receiver manufacturers were interested in incurring the cost of improving the AM sections of their receivers. A 1992 review of a high-end consumer audio catalog found that of 80 listings only three were AM stereo capable, and there were no references to the AMAX standard. A 1996 report stated that "At a recent Consumer Electronics Show in Las Vegas, consumer audio companies also demonstrated apathy toward improved super set AM radios, few of which could be found on the exhibit floor. The prevailing attitude among manufacturer reps was, 'Who cares?'" Also, stations with low-fidelity spoken-word formats saw little need to upgrade their transmissions for better audio quality. A 2015 review concluded that "Initially the consumer manufacturers made a concerted attempt to specify performance of AM receivers through the 1993 AMAX standard, a joint effort of the EIA and the NAB, with FCC backing... The FCC rapidly followed up on this with codification of the CQUAM AM stereo standard, also in 1993. At this point, the stage appeared to be set for rejuvenation of the AM band. 
Nevertheless, with the legacy of confusion and disappointment in the rollout of the multiple incompatible AM stereo systems, and failure of the manufacturers (including the auto makers) to effectively promote AMAX radios, coupled with the ever-increasing background of noise in the band, the general public soon lost interest and moved on to other media." References Telecommunications-related introductions in 1993 1993 in radio Broadcast engineering Certification marks Radio technology Standards of the United States Stereophonic sound National Association of Broadcasters
AMAX
[ "Mathematics", "Technology", "Engineering" ]
793
[ "Information and communications technology", "Broadcast engineering", "Stereophonic sound", "Telecommunications engineering", "Symbols", "Radio technology", "Electronic engineering", "Audio engineering", "Certification marks" ]
11,682,860
https://en.wikipedia.org/wiki/Barnardisation
Barnardisation is a method of statistical disclosure control for tables of counts. It involves adding +1, 0 or -1 to some or all of the internal non-zero cells in a table in a pseudo-random fashion. The probability of adjustment for each internal cell is calculated as p/2 (add 1), 1-p (leave as is), p/2 (subtract 1). The table totals are then calculated as the sum of the post-adjustment internal counts. Etymology The technique of Barnardisation appears to have been named after Professor George Alfred Barnard (1915–2002), a Professor of Mathematics at the University of Essex. Barnard, at that time President of the Royal Statistical Society, was one of three Fellows appointed by the Council of the Royal Statistical Society to help provide a government-commissioned review of data security for the 1971 UK Census. The resulting report questioned whether rounding small numbers to the nearest five was the best approach to preserving respondent confidentiality. The formal government response to the report noted that an additional safeguard of small random adjustments had been introduced for 1971 Census, the suggestion for which they explicitly attributed to Professor Barnard, as did a New Scientist article dated July 1973. Muddying the waters slightly, a 1973 paper in the Journal of the Royal Statistical Society discussing this new safeguard reported that "after much discussion, a variant of a procedure suggested in Canada was adopted.". Presumably Professor Barnard was involved in these discussions, and was the inventor of the variant. In any case, no evidence can be found of any such safeguard being applied in Canada, with Statistics Canada seeming to stick instead to the use of random rounding of all counts to the nearest 0 or 5. Despite originating from Prof Barnard, in documentation surrounding the 1971 Census the method of adjustment now known as Barnardisation was simply described as a 'procedure'; an 'adjustment of values'; a 'special procedure'; a 'process of random error injection'; or a 'modification' or 'adjustment'. The earliest use of the term 'Barnardisation' found in print so far dates to an Office for Population Censuses and Surveys working paper written by Hakim in 1979, where the term is mentioned without citation, and without ascribing it to Prof G A Barnard. But, at the time, Hakim's coinage of this term appears to have been either widely overlooked or widely ignored, at least in print, as demonstrated by the wide range of later publications already cited above. The term 'Barnardisation' does not appear to have reemerged in print until the 1995 publication of Stan Openshaw's Census Users' Handbook, where it is used by two separate chapter authors and by the index compiler. However, by at least the late 1980s the term was already in widespread conversational usage during UK academic conferences and meetings. More recently the term 'Barnardisation' has also become firmly ensconced in the lexicon of official reports produced by official UK statistical agencies and others. Operational details As originally conceived and implemented in the 1971 UK Census, Barnardisation had the added characteristic of pairing tables from separate areas, and applying equal and opposite adjustments to the two areas. For example, if a given table cell in Area A had its value increased by 1, then in paired Area B the equivalent table cell would have its value reduced by 1 (subject to not making the value negative). 
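The basic perturbation scheme described at the start of this article, together with the 1971-style pairing of areas just described, is simple enough to sketch in a few lines of Python. The sketch below is a hypothetical illustration, not code from any statistical agency; the function names, the probability p = 0.25, and the example counts are all invented for the demonstration.

```python
import numpy as np

def barnardise(table, p, rng):
    # Basic Barnardisation: each non-zero internal cell gets +1 with
    # probability p/2, -1 with probability p/2, and is left unchanged
    # with probability 1 - p. Zero cells are never adjusted.
    table = np.asarray(table, dtype=int)
    noise = rng.choice([-1, 0, 1], size=table.shape, p=[p / 2, 1 - p, p / 2])
    noise[table == 0] = 0
    return table + noise

def barnardise_paired(table_a, table_b, p, rng):
    # 1971-style pairing: the adjustment drawn for a cell of area A is
    # applied with the opposite sign to the matching cell of area B, so
    # the noise tends to cancel in aggregate. An adjustment is suppressed
    # if it would drive either area's count negative.
    a = np.asarray(table_a, dtype=int)
    b = np.asarray(table_b, dtype=int)
    noise = rng.choice([-1, 0, 1], size=a.shape, p=[p / 2, 1 - p, p / 2])
    noise[(a == 0) | (a + noise < 0) | (b - noise < 0)] = 0
    return a + noise, b - noise

rng = np.random.default_rng(seed=1971)
counts = np.array([[3, 1, 4],
                   [1, 5, 9]])
noisy = barnardise(counts, p=0.25, rng=rng)
# Table totals are recomputed as the sum of the post-adjustment internal
# cells, so the published margins stay consistent with the interior.
print(noisy, noisy.sum(axis=1), noisy.sum())
```

Note that, with a small p, a count of 1 survives unadjusted with probability 1 - p; this is precisely the weakness discussed under "Efficacy reappraised" below.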
The purpose of this pairing was to cancel out, as much as possible, the amount of noise introduced via the Barnardisation process at a more aggregate level. For the 1991 UK Census the pairing of areas prior to the application of Barnardisation was dropped; and for the more detailed Local Base Statistics, its scope was extended to include adjustments of -2, -1, 0, +1 or +2, achieved by applying the +1, 0 or -1 adjustment twice. In the United Kingdom, barnardisation became increasingly employed by public agencies in order to enable them to provide information for statistical purposes without infringing the information privacy rights of the individuals to whom the information relates. In some cases this has involved further modifications to the Barnardisation procedure. For example, as implemented by the Common Service Agency, adjustments of -1, 0 or +1 were only applied to counts of 1 to 4, whilst counts of 0, instead of being left unchanged, were adjusted by the addition of 0 or +1. Pros and cons A review of Statistical Disclosure Control methods in the run-up to the 2011 UK Census identified the following pros and cons of Barnardisation from the point of view of the data provider: Advantages Easy to understand Easy to implement Table totals are consistent with internal cell values The adjustment is unbiased Disadvantages Leads to inconsistent values for the same cell counts and table totals if they are present in two or more separately barnardised tables The adjustment can be unpicked via differencing if other tables are available that share the same counts or totals, or that provide an unadjusted total for a larger spatial area within which the barnardised tables nest The probability of adjustment used is typically small, meaning that many cell values are left unadjusted From a user point of view, another advantage of Barnardisation is that it has been shown to have a smaller impact on typical user analyses than the following Statistical Disclosure Control measures: random rounding to base 5, as used by Statistics Canada; random rounding to base 3, as used by Statistics New Zealand; and Small Cell Adjustment, as used at various points in time by the Office for National Statistics and the Australian Bureau of Statistics. Efficacy reappraised Since the late 1990s, concerns over the efficacy of Barnardisation in protecting confidentiality have increased to the point where it is now no longer recommended as a 'go to' tool, but rather as a technique only to be used in special circumstances. This change in attitude appears to centre, firstly, on the relatively high probability that Barnardisation will leave a small count (in particular a 1) unadjusted and, secondarily, on the dangers of reverse-engineering the original value if sufficient overlapping barnardised tables are released. For these and other reasons, UK Censuses from 2001 onwards have abandoned the use of Barnardisation. See Spicer for a good review of the 2001, 2011 and 2021 alternatives to Barnardisation that have been adopted, and the rationale for this. The question of whether barnardisation may fall short of the complete anonymisation of data, and the status of barnardised data under the complex provisions of the Data Protection Act 1998, were considered by the Scottish Information Commissioner. Some aspects of an initial decision by the Commissioner were overturned on appeal to the House of Lords, and the Commissioner was invited to revisit his original decision.
The Commissioner's final decision ruled that barnardisation provided insufficient disclosure protection for rare events (in this case, Childhood Leukaemia), reversing in part his original decision: "the barnardised data, by themselves, can lead to identification, and [...] the effect of barnardisation on the actual figures, at least as deployed by the CSA, does not have the effect of concealing or disguising the data which he [the Commissioner] had originally considered that it would." However, in his written decision the Commissioner offered no statistical justification for this assertion. Instead the Commissioner's decision centred mainly around addressing points of law relating to the nature of the original and barnardised data, and how this related to legal definitions of (sensitive) personal data. References Survey methodology Information privacy
Barnardisation
[ "Engineering" ]
1,538
[ "Cybersecurity engineering", "Information privacy" ]
11,683,572
https://en.wikipedia.org/wiki/Benzodiazepine%20withdrawal%20syndrome
Benzodiazepine withdrawal syndrome (BZD withdrawal) is the cluster of signs and symptoms that may emerge when a person who has been taking benzodiazepines as prescribed develops a physical dependence on them and then reduces the dose or stops taking them without a safe taper schedule. Typically, benzodiazepine withdrawal is characterized by sleep disturbance, irritability, increased tension and anxiety, depression, panic attacks, hand tremor, shaking, sweating, difficulty with concentration, confusion and cognitive difficulty, memory problems, dry mouth, nausea and vomiting, diarrhea, loss of appetite and weight loss, burning sensations and pain in the upper spine, palpitations, headache, nightmares, tinnitus, muscular pain and stiffness, and a host of perceptual changes. More serious symptoms may also occur, such as depersonalization, restless legs syndrome, seizures, and suicidal ideation. Benzodiazepine withdrawal can also lead to disturbances in mental function that persist for several months or years after the onset of symptoms (referred to as post-acute-withdrawal syndrome in this form). Withdrawal symptoms can be managed through awareness of the withdrawal reactions, individualized taper strategies according to withdrawal severity, the addition of alternative strategies such as reassurance, and referral to benzodiazepine withdrawal support groups. Signs and symptoms Withdrawal symptoms occur during dose reduction and may include insomnia, anxiety, distress, weight loss, dizziness, night sweats, shaking, muscle twitches, aphasia, panic attacks, depression, dissociation, paranoia, indigestion, diarrhea, and photophobia. As withdrawal progresses, patients often find that their physical and mental health improve, with better mood and better cognition. More complete lists of possible symptoms have been compiled in the published literature. Rapid discontinuation may result in a more serious syndrome. Mechanism The neuroadaptive processes involved in tolerance, dependence, and withdrawal mechanisms implicate both the GABAergic and the glutamatergic systems. Gamma-Aminobutyric acid (GABA) is the major inhibitory neurotransmitter of the central nervous system; roughly one-quarter to one-third of synapses use GABA. GABA mediates the influx of chloride ions through ligand-gated chloride channels called GABAA receptors. When chloride enters the nerve cell, the cell membrane potential hyperpolarizes, thereby inhibiting depolarization, or reduction in the firing rate of the post-synaptic nerve cell. Benzodiazepines potentiate the action of GABA by binding a site between the α and γ subunits of the 5-subunit receptor, thereby increasing the frequency of the GABA-gated chloride channel opening in the presence of GABA. When potentiation is sustained by long-term use, neuroadaptations occur which result in decreased GABAergic response. What is certain is that surface GABAA receptor protein levels are altered in response to benzodiazepine exposure, as is receptor turnover rate. The exact reason for the reduced responsiveness has not been elucidated, but down-regulation of the number of receptors has only been observed at some receptor locations, including in the pars reticulata of the substantia nigra; down-regulation of the number of receptors or internalization does not appear to be the main mechanism at other locations.
Evidence exists for other hypotheses including changes in the receptor conformation, changes in turnover, recycling, or production rates, degree of phosphorylation and receptor gene expression, subunit composition, decreased coupling mechanisms between the GABA and benzodiazepine site, decrease in GABA production, and compensatory increased glutamatergic activity. A unified model hypothesis involves a combination of internalization of the receptor, followed by preferential degradation of certain receptor sub-units, which provides the nuclear activation for changes in receptor gene transcription. It has been postulated that when benzodiazepines are cleared from the brain, these neuroadaptations are "unmasked", leading to unopposed excitability of the neuron. Glutamate is the most abundant excitatory neurotransmitter in the vertebrate nervous system. Increased glutamate excitatory activity during withdrawal may lead to sensitization or kindling of the CNS, possibly leading to worsening cognition and symptomatology and making each subsequent withdrawal period worse. Those who have a prior history of withdrawing from benzodiazepines are found to be less likely to succeed the next time around. Diagnosis In severe cases, the withdrawal reaction or protracted withdrawal may exacerbate or resemble serious psychiatric and medical conditions, such as mania, schizophrenia, agitated depression, panic disorder, generalised anxiety disorder, and complex partial seizures and, especially at high doses, seizure disorders. Failure to recognize discontinuation symptoms can lead to false evidence for the need to take benzodiazepines, which in turn leads to withdrawal failure and reinstatement of benzodiazepines, often to higher doses. Pre-existing disorder or other causes typically do not improve, whereas symptoms of protracted withdrawal gradually improve over the ensuing months. Symptoms may lack a psychological cause and can fluctuate in intensity with periods of good and bad days until eventual recovery. Prevention According to the British National Formulary, it is better to withdraw too slowly rather than too quickly from benzodiazepines. The rate of dosage reduction is best carried out so as to minimize the symptoms' intensity and severity. Anecdotally, a slow rate of reduction may reduce the risk of developing a severe protracted syndrome. Long half-life benzodiazepines like diazepam or chlordiazepoxide are preferred to minimize rebound effects and are available in low dose forms. Some people may not fully stabilize between dose reductions, even when the rate of reduction is slowed. Such people sometimes simply need to persist as they may not feel better until they have been fully withdrawn from them for a period of time. Management Management of benzodiazepine dependence involves considering the person's age, comorbidity and the pharmacological pathways of benzodiazepines. Psychological interventions may provide a small but significant additional benefit over gradual dose reduction alone at post-cessation and at follow-up. The psychological interventions studied were relaxation training, cognitive-behavioral treatment of insomnia, and self-monitoring of consumption and symptoms, goal-setting, management of withdrawal and coping with anxiety. There is no standard approach to managing benzodiazepine withdrawal. With sufficient motivation and the proper approach, almost anyone can successfully withdraw from benzodiazepines. 
However, a prolonged and severe withdrawal syndrome can cause profound disability, which may lead to breakdown of relationships, loss of employment, financial difficulties, as well as more serious adverse effects such as hospitalization and suicide. As such, long-term users should not be forced to discontinue against their will. Over-rapid withdrawal, lack of explanation, and failure to reassure individuals that they are experiencing temporary withdrawal symptoms led some people to experience increased panic and fears they are going mad, with some people developing a condition similar to post-traumatic stress disorder as a result. A slow withdrawal regimen, coupled with reassurance from family, friends, and peers improves the outcome. According to a 2015 Cochrane review, cognitive behavior therapy plus taper was effective in achieving discontinuation in the short-term but the effect was not certain after six months. Medications While some substitutive pharmacotherapies may have promise, current evidence is insufficient to support their use. Some studies found that the abrupt substitution of substitutive pharmacotherapy was actually less effective than gradual dose reduction alone, and only three studies found benefits of adding melatonin, paroxetine, trazodone, or valproate in conjunction with a gradual dose reduction. Antipsychotics are generally ineffective for benzodiazepine withdrawal-related psychosis. Antipsychotics should be avoided during benzodiazepine withdrawal as they tend to aggravate withdrawal symptoms, including convulsions. Some antipsychotic agents may be riskier than others during withdrawal, especially clozapine, olanzapine or low potency phenothiazines (e.g., chlorpromazine), as they lower the seizure threshold and can worsen withdrawal effects; if used, extreme caution is required. Barbiturates are cross tolerant to benzodiazepines and should generally be avoided; however phenobarbital can be used, as it is relatively safe, see below. Benzodiazepines or cross tolerant drugs should be avoided after discontinuation, even occasionally. These include the nonbenzodiazepines Z-drugs, which have a similar mechanism of action. This is because tolerance to benzodiazepines has been demonstrated to be still present at four months to two years after withdrawal depending on personal biochemistry. Re-exposures to benzodiazepines typically resulted in a reactivation of the tolerance and benzodiazepine withdrawal syndrome. Bupropion, which is used primarily as an antidepressant and smoking cessation aid, is contraindicated in people experiencing abrupt withdrawal from benzodiazepines or other sedative-hypnotics (e.g. alcohol), due to an increased risk of seizures. Buspirone augmentation was not found to increase the discontinuation success rate. Caffeine may worsen withdrawal symptoms because of its stimulatory properties. At least one animal study has shown some modulation of the benzodiazepine site by caffeine, which produces a lowering of seizure threshold. Carbamazepine, an anticonvulsant, appears to have some beneficial effects in the treatment and management of benzodiazepine withdrawal; however, research is limited and thus the ability of experts to make recommendations on its use for benzodiazepine withdrawal is not possible at present. Ethanol, the primary alcohol in alcoholic beverages, even mild to moderate use, has been found to be a significant predictor of withdrawal failure, probably because of its cross tolerance with benzodiazepines. 
Flumazenil has been found to stimulate the reversal of tolerance and the normalization of receptor function. However, further research is needed in the form of randomised trials to demonstrate its role in the treatment of benzodiazepine withdrawal. Flumazenil stimulates the up-regulation and reverses the uncoupling of benzodiazepine receptors to the GABAA receptor, thereby reversing tolerance and reducing withdrawal symptoms and relapse rates. Because of limited research and experience compared to the possible risks involved, the flumazenil detoxification method is controversial and can only be done as an inpatient procedure under medical supervision. Flumazenil was found to be more effective than placebo in reducing feelings of hostility and aggression in patients who had been free of benzodiazepines for 4–266 weeks. This may suggest a role for flumazenil in treating protracted benzodiazepine withdrawal symptoms. A study into the effects of the benzodiazepine receptor antagonist, flumazenil, on benzodiazepine withdrawal symptoms persisting after withdrawal was carried out by Lader and Morton. Study subjects had been benzodiazepine-free for between one month and five years, but all reported persisting withdrawal effects to varying degrees. Persistent symptoms included clouded thinking, tiredness, muscular symptoms such as neck tension, depersonalisation, cramps and shaking and the characteristic perceptual symptoms of benzodiazepine withdrawal, namely, pins and needles feeling, burning skin, pain and subjective sensations of bodily distortion. Therapy with 0.2–2 mg of flumazenil intravenously was found to decrease these symptoms in a placebo-controlled study. This is of interest as benzodiazepine receptor antagonists are neutral and have no clinical effects. The author of the study suggested the most likely explanation is past benzodiazepine use and subsequent tolerance had locked the conformation of the GABA-BZD receptor complex into an inverse agonist conformation, and the antagonist flumazenil resets benzodiazepine receptors to their original sensitivity. Flumazenil was found in this study to be a successful treatment for protracted benzodiazepine withdrawal syndrome, but further research is required. A study by Professor Borg in Sweden produced similar results in patients in protracted withdrawal. In 2007, Hoffmann–La Roche the makers of flumazenil, acknowledged the existence of protracted benzodiazepine withdrawal syndromes, but did not recommended flumazenil to treat the condition. Fluoroquinolone antibiotics have been noted to increase the incidence of a CNS toxicity from 1% in the general population, to 4% in benzodiazepine-dependent population or in those undergoing withdrawal from them. This is probably the result of their GABA antagonistic effects as they have been found to competitively displace benzodiazepines from benzodiazepine receptor sites. This antagonism can precipitate acute withdrawal symptoms, that can persist for weeks or months before subsiding. The symptoms include depression, anxiety, psychosis, paranoia, severe insomnia, paresthesia, tinnitus, hypersensitivity to light (photophobia) and sound (hyperacusis), tremors, status epilepticus, suicidal thoughts and suicide attempt. Fluoroquinolone antibiotics should be contraindicated in patients who are dependent on or in benzodiazepine withdrawal. NSAIDs have some mild GABA antagonistic properties and animal research indicate that some may even displace benzodiazepines from their binding site. 
However, NSAIDs taken in combination with fluoroquinolones cause a very significant increase in GABA antagonism, GABA toxicity, seizures, and other severe adverse effects. Imidazenil has received some research attention for the management of benzodiazepine withdrawal, but it is not currently used for this purpose. Imipramine was found to statistically increase the discontinuation success rate. Melatonin augmentation was found to statistically increase the discontinuation success rate for people with insomnia. Phenobarbital, a barbiturate, is used at "detox" or other inpatient facilities to prevent seizures during rapid or abrupt ("cold turkey") withdrawal. The phenobarbital is followed by a one- to two-week taper, although a slow taper from phenobarbital is preferred. In a comparison study, a rapid taper using benzodiazepines was found to be superior to a phenobarbital rapid taper. Pregabalin may help reduce the severity of benzodiazepine withdrawal symptoms and reduce the risk of relapse. Propranolol was not found to increase the discontinuation success rate. SSRI antidepressants have been found to have little value in the treatment of benzodiazepine withdrawal. Trazodone was not found to increase the discontinuation success rate. Inpatient treatment Inpatient drug detox or rehabilitation facilities may be inappropriate for those who have become tolerant or dependent while taking the drug as prescribed, as opposed to recreational use. Such inpatient referrals may be traumatic for these individuals. Prognosis A 2006 meta-analysis found evidence for the efficacy of stepped care: minimal intervention (e.g. sending an advisory letter, or meeting a large number of patients to advise discontinuation), followed by systematic tapered discontinuation alone, without augmentation, if the first try was unsuccessful. Cognitive behavioral therapy improved discontinuation success rates for panic disorder, melatonin for insomnia, and flumazenil or sodium valproate for general long-term benzodiazepine use. A ten-year follow-up found that more than half of those who had successfully withdrawn from long-term use were still abstinent two years later, and that if they were able to maintain this state at two years, they were likely to maintain it at the ten-year follow-up. One study found that after one year of abstinence from long-term use of benzodiazepines, cognitive, neurological and intellectual impairments had returned to normal. Those who had a prior psychiatric diagnosis had a similar success rate from a gradual taper at a two-year follow-up. Withdrawal from benzodiazepines did not lead to an increased use of antidepressants. Withdrawal process It can be very difficult to withdraw directly from short- or intermediate-acting benzodiazepines because of the intensity of the rebound symptoms felt between doses. Moreover, short-acting benzodiazepines appear to produce a more intense withdrawal syndrome. For this reason, discontinuation is sometimes carried out by first substituting an equivalent dose of a short-acting benzodiazepine with a longer-acting one like diazepam or chlordiazepoxide. Failure to use the correct equivalent amount can precipitate a severe withdrawal reaction. Benzodiazepines with a half-life of more than 24 hours include chlordiazepoxide, diazepam, clobazam, clonazepam, clorazepate, ketazolam, medazepam, nordazepam, and prazepam.
Benzodiazepines with a half-life of less than 24 hours include alprazolam, bromazepam, brotizolam, flunitrazepam, loprazolam, lorazepam, lormetazepam, midazolam, nitrazepam, oxazepam, and temazepam. The resultant equivalent dose is then gradually reduced. The consensus is to reduce dosage gradually over several weeks, e.g. 4 or more weeks for diazepam doses over 30 mg/day, with the rate determined by the person's ability to tolerate symptoms. The recommended reduction rates range from 50% of the initial dose every week or so, to 10–25% of the daily dose every 2 weeks. For example, the reduction rate used in the Heather Ashton protocol calls for eliminating 10% of the remaining dose every two to four weeks, depending on the severity and response to reductions with the final dose at 0.5 mg dose of diazepam or 2.5 mg dose of chlordiazepoxide. For most people, discontinuation over 4–6 weeks or 4–8 weeks is suitable. A prolonged period of reduction for longer than six months should be avoided to prevent the withdrawal process from becoming a "morbid focus" for the person. Duration After the last dose has been taken, the acute phase of the withdrawal generally lasts for about two months although withdrawal symptoms, even from low-dose use, can persist for six to twelve months gradually improving over that period, however, clinically significant withdrawal symptoms may persist for years, although gradually declining. A clinical trial of patients taking the benzodiazepine alprazolam for as short as eight weeks triggered protracted symptoms of memory deficits which were still present up to eight weeks after cessation of alprazolam. Protracted withdrawal syndrome Protracted withdrawal syndrome refers to symptoms persisting for months or even years. A significant minority of people withdrawing from benzodiazepines, perhaps 10–15%, experience a protracted withdrawal syndrome which can sometimes be severe. Symptoms may include tinnitus, psychosis, cognitive deficits, gastrointestinal complaints, insomnia, paraesthesia (tingling and numbness), pain (usually in limbs and extremities), muscle pain, weakness, tension, painful tremor, shaking attacks, jerks, dizziness and blepharospasm and may occur even without a pre-existing history of these symptoms. Tinnitus occurring during dose reduction or discontinuation of benzodiazepines is alleviated by recommencement of benzodiazepines. Dizziness is often reported as being the withdrawal symptom that lasts the longest. A study testing neuropsychological factors found psychophysiological markers differing from normals, and concluded that protracted withdrawal syndrome was a genuine iatrogenic condition caused by the long-term use. The causes of persisting symptoms are a combination of pharmacological factors such as persisting drug induced receptor changes, psychological factors both caused by the drug and separate from the drug and possibly in some cases, particularly high dose users, structural brain damage or structural neuronal damage. Symptoms continue to improve over time, often to the point where people eventually resume their normal lives, even after years of incapacity. A slow withdrawal rate significantly reduces the risk of a protracted or severe withdrawal state. Protracted withdrawal symptoms can be punctuated by periods of good days and bad days. When symptoms increase periodically during protracted withdrawal, physiological changes may be present, including dilated pupils as well as an increase in blood pressure and heart rate. 
The change in symptoms has been proposed to be due to changes in receptor sensitivity for GABA during the process of tolerance reversal. A meta-analysis found cognitive impairments in many areas due to benzodiazepine use show improvements after six months of withdrawal, but significant impairments in most areas may be permanent or may require more than six months to reverse. Protracted symptoms continue to fade over a period of many months or several years. There is no known cure for protracted benzodiazepine withdrawal syndrome except time, however, the medication flumazenil was found to be more effective than placebo in reducing feelings of hostility and aggression in patients who had been free of benzodiazepines for 4–266 weeks. This may suggest a role for flumazenil in treating protracted benzodiazepine withdrawal symptoms. Epidemiology The severity and length of the withdrawal syndrome is likely determined by various factors, including rate of tapering, length of use and dosage size, and possible genetic factors. Those who have a prior history of withdrawing from benzodiazepines may have a sensitized or kindled central nervous system leading to worsening cognition and symptomatology, and making each subsequent withdrawal period worse. Special populations Pediatrics A neonatal withdrawal syndrome, sometimes severe, can occur when the mother had taken benzodiazepines, especially during the third trimester. Symptoms include hypotonia, apnoeic spells, cyanosis, impaired metabolic responses to cold stress, and seizures. The neonatal benzodiazepine withdrawal syndrome has been reported to persist from hours to months after birth. A withdrawal syndrome is seen in about 20% of pediatric intensive care unit children after infusions with benzodiazepines or opioids. The likelihood of having the syndrome correlates with total infusion duration and dose, although duration is thought to be more important. Treatment for withdrawal usually involves weaning over a 3- to 21-day period if the infusion lasted for more than a week. Symptoms include tremors, agitation, sleeplessness, inconsolable crying, diarrhea and sweating. In total, over fifty withdrawal symptoms are listed in this review article. Environmental measures aimed at easing the symptoms of neonates with severe abstinence syndrome had little impact, but providing a quiet sleep environment helped in mild cases. Pregnancy Discontinuing benzodiazepines or antidepressants abruptly due to concerns of teratogenic effects of the medications has a high risk of causing serious complications, so is not recommended. For example, abrupt withdrawal of benzodiazepines or antidepressants has a high risk of causing extreme withdrawal symptoms, including suicidal ideation and a severe rebound effect of the return of the underlying disorder if present. This can lead to hospitalisation and potentially, suicide. One study reported one-third of mothers who suddenly discontinued or very rapidly tapered their medications became acutely suicidal due to 'unbearable symptoms'. One woman had a medical abortion, as she felt she could no longer cope, and another woman used alcohol in a bid to combat the withdrawal symptoms from benzodiazepines. Spontaneous abortions may also result from abrupt withdrawal of psychotropic medications, including benzodiazepines. The study reported physicians generally are not aware of the severe consequences of abrupt withdrawal of psychotropic medications such as benzodiazepines or antidepressants. 
Elderly A study of the elderly who were benzodiazepine dependent found withdrawal could be carried out with few complications and could lead to improvements in sleep and cognitive abilities. At 52 weeks after successful withdrawal, a 22% improvement in cognitive status was found, as well as improved social functioning. Those who remained on benzodiazepines experienced a 5% decline in cognitive abilities, which seemed to be faster than that seen in normal aging, suggesting the longer the intake of benzodiazepines, the worse the cognitive effects become. Some worsening of symptoms were seen in the first few months of benzodiazepine abstinence, but at a 24-week follow-up, elderly subjects were clearly improved compared to those who remained on benzodiazepines. Improvements in sleep were seen at the 24- and 52-week follow-ups. The authors concluded benzodiazepines were not effective in the long term for sleep problems except in suppressing withdrawal-related rebound insomnia. Improvements were seen between 24 and 52 weeks after withdrawal in many factors, including improved sleep and several cognitive and performance abilities. Some cognitive abilities, which are sensitive to benzodiazepines, as well as age, such as episodic memory, did not improve. The authors, however, cited a study in younger patients who at a 3.5-year follow-up showed no memory impairments and speculated that certain memory functions take longer to recover from chronic benzodiazepine use and further improvements in elderly people's cognitive function may occur beyond 52 weeks after withdrawal. The reason it took 24 weeks for improvements to be seen after cessation of benzodiazepine use was due to the time it takes the brain to adapt to the benzodiazepine-free environment. At 24 weeks, significant improvements were found, including accuracy of information processing improved, but a decline was seen in those who remained on benzodiazepines. Further improvements were noted at the 52-week follow-up, indicating ongoing improvements with benzodiazepine abstinence. Younger people on benzodiazepines also experience cognitive deterioration in visual-spatial memory but are not as vulnerable as the elderly to the cognitive effects. Improved reaction times were noted at 52 weeks in elderly patients free from benzodiazepines. This is an important function in the elderly, especially if they drive a car due to the increased risk of road traffic accidents in benzodiazepine users. At the 24-week follow-up, 80% of people had successfully withdrawn from benzodiazepines. Part of the success was attributed to the placebo method used for part of the trial which broke the psychological dependence on benzodiazepines when the elderly patients realised they had completed their gradual reduction several weeks previously and had only been taking placebo tablets. This helped reassure them they could sleep without their pills. The authors also warned of the similarities in pharmacology and mechanism of action of the newer nonbenzodiazepine Z drugs. The elimination half-life of diazepam and chlordiazepoxide, as well as other long half-life benzodiazepines, is twice as long in the elderly compared to younger individuals. Many doctors do not adjust benzodiazepine dosage according to age in elderly patients. 
See also Alcohol withdrawal syndrome Benzodiazepine dependence List of benzodiazepines Opioid withdrawal Physical dependence Post-acute-withdrawal syndrome Rebound effect Antidepressant discontinuation syndrome Neuroleptic discontinuation syndrome References External links Benzodiazepines: How they work and how to withdraw by Professor Heather Ashton The Minor Tranquilliser Project, For support, Camden, UK Addiction psychiatry Adverse effects of psychoactive drugs Benzodiazepines Biology of obsessive–compulsive disorder Causes of death Disorders causing seizures Substance dependence Withdrawal syndromes
Benzodiazepine withdrawal syndrome
[ "Chemistry" ]
5,966
[ "Drug safety", "Adverse effects of psychoactive drugs" ]
11,683,576
https://en.wikipedia.org/wiki/Accessible%20tourism
Accessible tourism is the ongoing endeavor to ensure tourist destinations, products, and services are accessible to all people, regardless of their physical or intellectual limitations, disabilities or age. It encompasses publicly and privately owned and operated tourist locations. The goal of accessible tourism is to create inclusivity of all including those traveling with children, people with disabilities, as well as seniors. This allows those with access requirements to be able to function as an independent using products following the universal design principle, a variety of services, and different environments. Background Overview Accessible tourism is defined as a way of making tourist locations more accessible to all populations. It does not just encompass those with disability, but it includes people of all populations including those with children and the elderly. The tourism industry is continuously evolving which has led to a need for accessibility. Because of this, it has also led to an increased market for accessible tourism. With the rise of the independent living movement, seen in places such as Berkeley, California, it has also raised questions about the definition of the landscape and the people within it. The rise of this movement in turn created a demand from the population to modify the city to allow for greater and equal access for everyone. Modern society is increasingly aware of the concept of integration of people with disabilities. Issues such as accessibility and universal design are featured in the international symposia of bodies such as the European Commission. Steps have been taken to promote guidelines and best practices, and major resources are now dedicated to this field. A greater understanding of the accessible tourism market has been promoted through research commissioned by the European Commission where a stakeholder analysis has provided an insight into the complexities of accessible tourism. Similarly, the Australian Sustainable Tourism Cooperative Research Centre funded an Accessible Tourism Research Agenda that sought to outline a research base on which to develop the supply, demand and coordination/regulation information required to develop the market segment. The research agenda has now seen three other funded projects contribute towards a research base on which the tourism industry and government marketing authorities can make more informed decisions. As of 2020, approximately 15% of the world's population lives with some form of disability, with one-fifth of the total, or between 110 million and 190 million people, living with a disability that affects daily life. Based on a report in 2011 by World Health Organization and the World Bank, over 1 billion of people in the world had some disability, with 200 million of those who have experienced severe difficulty in functioning. In addition to the social and health benefits, the market represents an opportunity for new investment and new service requirements, rarely provided by key players in the tourism sector. 
According to ENAT, the European Network for Accessible Tourism, accessible tourism includes but is not limited to: Barrier-free destinations: infrastructure and facilities Transport: by air, land and sea, suitable for all users High quality services: delivered by trained staff Activities, exhibits, attractions: allowing participation in tourism by everyone Marketing, booking systems, web sites & services: information accessible to all Brief history and trends The shift from the medical model to the social model of disability made a major contribution to the development of the concept of accessible tourism. With the Disability Rights Movement in full swing in the mid-to-late 1900s, the traditional view of disability, which focuses on individuals' impairments and the medical interventions to fix them, was significantly challenged. The newly emerged social model of disability postulates that disability is constructed not solely by a medical condition a person has but rather by social environments that impose various kinds of barriers on people with impairments. With the influence of the social model, the general understanding of disability has been expanded to place greater emphasis on removing socially imposed barriers and achieving greater accessibility for individuals with disabilities and various access needs. This endeavor to create a more inclusive environment for all people led to the emergence of the concept of Universal Design, the design of products and environments that can be easily accessed, understood, and used by anyone, regardless of ability. In 1997, the 7 principles of universal design were developed. These principles include: Equitable Use Flexibility in Use Simple and Intuitive Use Perceptible Information Tolerance for Error Low Physical Effort Size and Space for Approach and Use The principles of universal design provided an important conceptual foundation and guidelines for the tourism industry on how to design tourism products and services that have the value of inclusivity at their center. Today, Europe and the United States of America are home to the majority of the existing companies in the accessible tourism industry. However, companies worldwide are starting to appear as the result of a growing need, largely driven by senior tourism, due to increasing life expectancy in developed countries. The United States requires ADA-compliant ramp access to virtually all businesses and public places. Portugal, Spain, the United Kingdom, Germany, France and other northern European countries are increasingly prepared to receive tourists in wheelchairs, and to provide disability equipment and wheelchair-accessible transport. With the growth of the internet, online travel planning is also becoming more common, leading to the rise of online accessibility maps. For example, starting in 2016, Lonely Planet started offering online accessibility resources by country. As for the future of accessible tourism, Michopoulou, Darcy, Ambrose, and Buhalis (2015) predict that this "emerging field of study will influence tourism destination competitiveness in the future, whether that be from a human rights, emerging market segment or service delivery perspective". Regulations Many individual countries have legislation designed to support the needs of people with disabilities, but the closest thing to an international standard for accessible tourism is Article 9 of the United Nations Convention on the Rights of Persons with Disabilities (CRPD). 
Since its adoption on December 13, 2006, the CRPD has gained 164 signatories and was the fastest human rights treaty to be enacted. The convention was designed to combat many of the challenges that people with disabilities face, through legal protection of rights and freedoms, increased access to services that facilitate independent living, decreased discrimination and stigmatization, and raised awareness of disability-related issues. Article 9 focuses specifically on accessibility and on what is required to provide people with disabilities with equal access and opportunities to participate in every aspect of society. Not only do these accommodations benefit the disabled citizens of the countries that are party to the CRPD, but they also improve the experiences of disabled travelers and tourists. Typical accommodations that are increasingly being implemented globally to improve accessible tourism include, but are not limited to: Accessible travel-related websites Reliable information about a specific attraction's level of accessibility Professional staff capable of dealing with accessibility issues Accessible airport transfer, vehicles, and public transportation Accessible restaurants, bars, and other facilities Technical aids and disability equipment such as wheelchairs, bath chairs, and toilet raisers available when making living arrangements Adapted restrooms in restaurants and public places Accessible streets, sidewalks, and building entrances/exits Accessible communication systems Specific accommodations Although this is not an exhaustive list of possible accommodations related to accessible tourism, the examples below demonstrate some solutions to common problems that people with disabilities experience while traveling. References External links Accessible tourism at the Open Directory Project European Network for Accessible Tourism Accessibility Accessible transportation Types of tourism
Accessible tourism
[ "Physics", "Engineering" ]
1,442
[ "Accessible transportation", "Physical systems", "Transport", "Accessibility", "Design" ]
11,683,611
https://en.wikipedia.org/wiki/Diaporthe%20phaseolorum
Diaporthe phaseolorum is a plant pathogen with five varieties: Diaporthe phaseolorum var. batatae Diaporthe phaseolorum var. caulivora Diaporthe phaseolorum var. meridionalis Diaporthe phaseolorum var. phaseolorum Diaporthe phaseolorum var. sojae See also List of soybean diseases References Fungal plant pathogens and diseases phaseolorum Soybean diseases Fungus species
Diaporthe phaseolorum
[ "Biology" ]
97
[ "Fungi", "Fungus species" ]
11,683,694
https://en.wikipedia.org/wiki/Guignardia%20fulvida
Guignardia fulvida is a plant-pathogenic fungus in the family Botryosphaeriaceae. References Fungal plant pathogens and diseases Botryosphaeriaceae Fungi described in 1948 Fungus species
Guignardia fulvida
[ "Biology" ]
47
[ "Fungi", "Fungus species" ]
11,683,712
https://en.wikipedia.org/wiki/Clar%27s%20rule
In organic and physical organic chemistry, Clar's rule is an empirical rule that relates the chemical stability of a molecule to its aromaticity. It was introduced in 1972 by the Austrian organic chemist Erich Clar in his book The Aromatic Sextet. The rule states that, given a polycyclic aromatic hydrocarbon, the resonance structure most important for characterizing its properties is the one with the largest number of aromatic π-sextets, i.e. benzene-like moieties. The rule In general, the chemical structure of a given polycyclic aromatic hydrocarbon allows more than one resonance structure: these are sometimes referred to as Kekulé resonance structures. Some such structures may contain aromatic π-sextets, namely groups of six π-electrons localized in a benzene-like moiety and separated from adjacent rings by formal C–C single bonds. An aromatic π-sextet can be represented by a circle, as in the case of the anthracene molecule. Clar's rule states that for a benzenoid polycyclic aromatic hydrocarbon (i.e. one with only hexagonal rings), the resonance structure with the largest number of disjoint aromatic π-sextets is the most important for characterizing its chemical and physical properties. Such a resonance structure is called a Clar structure. In other words, a polycyclic aromatic hydrocarbon with a given number of π-sextets is more stable than its isomers with fewer π-sextets. In 1984, Glidewell and Lloyd provided an extension of Clar's rule to polycyclic aromatic hydrocarbons containing rings of any size. More recently, Clar's rule was further extended to diradicaloids in their singlet state. Drawing a Clar structure When drawing a Clar structure, the following rules must be satisfied: each vertex of the molecular graph representing the polycyclic aromatic hydrocarbon either belongs to a double bond or to a circle; such double bonds and circles never join; there are no rings with three double bonds, since these are always represented by circles; moreover, the number of circles in the graph is maximized; when a ring with a circle is adjacent to a ring with two double bonds, an arrow is drawn from the former to the latter ring. Some consequences of these rules are worth making explicit. Following Clar, rules 1 and 2 imply that circles can never be in adjacent rings. Rule 3 means that only four options are viable for rings, namely (i) having only one double bond, (ii) having two double bonds, (iii) having a circle, or (iv) being empty, i.e. having no double bonds. Finally, the arrow mentioned in rule 4 can be interpreted in terms of mobility of π-sextets (in this case, we speak of migrating π-sextets) or, equivalently, of a quantum-mechanical resonance between different Clar structures. Examples The resonance structures of phenanthrene According to the rules expressed above, the phenanthrene molecule allows two different resonance structures: one of them presents a single circle in the center of the molecule, with each of the two adjacent rings having two double bonds; the other one has the two peripheral rings each with a circle, and the central ring with one double bond. According to Clar's rule, this last resonance structure gives the most important contribution to the determination of the properties of phenanthrene. The migrating π-sextet of anthracene The anthracene molecule allows three resonance structures, each with a circle in one ring and two sets of double bonds in the other two. 
Following rule 4 above, anthracene is better described by a superposition of these three equivalent structures, and an arrow is drawn to indicate the presence of a migrating π-sextet. Following the same line of reasoning, one can find migrating π-sextets in other molecules of the acene series, such as tetracene, pentacene, and hexacene. The role of angular rings Fusing angular rings around a benzene moiety leads to an increase in stability. The Clar structure of anthracene, for instance, has only one π-sextet but, by moving one ring into the angular position, phenanthrene is obtained, the Clar structure of which carries two circles instead of one. Phenanthrene can be thought of as a benzene moiety with two fused rings; a third ring can be fused to obtain triphenylene, with three aromatic π-sextets in its Clar structure. The chemical stability of these molecules is greatly influenced by the degree of aromaticity of their Clar structures. As a result, while anthracene reacts with maleic anhydride, phenanthrene does not, and triphenylene is the most stable of the three species. Experimental evidence and applications Since its formal statement in 1972, Clar's rule has been supported by a vast amount of experimental evidence. The dependence of the color and reactivity of some small polycyclic aromatic hydrocarbons on the number of π-sextets in their structures was reported by Clar himself in his seminal contribution. Similarly, it was shown that the HOMO-LUMO gap, and therefore the color, of a series of heptacatafusenes depends on the number of π-sextets. Clar's rule has also been supported by experimental results on the distribution of π-electrons in polycyclic aromatic hydrocarbons, by valence bond calculations, and by nucleus-independent chemical shift studies. Clar's rule is widely applied in the fields of chemistry and materials science. For instance, Clar's rule can be used to predict several properties of graphene nanoribbons. Aromatic π-sextets play an important part in determining the ground state of open-shell biradical-type structures. Clar's rule can also rationalize the observed decrease in the bandgap of holey graphenes with increasing size. Limitations Despite the experimental support mentioned above, Clar's rule suffers from some limitations. In the first place, Clar's rule is formulated only for species with hexagonal rings, and thus it cannot be applied to species having rings different from the benzene moiety, even though an extension of the rule to molecules with rings of any size has been provided by Glidewell and Lloyd. Secondly, if more than one Clar structure exists for a given species, Clar's rule does not provide a way to compare the relative importance of each structure in determining the physicochemical properties. Finally, it is important to mention that exceptions to Clar's rule exist, such as in the case of triangulenes. See also Hückel's rule Baird's rule References Physical organic chemistry Rules of thumb
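The drawing rules above can be read as a small combinatorial optimization: rules 1 and 2 forbid circles in edge-sharing rings, and rule 3 maximizes the number of circles, which amounts to finding a maximum independent set on the graph whose vertices are the hexagonal rings and whose edges join fused rings. The following Python sketch (not part of the original article; the encoding of a benzenoid as ring indices plus fused pairs, and the function name, are illustrative assumptions) performs that search by brute force for the molecules discussed above. It deliberately omits the check that the uncircled portion still admits a valid Kekulé double-bond assignment, so it is only a first approximation to a true Clar analysis.

```python
from itertools import combinations

def max_disjoint_sextets(num_rings, fused_pairs):
    """Brute-force search for the largest set of rings that can carry
    aromatic pi-sextets ("circles") under the adjacency constraint of
    Clar's drawing rules: no two circled rings may share a C-C bond.
    Rings are numbered 0..num_rings-1; fused_pairs is a set of
    frozensets {i, j} marking rings fused along an edge."""
    for k in range(num_rings, 0, -1):          # try large subsets first
        for subset in combinations(range(num_rings), k):
            if all(frozenset(pair) not in fused_pairs
                   for pair in combinations(subset, 2)):
                return set(subset)             # first hit is a maximum
    return set()

# Phenanthrene: three rings fused angularly (0-1 and 1-2).
phenanthrene = {frozenset({0, 1}), frozenset({1, 2})}
print(max_disjoint_sextets(3, phenanthrene))   # {0, 2} -> two sextets

# Triphenylene: central ring 0 fused to outer rings 1, 2, 3.
triphenylene = {frozenset({0, 1}), frozenset({0, 2}), frozenset({0, 3})}
print(max_disjoint_sextets(4, triphenylene))   # {1, 2, 3} -> three sextets

# Caveat: adjacency alone over-counts for linear acenes. For anthracene
# (rings 0-1 and 1-2 in a row) it returns {0, 2}, while the true Clar
# structure has a single migrating sextet, because circling both outer
# rings leaves the middle ring's two remaining carbons (the 9 and 10
# positions) with no way to pair into a double bond.
```

The phenanthrene and triphenylene outputs match the Clar structures described above (two and three sextets respectively); the anthracene caveat illustrates why the vertex-covering conditions of rules 1-3, and not ring adjacency alone, define a valid Clar structure.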
Clar's rule
[ "Chemistry" ]
1,430
[ "Physical organic chemistry" ]
11,683,722
https://en.wikipedia.org/wiki/Elsino%C3%AB%20ampelina
Elsinoë ampelina is a plant pathogen and the causal agent of anthracnose on grape. This type of anthracnose affects several plant varieties, including some brambles and wine grapes. Grape anthracnose can be identified by the "bird's eye" lesions on the berries and sunken black or greyish lesions on leaves and shoots. From these lesions, conidia are produced. The disease can be lethal to the plant, either through defoliation and loss of photosynthetic capacity, or through injury to the actively growing regions of the vine. Grape anthracnose is particularly important to the wine industry, as it can decrease the quality and quantity of berries produced as well as kill the vine outright, leading to large economic losses, particularly during the mid-summer months. Hosts and symptoms E. ampelina affects two species of Rubus and three species of Vitis. Specifically, E. ampelina affects blackberries, raspberries, mountain grapes, fox or Concord grapes, and the European wine grape. Anthracnose diseases can occur on a variety of plants, but the primary host for E. ampelina is grape. Anthracnose on grape presents itself as lesions on shoots, leaves, and berries. Lesions first appear on young shoots as small circular reddish spots that later enlarge into grey lesions that appear sunken. The lesions eventually develop margins that are dark reddish-brown to violet-black in color. If left untreated, lesions on shoots become larger and eventually kill the shoot. While these lesions may be very apparent and easy to identify, they can sometimes be confused with hail damage; hail damage typically appears on only one side of the plant, and anthracnose lesions have a darker, more raised edge. Anthracnose lesions on leaves and petioles look very similar to those on shoots. However, on leaves, the lesions have dry grey or white centers that eventually fall out, leaving a hole. This response by the plant is called a shot-hole. Should the lesions spread and the infection reach the vascular system of the leaf, the anthracnose will prevent proper development of the leaf and lead to malformation or drying of the leaf. Grape vines are susceptible to anthracnose from before flowering all the way through fruit softening and coloration; essentially, the berries are susceptible to the pathogen throughout the growing season. Anthracnose presents itself on the berries as small reddish circles, around a quarter inch in diameter, that become sunken with a narrow dark brown to black border. Eventually, the center of the lesion changes color from violet to white or grey and becomes velvety. These lesions often look like a shooting target or bullseye. Should the disease spread to and affect the pulp of the berry, it will cause cracking, which opens the berry to secondary infections. Disease cycle Late in the season, the grape anthracnose fungus produces sclerotia, located primarily at the edge of the infected lesions on shoots. Unlike acervuli, sclerotia serve as the overwintering structures. Because the fungus overwinters in dormant and dead canes (one-year-old wood that is starting to become lignified), disease control becomes very difficult. Large numbers of conidia are disseminated from sclerotia in the spring during wet periods of 24 hours when the temperature is above 36 °F (2 °C). The conidia infect the young leaves, shoots, and berries of the grape vine. 
Conidia germinate and cause primary inoculum under the following circumstances: the presence of free water for 12 hours and adequate temperature (36-90 °F (2-32 °C)). In fact, primary inoculum of grape anthracnose is possible even before bud break. The infection rate escalates with increasing temperature. Development of disease symptoms is also temperature-dependent: within 13 days at 36 °F, or within 4 days at 90 °F. Simultaneously, ascospores are produced on the lesions of infected canes or berries left on the trellis system or on the vineyard floor, and these carry out infection as well. The ascospores are formed in asci, which sit in cavities within a stroma (the dense structural tissue that produces fruiting bodies in fungi) of the pseudothecium. The pseudothecium of grape anthracnose, the sexual fruiting body of the fungus, has asci containing eight four-celled ascospores. The fungus also overwinters as pseudothecia, but the importance of ascospores in disease development is not clearly understood. A study by Mirica (1998) validated that the ascospores do germinate, infect the tissue, and produce the Sphaceloma phase, which demonstrates the existence of the perfect stage of Elsinoë ampelina. Overall, conidia and ascospores overwinter on the ground and on infected tissue and become the source of primary inoculum. Throughout spring and summer, the fungus produces acervuli on the exterior of the necrotic areas at their mature stage. Under wet conditions, these acervuli form conidia. The conidia from acervuli become the secondary source of infection for the remainder of the growing season. In summary, the disease cycle of Elsinoë ampelina is as follows: 1) the fungus overwinters by forming both pseudothecia and sclerotia, 2) the spores from both structures cause primary inoculum and form mycelium on the infected lesions, and 3) acervuli disseminate conidia, which become the source of secondary inoculum. As mentioned earlier, grape anthracnose is dependent upon moisture and temperature. It can be exacerbated by heavy rainfall and hail, or by overhead irrigation. Environment Grape anthracnose can be found wherever grapes are grown; however, it is more prevalent in certain areas. It thrives under warm and wet conditions. Both primary and secondary inoculum are spread by rain splashing onto new tissue. Moisture is required for the germination of conidia on tissue. New tissue is the most vulnerable to infection. Overgrown vines also promote infection, as they take longer to dry out after dew or rain, often due to decreased air flow in the canopy. The disease can become even more severe in areas of poorly drained soil or during years of heavy rainfall or rain coupled with high temperatures. Management Sanitation is a critical factor in controlling grape anthracnose. The removal of infected tissue is done during the dormant stage, often when it is cold and dry in the winter months. The infected tissue must then be destroyed upon removal. This reduces the amount of primary inoculum available to be released in the spring. Wild grape varieties in proximity to cultivated grapes should be removed, since the wild species can host grape anthracnose and are a source of primary inoculum. Because the conidia are spread by water splashing, it is not crucial to eliminate all wild grapes, just the ones near the cultivated grapes. Planting varieties with resistance or tolerance to grape anthracnose can aid in management of the disease. 
American varieties like 'Concord' and 'Niagara' have more resistance to the disease, while French hybrids and Vitis vinifera are more susceptible to infection. Specific susceptible hybrid grape cultivars include 'Vidal', 'Mars', 'Marquis', and 'Reliance'. Canopy upkeep can be an important preventive measure when dealing with anthracnose. Proper pruning and training will increase air flow around the plant and thus reduce the drying time of external tissue surfaces. Appropriate care is especially crucial for areas of new growth, because they are most susceptible to the pathogen. Fungicides are a control measure commonly used once grape anthracnose has become established in a vineyard. The most important fungicide application occurs in early spring, during the dormant period before bud swell. A lime-sulfur solution is most commonly used, typically applied at a rate of ten gallons per acre. Commercially available Sulforix can also be used at a rate of one gallon per acre. Both fungicides target the sclerotia overwintering in the canes. The dormant fungicide application is then followed up throughout the season by foliar sprays, which target the surface of the foliage. These sprays help protect the new susceptible tissues and are typically recommended at two-week intervals. Other commercial products often used include Mancozeb, Captan, Ziram, Sovran, Rally, Elite, Inspire Super, Adamant, Mettle, Revus Top, Vintage, and Pristine. The majority of these fungicides are sterol inhibitors, and a few are EBDCs (non-systemic, surface-acting fungicides). It is important to use fungicides with different modes of action to avoid resistance development. Another control method is ensuring the use of disease-free plantings, and phytosanitary regulations ban the movement of infected plants and propagules. The best way to ensure disease-free plantings is to buy them from a certified operation with disease-tested grape vines. Importance Grape anthracnose can be found wherever grapes are grown. Lesions can kill leaves, shoots, and the actively growing parts of vines, and can render the berries undesirable and unusable. Damage can be seen throughout the growing season, with severe damage from July through September as the berries ripen and undergo veraison. In climates with strong winters, even if the disease does not kill the vine outright, it will reduce its photosynthetic capacity, leading to decreased carbohydrate reserves in the vine and eventual death in winter as those reserves run out and the plant is unable to sustain itself. In addition, once the disease afflicts the berries, it leads to a decrease in quality and quantity, which has a detrimental economic impact, as winemakers are left with lower volumes of lower-quality berries to work with. References Agrios, George N. (2004). Plant Pathology, 5th Edition. Elsevier Academic Press; pp. 420, 512 External links Index Fungorum USDA ARS Fungal Database Elsinoë Fungal plant pathogens and diseases Fungal grape diseases Ornamental plant pathogens and diseases Fungi described in 1874 Fungus species
Elsinoë ampelina
[ "Biology" ]
2,191
[ "Fungi", "Fungus species" ]
11,683,889
https://en.wikipedia.org/wiki/Exobasidium%20vaccinii
Exobasidium vaccinii, commonly known as "red leaf disease" or "azalea gall," is a biotrophic species of fungus that causes galls on ericaceous plant species, such as blueberry and azalea (Vaccinium and Rhododendron spp.). Exobasidium vaccinii is considered the type species of the genus Exobasidium. As a member of the Ustilaginomycotina, it is a basidiomycete closely related to smut fungi. Karl Wilhelm Gottlieb Leopold Fuckel first described the species in 1861 under the basionym Fusidium vaccinii; in 1867 Mikhail Stepanovich Voronin (often cited as "Woronin") placed it in the genus Exobasidium. The type specimen is from Germany and is held in the Swedish Museum of Natural History. In its current definition, given by John Axel Nannfeldt in 1981, Exobasidium vaccinii is limited to the host Vaccinium vitis-idaea; this circumscription is followed in most recent papers on E. vaccinii. Morphology In its pathogenic state, E. vaccinii causes discoloration and, depending on the host, may cause hypertrophy and hyperplasia of the leaves and meristem, often forming flower-like structures (i.e. "pseudoflowers"). It may also cause green spots on blueberry fruits, which are sometimes tinted red and bear occasional white spore masses. Symptoms within the host plant are often more varied than in other species of Exobasidium, and distinguishing among species has traditionally relied upon spore size. In a typical disease cycle, leaves on infected shoots first turn greenish red to bright red at the time when the host species would typically fruit. During the late stage of disease development, the undersurface (abaxial side) of leaves becomes covered in a white mass consisting of sparse hyphae, basidia, basidiospores, secondary spores, and secondary spores forming conidia. Basidiospores are musiform with a round apex and a distinctive hilar region at the spore base. The spores are hyaline, about 10-13 micrometers long and 3-4 micrometers wide. Some spores have a transverse medial septum separating two nuclei. Woronin first observed Exobasidium's ability to produce asexual spores in 1867, and over a century later, scanning electron microscopy and transmission electron microscopy confirmed E. vaccinii's ability to produce conidia from secondary spores. There are no known reports of E. vaccinii forming appressoria; however, there are numerous reports of appressoria forming in E. vexans, which is pathogenic on tea, and in other members of the Ustilaginomycetes. The intercellular hyphae are septate with short, lobed haustoria. Hyphae and haustoria contacting host cells exert significant pressure and cause distortion in the surrounding tissues. Haustoria contain membranous inclusion bodies and are associated with electron-dense deposits, much like those of other plant-pathogenic fungi. Ecology E. vaccinii is dimorphic and can be grown in culture; in its non-pathogenic state in nature, it likely lives in a yeast-like form in the soil or on the plant, similar to many of its smut relatives. In its biotrophic state, E. vaccinii gets its energy from its ericaceous host plants. Most species of native and cultivated rhododendron and azalea are considered susceptible, in addition to highbush and lowbush blueberry cultivars. E. vaccinii is distributed across the Northern Hemisphere, including most of eastern North America and western Europe, according to known studies. It has also been reported in parts of Asia on endemic Vaccinium. The Exobasidium previously reported on endemic species in Hawaii as E. 
vaccinii has been discovered to be a different species, Exobasidium darwinii. Spores are produced on basidia on the outside of galls, typically in the late spring and early summer. Eventually, the mycelium present in the leaves colonizes the host's rhizomes, where it becomes systemic; any new shoots growing from these rhizomes are often infected and fail to fruit or flower. Systemically infected plants also often experience higher infection rates and gall loads. Agricultural impacts Blueberries infected with E. vaccinii remain edible, but the spots result in what may be considered "unsightly" fruits. The disease has been observed to infect up to 25% of certain harvests, rendering the berries unmarketable. Additionally, lower fruit yields in systemically infected plants pose a great risk to commercial growers. Gall formation negatively affects reproductive measures, decreasing flower production, flower size, and fruit yield. A study conducted in Nova Scotia found that the disease decreases flower numbers by 42% and the number of berries per stem by 74%. Branches of infected shoots also typically die the following year. Recommendations for preventive control include pruning infected shoots before the fungus produces spores. Taxonomy Taxonomic work on E. vaccinii is ongoing. While Fuckel first described E. vaccinii on a Vaccinium species in Germany, many studies have attributed gall formation on multiple North American blueberry cultivars, and on native and cultivated azaleas, to this species. Most of these records and publications do not have a phylogenetic basis for their identifications and rely on spore morphology; therefore, their taxonomic placement and host relationships cannot be confirmed without more in-depth phylogenetic studies. One hypothesis argues that, in addition to coevolution, sporulation site plays a significant role in speciation. There is also some disagreement in the literature over whether E. vaccinii even causes hypertrophy on certain Vaccinium species. Taxonomic resolution will require additional phylogenetic studies. Originally E. vaccinii was treated as a broad-spectrum group, but later studies showed it to be a complex of different species with narrow host ranges. Many species once considered E. vaccinii have since been separated as their own. Nannfeldt in 1981 "proposed that Exobasidium species have narrow host ranges that are restricted to one plant species or a group of closely-related species." References External links Fungal plant pathogens and diseases Ustilaginomycotina Fungi described in 1867 Fungus species Gall-inducing fungi
Exobasidium vaccinii
[ "Biology" ]
1,368
[ "Gall-inducing fungi", "Fungi", "Fungus species" ]
11,683,991
https://en.wikipedia.org/wiki/Fusarium%20acuminatum
Fusarium acuminatum is a fungal plant pathogen. It was originally found on living stems of Solanum tuberosum in New York, USA. Fusarium acuminatum has been found to be a ripe rot pathogen of Actinidia deliciosa (fuzzy kiwifruit) in New Zealand. It has also been found to cause post-harvest rot on stored kiwifruit (Actinidia arguta) in China, where the rot was described as soft, brown, slightly sunken, water-soaked lesions with abundant white-to-pink mycelium. It also causes root rot of maidong (Ophiopogon japonicus) in China. Fusarium acuminatum and Fusarium solani are known to be major pathogens causing root rot of Astragalus membranaceus (Mongolian milkvetch), which can lead to serious yield loss of the herb in China. References Fungal plant pathogens and diseases Hypocreales Fungi described in 1895 Taxa named by Benjamin Matlack Everhart Fungus species
Fusarium acuminatum
[ "Biology" ]
223
[ "Fungi", "Fungus species" ]
11,684,040
https://en.wikipedia.org/wiki/Fusarium%20equiseti
Fusarium equiseti is a fungal species and a plant pathogen of a varied range of crops. It is considered a weak pathogen on cereals and is occasionally found associated with kernels infected by Fusarium head blight. It is commonly found in tropical and sub-tropical areas. The species was reported in 2016 as a causal organism of wilt in Capsicum chinense in Mexico. Fusarium equiseti is also one of the causal organisms of chilli wilt in Kashmir, along with two other Fusarium species, Fusarium oxysporum and Fusarium solani. References External links USDA ARS Fungal Database Fungal plant pathogens and diseases equiseti Fungi described in 1866 Fungus species
Fusarium equiseti
[ "Biology" ]
155
[ "Fungi", "Fungus species" ]
11,684,056
https://en.wikipedia.org/wiki/Fusarium%20tricinctum
Fusarium tricinctum is a fungal plant pathogen responsible for various plant diseases worldwide, especially in temperate regions. It is found on many crops around the world, including malt barley (Andersen et al., 1996) and cereals (Chelkowski et al., 1989; Bottalico and Perrone, 2002; Kosiak et al., 2003; Wiśniewska et al., 2014). It has also been found on animals such as rainbow trout (Marasas et al., 1967). In cereals, it is one of the most common causes of Fusarium head blight (FHB) as well as root rot. References tricinctum Fungal plant pathogens and diseases Fungi described in 1886 Fungus species
Fusarium tricinctum
[ "Biology" ]
157
[ "Fungi", "Fungus species" ]
11,684,229
https://en.wikipedia.org/wiki/Helicobasidium%20mompa
Helicobasidium mompa is a species of fungus in the subdivision Pucciniomycotina. Basidiocarps (fruit bodies) are corticioid (patch-forming) and are typically violet to purple. Microscopically they have auricularioid (laterally septate) basidia. Helicobasidium mompa is an opportunistic plant pathogen and is one of the causes of violet root rot of crops and other plants. DNA sequencing suggests that it is a distinct, eastern Asian species. Taxonomy Helicobasidium mompa was first described in 1891 by the Japanese mycologist Nobujiro Tanaka for a species found on mulberry in Japan that was similar to the European Helicobasidium purpureum, but with basidiospores depicted as ovoid and of slightly smaller size. In 1955 Seiya Ito synonymized the long-spored H. mompa f. macrosporum and H. compactum with the short-spored H. mompa. As a result, at least some subsequent references to H. mompa refer to a long-spored species. A 1999 study considered H. mompa a nomen dubium (a name of unknown application) because of uncertainty concerning its description and interpretation. Initial molecular research, based on cladistic analysis of DNA sequences, indicates, however, that Japanese and Korean specimens determined as H. mompa form a grouping distinct from those named Helicobasidium longisporum or H. purpureum. Description Basidiocarps are corticioid, smooth, membranaceous, and purple to purple-brown. Microscopically the hyphae are easily visible, 5–8 μm in diameter, brownish-purple, and lack clamp connections. Basidia are tubular, curved or crook-shaped, and auricularioid (laterally septate). Basidiospores were originally described as ovoid, 10–12 × 5–7 μm, but have been re-interpreted as elongated, 10–23 × 4–7.5 μm. Distribution Helicobasidium mompa has been recorded mainly from temperate areas of Japan, Korea, and China. It is reported to cause violet root rot of various crops. References Fungal plant pathogens and diseases Fungi described in 1891 Fungi of Asia Pucciniomycotina Fungus species
Helicobasidium mompa
[ "Biology" ]
498
[ "Fungi", "Fungus species" ]