Dataset columns: id (int64, 39 to 79M), url (string, 31 to 227 chars), text (string, 6 to 334k chars), source (string, 1 to 150 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 71.8k), subcategories (list, 0 to 30 items).
7,044,429
https://en.wikipedia.org/wiki/Absolute%20irreducibility
In mathematics, a multivariate polynomial defined over the rational numbers is absolutely irreducible if it is irreducible over the complex field. For example, x² + y² − 1 is absolutely irreducible, but while x² + y² is irreducible over the integers and the reals, it is reducible over the complex numbers as x² + y² = (x + iy)(x − iy) and thus not absolutely irreducible. More generally, a polynomial defined over a field K is absolutely irreducible if it is irreducible over every algebraic extension of K, and an affine algebraic set defined by equations with coefficients in a field K is absolutely irreducible if it is not the union of two algebraic sets defined by equations in an algebraically closed extension of K. In other words, an absolutely irreducible algebraic set is a synonym of an algebraic variety, which emphasizes that the coefficients of the defining equations may not belong to an algebraically closed field. Absolutely irreducible is also applied, with the same meaning, to linear representations of algebraic groups. In all cases, being absolutely irreducible is the same as being irreducible over the algebraic closure of the ground field. Examples A univariate polynomial of degree greater than or equal to 2 is never absolutely irreducible, due to the fundamental theorem of algebra. The irreducible two-dimensional representation of the symmetric group S3 of order 6, originally defined over the field of rational numbers, is absolutely irreducible. The representation of the circle group by rotations in the plane is irreducible (over the field of real numbers), but is not absolutely irreducible. After extending the field to complex numbers, it splits into two irreducible components. This is to be expected, since the circle group is commutative and it is known that all irreducible representations of commutative groups over an algebraically closed field are one-dimensional. The real algebraic variety defined by the equation x² + y² = 1 is absolutely irreducible. It is the ordinary circle over the reals and remains an irreducible conic section over the field of complex numbers. Absolute irreducibility more generally holds over any field not of characteristic two. In characteristic two, the equation is equivalent to (x + y − 1)² = 0. Hence it defines the double line x + y = 1, which is a non-reduced scheme. The algebraic variety given by the equation x² + y² = 0 is not absolutely irreducible. Indeed, the left hand side can be factored as x² + y² = (x + iy)(x − iy), where i is a square root of −1. Therefore, this algebraic variety consists of two lines intersecting at the origin and is not absolutely irreducible. This holds either already over the ground field, if −1 is a square, or over the quadratic extension obtained by adjoining i. References Algebraic geometry Representation theory
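The two opening examples can be checked computationally. The following is a minimal sketch using SymPy (an assumption; the article mentions no software): x² + y² − 1 does not factor even after adjoining i, while x² + y² splits once i is adjoined.

```python
# Minimal SymPy sketch of the article's two opening examples (SymPy is an
# assumption; expected outputs shown in comments).
from sympy import symbols, I, factor

x, y = symbols("x y")

# x**2 + y**2 - 1 stays irreducible even over Q(i): absolutely irreducible.
print(factor(x**2 + y**2 - 1, extension=I))   # -> x**2 + y**2 - 1

# x**2 + y**2 is irreducible over Q and R, but splits once i is adjoined.
print(factor(x**2 + y**2))                    # -> x**2 + y**2
print(factor(x**2 + y**2, extension=I))       # -> (x - I*y)*(x + I*y)
```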
Absolute irreducibility
[ "Mathematics" ]
559
[ "Representation theory", "Fields of abstract algebra", "Algebraic geometry" ]
7,044,567
https://en.wikipedia.org/wiki/Kari%20Enqvist
Kari-Pekka Enqvist (born February 16, 1954, in Lahti, Finland) is a professor of cosmology in the Department of Physical Sciences at the University of Helsinki. Enqvist was awarded his PhD in theoretical physics in 1983. Enqvist is the chairman of the scientific advisory board of Skepsis ry (a Finnish sceptics' society) and has written many books that popularize physics. In 1997 Enqvist was granted the Magnus Ehrnrooth Foundation Physics Award for his efforts in particle physics and cosmology. In 1999, he was awarded the Tieto-Finlandia award, Finland's most significant award for non-fiction, for his book Olemisen porteilla ("At the gates of being"). Enqvist retired from the University of Helsinki in 2019. References External links Kari Enqvist's homepage 20th-century Finnish physicists Finnish science writers Particle physicists Finnish skeptics Finnish atheists Academic staff of the University of Helsinki People from Lahti 1954 births Living people Tieto-Finlandia Award winners Cosmologists 21st-century Finnish physicists
Kari Enqvist
[ "Physics" ]
228
[ "Particle physicists", "Particle physics" ]
7,044,573
https://en.wikipedia.org/wiki/Station%20biologique%20de%20Roscoff
The Station biologique de Roscoff (SBR) is a French marine biology and oceanography research and teaching center. Founded by Henri de Lacaze-Duthiers (1821–1901) in 1872, it is currently affiliated with the Sorbonne Faculty of Science and Engineering (part of Sorbonne University) and the Centre National de la Recherche Scientifique (CNRS). Overview The Station biologique is situated in Roscoff on the northern coast of Brittany (France) about 60 km east of Brest. Its location offers access to an exceptional variety of biotopes, most of which are accessible at low tide. These biotopes support a large variety of both plant (700) and animal (3000) marine species. Founded in 1872 by Professor Henri de Lacaze-Duthiers (then Zoology Chair at the Sorbonne University), the SBR has constituted, since March 1985, the Internal School 937 of the Pierre and Marie Curie University (UPMC). In November 1985, the SBR was given the status of Oceanographic Observatory by the Institut National des Sciences de l'Univers et de l'Environnement (National Institute for the Sciences of the Universe and the Environment; INSU). The SBR is also, since January 2001, a Research Federation within the Life Sciences Department of the CNRS. The personnel of the SBR, which includes about 200 permanent staff, consists of scientists, teaching scientists, technicians, postdoctoral fellows, PhD students and administrative staff. These personnel are organized into various research groups within research units that are recognised by the Life Sciences Department of the CNRS (the current research units have the following codes: FR 2424, UMR 8227, UMR 7144, UMI 3614 and USR 3151). The various research groups work on a wide range of topics, ranging from investigation of the fine structure and function of biological macromolecules to global oceanic studies. Genomic approaches constitute an important part of many of the research programmes, notably via the European Network of Excellence "Marine Genomics", which is coordinated by the SBR. With the accommodation facilities at its hotel and its teaching facilities and equipment, the SBR provides conditions for teaching a range of subjects including zoology, phycology and coastal oceanography. Teaching at the SBR includes courses that form part of the UPMC Master's program and the European Socrates programme. The SBR is part of the network "BioGenOuest" and gives access to various technological platforms such as sequencing, mass spectrometry, microscopy and bioinformatics. The station publishes (since 1960) a bilingual scientific journal, the Cahiers de Biologie Marine (CBM). The SBR also hosts between 12 and 15 national and international conferences per year, including the Jacques Monod Conferences. History References External links Station Biologique de Roscoff (official web site) - (English/French) History History of the Station biologique by André Toulmond Archives of the Station biologique de Roscoff Biography of Georges Teissier Gallery Oceanography Marine biological stations Oceanographic organizations Research institutes in France
Station biologique de Roscoff
[ "Physics", "Environmental_science" ]
649
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
7,045,361
https://en.wikipedia.org/wiki/Biomedical%20Engineering%20Society
BMES (the Biomedical Engineering Society) is the professional society for students, faculty, researchers and industry working in the broad area of biomedical engineering. BMES is the leading biomedical engineering society in the United States and was founded on February 1, 1968 "to promote the increase of biomedical engineering knowledge and its utilization." It had 7,000 members as of 2018. Since 1972, the society has published an academic journal, the Annals of Biomedical Engineering (online archive). History The BMES was first established in Illinois on February 1, 1968 as a non-profit organization that aims to serve biomedical engineering students, academics, researchers, and professionals. At its establishment, the organization had 171 founding members and 89 charter members. The BMES held its first meeting on April 17, 1968 in cooperation with the American Societies for Experimental Biology at the Ritz-Carlton Hotel in Atlantic City, NJ. References External links Engineering Society Biomedical engineering Biomedical Engineering Society
Biomedical Engineering Society
[ "Engineering", "Biology" ]
190
[ "Biological engineering", "Bioengineering stubs", "Biomedical engineering", "Biotechnology stubs", "Medical technology stubs", "Medical technology" ]
7,045,490
https://en.wikipedia.org/wiki/Dual%20abelian%20variety
In mathematics, a dual abelian variety can be defined from an abelian variety A, defined over a field k. A 1-dimensional abelian variety is an elliptic curve, and every elliptic curve is isomorphic to its dual, but this fails for higher-dimensional abelian varieties, so the concept of dual becomes more interesting in higher dimensions. Definition Let A be an abelian variety over a field k. We define to be the subgroup consisting of line bundles L such that , where are the multiplication and projection maps respectively. An element of is called a degree 0 line bundle on A. To A one then associates a dual abelian variety Av (over the same field), which is the solution to the following moduli problem. A family of degree 0 line bundles parametrized by a k-variety T is defined to be a line bundle L on A×T such that for all , the restriction of L to A×{t} is a degree 0 line bundle, the restriction of L to {0}×T is a trivial line bundle (here 0 is the identity of A). Then there is a variety Av and a line bundle , called the Poincaré bundle, which is a family of degree 0 line bundles parametrized by Av in the sense of the above definition. Moreover, this family is universal, that is, to any family L parametrized by T is associated a unique morphism f: T → Av so that L is isomorphic to the pullback of P along the morphism 1A×f: A×T → A×Av. Applying this to the case when T is a point, we see that the points of Av correspond to line bundles of degree 0 on A, so there is a natural group operation on Av given by tensor product of line bundles, which makes it into an abelian variety. In the language of representable functors one can state the above result as follows. The contravariant functor, which associates to each k-variety T the set of families of degree 0 line bundles parametrised by T and to each k-morphism f: T → T the mapping induced by the pullback with f, is representable. The universal element representing this functor is the pair (Av, P). This association is a duality in the sense that there is a natural isomorphism between the double dual Avv and A (defined via the Poincaré bundle) and that it is contravariant functorial, i.e. it associates to all morphisms f: A → B dual morphisms fv: Bv → Av in a compatible way. The n-torsion of an abelian variety and the n-torsion of its dual are dual to each other when n is coprime to the characteristic of the base. In general - for all n - the n-torsion group schemes of dual abelian varieties are Cartier duals of each other. This generalizes the Weil pairing for elliptic curves. History The theory was first put into a good form when K was the field of complex numbers. In that case there is a general form of duality between the Albanese variety of a complete variety V, and its Picard variety; this was realised, for definitions in terms of complex tori, as soon as André Weil had given a general definition of Albanese variety. For an abelian variety A, the Albanese variety is A itself, so the dual should be Pic0(A), the connected component of the identity element of what in contemporary terminology is the Picard scheme. For the case of the Jacobian variety J of a compact Riemann surface C, the choice of a principal polarization of J gives rise to an identification of J with its own Picard variety. This in a sense is just a consequence of Abel's theorem. For general abelian varieties, still over the complex numbers, A is in the same isogeny class as its dual. An explicit isogeny can be constructed by use of an invertible sheaf L on A (i.e. 
in this case a holomorphic line bundle), when the subgroup K(L) of translations on L that take L into an isomorphic copy is itself finite. In that case, the quotient A/K(L) is isomorphic to the dual abelian variety Av. This construction of Av extends to any field K of characteristic zero. In terms of this definition, the Poincaré bundle, a universal line bundle, can be defined on A × Av. The construction when K has characteristic p uses scheme theory. The definition of K(L) has to be in terms of a group scheme that is a scheme-theoretic stabilizer, and the quotient taken is now a quotient by a subgroup scheme. The Dual Isogeny Let be an isogeny of abelian varieties. (That is, is finite-to-one and surjective.) We will construct an isogeny using the functorial description of , which says that the data of a map is the same as giving a family of degree zero line bundles on , parametrized by . To this end, consider the isogeny and where is the Poincaré line bundle for . This is then the required family of degree zero line bundles on . By the aforementioned functorial description, there is then a morphism so that . One can show using this description that this map is an isogeny of the same degree as , and that . Hence, we obtain a contravariant endofunctor on the category of abelian varieties which squares to the identity. This kind of functor is often called a dualizing functor. Mukai's Theorem A celebrated theorem of Mukai states that there is an isomorphism of derived categories , where denotes the bounded derived category of coherent sheaves on X. Historically, this was the first use of the Fourier-Mukai transform and shows that the bounded derived category cannot necessarily distinguish non-isomorphic varieties. Recall that if X and Y are varieties, and is a complex of coherent sheaves, we define the Fourier-Mukai transform to be the composition , where p and q are the projections onto X and Y respectively. Note that is flat and hence is exact on the level of coherent sheaves, and in applications is often a line bundle so one may usually leave the left derived functors underived in the above expression. Note also that one can analogously define a Fourier-Mukai transform using the same kernel, by just interchanging the projection maps in the formula. The statement of Mukai's theorem is then as follows. Theorem: Let A be an abelian variety of dimension g and let P be the Poincaré line bundle on A × Av. Then, , where is the inversion map, and is the shift functor. In particular, is an isomorphism. Notes References Abelian varieties Abelian variety
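The displayed formulas in the Mukai's Theorem paragraph were lost in extraction. The following LaTeX block gives the standard form of the statement (a hedged reconstruction; it may not match the article's original notation exactly).

```latex
% Standard statement of Mukai's theorem (reconstruction, not the article's
% own display); here \hat{A} denotes the dual abelian variety and g = \dim A.
\[
  \Phi_{\mathcal{P}} : D^b(A) \longrightarrow D^b(\hat{A}),
  \qquad
  \Phi_{\mathcal{P}}(\mathcal{F}) \;=\; Rq_{*}\!\bigl(p^{*}\mathcal{F}\otimes\mathcal{P}\bigr),
\]
\[
  \Phi_{\mathcal{P},\,\hat{A}\to A}\circ\Phi_{\mathcal{P},\,A\to\hat{A}}
  \;\cong\; (-1_{A})^{*}\,[-g],
\]
% p and q are the projections of A x \hat{A}; (-1_A) is the inversion map
% and [-g] the shift functor, so \Phi_{\mathcal{P}} is an equivalence.
```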
Dual abelian variety
[ "Mathematics" ]
1,432
[ "Mathematical structures", "Category theory", "Duality theories", "Geometry" ]
7,045,887
https://en.wikipedia.org/wiki/Black%20cat%20bone
A black cat bone is a type of lucky charm used in the magical tradition of hoodoo. It is thought to ensure a variety of positive effects, such as invisibility, good luck, protection from malevolent magic, rebirth after death, and romantic success. The bone, anointed with Van Van oil, may be carried as a component of a mojo bag; alternatively, without the coating of oil, it is held in the charm-user's mouth. Origins The black cat has been a symbol of both good and ill luck in near-worldwide folklore accounts. Magical traditions involving black cat bones, specifically, have been found in German-Canadian practice as well as in hoodoo; these German-Canadian magic-makers were not previously in contact with hoodooists, suggesting a European origin to the charm. The use of the black cat bone to ensure invisibility, specifically as an aid to people, is comparable to the European Hand of Glory. Differences in method After a black cat is caught, it is almost universally boiled alive in a pot of water at midnight, so that its bones may be more easily looked over by the practitioner. One particular bone, special to each individual cat, contains all the magical efficacy alone. This part of the ritual comes from the European magical text, the Book of Saint Cyprian. A variety of rituals and methods are used to determine which bone is the right one, and preparation before the cat's slaughter can vary according to tradition. One method of obtaining a black cat bone, described in Zora Neale Hurston's Mules and Men, involves a period of fasting before the actual catching of the animal. After the standard boiling of the cat's corpse, each bone is tasted by the hoodooist, who then selects the first bitter-tasting bone as the correct one. Another way to determine the magical bone, though it is otherwise similar in procedure, involves a mirror. When the reflection of the bone becomes dark, the hoodoo practitioner will know that it is the right one. A variation of this method is also practiced on the Sea Islands, where the one bone that does not reflect in the mirror is believed to be magical. Yet another method of determining which bone is the correct one is to dump all the bones into a river. The bone that floats upstream is to be considered the bone of choice. Sale of purported "black cat bones" Contemporary hoodoo supply shops sell items labeled "black cat bones", usually small bones taken from a chicken and dyed black. Contemporary hoodoo, Wiccan, and other metaphysical supply shops use black cat fur for black cat magic, instead of bones. References Hoodoo (spirituality) Blues Magic items Cat folklore
Black cat bone
[ "Physics" ]
553
[ "Magic items", "Physical objects", "Matter" ]
7,046,215
https://en.wikipedia.org/wiki/Loewe%203NF
The Loewe 3NF was an early attempt to combine several functions in one electronic device. Produced by the German Loewe-Audion GmbH as early as 1926, the device consisted of three triode valves (tubes) in a single glass envelope together with two fixed capacitors and four fixed resistors required to make a complete radio receiver. The resistors and capacitors had to be sealed in their own glass tubes to prevent them from contaminating the vacuum. The only other parts required to build a radio receiver were the tuning coil, the tuning capacitor and the loudspeaker. The device was produced not to enter the integrated circuit era several decades early, but to evade German taxes levied on a per valveholder basis. As the Loewe set had only one valveholder, it was able to substantially undercut the competition. The resultant radio receiver required a 90 volt HT plus a 4 volt LT (A and B) battery (the HT battery provided not only 82.5 volts for the HT, but also two grid bias supplies at −1.5 volts and −7.5 volts). One million were manufactured, and were "a first step in integration of radioelectronic devices". One major disadvantage of the 3NF was that if one filament failed, the whole device was rendered useless. Loewe countered this by offering a filament repair service. Loewe were to also offer the 2NF (two tetrodes plus passive components) and the WG38 (two pentodes, a triode and the passive components). References Vacuum tubes
Loewe 3NF
[ "Physics" ]
344
[ "Vacuum tubes", "Vacuum", "Matter" ]
7,046,642
https://en.wikipedia.org/wiki/Analyst%20%28journal%29
Analyst is a biweekly peer-reviewed scientific journal covering all aspects of analytical chemistry, bioanalysis, and detection science. It is published by the Royal Society of Chemistry and the editor-in-chief is Norman Dovichi (University of Notre Dame). The journal was established in 1877 by the Society for Analytical Chemistry. Abstracting and indexing The journal is abstracted and indexed in MEDLINE and Analytical Abstracts. According to the Journal Citation Reports, the journal has a 2022 impact factor of 4.2. Analytical Communications In 1999, the Royal Society of Chemistry closed the journal Analytical Communications because it felt that the material submitted to that journal would be best included in a new communications section of Analyst. Predecessor journals of Analytical Communications were Proceedings of the Society for Analytical Chemistry, 1964–1974; Proceedings of the Analytical Division of the Chemical Society, 1975–1979; Analytical Proceedings, 1980–1993; Analytical Proceedings including Analytical Communications, 1994–1995. References External links Chemistry journals Analytical chemistry Royal Society of Chemistry academic journals Publications established in 1876 English-language journals Biweekly journals 1876 establishments in the United Kingdom
Analyst (journal)
[ "Chemistry" ]
224
[ "nan" ]
10,871,573
https://en.wikipedia.org/wiki/Open%20Architecture%20Computing%20Environment
Open Architecture Computing Environment (OACE) was a specification that aimed to provide a standards-based computing environment in order to decouple computing environment from software applications. It was proposed for the United States Department of Defense in 2004. See also Open architecture Mission Data Interface References Distributed computing architecture United States Navy
Open Architecture Computing Environment
[ "Technology" ]
62
[ "Computing stubs", "Computer science", "Computer science stubs" ]
10,872,064
https://en.wikipedia.org/wiki/Hardmask
A hardmask is a material used in semiconductor processing as an etch mask instead of a polymer or other organic "soft" resist material. Hardmasks are necessary when the material being etched is itself an organic polymer. Anything used to etch this material will also etch the photoresist being used to define its patterning since that is also an organic polymer. This arises, for instance, in the patterning of low-κ dielectric insulation layers used in VLSI fabrication. Polymers tend to be etched easily by oxygen, fluorine, chlorine and other reactive gases used in plasma etching. Use of a hardmask involves an additional deposition process, and hence additional cost. First, the hardmask material is deposited and etched into the required pattern using a standard photoresist process. Following that the underlying material can be etched through the hardmask. Finally the hardmask is removed with a further etching process. Hardmask materials can be metal or dielectric. Silicon based masks such as silicon dioxide or silicon carbide are usually used for etching low-κ dielectrics. However, SiOCH (carbon doped hydrogenated silicon oxide), a material used to insulate copper interconnects, requires an etchant that attacks silicon compounds. For this material, metal or amorphous carbon hardmasks are used. The most common metal for hardmasks is titanium nitride, but tantalum nitride has also been used. References Bibliography Shi, Hualing; Shamiryan, Denis; de Marneffe, Jean-François; Huang, Huai; Ho, Paul S.; Baklanov, Mikhail R., "Plasma processing of low-κ dielectrics", ch. 3 in, Baklanov, Mikhail; Ho, Paul S.; Zschech, Ehrenfried (eds), Advanced Interconnects for ULSI Technology, John Wiley & Sons, 2012 . Wong, T.; Ligatchev, V.; Rusli, R., "Structural properties and defect characterisation of plasma deposited carbon doped silicon oxide low-k dielectric films", pp. 133–141 in, Mathad, G.S. (ed); Baker, B.C.; Reidesma-Simpson, C.; Rathore, H.S.; Ritzdorf, T.L. (asst. eds), Copper Interconnects, New Contact Metallurgies, Structures, and Low-k Interlevel Dielectrics: Proceedings of the International Symposium, The Electrochemical Society, 2003 Semiconductor device fabrication
Hardmask
[ "Materials_science", "Engineering" ]
553
[ "Semiconductor device fabrication", "Materials science stubs", "Materials science", "Microtechnology" ]
10,872,408
https://en.wikipedia.org/wiki/Lipiduria
Lipiduria or lipuria is the presence of lipids in the urine. Lipiduria is most frequently observed in nephrotic syndrome where it is passed as lipoproteins along with other proteins. It has also been reported as a sign following fat embolism. When lipiduria occurs, epithelial cells or macrophages contain endogenous fats. When filled with numerous fat droplets, such cells are called oval fat bodies. Oval fat bodies exhibit a "Maltese cross" configuration under polarized light microscopy. The Maltese cross appearance occurs because of its liquid-crystalline structure giving it a double refraction (birefringence). See also Urostealith References Urine
Lipiduria
[ "Biology" ]
146
[ "Urine", "Excretion", "Animal waste products" ]
10,872,707
https://en.wikipedia.org/wiki/Parry%20arc
A Parry arc is a rare halo, an optical phenomenon which occasionally appears over a 22° halo together with an upper tangent arc. Discovery The halo was first described by Sir William Edward Parry (1790–1855) in 1820 during one of his Arctic expeditions in search of the Northwest Passage. On April 8, under harsh conditions while his two ships were trapped by ice, forcing him to winter over at Melville Island in the northern Canadian Arctic Archipelago, he made a drawing of the phenomenon. The drawing accurately renders the parhelic circle, a 22° halo, a pair of sun dogs, a lower tangent arc, a 46° halo, and a circumzenithal arc. He did, however, get the upper tangent arc slightly wrong. On the other hand, he added two arcs extending laterally from the bases of the 46° halo, long interpreted as incorrectly drawn infralateral arcs but probably correctly drawn subhelic arcs (both produced by the same crystal orientation but with light passing through different faces of the crystals). Formation Parry arcs are generated by double-oriented hexagonal column crystals, i.e. a so-called Parry orientation, where both the central main axis of the prism and the top and bottom prism side faces are oriented horizontally. This orientation is responsible for several rare haloes. Parry arcs are the result of light passing through two side faces forming a 60° angle. The shape of Parry arcs changes with the elevation of the sun, and they are accordingly called upper or lower arcs to indicate they are found above or under the sun, and sunvex or suncave depending on their orientation. The mechanism by which column crystals adopt this special Parry orientation has been subject to much speculation – recent laboratory experiments have shown that it is the presence of crystals with a scalene hexagonal cross-section which are likely to be the cause. Parry arcs can be confused with upper tangent arcs, Lowitz arcs, or any of the odd radius halos produced by pyramidal crystals. See also Lowitz arc Notes References (Including a computer simulation recreating the halo observed by Parry.) External links Halo Reports – Photo by Joe MacGregor of a rare lower Parry sunvex arc in Antarctica (Blogg) Atmospheric optical phenomena
Parry arc
[ "Physics" ]
459
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
10,873,260
https://en.wikipedia.org/wiki/Magnet%20keeper
A magnet keeper, also known historically as an armature, is a bar made from magnetically soft iron or steel, which is placed across the poles of a permanent magnet to help preserve the strength of the magnet by completing the magnetic circuit; it is important for magnets that have low magnetic coercivity, such as alnico magnets (0.07T). Keepers also have a useful safety function, as they stop external metal being attracted to the magnet. Many magnets do not need a keeper, such as supermagnets, as they have very high coercivities; only those with lower coercivities, meaning that they are more susceptible to stray fields, require keepers. A magnet can be considered as the sum of many small magnetic domains, which may be only a few microns or smaller in size. Each domain carries its own small magnetic field, which can point in any direction. When all the domains are pointing in the same direction, the fields add up, yielding a strong magnet. When these all point in random directions, they cancel each other, and the net magnetic field is zero. In magnets with lower coercivities, the direction in which the magnetic domains are pointing is easily swayed by external fields, such as the Earth's magnetic field or the stray fields caused by flowing currents in a nearby electrical circuit. Given enough time, such magnets may find their domains randomly oriented, and hence their net magnetization greatly weakened. A keeper for low-coercivity magnets closes the magnetic circuit with a low-reluctance path, which helps keep all the domains pointing the same way and realigns those that may have gone astray. References Magnetism
Magnet keeper
[ "Materials_science" ]
344
[ "Materials science stubs", "Electromagnetism stubs" ]
10,873,846
https://en.wikipedia.org/wiki/V%C3%A1clav%20Chv%C3%A1tal
Václav (Vašek) Chvátal () is a Professor Emeritus in the Department of Computer Science and Software Engineering at Concordia University in Montreal, Quebec, Canada, and a visiting professor at Charles University in Prague. He has published extensively on topics in graph theory, combinatorics, and combinatorial optimization. Biography Chvátal was born in 1946 in Prague and educated in mathematics at Charles University in Prague, where he studied under the supervision of Zdeněk Hedrlín. He fled Czechoslovakia in 1968, three days after the Soviet invasion, and completed his Ph.D. in Mathematics at the University of Waterloo, under the supervision of Crispin St. J. A. Nash-Williams, in the fall of 1970. Subsequently, he took positions at McGill University (1971 and 1978–1986), Stanford University (1972 and 1974–1977), the Université de Montréal (1972–1974 and 1977–1978), and Rutgers University (1986–2004) before returning to Montreal for the Canada Research Chair in Combinatorial Optimization at Concordia (2004–2011) and the Canada Research Chair in Discrete Mathematics (2011–2014) till his retirement. Research Chvátal first learned of graph theory in 1964, on finding a book by Claude Berge in a Plzeň bookstore and much of his research involves graph theory: His first mathematical publication, at the age of 19, concerned directed graphs that cannot be mapped to themselves by any nontrivial graph homomorphism Another graph-theoretic result of Chvátal was the 1970 construction of the smallest possible triangle-free graph that is both 4-chromatic and 4-regular, now known as the Chvátal graph. A 1972 paper relating Hamiltonian cycles to connectivity and maximum independent set size of a graph, earned Chvátal his Erdős number of 1. Specifically, if there exists an s such that a given graph is s-vertex-connected and has no (s + 1)-vertex independent set, the graph must be Hamiltonian. Avis et al. tell the story of Chvátal and Erdős working out this result over the course of a long road trip, and later thanking Louise Guy "for her steady driving." In a 1973 paper, Chvátal introduced the concept of graph toughness, a measure of graph connectivity that is closely connected to the existence of Hamiltonian cycles. A graph is t-tough if, for every k greater than 1, the removal of fewer than tk vertices leaves fewer than k connected components in the remaining subgraph. For instance, in a graph with a Hamiltonian cycle, the removal of any nonempty set of vertices partitions the cycle into at most as many pieces as the number of removed vertices, so Hamiltonian graphs are 1-tough. Chvátal conjectured that 3/2-tough graphs, and later that 2-tough graphs, are always Hamiltonian; despite later researchers finding counterexamples to these conjectures, it still remains open whether some constant bound on the graph toughness is enough to guarantee Hamiltonicity. Some of Chvátal's work concerns families of sets, or equivalently hypergraphs, a subject already occurring in his Ph.D. thesis, where he also studied Ramsey theory. In a 1972 conjecture that Erdős called "surprising" and "beautiful", and that remains open (with a $10 prize offered by Chvátal for its solution) he suggested that, in any family of sets closed under the operation of taking subsets, the largest pairwise-intersecting subfamily may always be found by choosing an element of one of the sets and keeping all sets containing that element. 
In 1979, he studied a weighted version of the set cover problem, and proved that a greedy algorithm provides good approximations to the optimal solution, generalizing previous unweighted results by David S. Johnson (J. Comp. Sys. Sci. 1974) and László Lovász (Discrete Math. 1975). Chvátal first became interested in linear programming through the influence of Jack Edmonds while Chvátal was a student at Waterloo. He quickly recognized the importance of cutting planes for attacking combinatorial optimization problems such as computing maximum independent sets and, in particular, introduced the notion of a cutting-plane proof. At Stanford in the 1970s, he began writing his popular textbook, Linear Programming, which was published in 1983. Cutting planes lie at the heart of the branch and cut method used by efficient solvers for the traveling salesman problem. Between 1988 and 2005, the team of David L. Applegate, Robert E. Bixby, Vašek Chvátal, and William J. Cook developed one such solver, Concorde. The team was awarded The Beale-Orchard-Hays Prize for Excellence in Computational Mathematical Programming in 2000 for their ten-page paper enumerating some of Concorde's refinements of the branch and cut method that led to the solution of a 13,509-city instance and it was awarded the Frederick W. Lanchester Prize in 2007 for their book, The Traveling Salesman Problem: A Computational Study. Chvátal is also known for proving the art gallery theorem, for researching a self-describing digital sequence, for his work with David Sankoff on the Chvátal–Sankoff constants controlling the behavior of the longest common subsequence problem on random inputs, and for his work with Endre Szemerédi on hard instances for resolution theorem proving. Books . Japanese translation published by Keigaku Shuppan, Tokyo, 1986. See also List of University of Waterloo people References External links Chvátal's website on encs.concordia.ca 1946 births Living people Scientists from Prague Canadian mathematicians Canadian people of Czech descent Czech mathematicians Czechoslovak emigrants to Canada Canada Research Chairs Combinatorialists University of Waterloo alumni Charles University alumni Academic staff of Concordia University John von Neumann Theory Prize winners
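The 1979 result mentioned above concerns the greedy rule for weighted set cover: repeatedly pick the set with the smallest cost per newly covered element. The Python sketch below illustrates that rule only (names and the toy instance are illustrative; it does not reproduce Chvátal's approximation analysis).

```python
# Sketch of the greedy rule analyzed by Chvátal (1979) for weighted set cover:
# at each step, choose the set minimizing weight per newly covered element.
def greedy_weighted_set_cover(universe, sets, weights):
    """universe: iterable of elements; sets: dict name -> set; weights: dict name -> cost."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Best ratio of weight to number of still-uncovered elements.
        name = min(
            (s for s in sets if sets[s] & uncovered),
            key=lambda s: weights[s] / len(sets[s] & uncovered),
        )
        chosen.append(name)
        uncovered -= sets[name]
    return chosen

# Toy instance (illustrative only): greedy picks "a" then "c".
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 5}}
weights = {"a": 1.0, "b": 1.0, "c": 1.5, "d": 1.0}
print(greedy_weighted_set_cover({1, 2, 3, 4, 5, 6}, sets, weights))  # -> ['a', 'c']
```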
Václav Chvátal
[ "Mathematics" ]
1,203
[ "Combinatorialists", "Combinatorics" ]
10,874,021
https://en.wikipedia.org/wiki/Moldova%20Steel%20Works
Moldova Steel Works is a steel-producing company in Rîbnița, in the unrecognized state of Transnistria. It accounts for more than half of Transnistria's total industrial output. Moldova Steel Works was founded in 1985 for the reprocessing of scrap metal. In 1998, a majority of its shares was sold to the Russian energy company Itera and 28.8% of the shares was given to the employees of the company. Production peaked in 2000. In 2004, 90% of the shares was acquired by the "Austro-Ukrainian Hares Group" of Hares Youssef. Moldova Steel Works became owned by a group of Russian–Ukrainian oligarchs, including, in addition to Hares Youssef, Hryhoriy Surkis, Ihor Kolomoyskyi, Alisher Usmanov, Vadym Novynskyi and Rinat Akhmetov. Later the Russian company Metalloinvest, controlled by Alisher Usmanov and Vasily Anisimov, became the owner of the company. In 2015, the ownership was returned to the Transnistrian authorities for a symbolic price. On 14 May 2018, the government of Ukraine included Moldova Steel Works in the list of sanctioned companies, but excluded it from the list on 19 March 2019 after a request by Moldovan Prime Minister Pavel Filip. The initial annual production capacity of the company was 684,000 tonnes of crude steel and 500,000 tonnes of rolled products. Later the capacity was reported to be around 1,000,000 tonnes of steel and 1,000,000 tonnes of rolled products. In 2018, it produced almost 502,900 tonnes of steel and 497,900 tonnes of rolled goods. References External links Companies of Transnistria Steel companies of Moldova Iron and steel mills Rîbnița Moldavian Soviet Socialist Republic 1985 establishments in the Soviet Union
Moldova Steel Works
[ "Chemistry" ]
383
[ "Iron and steel mills", "Metallurgical facilities" ]
10,874,176
https://en.wikipedia.org/wiki/Mullion%20wall
A mullion wall is a structural system in which the load of the floor slab is taken by prefabricated panels around the perimeter. Visually, the effect is similar to the stone-mullioned windows of Perpendicular Gothic or Elizabethan architecture. The technology was devised by George Grenfell Baines and the engineer Felix Samuely in order to cope with material shortages at the Thomas Linacre School, Wigan (1952) and refined at the Shell Offices, Stanlow (1956), the Derby Colleges of Technology and Art (1956–64) and Manchester University Humanities Building (1961–67). A similar concept to the mullion wall was adopted by Eero Saarinen at the US Embassy, London (1955–60) and by Minoru Yamasaki at the World Trade Center, New York (1966–73). See also Curtain wall References Structural system Types of wall
Mullion wall
[ "Technology", "Engineering" ]
183
[ "Structural system", "Types of wall", "Structural engineering", "Building engineering" ]
10,874,478
https://en.wikipedia.org/wiki/Rope%20caulk
Rope caulk or caulking cord is a type of pliable putty or caulking formed into a rope-like shape. It is typically off-white in color, relatively odorless, and stays pliable for an extended period of time. Rope caulk can be used as caulking or weatherstripping around conventional windows installed in conventional wooden or metal frames (see glazing). It is also used as a form for epoxy work, since epoxy does not adhere to this material. Rope caulk has also been applied to the metallic structure supporting the magnet for a dynamic speaker to cut unwanted resonance of the metal structure, leading to improved speaker performance. It has also been used as a sonic damping material in sensitive phonograph components. History Mortite brand rope caulk was introduced by the J.W. Mortell Co. of Kankakee, Illinois in the 1940s, and called "pliable plastic tape". The trademark application was filed in March, 1943. It was later marketed as "caulking cord". The company was later acquired by Thermwell Products. Mortite Mortite putty is a brand of rope caulk marketed under the Frost King brand. Its primary ingredient is titanium dioxide; it has a specific gravity of 1.34. It is listed by the state of California as containing ingredients known to the state to cause cancer or adversely affect reproductive health (a "P65 Warning"). Notes Plastics Building engineering
Rope caulk
[ "Physics", "Engineering" ]
311
[ "Building engineering", "Unsolved problems in physics", "Architecture", "Civil engineering", "Amorphous solids", "Plastics" ]
10,874,913
https://en.wikipedia.org/wiki/RAD%20Data%20Communications
RAD Data Communications Ltd. is a privately held corporation, headquartered in Tel Aviv, Israel, that designs and manufactures specialized networking equipment. RAD is a member of the $1.3 billion RAD Group of companies. History RAD was founded by brothers Yehuda and Zohar Zisapel in 1981 as a spin-off from Bynet, a networking hardware distribution company founded by Yehuda in 1973. Their goal was to develop their own products; the company was simply named RAD, for Research And Development. RAD's first successful product was a miniature (by 1980s standards) modem for telephone lines that did not require a separate power source. This novel concept quickly became a commercial success, and by 1985, RAD's annual revenues reached $5.5 million. This initial product line evolved into RAD Data Communications, the largest company within the RAD Group. In 2014, RAD opened a new $32 million advanced R&D center for developing NFV and SDN solutions in the southern Israeli city of Beersheba. The company is active in industry standardization bodies such as the Broadband Forum, ETSI NFV ISG, International Telecommunication Union (ITU), Internet Engineering Task Force (IETF), and Metro Ethernet Forum (MEF). One of the 46 copies of Rodin's The Thinker that were made from the original cast after the sculptor's death was acquired by Yehuda Zisapel and placed on permanent exhibit in the lobby of RAD's current Tel Aviv headquarters when the building was opened in 2000. Products RAD's research, development and engineering includes hardware virtualization, operations, administration and management (OAM) and performance management; service assurance; traffic management; fault management; synchronization and timing over packet; TDM pseudowire; ASIC and FPGA development; hardware miniaturization; SFP form-factor solutions; and business DSL. An early RAD modem, the SRM-3, was recognized as the world's smallest in the 1992 Guinness Book of World Records. Used for connecting asynchronous terminals to host computers, it measured by by . In 1998, RAD invented TDM over IP (TDMoIP®) technology and in 2013 it pioneered Distributed Network Function Virtualization (D-NFV®). At Mobile World Congress 2015, RAD introduced the world's first SFP-based IEEE 1588 Grandmaster clock with a built-in GNSS receiver. In 2015 RAD also launched a virtual customer premises equipment (vCPE) device for IP and Carrier Ethernet services with a field pluggable module for hosting virtual network functions (VNFs) and in 2016 it added a white box option that is license-upgradable for network functions such as routing, service demarcation and performance monitoring. RAD's portfolio includes the smallest NFV-empowered device yet invented. RAD has also been cited as an industry leader in developing communications platforms and security solutions for public utilities. Markets The company's installed base now exceeds 15,000,000 units and includes more than 150 telecommunications carriers and service providers, in addition to a large number of public transportation systems, power utilities, governments, homeland security agencies, and educational institutions. RAD solutions are distributed through approximately 300 partner channels in over 150 countries. The company itself maintains 30 offices across six continents. 
Awards 1994: Israel Export Award See also Science and technology in Israel Silicon Wadi Economy of Israel List of networking hardware vendors References Technology companies of Israel Computer hardware companies Networking companies Networking hardware companies Information technology companies of Israel Electronics companies of Israel Companies based in Tel Aviv
RAD Data Communications
[ "Technology" ]
750
[ "Computer hardware companies", "Computers" ]
10,874,936
https://en.wikipedia.org/wiki/Sudan%20Airways%20Flight%20139
Sudan Airways Flight 139 was a Sudan Airways passenger flight that crashed on 8 July 2003 at Port Sudan. The Boeing 737 aircraft was operating a domestic scheduled Port Sudan–Khartoum passenger service. Some 15 minutes after takeoff, the aircraft lost power in one of its engines, which prompted the crew to return to the airport for an emergency landing. In doing so, the pilots missed the airport runway, and the airplane descended until it hit the ground, disintegrating after impact. Of the 117 people aboard, 116 died. Aircraft and crew The aircraft involved in the accident was a Boeing 737-2J8C, c/n 21169, registered ST-AFK. Powered by two Pratt & Whitney JT8D-7 engines, it had its maiden flight on 29 August 1975, and was delivered new to Sudan Airways on 15 September 1975. At the time of the accident, the aircraft was almost 28 years old. The pilots involved were Captain Awad Jaber, First Officer Amir al-Nujumi, and Second Officer Walid Khair. Accident The airplane had departed Port Sudan at 4:00 am (UTC+3), bound for Khartoum. Captain Jaber radioed about ten minutes after take-off about a problem with one of the engines, and that he would return to the airport to make an emergency landing. However, the plane plummeted into the ground before returning to the airfield and immediately caught fire. All but one of the 117 occupants of the aircraft— most of them Sudanese— perished in the accident. There were three Indians, a Briton, a Chinese, an Emirati, and an Ethiopian among the dead as well. A two-year-old boy was the sole survivor. Then-Sudanese foreign minister Mustafa Osman Ismail raised the trade embargo imposed by the U.S. government in 1997 as a contributing factor to the accident, claiming the airline was unable to get spare parts for the maintenance of its fleet because of sanctions. The aircraft involved in the accident, in particular, had not been serviced for years. See also Aviation accidents and incidents Sudan Airways Flight 109 References 2003 disasters in Sudan Sudan Airways accidents and incidents Aviation accidents and incidents in Sudan Aviation accidents and incidents in 2003 Accidents and incidents involving the Boeing 737 Original Airliner accidents and incidents caused by pilot error Airliner accidents and incidents caused by mechanical failure 2003 in Sudan July 2003 events in Africa
Sudan Airways Flight 139
[ "Materials_science" ]
488
[ "Airliner accidents and incidents caused by mechanical failure", "Mechanical failure" ]
10,875,031
https://en.wikipedia.org/wiki/Periodic%20continued%20fraction
In mathematics, an infinite periodic continued fraction is a simple continued fraction that can be placed in the form where the initial block of k+1 partial denominators is followed by a block of m partial denominators that repeats ad infinitum. For example, can be expanded to the periodic continued fraction . This article considers only the case of periodic regular continued fractions. In other words, the remainder of this article assumes that all the partial denominators ai (i ≥ 1) are positive integers. The general case, where the partial denominators ai are arbitrary real or complex numbers, is treated in the article convergence problem. Purely periodic and periodic fractions Since all the partial numerators in a regular continued fraction are equal to unity we can adopt a shorthand notation in which the continued fraction shown above is written as where, in the second line, a vinculum marks the repeating block. Some textbooks use the notation where the repeating block is indicated by dots over its first and last terms. If the initial non-repeating block is not present – that is, if k = -1, a0 = am and the regular continued fraction x is said to be purely periodic. For example, the regular continued fraction of the golden ratio φ is purely periodic, while the regular continued fraction of is periodic, but not purely periodic. As unimodular matrices Periodic continued fractions are in one-to-one correspondence with the real quadratic irrationals. The correspondence is explicitly provided by Minkowski's question-mark function. That article also reviews tools that make it easy to work with such continued fractions. Consider first the purely periodic part This can, in fact, be written as with the being integers, and satisfying Explicit values can be obtained by writing which is termed a "shift", so that and similarly a reflection, given by so that . Both of these matrices are unimodular, arbitrary products remain unimodular. Then, given as above, the corresponding matrix is of the form and one has as the explicit form. As all of the matrix entries are integers, this matrix belongs to the modular group Relation to quadratic irrationals A quadratic irrational number is an irrational real root of the quadratic equation where the coefficients a, b, and c are integers, and the discriminant, , is greater than zero. By the quadratic formula, every quadratic irrational can be written in the form where P, D, and Q are integers, D > 0 is not a perfect square (but not necessarily square-free), and Q divides the quantity (for example ). Such a quadratic irrational may also be written in another form with a square-root of a square-free number (for example ) as explained for quadratic irrationals. By considering the complete quotients of periodic continued fractions, Euler was able to prove that if x is a regular periodic continued fraction, then x is a quadratic irrational number. The proof is straightforward. From the fraction itself, one can construct the quadratic equation with integral coefficients that x must satisfy. Lagrange proved the converse of Euler's theorem: if x is a quadratic irrational, then the regular continued fraction expansion of x is periodic. Given a quadratic irrational x one can construct m different quadratic equations, each with the same discriminant, that relate the successive complete quotients of the regular continued fraction expansion of x to one another. 
Since there are only finitely many of these equations (the coefficients are bounded), the complete quotients (and also the partial denominators) in the regular continued fraction that represents x must eventually repeat. Reduced surds The quadratic surd is said to be reduced if and its conjugate satisfies the inequalities . For instance, the golden ratio is a reduced surd because it is greater than one and its conjugate is greater than −1 and less than zero. On the other hand, the square root of two is greater than one but is not a reduced surd because its conjugate is less than −1. Galois proved that the regular continued fraction which represents a quadratic surd ζ is purely periodic if and only if ζ is a reduced surd. In fact, Galois showed more than this. He also proved that if ζ is a reduced quadratic surd and η is its conjugate, then the continued fractions for ζ and for (−1/η) are both purely periodic, and the repeating block in one of those continued fractions is the mirror image of the repeating block in the other. In symbols we have where ζ is any reduced quadratic surd, and η is its conjugate. From these two theorems of Galois a result already known to Lagrange can be deduced. If r > 1 is a rational number that is not a perfect square, then In particular, if n is any non-square positive integer, the regular continued fraction expansion of contains a repeating block of length m, in which the first m − 1 partial denominators form a palindromic string. Length of the repeating block By analyzing the sequence of combinations that can possibly arise when is expanded as a regular continued fraction, Lagrange showed that the largest partial denominator ai in the expansion is less than , and that the length of the repeating block is less than 2D. More recently, sharper arguments based on the divisor function have shown that the length of the repeating block for a quadratic surd of discriminant D is on the order of Canonical form and repetend The following iterative algorithm can be used to obtain the continued fraction expansion in canonical form (S is any natural number that is not a perfect square): Notice that mn, dn, and an are always integers. The algorithm terminates when this triplet is the same as one encountered before. The algorithm can also terminate on ai when ai = 2 a0, which is easier to implement. The expansion will repeat from then on. The sequence is the continued fraction expansion: Example To obtain as a continued fraction, begin with m0 = 0; d0 = 1; and a0 = 10 (102 = 100 and 112 = 121 > 114 so 10 chosen). So, m1 = 10; d1 = 14; and a1 = 1. Next, m2 = 4; d2 = 7; and a2 = 2. Now, loop back to the second equation above. Consequently, the simple continued fraction for the square root of 114 is is approximately 10.67707 82520. After one expansion of the repetend, the continued fraction yields the rational fraction whose decimal value is approx. 10.67707 80856, a relative error of 0.0000016% or 1.6 parts in 100,000,000. Generalized continued fraction A more rapid method is to evaluate its generalized continued fraction. From the formula derived there: and the fact that 114 is 2/3 of the way between 102=100 and 112=121 results in which is simply the aforementioned evaluated at every third term. Combining pairs of fractions produces which is now evaluated at the third term and every six terms thereafter. See also Notes References (This is now available as a reprint from Dover Publications.) Continued fractions Mathematical analysis
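The iterative algorithm in the "Canonical form and repetend" section lost its displayed recurrences in extraction. The Python sketch below implements the standard recurrences (an assumption, since the article's own formulas are missing) and reproduces the √114 worked example, including its repeating block.

```python
import math

def sqrt_continued_fraction(S):
    """Periodic continued fraction of sqrt(S) for a non-square natural S.

    Standard recurrences (assumed; the article's displayed formulas are missing):
        m0 = 0, d0 = 1, a0 = floor(sqrt(S))
        m_{n+1} = d_n * a_n - m_n
        d_{n+1} = (S - m_{n+1}**2) // d_n
        a_{n+1} = (a0 + m_{n+1}) // d_{n+1}
    The expansion repeats once a_i == 2 * a0.
    """
    a0 = math.isqrt(S)
    if a0 * a0 == S:
        raise ValueError("S must not be a perfect square")
    m, d, a = 0, 1, a0
    period = []
    while a != 2 * a0:
        m = d * a - m
        d = (S - m * m) // d
        a = (a0 + m) // d
        period.append(a)
    return a0, period

# The article's example: sqrt(114) = [10; 1, 2, 10, 2, 1, 20, 1, 2, ...]
print(sqrt_continued_fraction(114))   # -> (10, [1, 2, 10, 2, 1, 20])
```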
Periodic continued fraction
[ "Mathematics" ]
1,509
[ "Mathematical analysis", "Continued fractions", "Number theory" ]
10,875,316
https://en.wikipedia.org/wiki/Knowledge%20policy
Knowledge policies provide institutional foundations for creating, managing, and using organizational knowledge as well as social foundations for balancing global competitiveness with social order and cultural values. Knowledge policies can be viewed from a number of perspectives: the necessary linkage to technological evolution, relative rates of technological and institutional change, as a control or regulatory process, obstacles posed by cyberspace, and as an organizational policy instrument. Policies are the paradigms of government and all bureaucracies. Policies provide a context of rules and methods to guide how large organizations meet their responsibilities. Organizational knowledge policies describe the institutional aspects of knowledge creation, management, and use within the context of an organization's mandate or business model. Social knowledge policies balance between progress in the knowledge economy to promote global competitiveness with social values, such as equity, unity, and the well-being of citizens. From a technological perspective, Thomas Jefferson (1816) noted that laws and institutions must keep pace with the progress of the human mind. Institutions must advance as new discoveries are made, new truths are discovered, and as opinions and circumstances change. Fast-forwarding to the late 20th century, Martin (1985) stated that any society with a high level of automation must frame its laws and safeguards so that computers can police other computers. Tim Berners-Lee (2000) noted that both policy and technology must be designed with an understanding of the implications of each other. Finally, Sparr (2001) points out that rules will emerge in cyberspace because even on the frontier, pioneers need property rights, standards, and rules of fair play to protect them from pirates. Government is the only entity that can enforce such rules, but they could be developed by others. From a rate of change point of view, McGee and Prusak (1993) note that when an organization changes its culture, information policies are among the last thing to change. From a market perspective, Martin (1996) points out that although cyberspace mechanisms change very rapidly, laws change very slowly, and that some businesses will use this gap for competitive advantage. Similarly, Sparr (2001) discerned that governments have the interest and means to govern new areas of technology, but that past laws generally do not yet cover these emerging technologies and new laws take time to create. A number of authors have indicated that it will be very difficult to monitor and regulate cyberspace. Negroponte (1997) uses a metaphor of limiting the freedom of bit radiation is like the Romans attempting to stop Christianity, even though early data broadcasters may be eaten by Washington lions. Brown (1997) questions whether it will even be possible for governments to monitor compliance with regulations in the face of exponentially increasing encrypted traffic within private networks. As cybernetic environments become central to commercial activity, monitoring electronic markets will become increasingly problematic. From a corporate point of view, Flynn (1956) notes that employee use of corporate computer resources poses liability risks and jeopardizes security and that no organization can afford to engage in electronic communications and e-commerce unprepared. A key attribute of cyberspace is that it is a virtual rather than a real place. 
Thus, a growing share of social and commercial electronic activity does not have a national physical location (Cozel (1997)), raising a key question of whether legislatures can even set national policies or coordinate international policies. Similarly, Berners-Lee (2000) explains that key criterion of Trademark law – separation in location or market – does not work for World-Wide Web domain names because the Internet crosses all geographic boundaries and has no concept of a market area. From an organizational perspective, Simard (2000) states that "if traditional policies are applied directly [to a digital environment], the Canadian Forest Service could become marginalized in a dynamic knowledge-based economy." Consequently, the CFS developed and implemented an Access to Knowledge Policy that "fosters the migration of the CFS towards providing free, open access to its knowledge assets, while recognizing the need for cost recovery and the need to impose restrictions on access in some cases" (Simard, 2005). The policy comprises a framework of objectives, guiding principles, staff responsibilities, and policy directives. The directives include ownership and use; roles, rights, and responsibilities; levels of access and accessibility; service to clients; and cost of access. See also References Berners-Lee, Tim. 2000. Weaving the Web. Harper Collins, New York, NY p 40, 124 Brown, David. 1997. Cybertrends, Penguin Books, London UK. p 100, 120 Cozel, Diane. 1997. The Weightless World. MIT Press, Cambridge, MA. p 18 Flynn, Nancy. 2001. The ePolicy Handbook. American Management Association. p 15 Hearn, G., & Rooney, D. (Eds.) 2008. Knowledge Policy: Challenges for the Twenty First Century. Cheltenham: Edward Elgar. Jefferson, Thomas. 1816. Letter to Samuel Kercheval (July 12, 1816) Martin, James. 1985. In: Information Processing Systems for Management (Hussain, 1985). Richard D. Irwin, Homewood, IL. p339 Martin, James. 1996. Cybercorp, The New Business Revolution. American Management Association, New York, NY. p19 Mcgee, James and Lawrence Prusak. 1993. Managing information Strategically. John Wiley & Sons, New York, NY. p167 Negroponte, Nicholas. 1996. Being Digital. Random House, New York, NY. P55 Rooney, D., Hearn, G., Mandeville T. & Joseph, R. (2003). Public Policy in Knowledge-Based Economies: Foundations and Frameworks, Cheltenham: Edward Elgar. Rooney, D., Hearn, G., & Ninan, A. (Eds.) 2005. Handbook on the Knowledge Economy. Cheltenham: Edward Elgar. Simard, Albert. 2000. Managing Knowledge at the Canadian Forest Service. Natural Resources Canada, Canadian Forest Service, Ottawa, ON. p51 Simard, Albert. 2005. Canadian Forest Service Access to Knowledge Policy. Natural Resources Canada, Canadian Forest Service, Ottawa, ON. 30p Sparr, Debora. 2001. Ruling the Waves. Harcourt, Inc. New York, NY. p14, 370 Knowledge management Business terms Information society
Knowledge policy
[ "Technology" ]
1,292
[ "Computing and society", "Information society" ]
10,875,676
https://en.wikipedia.org/wiki/Particle%20physics%20in%20cosmology
Particle physics is the study of the interactions of elementary particles at high energies, whilst physical cosmology studies the universe as a single physical entity. The interface between these two fields is sometimes referred to as particle cosmology. Particle physics must be taken into account in cosmological models of the early universe, when the average energy density was very high. The processes of particle pair production, scattering and decay influence the cosmology. As a rough approximation, a particle scattering or decay process is important at a particular cosmological epoch if its time scale is shorter than or similar to the time scale of the universe's expansion. The latter quantity is approximately 1/H, where H is the time-dependent Hubble parameter. This is roughly equal to the age of the universe at that time. For example, the pion has a mean lifetime to decay of about 26 nanoseconds. This means that particle physics processes involving pion decay can be neglected until roughly that much time has passed since the Big Bang. Cosmological observations of phenomena such as the cosmic microwave background and the cosmic abundance of elements, together with the predictions of the Standard Model of particle physics, place constraints on the physical conditions in the early universe. The success of the Standard Model at explaining these observations supports its validity under conditions beyond those which can be produced in a laboratory. Conversely, phenomena discovered through cosmological observations, such as dark matter and baryon asymmetry, suggest the presence of physics that goes beyond the Standard Model. Further reading Bergström, Lars & Goobar, Ariel (2004); Cosmology and Particle Astrophysics, 2nd ed. Springer Verlag. Branco, G. C., Shafi, Q., & Silva-Marcos, J. I. (2001). Recent developments in particle physics and cosmology. Dordrecht: Kluwer Academic. Collins, P. D. B. (2007). Particle physics and cosmology. New York: John Wiley & Sons. Kazakov, D. I., & Smadja, G. (2005). Particle physics and cosmology: the interface. NATO science series, v. 188. Dordrecht: Springer. External links Center for Particle Cosmology at the University of Pennsylvania Physical cosmology Particle physics
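As a rough illustration of the timescale comparison just described (a process matters once its characteristic timescale is no longer than about 1/H, i.e. roughly the age of the universe at that epoch), the following Python sketch checks the pion-decay example numerically. It is illustrative only; the helper function name and the sample epochs are invented for this example.

```python
# Rough rule of thumb: a decay process matters once the universe is older than
# the process's characteristic timescale (t_process <= ~1/H ~ age of universe).
def process_is_important(process_timescale_s: float, age_of_universe_s: float) -> bool:
    """True if the process timescale is shorter than (or similar to) the expansion timescale."""
    return process_timescale_s <= age_of_universe_s

PION_MEAN_LIFETIME_S = 2.6e-8  # ~26 nanoseconds (charged pion)

for age in (1e-10, 1e-8, 1e-6):  # sample epochs, in seconds after the Big Bang
    status = "important" if process_is_important(PION_MEAN_LIFETIME_S, age) else "negligible"
    print(f"At t = {age:.0e} s, pion decay is {status}")
```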
Particle physics in cosmology
[ "Physics", "Astronomy" ]
467
[ "Astronomical sub-disciplines", "Theoretical physics", "Astrophysics", "Particle physics", "Particle physics stubs", "Physical cosmology" ]
10,875,756
https://en.wikipedia.org/wiki/Lie%27s%20third%20theorem
In the mathematics of Lie theory, Lie's third theorem states that every finite-dimensional Lie algebra over the real numbers is associated to a Lie group. The theorem is part of the Lie group–Lie algebra correspondence. Historically, the third theorem referred to a different but related result. The two preceding theorems of Sophus Lie, restated in modern language, relate to the infinitesimal transformations of a group action on a smooth manifold. The third theorem on the list stated the Jacobi identity for the infinitesimal transformations of a local Lie group. Conversely, in the presence of a Lie algebra of vector fields, integration gives a local Lie group action. The result now known as the third theorem provides an intrinsic and global converse to the original theorem. Historical notes The equivalence between the category of simply connected real Lie groups and finite-dimensional real Lie algebras is usually called (in the literature of the second half of 20th century) Cartan's or the Cartan-Lie theorem as it was proved by Élie Cartan. Sophus Lie had previously proved the infinitesimal version: local solvability of the Maurer-Cartan equation, or the equivalence between the category of finite-dimensional Lie algebras and the category of local Lie groups. Lie listed his results as three direct and three converse theorems. The infinitesimal variant of Cartan's theorem was essentially Lie's third converse theorem. In an influential book, Jean-Pierre Serre called it the third theorem of Lie. The name is historically somewhat misleading, but often used in connection to generalizations. Serre provided two proofs in his book: one based on Ado's theorem and another recounting the proof by Élie Cartan. Proofs There are several proofs of Lie's third theorem, each of them employing different algebraic and/or geometric techniques. Algebraic proof The classical proof is straightforward but relies on Ado's theorem, whose proof is algebraic and highly non-trivial. Ado's theorem states that any finite-dimensional Lie algebra can be represented by matrices. As a consequence, integrating such an algebra of matrices via the matrix exponential yields a Lie group integrating the original Lie algebra. Cohomological proof A more geometric proof is due to Élie Cartan and was published by . This proof uses induction on the dimension of the center and it involves the Chevalley-Eilenberg complex. Geometric proof A different geometric proof was discovered in 2000 by Duistermaat and Kolk. Unlike the previous ones, it is a constructive proof: the integrating Lie group is built as the quotient of the (infinite-dimensional) Banach Lie group of paths on the Lie algebra by a suitable subgroup. This proof was influential for Lie theory since it paved the way to the generalisation of Lie's third theorem to Lie groupoids and Lie algebroids. See also Lie group integrator References External links Encyclopaedia of Mathematics (EoM) article Lie algebras Lie groups Theorems about algebras
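The algebraic proof outlined above, Ado's theorem followed by the matrix exponential, can be illustrated numerically. The sketch below is not part of any proof; it assumes NumPy and SciPy are available and uses the Lie algebra so(3) of 3×3 skew-symmetric matrices purely as an example, checking that exponentiating an algebra element lands in the rotation group SO(3).

```python
import numpy as np
from scipy.linalg import expm

# An element of the Lie algebra so(3): a real skew-symmetric 3x3 matrix.
X = np.array([[0.0, -0.3,  0.2],
              [0.3,  0.0, -0.1],
              [-0.2, 0.1,  0.0]])

# "Integrating" the algebra element with the matrix exponential gives a group element.
R = expm(X)

# Check that R lies in SO(3): R^T R = I and det(R) = 1 (up to numerical error).
print(np.allclose(R.T @ R, np.eye(3)))    # True
print(np.isclose(np.linalg.det(R), 1.0))  # True
```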
Lie's third theorem
[ "Mathematics" ]
618
[ "Lie groups", "Mathematical structures", "Algebraic structures" ]
10,876,058
https://en.wikipedia.org/wiki/Crane%20tank%20locomotive
A crane tank locomotive (CT) is a steam locomotive fitted with a crane for working in railway workshops, docksides, or other industrial environments. The crane may be fitted at the front, centre or rear. The 'tank' in its name refers to water tanks mounted either side of the boiler, as cranes were usually constructed on tank locomotives (as opposed to tender locomotives) for greater mobility in the confined locations where they were normally used. There is also a crane engine in the museum of Scottish railways Preserved examples Shelton Iron & Steel Works No. 4101, an built by Dübs & Company built in 1901, entering preservation on the East Somerset Railway in 1970, working 1977-1986 and later sold to the Foxfield Railway, where it entered service in 2010. Millfield, an built by Robert Stephenson & Hawthorns in 1942 (works no.7070), preserved at Bressingham Steam & Gardens. See also Crane (rail) NLR crane tank Shelton Iron & Steel Works No. 4101 Three GWR engines constructed as crane tanks based on 850 class Further reading Crane Tank Locomotives in Australia - Australian Railway Historical Society Bulletin, June 1985, pp123-139 References External links Barclay crane tank no.2127 of 1942 Dübs crane tank no. 4101 RSH crane tank Steam locomotive types Cranes (machines)
Crane tank locomotive
[ "Engineering" ]
269
[ "Engineering vehicles", "Cranes (machines)" ]
10,877,048
https://en.wikipedia.org/wiki/Marinobacter
Marinobacter is a genus of bacteria found in sea water. They are also found in a variety of salt lakes. A number of strains and species can degrade hydrocarbons. The species involved in hydrocarbon degradation include M. alkaliphilus, M. arcticus, M. hydrocarbonoclasticus, M. maritimus, and M. squalenivorans. There are currently 46 species of Marinobacter, which are characterized as Gram-negative, salt-tolerant rods. References Alteromonadales Hydrocarbon-degrading bacteria Bacteria genera
Marinobacter
[ "Biology" ]
118
[ "Hydrocarbon-degrading bacteria", "Bacteria" ]
10,877,079
https://en.wikipedia.org/wiki/Criegee%20intermediate
A Criegee intermediate (also called a Criegee zwitterion or Criegee biradical) is a carbonyl oxide with two charge centers. These chemicals may react with sulfur dioxide and nitrogen oxides in the Earth's atmosphere, and are implicated in the formation of aerosols, which are an important factor in controlling global climate. Criegee intermediates are also an important source of OH (hydroxyl radicals). OH radicals are the most important oxidant in the troposphere, and are important in controlling air quality and pollution. The formation of this sort of structure was first postulated in the 1950s by Rudolf Criegee, for whom it is named. It was not until 2012 that direct detection of such chemicals was reported. Infrared spectroscopy suggests the electronic structure has a substantially zwitterionic character rather than the biradical character that had previously been proposed. Formation Criegee intermediates are formed by the gas-phase reactions of alkenes and ozone in the Earth's atmosphere. Ozone adds across the carbon–carbon double bond of the alkene to form a molozonide, which then decomposes to produce a carbonyl (RR'CO) and a carbonyl oxide. The latter is known as the Criegee intermediate. The alkene ozonolysis reaction is extremely exothermic, releasing about of excess energy. Therefore, the Criegee intermediates are formed with a large amount of internal energy. Removal When Criegee intermediates are formed, some portion of them will undergo prompt unimolecular decay, producing OH radicals and other products. However, they may instead become stabilized by interactions with other molecules or react with other chemicals to give different products. Criegee intermediates may be collisionally stabilized via collisions with other molecules in the atmosphere. These stabilized Criegee intermediates may then undergo thermal unimolecular decay to OH radicals and other products, or may undergo bimolecular reactions with other atmospheric species. In the ozonolysis reaction sequence, the Criegee intermediate reacts with another carbonyl compound (generally the aldehyde or ketone byproduct of the Criegee-intermediate formation reaction itself) to form an ozonide (1,2,4-trioxolane). References Free radicals Chemical bonding Environmental chemistry Climate change mitigation
Criegee intermediate
[ "Physics", "Chemistry", "Materials_science", "Biology", "Environmental_science" ]
488
[ "Free radicals", "Environmental chemistry", "Senescence", "Condensed matter physics", "Biomolecules", "nan", "Chemical bonding" ]
10,877,102
https://en.wikipedia.org/wiki/Marinobacter%20hydrocarbonoclasticus
Marinobacter hydrocarbonoclasticus is a species of bacteria found in sea water which is able to degrade hydrocarbons. The cells are rod-shaped and motile by means of a single polar flagellum. Etymology ‘Hydrocarbonoclastic’ means ‘hydrocarbon dismantling.’ These bacteria were named as such because they can degrade the major components of oil. History Both the genus Marinobacter and the species Marinobacter hydrocarbonoclasticus were first identified and described in 1992 by Gauthier et al. Using the polymerase chain reaction to analyze its 16S rDNA, Gauthier showed that it was a member of the gamma group of the Proteobacteria, with sufficient distance to other described Proteobacteria to warrant the creation of a new genus. In 2005, Marquez and Ventosa from the Department of Microbiology and Parasitology of the University of Sevilla in Spain used “G+C content, fatty acid composition, and DNA-DNA hybridization… to understand the taxonomic positions” of Marinobacter hydrocarbonoclasticus and Marinobacter aquaeolei. “Marquez suggests that the two species be united under the same name since they are heterotypic synonyms due to phenotypic and phylogenetic traits.” In 2011, Hamdan & Fuller discovered that Marinobacter hydrocarbonoclasticus die when exposed to the chemical dispersant COREXIT EC9500A used to treat the Deepwater Horizon oil spill. Genome Structure The genome of Marinobacter hydrocarbonoclasticus has a 52.7% guanine + cytosine content. Evolution and Phylogeny Marinobacter hydrocarbonoclasticus are a type of eubacteria. 16S rDNA analysis indicates that these organisms are related to the Gammaproteobacteria. Initial 16S rDNA phylogenetic analysis did not reveal any close relatives to Marinobacter hydrocarbonoclasticus. Therefore, the organism was placed in a genus of its own, with scientists believing that Pseudomonas aeruginosa was its closest modern relative. In 1999, 16S rDNA sequence analysis revealed Marinobacter hydrocarbonoclasticus to have a very close relative in Marinobacter aquaeolei. The two organisms contain 16S rDNA sequences with 99.4% similarity. The organisms from the genus Marinobacter have been found to have high diversity in terms of the environments they inhabit. Marinobacter species have been discovered in “hypersaline bacterial mats, marine hot-water springs in Japan, [and] cold seawater as in Arctic and Antarctic regions.” Morphology and description Marinobacter hydrocarbonoclasticus are Gram-negative and rod shaped. Their cells are, on average, 0.3-0.6 μm in diameter and 2-3 μm long. Their ability to produce flagella is largely dependent on the NaCl concentration of their environment. In solutions with NaCl concentrations of 0.6-1.5M, Marinobacter hydrocarbonoclasticus produce, and move by means of, “a single unsheathed polar flagellum.” In solutions with NaCl concentrations <0.2 M or >1.5 M, M. hydrocarbonoclasticus are unable to produce flagella, and are thereby unable to influence their movement through the medium. Metabolism Marinobacter hydrocarbonoclasticus cells do not contain cytochrome P450, which is the key enzyme for degrading aromatic rings, a major component of petroleum hydrocarbons. These organisms are adapted to growing on long non-cyclic alkanes, which are common in petroleum hydrocarbons. Cells cannot grow on aromatic hydrocarbons, that is, hydrocarbons containing aromatic rings. Marinobacter hydrocarbonoclasticus are not obligate hydrocarbonoclastic organisms; they can also grow on standard medium, without hydrocarbons. 
Moreover, Marinobacter cells can denitrify, producing nitrogen gas. They can use either nitrate (NO3−) or nitrite (NO2−) as their terminal electron acceptor. Marinobacter hydrocarbonoclasticus cells can grow in aerobic liquid medium culture and form colonies on agar, showing that they are not obligate anaerobes. Growth, Reproduction, and Behaviour Marinobacter form discrete well-rounded colonies on plates, indicating that they reproduce via binary fission. Marinobacter hydrocarbonoclasticus can grow with or without the presence of oxygen. Their cells are tolerant of high salinities. They are capable of growing in up to 3.5 molar NaCl, but grow best at around 0.6 molar, which is the molarity of the Mediterranean seawater from which they were isolated. They can grow as free plankton or as fixed elements of a biofilm. Marinobacter hydrocarbonoclasticus cells degrade hydrocarbons and excrete the osmoprotectant ectoine (Site du Genoscope). They also excrete petrobactin, “a bis-catechol α-hydroxy acid siderophore that readily undergoes a light-mediated decarboxylation reaction when bound to Fe(III).” Significance in Technology and Industry Marinobacter hydrocarbonoclasticus degrade petroleum hydrocarbons, including those found in oceanic oil spills. In 2011, it was discovered that Marinobacter hydrocarbonoclasticus are inhibited when exposed to the chemical COREXIT EC9500A. This chemical is a dispersant widely used to assist in the cleanup after oceanic oil spills. In their tests, Hamdan and Fuller (2011) obtained data suggesting that “hydrocarbon-degrading bacteria are inhibited by chemical dispersants, and that the use of dispersants has the potential to diminish the capacity of the environment to bioremediate spills.” Marinobacter hydrocarbonoclasticus are able to grow in liquid culture and on agar plates, where they produced beige colonies. They are tolerant of high salinity and can grow aerobically and anaerobically. The ability to grow in heterogeneous environments could prove beneficial for scientists seeking new, bacteria-based techniques for oceanic oil spill cleanup. References External links Type strain of Marinobacter hydrocarbonoclasticus at BacDive - the Bacterial Diversity Metadatabase Alteromonadales Hydrocarbon-degrading bacteria Bacteria described in 1992
Marinobacter hydrocarbonoclasticus
[ "Biology" ]
1,332
[ "Hydrocarbon-degrading bacteria", "Bacteria" ]
10,877,467
https://en.wikipedia.org/wiki/Gliese%20581c
Gliese 581c (Gl 581c or GJ 581c) is an exoplanet orbiting within the Gliese 581 system. It is the second planet discovered in the system and the third in order from the star. With a mass about 6.8 times that of the Earth, it is classified as a super-Earth (a category of planets with masses greater than Earth's up to ten Earth masses). At the time of its discovery in 2007, Gliese 581c gained interest from astronomers because it was reported to be the first potentially Earth-like planet in the habitable zone of its star, with a temperature right for liquid water on its surface, and, by extension, potentially capable of supporting extremophile forms of Earth-like life. However, further research cast doubt upon the planet's habitability. Based on newer models of the habitable zone, the planet is likely too hot to be potentially habitable. In astronomical terms, the Gliese 581 system is relatively close to Earth, at in the direction of the constellation of Libra. This distance, along with the declination and right ascension coordinates, give its exact location in the Milky Way. Discovery The team released a paper of their findings dated 27 April 2007, published in the July 2007 journal Astronomy & Astrophysics. At the time of discovery, it was reported to be the first potentially Earth-like planet in the habitable zone of its star and the smallest-known exoplanet around a main-sequence star, but on 21 April 2009, another planet orbiting Gliese 581, Gliese 581e, with an approximate mass of 1.9 Earth masses, was announced. In the paper, they also announced the discovery of another planet in the system, Gliese 581d, with a minimum mass of 7.7 Earth masses and a semi-major axis of 0.25 astronomical units. Physical characteristics Mass The existence of Gliese 581c and its mass have been measured by the radial velocity method of detecting exoplanets. The mass of a planet is calculated by the small periodic movements around a common centre of mass between the host star Gliese 581 and its planets. When all planets are fitted with a Keplerian solution, the minimum mass of the planet is determined to be 5.5 Earth masses. The radial velocity method cannot by itself determine the true mass, but it cannot be very much larger than this or the system would be dynamically unstable. Dynamical simulations of the Gliese 581 system which assume the orbits of the planets are coplanar indicate that the planets cannot exceed approximately 1.6 to 2 times their minimum masses or the planetary system would be unstable (this is primarily due to the interaction between planets e and b). For Gliese 581c, the upper bound is 10.4 Earth masses. A 2024 study determined the inclination of the planet, allowing its true mass to be determined, which is about 30% greater than the minimum mass at about 6.8 Earth masses. Radius Since Gliese 581c has not been detected in transit, there are no measurements of its radius. Furthermore, the radial velocity method used to detect it only puts a lower limit on the planet's mass, which means theoretical models of planetary radius and structure can only be of limited use. However, assuming a random orientation of the planet's orbit, the true mass is likely to be close to the measured minimum mass. Assuming that the true mass is the minimum mass, the radius may be calculated using various models. For example, if Gliese 581c is a rocky planet with a large iron core, it should have a radius approximately 50% larger than that of Earth, according to Udry's team. 
Gravity on such a planet's surface would be approximately 2.24 times as strong as on Earth. However, if Gliese 581c is an icy and/or watery planet, its radius would be less than 2 times that of Earth, even with a very large outer hydrosphere, according to density models compiled by Diana Valencia and her team for Gliese 876 d. Gravity on the surface of such an icy and/or watery planet would be at least 1.25 times as strong as on Earth. They claim the real value of the radius may be anything between the two extremes calculated by density models outlined above. Other scientists' views differ. Sara Seager at MIT has speculated that Gliese 581c and other five-Earth-mass planets could be: "rock giants" mostly of silicate; "cannonball" planets of solid iron; "gas dwarfs" mostly of helium and hydrogen; carbon-rich "diamond worlds"; purely hot "ice VII worlds"; purely "carbon monoxide worlds". If the planet transits the star as seen from the direction of the Earth, the radius should be measurable, albeit with some uncertainty. Unfortunately, measurements made with the Canadian-built MOST space telescope indicate that transits do not occur. The new research suggests that the rocky centres of super-Earths are unlikely to evolve into terrestrial rocky planets like the inner planets of the Solar System because they appear to hold onto their large atmospheres. Rather than evolving to a planet composed mainly of rock with a thin atmosphere, the small rocky core remains engulfed by its large hydrogen-rich envelope. Orbit Gliese 581c has an orbital period ("year") of 13 Earth days and its orbital radius is only about 7% that of the Earth, about 11 million km, while the Earth is 150 million km from the Sun. Since the host star is smaller and colder than the Sun—and thus less luminous—this distance places the planet on the "warm" edge of the habitable zone around the star according to Udry's team. Note that in astrophysics, the "habitable zone" is defined as the range of distances from the star at which a planet could support liquid water on its surface: it should not be taken to mean that the planet's environment would be suitable for humans, a situation which requires a more restrictive range of parameters. In any case, based on newer models of the habitable zone, the planet is likely too hot to be potentially habitable. A typical radius for an M0 star of Gliese 581's age and metallicity is 0.00128 AU, against the Sun's 0.00465 AU. This proximity means that the primary star should appear 3.75 times wider and 14 times larger in area for an observer on the planet's surface looking at the sky than the Sun appears to be from Earth's surface. Tidal lock Because of its small separation from Gliese 581, the planet has been generally considered to always have one hemisphere facing the star (only day), and the other always facing away (only night), or in other words being tidally locked. The most recent orbital fit to the system, taking stellar activity into account indicates a nearly circular orbit, but older fits used an eccentricity between 0.10 and 0.22. If the orbit of the planet were eccentric, it would undergo violent tidal flexing. Because tidal forces are stronger when the planet is close to the star, eccentric planets are expected to have a rotation period that is shorter than its orbital period, also called pseudo-synchronization. An example of this effect is seen in Mercury, which is tidally locked in a 3:2 resonance, completing three rotations every two orbits. 
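The surface-gravity figures quoted in the Radius section follow from the simple scaling g ∝ M/R². The sketch below reproduces that arithmetic; the mass and radius inputs are only the ballpark model values discussed above, and the quoted figures (about 2.24 g and at least 1.25 g) appear to have been computed from a slightly lower, earlier minimum-mass estimate, so the outputs here differ a little.

```python
def relative_surface_gravity(mass_earths: float, radius_earths: float) -> float:
    """Surface gravity in Earth gravities, from g = G*M/R^2 scaled to Earth units."""
    return mass_earths / radius_earths ** 2

# Ballpark models discussed in the text (illustrative values, not measurements).
rocky = relative_surface_gravity(mass_earths=5.5, radius_earths=1.5)  # large iron core
icy = relative_surface_gravity(mass_earths=5.5, radius_earths=2.0)    # thick hydrosphere

print(f"Rocky model: about {rocky:.2f} g")       # ~2.4 g with a 5.5 Earth-mass input
print(f"Icy/watery model: about {icy:.2f} g")    # ~1.4 g with a 5.5 Earth-mass input
```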
In any case, even in the case of 1:1 tidal lock, the planet would undergo libration and the terminator would be alternatively lit and darkened during libration. Models of the evolution of the planet's orbit over time suggest that heating resulting from this tidal locking may play a major role in the planet's geology. Models proposed by scientists predict that tidal heating could yield a surface heat flux about three times greater than that of Jupiter's moon Io, which could result in major geological activity such as volcanoes and plate tectonics. Habitability and climate The study of Gliese 581c by the von Bloh et al. team is quoted as concluding "The super-Earth Gl 581c is clearly outside the habitable zone, since it is too close to the star." The study by Selsis et al. states that "a planet in the habitable zone is not necessarily habitable" itself, and this planet "is outside what can be considered the conservative habitable zone" of the parent star, and further that if there was any water there then it was lost when the red dwarf was a strong X-ray and EUV emitter, it could have surface temperatures ranging from , like Venus today. Temperature speculations by other scientists were based on the temperature of (and heat from) the parent star Gliese 581 and have been calculated without factoring in the margin of error (96 °C/K) for the star's temperature of 3,432 K to 3,528 K, which leads to a large irradiance range for the planet, even before eccentricity is considered. Effective temperatures Using the measured stellar luminosity of Gliese 581 of 0.013 times that of the Sun, it is possible to calculate Gliese 581c's effective temperature, a.k.a. black body temperature, which probably differs from its surface temperature. According to Udry's team, the effective temperature for Gliese 581c, assuming an albedo (reflectivity) such as that of Venus (0.64), would be , and assuming an Earth-like albedo (0.296), it would be , a range of temperatures that overlap with the range at which water would be liquid at a pressure of 1 atmosphere. However, the effective temperature and actual surface temperature can be very different due to the greenhouse properties of the planetary atmosphere. For example, Venus has an effective temperature of , but a surface temperature of (mainly due to a 96.5% carbon dioxide atmosphere), a difference of about . Studies of habitability (i.e. liquid water for extremophile forms of life) conclude that Gliese 581c is likely to suffer from a runaway greenhouse effect similar to that found on Venus and, as such, is highly unlikely to be habitable. Nevertheless, this runaway greenhouse effect could be prevented by the presence of sufficient reflective cloud cover on the planet's day side. Alternatively, if the surface were covered in ice, it would have a high albedo (reflectivity), and thus could reflect enough of the incident sunlight back into space to render the planet too cold for habitability, although this situation is expected to be very unstable except for very high albedos greater than about 0.95 (i.e. ice): release of carbon dioxide by volcanic activity or of water vapor due to heating at the substellar point would trigger a runaway greenhouse effect. Liquid water Gliese 581c is likely to lie outside the habitable zone. No direct evidence has been found for water to be present, and it is probably not present in the liquid state. 
Techniques like the one used to measure the extrasolar planet HD 209458 b may in the future be used to determine the presence of water in the form of vapor in the planet's atmosphere, but only in the rare case of a planet with an orbit aligned so as to transit its star, which Gliese 581c is not known to do. Tidally locked models Theoretical models predict that volatile compounds such as water and carbon dioxide, if present, might evaporate in the scorching heat of the sunward side, migrate to the cooler night side, and condense to form ice caps. Over time, the entire atmosphere might freeze into ice caps on the night side of the planet. However, it remains unknown if water and/or carbon dioxide are even present on the surface of Gliese 581c. Alternatively, an atmosphere large enough to be stable would circulate the heat more evenly, allowing for a wider habitable area on the surface. For example, although Venus has a small axial inclination, very little sunlight reaches the surface at the poles. A slow rotation rate approximately 117 times slower than Earth's produces prolonged days and nights. Despite the uneven distribution of sunlight cast on Venus at any given time, polar areas and the night side of Venus are kept almost as hot as on the day side by globally circulating winds. A Message from Earth A Message from Earth (AMFE) is a high-powered digital radio signal that was sent on 9 October 2008 towards Gliese 581c. The signal is a digital time capsule containing 501 messages that were selected through a competition on the social networking site Bebo. The message was sent using the RT-70 radar telescope of Ukraine's State Space Agency. The signal will reach the planet Gliese 581c in early 2029. More than half a million people including celebrities and politicians participated in the AMFE project, which was the world's first digital time capsule where the content was selected by the public. As of 22 January 2015, the message has traveled 59.48 trillion km of the total 192 trillion km, which is 31.0% of the distance to the Gliese 581 system. See also Circumstellar habitable zone (Goldilocks phenomenon) CoRoT-7b Interstellar travel Planetary habitability Notes References Further reading News media reports Non-news media External links Gliese 581 - The "Red Dwarf" and implications for its "earthlike" planet Gliese 581c Exoplanets discovered in 2007 Exoplanets in the Gliese Catalog Exoplanets detected by radial velocity Gliese 581 Libra (constellation) Super-Earths
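The effective (black-body) temperature estimates discussed in the Effective temperatures subsection can be approximated from the stellar luminosity, the orbital distance, and an assumed albedo. The sketch below is a rough illustration only: the luminosity (0.013 times the Sun's) and orbital distance (about 0.073 AU) are the approximate figures given above, full heat redistribution is assumed, and modest changes to the inputs shift the result by several kelvin.

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26        # solar luminosity, W
AU = 1.495978707e11     # astronomical unit, m

def equilibrium_temperature(luminosity_w: float, distance_m: float, albedo: float) -> float:
    """Black-body equilibrium temperature of a planet, assuming full heat redistribution."""
    return (luminosity_w * (1.0 - albedo) / (16.0 * math.pi * SIGMA * distance_m ** 2)) ** 0.25

L = 0.013 * L_SUN   # Gliese 581's luminosity (approximate)
d = 0.073 * AU      # Gliese 581c's orbital distance (approximate)

for name, albedo in (("Venus-like", 0.64), ("Earth-like", 0.296)):
    t = equilibrium_temperature(L, d, albedo)
    print(f"{name} albedo {albedo}: ~{t:.0f} K ({t - 273.15:.0f} °C)")
```

As the text notes, these are effective temperatures only; greenhouse warming by an atmosphere can push the actual surface temperature far higher, as it does on Venus.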
Gliese 581c
[ "Astronomy" ]
2,865
[ "Libra (constellation)", "Constellations" ]
10,877,475
https://en.wikipedia.org/wiki/Enterobacter%20cloacae
Enterobacter cloacae is a clinically significant Gram-negative, facultatively-anaerobic, rod-shaped bacterium. Microbiology In microbiology laboratories, E. cloacae is frequently grown at 30 °C on nutrient agar or at 35 °C in tryptic soy broth. It is a rod-shaped, Gram-negative bacterium, is facultatively anaerobic, and bears peritrichous flagella. It is oxidase-negative and catalase-positive. Industrial use Enterobacter cloacae has been used in a bioreactor-based method for the biodegradation of explosives and in the biological control of plant diseases. Enterobacter cloacae strain MBB8 isolated from the Gulf of Mannar, India was reported to degrade poly vinyl alcohol (PVA). This was the first report of a PVA degrader from the Enterobacter genus. E. cloacae was also reported to produce exopolysaccharide (EPS) as high as 18.3g/L. GC-MS analysis of E. cloacae EPS showed the presence of glucose and mannose in the molar ratio of 1: 1.5e−2. Enterobacter cloacae subsp. cloacae strain PR-4 was isolated and identified by 16S rDNA gene sequence with phylogenetic tree view from explosive-laden soil by P. Ravikumar (GenBank accession number KP261383). E. cloacae SG208 identified as a predominant microorganism in mixed culture isolated from petrochemical sludge (IOCL, Guwahati) responsible for degradation of benzene was reported by Padhi and Gokhale (2016). Safety Enterobacter cloacae is considered a biosafety level 1 organism in the United States and level 2 in Canada. Genomics A draft genome sequence of Enterobacter cloacae subsp. cloacae was announced in 2012. The bacteria used in the study were isolated from giant panda feces. Clinical significance Enterobacter cloacae is a member of the normal gut flora of many humans and is not usually a primary pathogen. Some strains have been associated with urinary tract and respiratory tract infections in immunocompromised individuals. It is a high risk AmpC producer and treatment with cefepime is recommended by the IDSA if causing disease rather than simply colonising. Treatment using cefepime and gentamicin has been reported. A 2012 study in which Enterobacter cloacae was transplanted into previously germ-free mice resulted in increased obesity when compared with germ-free mice fed an identical diet, suggesting a link between obesity and the presence of Enterobacter gut flora. See also Biohydrogen References External links Type strain of Enterobacter cloacae at BacDive - the Bacterial Diversity Metadatabase Enterobacteriaceae Gram-negative bacteria Bacteria described in 1890 Biodegradation
Enterobacter cloacae
[ "Chemistry" ]
650
[ "Biodegradation" ]
10,877,810
https://en.wikipedia.org/wiki/Neopentyllithium
Neopentyllithium is an organolithium compound with the chemical formula C5H11Li. Commercially available, it is a strong, non-nucleophilic base sometimes encountered in organometallic chemistry. Further reading Organolithium compounds Non-nucleophilic bases Neopentyl compounds
Neopentyllithium
[ "Chemistry" ]
70
[ "Non-nucleophilic bases", "Organolithium compounds", "Bases (chemistry)", "Reagents for organic chemistry" ]
10,879,587
https://en.wikipedia.org/wiki/Cosmodome
The Cosmodome () is a space science museum and education centre located in Laval, Quebec, Canada. Cosmodome is the home to both Space Camp Canada and the Space Science Centre (a museum). Space Camp Canada welcomed its first campers in July 1994 while the Space Science Centre opened its doors to the public in December 1994. History The Cosmodome opened in 1994, but was facing bankruptcy by 1997. It was rescued in March 1997 by a $11.9-million government bailout. The Space Science Centre The Space Science Centre is the only museum in Canada dedicated solely to the space sciences and houses one of two lunar rocks on display in Canada. The one featured was retrieved by astronaut James Irwin on the Apollo 15 mission. Space Camp Canada Space Camp Canada features 6 space simulators, described below. Each is designed to help the space camp trainee understand the difficulties of working in space. 1/6th Chair The 1/6th chair simulates lunar gravity by suspending the user in a chair connected to a series of springs on a rail which allows for movement in an allotted area. A trainee's challenge on the 1/6 chair is to pick up objects from the ground while bouncing. Zero G Wall The Wall of Weightlessness, also known as the Zero G Wall, uses a counterbalance to suspend the trainee in mid air. By filling the counterbalance with water until its weight is approximately that of the trainee, the trainee is free to move in the 3 translational directions. Missions are usually given to trainees to heighten the experience and generally consist of interacting, in one way or another, with a mock-satellite suspended in proximity to the wall. Space Station Mobility Trainer (SSMT) The SSMT was a device conceived with exercise while on-orbit in mind. It consists of a circular jogging pad in which a special chair and harness has been fitted. Once seated, the trainee may run freely forwards and backwards, rotating along the axis of the pad. Manned Maneuvering Unit The Manned Maneuvering Unit simulates the NASA vehicle of the same name. By forcing compressed air out of rubber pads, the Space Camp MMU functions as a hovercraft, and is hence able to move with little resistance. A series of motors controlled via user input allow for horizontal translations and rotations, and a large gear and hydraulic pump allow for 360 degree roll and 30 degree pitch respectively. Multi-Axis Chair A trainee is strapped to the multi-axis chair which spins on three axes, disorientating its occupant. The trainee's challenge at the multi-axis chair is to read words, identify images, and do other tasks while rolling in three dimensions simultaneously. Affiliations The Museum is affiliated with: CMA, CHIN, and Virtual Museum of Canada. References Museums in Laval, Quebec Science museums in Canada Aerospace museums in Quebec Space organizations Canada Museums established in 1994 1994 establishments in Quebec
Cosmodome
[ "Astronomy" ]
591
[ "Astronomy organizations", "Space organizations" ]
10,879,760
https://en.wikipedia.org/wiki/1%2C3-Dichloropropene
1,3-Dichloropropene, sold under diverse trade names, is an organochlorine compound with the formula . It is a colorless liquid with a sweet smell. It is feebly soluble in water and evaporates easily. It is used mainly in farming as a pesticide, specifically as a preplant fumigant and nematicide. It acts non-specifically and is in IRAC class 8A. It is widely used in the US and other countries, but is banned in 34 countries (including the European Union). Production, chemical properties, biodegradation It is a byproduct in the chlorination of propene to make allyl chloride. It is usually obtained as a mixture of the geometric isomers, called (Z)-1,3-dichloropropene, and (E)-1,3-dichloropropene. Although it was first applied in agriculture in the 1950s, at least two biodegradation pathways have evolved. One pathway degrades the chlorocarbon to acetaldehyde via chloroacrylic acid. Safety The TLV-TWA for 1,3-dichloropropene (DCP) is 1 ppm. It is a contact irritant. A wide range of complications have been reported. Carcinogenicity Evidence for the carcinogenicity of 1,3-dichloropropene in humans is inadequate, but results from several cancer bioassays provide adequate evidence of carcinogenicity in animals. In the US, the Department of Health and Human Services (DHHS) has determined that 1,3-dichloropropene may reasonably be anticipated to be a carcinogen. In California, the Office of Environmental Health Hazard Assessment has determined that 1,3-dichloropropene is a carcinogen, and in 2022 established a No Significant Risk Level (NSRL) of 3.7 micrograms/day. The International Agency for Research on Cancer (IARC) has determined that 1,3-dichloropropene is possibly carcinogenic to humans. The EPA has classified 1,3-dichloropropene as a probable human carcinogen. Use 1,3-Dichloropropene is used as a pesticide in the following crops: Contamination The ATSDR has extensive contamination information available. Market history Under the brand name Telone, 1,3-D was one of Dow AgroSciences's products until the merger into DowDuPont. Then it was spun off with Corteva, and has been licensed to Telos Ag Solutions and is no longer a Corteva product. References ATSDR ToxFAQs: Dichloropropenes USGS Pesticide National Synthesis Project – Crop & Compound Further reading ATSDR Toxicological Profile (9.2 MB) CDC – NIOSH Pocket Guide to Chemical Hazards Pesticides Chloroalkenes IARC Group 2B carcinogens Fumigants Sweet-smelling chemicals
1,3-Dichloropropene
[ "Biology", "Environmental_science" ]
647
[ "Biocides", "Toxicology", "Pesticides" ]
10,881,572
https://en.wikipedia.org/wiki/Domain%20drop%20catching
Domain drop catching, also known as domain sniping, is the practice of registering a domain name once registration has lapsed, immediately after expiry. Background When a domain is first registered, the customer is usually given the option of registering the domain for one year or longer, with automatic renewal as a possible option. Although some domain registrars often make multiple attempts to notify a registrant of a domain name's impending expiration, a failure on the part of the original registrant to provide the registrar with accurate contact information makes an unintended registration lapse possible. Practices also vary, and registrars are not required to notify customers of impending expiration. Unless the original registrant holds a trademark or other legal entitlement to the name, they are often left without any form of recourse in getting their domain name back. It is incumbent on registrants to be proactive in managing their name registrations and to be good stewards of their domain names. By law there are no perpetual rights to domain names after payment of registration fees lapses, aside from trademark rights granted by common law or statute. Redemption Grace Period (RGP) The Redemption Grace Period is an addition to ICANN's Registrar Accreditation Agreement (RAA) which allows a registrant to reclaim their domain name for a number of days after it has expired. This length of time varies by TLD, and is usually around 30 to 90 days. Prior to the implementation of the RGP by ICANN, individuals could easily engage in domain sniping to extort money from the original registrant to buy their domain name back. After the period between the domain's expiry date and the beginning of the RGP, the domain's status changes to "redemption period" during which an owner may be required to pay a fee (typically around US$100) to re-activate and re-register the domain. ICANN's RAA requires registrars to delete domain registrations once a second notice has been given and the RGP has elapsed. At the end of the "pending delete" phase of 5 days, the domain will be dropped from the ICANN database. Drop catch services For particularly popular domain names, there are often multiple parties anticipating the expiration. Competition for expiring domain names has since become a purview of drop catching services. These services offer to dedicate their servers to securing a domain name upon its availability, usually at an auction price. Individuals with their limited resources find it difficult to compete with these drop catching firms for highly desirable domain names. Retail registrars such as GoDaddy or eNom retain names for auction through services such as TDNAM or Snapnames through a practice known as domain warehousing. Drop catch services are performed by both ICANN-accredited registrars and non-accredited registrars. Domain futures / options or back-orders Some registry operators (for example dot-РФ, dot-PL, dot-RU, dot-ST, dot-TM, dot-NO) offer a service by which a back-order (also sometimes known as a "domain future" or "domain option") can be placed on a domain name. If a domain name is due to return to the open market, then the owner of the back-order will be given the first opportunity to acquire the domain name before the name is deleted and is open to a free-for-all. In this way back-orders will usually take precedence over drop-catch. 
There may be a fee for the back-order itself, often only one back-order can be placed per domain name and a further purchase or renewal fee may be applicable if the back-order succeeds. Back-Orders typically expire in the same way domain names do, so are purchased for a specific number of years. Different operators have different rules. In some cases back-orders can only be placed at certain times, for example after the domain name has expired, but before it has returned to the open market (see Redemption Grace Period). In the Commodity market sense, a back-order is often more like an "option" than a "future" as there is often no obligation for the new registrant to take the name, even after it has been handed to the owner of the back-order. For example, some registries give the new registrant 30 days to purchase a renewal on the name before it is once again returned to the open market (or any new back-order registrant). See also Domain hijacking Domain tasting Domain warehousing Drop registrar References Domain Name System Internet ethics
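The expiry timeline described above (expiry, then a redemption grace period, then a short "pending delete" window before the name drops) can be summarised as a simple state machine. The sketch below is purely illustrative: the state names, the 30-day RGP, and the 5-day pending-delete window are the typical figures quoted in the article, but real timelines vary by TLD and registrar, and some registries add further grace periods that this model ignores.

```python
from dataclasses import dataclass

@dataclass
class DomainStatus:
    name: str
    days_since_expiry: int  # negative while the registration is still current

def lifecycle_state(d: DomainStatus, rgp_days: int = 30, pending_delete_days: int = 5) -> str:
    """Very rough model of a gTLD expiry timeline; actual rules vary by TLD and registrar."""
    if d.days_since_expiry < 0:
        return "active (registered)"
    if d.days_since_expiry < rgp_days:
        return "redemption grace period (original registrant can redeem, usually for a fee)"
    if d.days_since_expiry < rgp_days + pending_delete_days:
        return "pending delete (no changes possible)"
    return "dropped (available to drop catchers, back-orders, or any new registrant)"

for days in (-10, 10, 32, 40):
    print(days, "->", lifecycle_state(DomainStatus("example.com", days)))
```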
Domain drop catching
[ "Technology" ]
956
[ "Internet ethics", "Ethics of science and technology" ]
10,883,143
https://en.wikipedia.org/wiki/Register%20transfer%20notation
Register Transfer Notation (or RTN) is a way of specifying the behavior of a digital synchronous circuit. It is said to be a specification language for this reason. Register Transfer Languages (or RTL, where the L sometimes stands for Level of abstraction) are similar to Register Transfer Notation and are used to describe much the same thing; however, they are in a synthesizable format and are more similar to a standard computer programming language, like C. RTN may be written as either abstract or concrete. Abstract RTN is a generic notation which does not have any specific machine implementation details. In contrast, concrete RTN is a notation which does implement specifics of the machine for which it is designed. The possible locations in which a transfer of information can occur are memory locations, processor registers, and registers in I/O devices. References Hardware description languages
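To make the idea of register transfer descriptions concrete, the sketch below simulates a hypothetical instruction-fetch sequence in Python, with each statement's abstract RTN shown as a comment (for example, MAR <- PC). The register names, word-addressed memory, and fetch steps are invented for this example and do not describe any particular machine or any standard RTN dialect.

```python
# Hypothetical machine state: a few registers and a small word-addressed memory.
regs = {"PC": 0, "MAR": 0, "MDR": 0, "IR": 0}
memory = {0: 0x1234, 1: 0x5678}  # word-addressed instruction memory

def fetch():
    """One instruction fetch, with the corresponding abstract RTN in comments."""
    regs["MAR"] = regs["PC"]           # MAR <- PC
    regs["MDR"] = memory[regs["MAR"]]  # MDR <- M[MAR]
    regs["IR"] = regs["MDR"]           # IR  <- MDR
    regs["PC"] = regs["PC"] + 1        # PC  <- PC + 1

fetch()
print(regs)  # {'PC': 1, 'MAR': 0, 'MDR': 4660, 'IR': 4660}
```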
Register transfer notation
[ "Engineering" ]
172
[ "Electronic engineering", "Hardware description languages" ]
10,883,144
https://en.wikipedia.org/wiki/Cenchrus%20clandestinus
The tropical grass species Cenchrus clandestinus (previously Pennisetum clandestinum) is known by several common names, most often Kikuyu grass. It is native to the highland regions of East Africa that is home to the Kikuyu people. Because of its rapid growth and aggressive nature, it is categorised as a noxious weed in some regions. However, it is also a popular garden lawn species in Australia, New Zealand, South Africa and the southern region of California in the United States, being inexpensive and moderately drought-tolerant. In addition, it is useful as pasture for livestock grazing and serves as a food source for many avian species, including the long-tailed widowbird. The flowering culms are very short and "hidden" amongst the leaves, giving this species its specific epithet (clandestinus). Description and habitat Cenchrus clandestinus is a rhizomatous grass with matted roots and a grass-like or herbaceous habit. The leaves are green, flattened or upwardly folded along the midrib, long, and wide. The apex of the leaf blade is obtuse. It occurs in sandy soil and reaches a height of between . The species favours moist areas and frequently becomes naturalised from introduction as a cultivated alien. Rooted nodes send up bunches of grass blades. It is native to the low-elevation tropics of Kenya and environs, where it grows best in humid heat, such as the wet coastal areas. The description of this species was published by Emilio Chiovenda in 1903, and acknowledges an earlier, invalid, description made by C. F. Hochstetter. As an invasive species It has been introduced across Africa, Asia, Australia, the Americas, and the Pacific, where it is subject to eradication through management practices. The ease of cultivation, and the thickly matting habit, have made this species desirable for use as a lawn. In southern California in the United States, the grass is commonly used on golf courses since it is drought resistant and creates challenging rough. The famed Riviera Country Club and Torrey Pines Golf Course both use this grass and host tournaments on the PGA Tour. Other minor golf courses in southern California have Kikuyu grass, many are in Long Beach: Lakewood, Skylinks, Big recreation, Little recreation, El Dorado, San Luis Obispo CC, and others. The aggressive colonisation of natural habitat has resulted in this grass becoming naturalised in regions such as Southwest Australia. It has high invasive potential due to its elongate rhizomes and stolons, with which it penetrates the ground, rapidly forming dense mats, and suppressing other plant species. It grows from a thick network of rhizomatous roots and sends out stolons which extend along the ground. It can climb over other plant life, shading it out and producing herbicidal toxins that kill competing plants. It prevents new sprouts of other species from growing, may kill small trees and can choke ponds and waterways. It is resistant to mowing and grazing due to its strong network of roots, which easily send up new shoots. It springs up in turfs and lawns and can damage buildings by growing in the gaps between stones and tiles. The plant is easily introduced to new areas on plowing and digging machinery, which may transfer bits of the rhizome in soil clumps. While the grass spreads well via vegetative reproduction from pieces of rhizome, it is also dispersed via seed. Rhizomes that have reached very hard-to-reach places will continue to grow as separate plants if they are snapped off during the attempted removal process. 
References clandestinus Agricultural pests Lawn grasses Plants described in 1903 Taxa named by Emilio Chiovenda
Cenchrus clandestinus
[ "Biology" ]
792
[ "Pests (organism)", "Agricultural pests" ]
10,883,352
https://en.wikipedia.org/wiki/Winged%20infusion%20set
A winged infusion set—also known as "butterfly" or "scalp vein" set—is a device specialized for venipuncture: i.e. for accessing a superficial vein or artery for either intravenous injection or phlebotomy. It consists, from front to rear, of a hypodermic needle, two bilateral flexible "wings", flexible small-bore transparent tubing (often 20–35 cm long), and lastly a connector (often female Luer). This connector attaches to another device: e.g. syringe, vacuum tube holder/hub, or extension tubing from an infusion pump or gravity-fed infusion/transfusion bag/bottle. Newer models include a slide and lock safety device slid over the needle after use, which helps prevent accidental needlestick injury and reuse of used needles, which can transmit infectious disease such as HIV and viral hepatitis. Use During venipuncture, the butterfly is held by its wings between thumb and index finger. This grasp very close to the needle facilitates precise placement. The needle is generally inserted toward the vein at a shallow angle, made possible by the set's design. When the needle enters the vein, venous blood pressure generally forces a small amount of blood into the set's transparent tubing providing a visual sign, called the "flash" or "flashback", that lets the practitioner know that the needle is actually inside of a vein. The butterfly offers advantages over a simple straight needle. The butterfly's flexible tubing reaches more body surface and tolerates more patient movement. The butterfly's precise placement facilitates venipuncture of thin, "rolling", fragile, or otherwise poorly accessible veins. The butterfly's shallow-angle insertion design facilitates venipuncture of very superficial veins, e.g. hand, wrist, or scalp veins (hence name "scalp vein" set). Needle size Butterflies are commonly available in 18-27 gauge bore, 21G and 23G being most popular. In phlebotomy, there is widespread avoidance of 25G and 27G butterflies based on belief that such small-bore needles hemolyze and/or clot blood samples and hence invalidate blood tests. Contrary to this belief, theoretical calculation and in vitro experiment both showed the exact opposite: namely, that shear stress and hence hemolysis decrease with decreasing needle bore (but the decrease can be clinically insignificant). In agreement with these results, a subsequent clinical trial found that 21G, 23G, and 25G butterflies connected directly to vacuum tubes caused the same amount of hemolysis and gave the same coagulation panel test results. References Medical equipment
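The counterintuitive finding above, that shear stress falls as the needle bore gets smaller, can be motivated with a very simplified Poiseuille-flow picture of a vacuum-tube draw, in which the pressure difference across the needle is taken as roughly fixed; under that assumption the wall shear stress is tau = dP*r/(2*L), which scales with the needle radius. The sketch below is not the calculation from the cited studies, only an illustration of why the direction of the effect is plausible: the needle length, driving pressure, and gauge-to-radius values are approximate assumptions, and tubing resistance and the non-Newtonian behaviour of blood are ignored.

```python
# Very simplified Poiseuille-flow picture of a vacuum-tube blood draw.
# Assumption (not from the cited studies): the pressure difference across the
# needle itself is roughly fixed, so wall shear stress tau = dP * r / (2 * L).
NEEDLE_LENGTH_M = 0.019    # ~19 mm needle, illustrative
PRESSURE_DROP_PA = 30_000  # assumed driving pressure, illustrative

# Approximate inner radii for common butterfly gauges (metres).
INNER_RADIUS_M = {"21G": 0.26e-3, "23G": 0.17e-3, "25G": 0.13e-3}

for gauge, r in INNER_RADIUS_M.items():
    tau = PRESSURE_DROP_PA * r / (2 * NEEDLE_LENGTH_M)
    print(f"{gauge}: wall shear stress ~ {tau:.0f} Pa")  # decreases with smaller bore
```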
Winged infusion set
[ "Biology" ]
556
[ "Medical equipment", "Medical technology" ]
10,883,414
https://en.wikipedia.org/wiki/Pound%20per%20hour
Pound per hour is a mass flow unit based on the international avoirdupois pound, which is used in both the British imperial and, being a former colony of Britain, the United States customary systems of measurement. It is abbreviated as PPH, or more conventionally as lb/h. Fuel flow for engines may be expressed using this unit. It is particularly useful when dealing with gases or liquids, as volume flow varies more with temperature and pressure. In the US utility industry, steam and water flows throughout turbine cycles are typically expressed in PPH, while in almost all of the rest of the world these mass flows are expressed using the International System of Units (SI), the modern form of the metric system. Minimum fuel intake on a jumbo jet can be as low as when idling; however, this is not enough to sustain flight. References Units of flow
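Because the unit is defined through the international avoirdupois pound (exactly 0.45359237 kg), converting a flow in lb/h to SI mass-flow units is a one-line calculation. The sketch below is a minimal example; the 1,500 lb/h figure is an arbitrary illustration, not a value taken from the text.

```python
LB_TO_KG = 0.45359237  # exact definition of the avoirdupois pound in kilograms
SECONDS_PER_HOUR = 3600

def pph_to_kg_per_s(pounds_per_hour: float) -> float:
    """Convert a mass flow from lb/h to kg/s."""
    return pounds_per_hour * LB_TO_KG / SECONDS_PER_HOUR

print(pph_to_kg_per_s(1500))  # ~0.189 kg/s for an arbitrary 1,500 lb/h flow
```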
Pound per hour
[ "Mathematics" ]
178
[ "Units of flow", "Quantity", "Units of measurement" ]
10,883,673
https://en.wikipedia.org/wiki/NGC%206242
NGC 6242 is an open cluster of stars in the southern constellation Scorpius. It can be viewed with binoculars or a telescope at about 1.5° to the south-southeast of the double star Mu Scorpii. This cluster was discovered by French astronomer Nicolas-Louis de Lacaille in 1752 from South Africa. It is located at a distance of approximately from the Sun, just to the north of the Sco OB 1 association. The cluster has an estimated age of 77.6 million years. A microquasar with the designation GRO J1655-40 is located in the vicinity of NGC 6242 and is moving away from the cluster with a runaway space velocity of . It may have originated in the cluster during a supernova explosion ago. References External links SEDS – NGC 6242 Open clusters Scorpius 6242
NGC 6242
[ "Astronomy" ]
175
[ "Scorpius", "Constellations" ]
10,883,715
https://en.wikipedia.org/wiki/4EGI-1
4EGI-1 is a synthetic chemical compound which has been found to interfere with the growth of certain types of cancer cells in vitro. Its mechanism of action involves interruption of the binding of cellular initiation factor proteins involved in the translation of transcribed mRNA at the ribosome. The inhibition of these initiation factors prevents the initiation and translation of many proteins whose functions are essential to the rapid growth and proliferation of cancer cells. Reaction mechanism 4EGI-1 mimics the action of a class of cellular regulatory molecules that naturally inhibit the binding of two initiation factors necessary for interaction of transcribed mRNA with the subunits of ribosomal complexes. These naturally occurring regulatory molecules, or binding proteins (BPs), bind to eukaryotic initiation factor eIF4E, preventing its association with eIF4G, another initiation factor. These two proteins, under unregulated conditions, form a complex, known as eIF4F, which associates with the 5’ cap of mRNA and the ribosomal subunits. eIF4E BPs (4E-BPs), as small polypeptides, consist of the same amino acid sequence as the portion of eIF4G that interacts with eIF4E. 4EGI-1 thus prevents the proper association of mRNA, carrying the coded message of transcribed genes, with the ribosome, the cellular component necessary for the translation of those genes into functional proteins. Naturally occurring 4E-BPs are regulated by a protein kinase, mTOR, which through phosphorylation deactivates the binding affinity of 4E-BPs for the eIF4E protein. Binding site specifics and effects of use 4EGI-1, like 4E-BP polypeptides, displaces eIF4G by associating with a binding site on eIF4E. Not only does the synthetic molecule prevent the association between the two initiation factors, but by binding to a different portion of eIF4E via the same motif, it has been shown to actually increase the binding affinity of eIF4E for endogenous (originating within an organism) 4E-BP1. The Harvard research group leading the study screened 16,000 compounds, looking for one that would displace a fluorescein-labeled peptide derived from the eIF4G sequence that binds to eIF4E at the same site. Eventually they turned up 4EGI-1, which displaced eIF4G by binding to a smaller subset of its binding site (on eIF4E). The newly found molecule had the added advantage of enhancing 4E-BP1 binding, a surprise given that this molecule is also believed to bind eIF4E via the same motif. It appears that by displacing the eIF4G sequence without blocking the entire binding interface of eIF4E, 4EGI-1 is able to clear the “docking site” of the endogenous regulator. Cap-dependent vs. initiation factor-independent translation One caveat to the function of 4EGI-1 and thus the entire class of 4E-BP regulatory proteins is that both the synthetic and naturally occurring molecules are effective at inhibiting only cap-dependent translation, not initiation factor-independent translation. Messenger RNAs (mRNAs) are transcribed from DNA, and serve as templates for the synthesis of proteins by ribosomal translation. Weak mRNAs contain long and highly structured untranslated regions at their 5’ end. This lengthy region makes it difficult for the translation machinery to determine where translation should begin. As a result, initiation factor proteins are required for translation of the message into protein. 
These weak mRNAs, or mRNAs that carry the code for proteins involved in the development of cancer cells, require cap-dependent translation which necessitates the cellular involvement of the eIFs. Examples of weak mRNAs include those that code for proliferation-related, and anti-apoptotic proteins. Strong mRNAs, in contrast, are translated with much less cellular machinery such as eIFs and generally code for biologically necessary proteins, such as those needed for the essential metabolic processes of a cell. Therapies such as the use of 4EGI-1 against cancer cells can thus be created such that their biologic targets include only the initiation factors involved in the production of weak mRNAs. Cap-dependent translation involves a series of steps that join the small and large ribosomal subunits at the start codon of mRNA. The initiation factor complex eIF4F is dependent upon the presence of a 5’ mRNA cap upstream from the start codon in order to initiate translation. Initiation factor independent translation does not require the association of initiation factors with the 5’ cap of mRNA. As an alternative, the associated ribosomal units are moved to the start location by internal ribosome entry site trans acting factors (ITAFs). It has been found that several cellular proteins that respond to apoptotic signals are translated in this fashion. Techniques of discovery When attempting to identify biological molecules that would disrupt the formation of the F complex, researchers developed a high-throughput fluorescence polarization (FP)-binding assay. In this assay, a small peptide of a known sequence was synthesized and tagged with a fluorescent molecule. This traceable peptide of sequence KYTYDELFQLK binds to the binding site of endogenous 4E-BPs on eIF4E. 16,000 compounds of known chemical composition were then tested in this assay. Compounds that displace the labeled peptide from eIF4E would yield a decrease in fluorescence polarization. The sequence of 4EGI-1 was such that it displaced the labeled peptide, thus demonstrating its affinity for the complex binding site on eIF4E. See also Rotavirus translation NSP3 (rotavirus) References Chlorobenzene derivatives 2-Nitrophenyl compounds Thiazoles Carboxylic acids
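The fluorescence polarization readout used in the screening assay has a simple definition: P = (Ipar − Iperp)/(Ipar + Iperp), where Ipar and Iperp are the emission intensities parallel and perpendicular to the excitation polarization. A bound, slowly tumbling labeled peptide gives a high P; once a competing compound such as 4EGI-1 displaces it, the free peptide tumbles quickly and P drops. The sketch below uses made-up intensity values purely to illustrate the calculation, not data from the actual screen.

```python
def polarization(i_parallel: float, i_perpendicular: float) -> float:
    """Fluorescence polarization P = (I_par - I_perp) / (I_par + I_perp)."""
    return (i_parallel - i_perpendicular) / (i_parallel + i_perpendicular)

# Made-up intensities: bound (slow-tumbling) vs displaced (fast-tumbling) tracer peptide.
bound_p = polarization(i_parallel=1000.0, i_perpendicular=600.0)      # ~0.25
displaced_p = polarization(i_parallel=1000.0, i_perpendicular=900.0)  # ~0.05

print(f"Bound tracer:     P = {bound_p:.2f}")
print(f"Displaced tracer: P = {displaced_p:.2f}")
```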
4EGI-1
[ "Chemistry" ]
1,231
[ "Carboxylic acids", "Functional groups" ]
992,733
https://en.wikipedia.org/wiki/Earth%20%28historical%20chemistry%29
Earths were defined by the Ancient Greeks as "materials that could not be changed further by the sources of heat then available". Several oxides were thought to be earths, such as aluminum oxide and magnesium oxide. It was not discovered until 1808 that these were not elements but metallic oxides. See also Rare earth metals Alkaline earth metals References Inorganic chemistry
Earth (historical chemistry)
[ "Chemistry" ]
76
[ "nan" ]
992,734
https://en.wikipedia.org/wiki/Winnecke%20Catalogue%20of%20Double%20Stars
Winnecke Catalogue of Double Stars is a list of seven "new" double stars published by German astronomer August Winnecke in Astronomische Nachrichten in 1869. Winnecke later noted that three of the double stars he catalogued had been discovered earlier (30 Eridani, Bradley 757, and 44 Cygni). The stars are sometimes given Winnecke designations (e.g. Winnecke 4), sometimes abbreviated to WNC. References External links Winnecke Objects from SEDS A biography of August Winnecke from SEDS Astronomical catalogues of stars Double stars
Winnecke Catalogue of Double Stars
[ "Astronomy" ]
127
[ "Astronomical catalogue stubs", "Astronomy stubs" ]
992,829
https://en.wikipedia.org/wiki/Microcell
A microcell is a cell in a mobile phone network served by a low power cellular base station (tower), covering a limited area such as a mall, a hotel, or a transportation hub. A microcell is usually larger than a picocell, though the distinction is not always clear. A microcell uses power control to limit the radius of its coverage area. Typically the range of a microcell is less than two kilometres, whereas standard base stations may have ranges of up to 35 kilometres (22 mi). A picocell, on the other hand, covers 200 meters or less, and a femtocell is on the order of 10 meters, although AT&T brands its femtocell product, despite its much shorter range, a "microcell". "AT&T 3G MicroCell" is used as a trademark, however, and does not necessarily refer to microcell technology. A microcellular network is a radio network composed of microcells. Rationale Like picocells, microcells are usually used to add network capacity in areas with very dense phone usage, such as train stations. Microcells are often deployed temporarily during sporting events and other occasions in which extra capacity is known to be needed at a specific location in advance. Cell size flexibility is a feature of 2G (and later) networks and is a significant part of how such networks have been able to improve capacity. Power controls implemented on digital networks make it easier to prevent interference from nearby cells using the same frequencies. By subdividing cells, and creating more cells to help serve high-density areas, a cellular network operator can optimize the use of spectrum and ensure capacity can grow. By comparison, older analog systems have fixed limits, beyond which attempts to subdivide cells simply would result in an unacceptable level of interference. Microcell/picocell-only networks Certain mobile phone systems, notably PHS and DECT, only provide microcellular (and picocellular) coverage. Microcellular systems are typically used to provide low-cost mobile phone systems in high-density environments such as large cities. PHS is deployed throughout major cities in Japan as an alternative to ordinary cellular service. DECT is used by many businesses to deploy private license-free microcellular networks within large campuses where wireline phone service is less useful. DECT is also used as a private, non-networked, cordless phone system where its low power profile ensures that nearby DECT systems do not interfere with each other. A forerunner of these types of network was the CT2 cordless phone system, which provided access to a looser network (without handover), again with base stations deployed in areas where large numbers of people might need to make calls. CT2's limitations ensured the concept never took off. CT2's successor, DECT, was provided with an interworking profile, GIP, so that GSM networks could make use of it for microcellular access, but in practice the success of GSM within Europe, and the ability of GSM to support microcells without using alternative technologies, meant GIP was rarely used, and DECT's use in general was limited to non-GSM private networks, including use as cordless phone systems. See also Femtocell GSM Picocell Small Cells External links Ericsson press release describing a GSM/UMTS picocell base station intended for residential use Nokia 7200 Tutorial including definition of "Micro Cellular Network" How To Install A Microcell Cell Phone Tower References Mobile telecommunications
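The size hierarchy described above can be summarized in a short, purely illustrative Python sketch. The thresholds are simply the approximate figures quoted in the text (femtocell on the order of 10 m, picocell up to roughly 200 m, microcell under 2 km, standard macrocell up to about 35 km); they are not standardized definitions, and the function name is invented for the example.

```python
def classify_cell(radius_m: float) -> str:
    """Rough classification of a base station by coverage radius.

    The thresholds follow the approximate figures quoted in the text
    (femtocell ~10 m, picocell <= 200 m, microcell < 2 km, macrocell
    up to ~35 km); real deployments vary and the boundaries overlap.
    """
    if radius_m <= 10:
        return "femtocell"
    if radius_m <= 200:
        return "picocell"
    if radius_m < 2_000:
        return "microcell"
    if radius_m <= 35_000:
        return "macrocell (standard base station)"
    return "beyond typical terrestrial cell range"


if __name__ == "__main__":
    for r in (8, 150, 1_200, 20_000):
        print(f"{r:>6} m -> {classify_cell(r)}")
```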
Microcell
[ "Technology" ]
737
[ "Mobile telecommunications" ]
993,276
https://en.wikipedia.org/wiki/Rimantadine
Rimantadine (INN, sold under the trade name Flumadine) is an orally administered antiviral drug used to treat, and in rare cases prevent, influenzavirus A infection. When taken within one to two days of developing symptoms, rimantadine can shorten the duration and moderate the severity of influenza. Rimantadine can mitigate symptoms, including fever. Both rimantadine and the similar drug amantadine are derivatives of adamantane. Rimantadine has been found to be more effective than amantadine, as treated patients display fewer symptoms. Rimantadine was approved by the Food and Drug Administration (FDA) in 1994. Rimantadine was approved for medical use in 1993. Seasonal H3N2 and 2009 pandemic flu samples tested have shown resistance to rimantadine, and it is no longer recommended for treatment of the flu. Medical uses Influenza A Rimantadine inhibits influenza activity by binding to amino acids in the M2 transmembrane channel and blocking proton transport across the M2 channel. Rimantadine is believed to inhibit the virus's replication, possibly by preventing the uncoating of the virus's protective shells, which are the envelope and capsid. The M2 channel is known to be essential for viral replication in the influenza virus. Genetic studies suggest that the viral M2 protein, an ion channel specified by the virion M2 gene, plays an important role in the susceptibility of influenza A virus to inhibition by rimantadine. Rimantadine is bound inside the pore to amantadine-specific amino acid binding sites through hydrogen bonding and van der Waals interactions. The ammonium group (with neighboring water molecules) is positioned towards the C-terminus, while the adamantane group is positioned towards the N-terminus, when the drug is bound inside the M2 pore. Influenza resistance Resistance to rimantadine can occur as a result of amino acid substitutions at certain locations in the transmembrane region of M2. This prevents binding of the antiviral to the channel. The binding of rimantadine at the S31N mutation site is shown in the image to the left, which shows rimantadine binding into lumenal (top) or peripheral (bottom) binding sites of the influenza M2 channel with Serine 31 (gold) or Asparagine 31 (blue). Rimantadine enantiomer interactions with M2 Rimantadine, when sold as Flumadine, is present as a racemic mixture; the R and S enantiomers are both present in the drug. Solid-state NMR studies have shown that the R enantiomer has a stronger binding affinity for the M2 channel pore than the S enantiomer of rimantadine. Antiviral assay and electrophysiology studies, however, show that there is no significant difference between the R and S enantiomers in binding affinity to amino acids in the M2 channel. Since the enantiomers have similar binding affinity, they also have a similar ability to block the channel pore and work as an effective antiviral. Rimantadine enantiomers R and S are pictured interacting with the M2 pore below to the right. This image shows that there is not a significant modeled difference between the R and S enantiomers. Parkinson's disease Rimantadine, like its antiviral cousin amantadine, possesses antiparkinsonian activity and can be used in the treatment of Parkinson's disease. However, in general, neither rimantadine nor amantadine is a preferred agent for this therapy and would be reserved for cases of the disease that are less responsive to front-line treatments. Others Rimantadine has been shown to be effective against other RNA-containing viruses. 
It can treat arboviruses such as Saint Louis encephalitis virus and Sindbis virus. Other viruses that can be treated with rimantadine include respiratory syncytial and parainfluenza viruses. Rimantadine has also been shown to treat chronic hepatitis C. Side effects Rimantadine can produce gastrointestinal and central nervous system adverse effects. Approximately 6% of patients (compared to 4% of patients taking a placebo) reported side effects at a dosage of 200 mg/d. Common side effects include: Nausea Upset stomach Nervousness Tiredness Lightheadedness Trouble sleeping (insomnia) Difficulty concentrating Confusion Rimantadine produces fewer CNS side effects than its sister drug amantadine. Interactions Taking paracetamol (acetaminophen, Tylenol) or acetylsalicylic acid (aspirin) while taking rimantadine is known to reduce the body's uptake of rimantadine by approximately 12%. Cimetidine also affects the body's uptake of rimantadine. Taking anticholinergic drugs with amantadine may exacerbate underlying seizure disorders and aggravate congestive heart failure. Pharmacology Pharmacodynamics The related drugs memantine and, to a much lesser extent, amantadine are known to act as NMDA receptor antagonists. The affinity of rimantadine for the NMDA receptor does not seem to have been reported. Analogues of rimantadine are known to act as NMDA receptor antagonists. Chemistry Synthesis 1-Carboxyadamantanones are reduced with sodium borohydride to give a racemic hydroxy acid. Excess methyllithium is then added to create methyl ketones, which, when reduced with lithium aluminum hydride, give the amine group. The synthesis pictured to the left is the route by which rimantadine is made in Europe. History Rimantadine was discovered in 1963 and patented in 1965 in the US by William W. Prichard at Du Pont & Co., Wilmington, Delaware (patent on the new chemical compound, 1965, and on the first method of synthesis, 1967). Prichard's methods of synthesis of rimantadine from the corresponding ketone oxime were based on its reduction with lithium aluminum hydride. See also Adapromine Bromantane Memantine Tromantadine References External links U.S. FDA press release announcing rimantadine's approval U.S. Center for Drug Evaluation and Research rimantadine description U.S. NIH rimantadine description U.S. CDC flu anti-viral treatment information Adamantanes Amines Anti-influenza agents Drugs with unknown mechanisms of action Proton channel blockers Suspected embryotoxicants Suspected teratogens
Rimantadine
[ "Chemistry" ]
1,327
[ "Amines", "Bases (chemistry)", "Functional groups" ]
993,328
https://en.wikipedia.org/wiki/Contingent%20cooperator
In game theory, a contingent cooperator is a person or agent who is willing to act in the collective interest, rather than his short-term selfish interest, if he observes a majority of the other agents in the collective doing the same. The apparent contradiction in this stance is resolved by game theory, which shows that in the right circumstances, cooperation with a sufficient number of other participants will have a better outcome for cooperators than pursuing short-term selfish interests. See also Cooperation Iterated prisoner's dilemma Tit for tat External links Ronald A. Heiner. Robust Evolution of Contingent Cooperation in Pure One-Shot Prisoners' Dilemmas. Discussion Papers Nos. 2002-09 and 2002–09, Center for the Study of Law and Economics discussion paper series, 2002. Christopher Wilson. “I Will if You Will: Facilitating Contingent Cooperation”, Optimum Online, Vol. 37, Issue 1, Apr 2007 Game theory
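The intuition can be illustrated with a small repeated public-goods simulation, sketched below in Python. Every numeric choice (payoff parameters, group size, the 10% cooperation rate of the non-contingent agents) is an assumption made for the example rather than part of the definition; the point is only that contingent cooperators who find themselves among enough like-minded agents sustain cooperation and end up with higher payoffs than when cooperation collapses.

```python
import random

# Illustrative public-goods payoffs (assumed values): each cooperator pays
# COST into a pool that is multiplied by MULTIPLIER and shared equally.
COST, MULTIPLIER = 1.0, 1.8


def payoffs(cooperating):
    """Per-agent payoff for one round, given each agent's cooperate/defect choice."""
    share = MULTIPLIER * COST * sum(cooperating) / len(cooperating)
    return [share - (COST if c else 0.0) for c in cooperating]


def simulate(n_agents=20, n_contingent=15, rounds=30, seed=1):
    """Average total payoff of the contingent cooperators.

    Contingent cooperators cooperate only if a majority cooperated in the
    previous round; the remaining agents cooperate at random 10% of the time.
    """
    rng = random.Random(seed)
    last = [True] * n_agents                 # optimistic first impression
    totals = [0.0] * n_agents
    for _ in range(rounds):
        majority = sum(last) > n_agents / 2
        current = [majority if i < n_contingent else rng.random() < 0.1
                   for i in range(n_agents)]
        totals = [t + p for t, p in zip(totals, payoffs(current))]
        last = current
    return sum(totals[:n_contingent]) / n_contingent


if __name__ == "__main__":
    # With enough contingent cooperators, cooperation is sustained round after
    # round; with too few, it collapses after the first round and payoffs drop.
    print("15 of 20 contingent:", round(simulate(n_contingent=15), 2))
    print(" 5 of 20 contingent:", round(simulate(n_contingent=5), 2))
```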
Contingent cooperator
[ "Mathematics" ]
187
[ "Game theory" ]
993,342
https://en.wikipedia.org/wiki/Mount%20Kineo
Mount Kineo is a prominent geological feature located on a peninsula that extends from the easterly shore of Moosehead Lake in the northern forest of Maine. With cliffs rising straight up from the water, it is the central feature of Mount Kineo State Park, a protected area managed by the Maine Department of Agriculture, Conservation and Forestry. History Native American Native Americans once traveled great distances to Mt. Kineo to acquire its rhyolite rock. This rhyolite is evidence of an igneous (volcanic) phase, although the mountain formations also contain slate and sandstone, demonstrating a sedimentary and metamorphic history as well. The rhyolite on Mount Kineo exhibits the physical properties of flint and was used extensively by indigenous peoples to make arrowheads and implements, and thus has often been referred to as "Kineo flint" in the literature; but this term is misleading in that it implies the rhyolite is a cryptocrystalline form of the mineral quartz of sedimentary origin. The rhyolite is actually an igneous extrusive material, the product of a volcanic phase that created the unique properties of this highly sought-after material. Mount Kineo is the country's largest known mass of this rock, which Indigenous people once used to craft arrowheads, hatchets, chisels, and other implements. Because implements made from the stone have been found in all parts of New England and even further south, it is evident that various tribes visited Mt. Kineo for centuries to obtain this material. Notable visitors In 1846, Henry David Thoreau visited the Moosehead Lake region, and the mountain's geological formation, Indian relics and traditions deeply interested him. Hotel resort The first Mt. Kineo House was built on the shores of Moosehead Lake in 1848, but burned in 1868. Rebuilt in 1870 and opened in 1871, the second Mt. Kineo House burned again in 1882. Designed by Arthur H. Vinal, the third Mt. Kineo House opened in 1884. In 1911, the Maine Central Railroad purchased the resort and engaged the Hiram Ricker Hotel Company to operate it. Then the largest inland waterfront hotel in America, it had accommodations for over 500 guests. In 1933, the railroad eliminated its Kineo branch, and in 1938 sold the hotel. It burned during demolition, and the old employee house was burned down in 2018. Gallery Features State park The state park offers various trails around the peninsula and to the mountain peak. The park can only be reached by water. The Mount Kineo Golf Course operates the seasonal water shuttle service from the public dock in Rockwood to Mount Kineo. The park is one of five Maine State Parks that were in the path of totality for the 2024 solar eclipse, with 3 minutes and 24 seconds of totality. Golf course Mount Kineo Golf Course is believed to be the second oldest in New England. It came under new ownership in 2009. Played on the original 1893 course, the classic lakeside layout has no sand traps, small greens, and the Kineo cliff as a backdrop for the scenic over-the-water par 3 hole #4. References External links Mount Kineo State Park Department of Agriculture, Conservation and Forestry Moosehead Lake Shoreline Guide & Map Department of Agriculture, Conservation and Forestry Mountains of Piscataquis County, Maine Mount Kineo Mountains of Maine State parks of Maine
Mount Kineo
[ "Biology" ]
684
[ "Old-growth forests", "Ecosystems" ]
993,381
https://en.wikipedia.org/wiki/NGC%20185
NGC 185 (also known as Caldwell 18) is a dwarf spheroidal galaxy located 2.08 million light-years from Earth, appearing in the constellation Cassiopeia. It is a member of the Local Group, and is a satellite of the Andromeda Galaxy (M31). NGC 185 was discovered by William Herschel on November 30, 1787, and he cataloged it "H II.707". John Herschel observed the object again in 1833 when he cataloged it as "h 35", and then in 1864 when he cataloged it as "GC 90" within his General Catalogue of Nebulae and Clusters. NGC 185 was first photographed between 1898 and 1900 by James Edward Keeler with the Crossley Reflector of Lick Observatory. Unlike most dwarf elliptical galaxies, NGC 185 contains young stellar clusters, and star formation proceeded at a low rate until the recent past. NGC 185 has an active galactic nucleus (AGN) and is usually classified as a type 2 Seyfert galaxy, though its status as a Seyfert is questioned. It is possibly the closest Seyfert galaxy to Earth, and is the only known Seyfert in the Local Group. Distance measurements At least two techniques have been used to measure distances to NGC 185. The surface brightness fluctuations distance measurement technique estimates distances to galaxies based on the graininess of their appearance. The distance measured to NGC 185 using this technique is 2.08 ± 0.15 Mly (640 ± 50 kpc). However, NGC 185 is close enough that the tip of the red giant branch (TRGB) method may be used to estimate its distance. The estimated distance to NGC 185 using this technique is 2.02 ± 0.2 Mly (620 ± 60 kpc). Star formation Martínez-Delgado, Aparicio, & Gallart (1999) looked into the star formation history of NGC 185 and found that the majority of star formation in NGC 185 happened at early times. In the last ~1 Gyr, stars have formed only near the center of this galaxy. Walter Baade discovered young blue objects within this galaxy in 1951, but these have turned out to be star clusters and not individual stars. A supernova remnant near the center was also discovered by Martínez-Delgado et al. See also List of Andromeda's satellite galaxies References External links Dwarf galaxies Dwarf elliptical galaxies Dwarf spheroidal galaxies Local Group Andromeda Subgroup Cassiopeia (constellation) 0185 00396 02329 018b 17871130
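The two distance estimates quoted above are given in both millions of light-years and kiloparsecs; the short Python sketch below shows the unit conversion (1 parsec is about 3.2616 light-years) and checks that the two methods agree within their stated uncertainties. Small differences from the figures in the text are due to rounding.

```python
LY_PER_PC = 3.2616  # light-years per parsec (standard conversion factor)


def kpc_to_mly(d_kpc):
    """Convert a distance in kiloparsecs to millions of light-years."""
    return d_kpc * 1_000 * LY_PER_PC / 1_000_000


# Distance estimates for NGC 185 as quoted in the text: (value, uncertainty) in kpc
estimates = {
    "surface brightness fluctuations": (640, 50),
    "tip of the red giant branch": (620, 60),
}

for method, (d, err) in estimates.items():
    print(f"{method}: {d} +/- {err} kpc ~= {kpc_to_mly(d):.2f} +/- {kpc_to_mly(err):.2f} Mly")

# The two estimates agree comfortably within their combined uncertainty.
(d1, e1), (d2, e2) = estimates.values()
print(f"difference: {abs(d1 - d2)} kpc vs combined uncertainty ~= {(e1**2 + e2**2) ** 0.5:.0f} kpc")
```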
NGC 185
[ "Astronomy" ]
523
[ "Cassiopeia (constellation)", "Constellations" ]
993,397
https://en.wikipedia.org/wiki/Carl%20von%20Linde
Carl Paul Gottfried von Linde (11 June 1842 – 16 November 1934) was a German scientist, engineer, and businessman. He discovered the refrigeration cycle and invented the first industrial-scale air separation and gas liquefaction processes, which led to the first reliable and efficient compressed-ammonia refrigerator in 1876. Linde was the founder of the company now known as Linde plc but formerly known (variously) as the Linde division of Union Carbide, Linde, Linde Air Products, Praxair, and others. This company is the world's largest producer of industrial gases and ushered in the creation of the global supply chain for industrial gases. Linde was a member of scientific and engineering associations, including being on the board of trustees of the Physikalisch-Technische Reichsanstalt and the Bavarian Academy of Sciences and Humanities. He was knighted in 1897 as Ritter von Linde. Biography Early years Born in , Bavaria as the son of a German-born minister and a Swedish mother, he was expected to follow in his father's footsteps but took another direction entirely. Von Linde's family moved to Munich in 1854, and eight years later he started a course in engineering at the Swiss Federal Institute of Technology in Zürich, Switzerland, where his teachers included Rudolf Clausius, Gustav Zeuner and Franz Reuleaux. In 1864, he was expelled before graduating for participating in a student protest, but Reuleaux found him a position as an apprentice at the Kottern cotton-spinning plant in Kempten. Linde stayed only a short time before moving first to Borsig in Berlin and then to the new Krauss locomotive factory in Munich, where he worked as head of the technical department. Von Linde married Helene Grimm in September 1866; their marriage lasted 53 years and they had six children. In 1868 Linde learned of a new university opening in Munich (the Technische Hochschule) and immediately applied for a job as a lecturer; he was accepted—at the age of 26—for the position. He became a full professor of mechanical engineering in 1872, and set up an engineering lab where students such as Rudolf Diesel studied. Middle years In 1870 and 1871, Linde published articles in the Bavarian Industry and Trade Journal describing his research findings in the area of refrigeration. Linde's first refrigeration plants were commercially successful, and development began to take up increasing amounts of his time. In 1879, he gave up his professorship and founded the Gesellschaft für Lindes Eismaschinen Aktiengesellschaft ("Linde's Ice Machine Company"), now Linde plc, in Wiesbaden, Germany. After a slow start in a difficult German economy, business picked up quickly in the 1880s. The efficient new refrigeration technology offered big benefits to the breweries, and by 1890 Linde had sold 747 machines. In addition to the breweries, other uses for the new technology were found in slaughterhouses and cold storage facilities all over Europe. In 1888, Linde moved back to Munich where he took up his professorship once more but was soon back at work developing new refrigeration cycles. In 1892, an order from the Guinness brewery in Dublin for a carbon dioxide liquefaction plant drove Linde's research into the area of low-temperature refrigeration, and in 1894 he started work on a process for the liquefaction of air. In 1895, Linde first achieved success, and filed for patent protection of his process (not approved in the US until 1903). 
In 1901, Linde began work on a technique to obtain pure oxygen and nitrogen based on the fractional distillation of liquefied air. By 1910, coworkers including Carl's son Friedrich had developed the Linde double-column process, variants of which are still in common use today. After a decade, Linde withdrew from managerial activities to refocus on research, and in 1895 he succeeded in liquefying air by first compressing it and then letting it expand rapidly, thereby cooling it. He then obtained oxygen and nitrogen from the liquid air by slow warming. In the early days of oxygen production, the biggest use by far for the gas was the oxyacetylene torch, invented in France in 1903, which revolutionized metal cutting and welding in the construction of ships, skyscrapers, and other iron and steel structures. In 1897, Linde was appointed to the Order of Merit of the Bavarian Crown and ennobled in accordance with its statutes. In addition to Linde's technical and engineering abilities, he was a successful entrepreneur. He formed many successful partnerships in Germany and internationally, working effectively to exploit the value of his patents and knowledge through licensing arrangements. In 1906, Linde negotiated a stake in Brin's Oxygen Company (later renamed The BOC Group) in exchange for rights to Linde's patents in the UK and other countries, and held a board position until 1914. Linde also formed the Linde Air Products Company in the USA in 1907, a company that passed through US Government control to Union Carbide in the 1940s and on to form Praxair. In 2005, Linde bought the BOC Group, and in 2019 Linde plc merged with Praxair, thus combining all three companies founded by Linde. Later years and death From around 1910, Linde started transferring responsibility for the company's operation to his sons Friedrich and Richard and to his son-in-law Rudolf Wucherer. He continued with supervisory board and advisory duties until his death. Carl von Linde died in Munich in November 1934 at the age of 92. Key inventions Linde's first refrigeration system used dimethyl ether as the refrigerant and was built by Maschinenfabrik Augsburg (now MAN AG) for the Spaten Brewery in 1873. He quickly moved on to develop more reliable ammonia-based cycles. These were early examples of vapor-compression refrigeration machines, and ammonia is still in wide use as a refrigerant in industrial applications. His apparatus for the liquefaction of air combined the cooling effect achieved by allowing a compressed gas to expand (the Joule–Thomson effect, first observed by James Prescott Joule and Lord Kelvin) with a counter-current heat exchange technique that used the cold air produced by expansion to chill ambient air entering the apparatus. Over a period of time this effect gradually cooled the apparatus and the air within it to the point of liquefaction. Linde followed the development of air liquefaction equipment with equipment that also separated air into its constituent parts using distillation processes. Linde's inventions and developments spurred advances in many areas of cryogenics, physics, chemistry and engineering. 
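The regenerative character of Linde's liquefaction process can be illustrated with rough numbers. In the Python sketch below, the Joule–Thomson coefficient and the operating pressures are assumptions chosen for the example (air near room temperature cools by roughly 0.2 K per bar of pressure drop), and the fixed-drop-per-pass model is a deliberate oversimplification; it only shows why a single expansion falls far short of liquefaction and why the counter-current heat exchanger matters.

```python
# Rough, assumed figures for the sketch: near room temperature, air cools by
# roughly 0.2 K per bar of pressure drop on Joule-Thomson expansion (the
# coefficient actually grows as the gas gets colder, which this ignores).
MU_JT_K_PER_BAR = 0.2
PRESSURE_DROP_BAR = 200 - 1          # expansion from ~200 bar down to ~1 bar

drop_per_pass = MU_JT_K_PER_BAR * PRESSURE_DROP_BAR
print(f"a single expansion cools the air by only ~{drop_per_pass:.0f} K")

# Counter-current heat exchange makes the process regenerative: the expanded,
# cooled gas pre-chills the incoming compressed air, so each pass starts colder
# than the last until part of the stream liquefies (roughly 80 K for air).
temperature, passes = 300.0, 0
while temperature > 80.0 and passes < 50:
    temperature -= drop_per_pass      # crude model: fixed temperature drop per pass
    passes += 1
print(f"~{passes} idealized passes needed to approach liquefaction temperature")
```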
Patents CH10704 – 31 January 1896 – Gasverflüssigungs-maschine (Machine for the liquefaction of gas) (in German) – Switzerland GB189512528 – 16 May 1896 – Process and Apparatus for Liquefying Gases or Gaseous Mixtures, and for Producing Cold, more particularly applicable for Separating Oxygen from Atmospheric Air – UK – 12 May 1903 – Linde oxygen process – US – 12 May 1903 – Equipment for Linde oxygen process – US – 25 July 1905 – Equipment for Linde oxygen and nitrogen process – US Awards Wilhelm Exner Medal, 1922 See also Air separation Cryogenic nitrogen plant Industrial gas Timeline of low-temperature technology German inventors and discoverers References Further reading Carl von Linde: "Aus meinem Leben und von meiner Arbeit" (Memoirs: "From my life and about my work"), first published 1916, reprinted by Springer 1984, . External links Linde AG (Homepage) 1842 births 1934 deaths Burials at Munich Waldfriedhof ETH Zurich alumni German company founders German industrialists German chemical industry people 19th-century German businesspeople 20th-century German businesspeople German mechanical engineers Engineers from Bavaria 19th-century German inventors Industrial gases People from the Kingdom of Bavaria People from Kulmbach (district) Academic staff of the Technical University of Munich Werner von Siemens Ring laureates Recipients of the Pour le Mérite (civil class) Linde plc people
Carl von Linde
[ "Chemistry" ]
1,698
[ "Chemical process engineering", "Industrial gases" ]
993,407
https://en.wikipedia.org/wiki/Hibakusha
Hibakusha is a word of Japanese origin generally designating the people affected by the atomic bombings of Hiroshima and Nagasaki by the United States at the end of World War II. Definition The word is Japanese, originally written in kanji. While the term has previously been used in Japanese to designate any victim of bombs, its spread worldwide led to a definition concerning the survivors of the atomic bombs dropped on Japan by the United States Army Air Forces on 6 and 9 August 1945. Anti-nuclear movements and associations, among others, spread the term to designate any direct victim of nuclear disaster, including the victims of the nuclear plant accident in Fukushima. They therefore prefer an alternative written form that replaces one character with a homophonous one. This broader definition has tended to be adopted since 2011. The legal status of hibakusha is allocated to certain people, mainly by the Japanese government. Official recognition The Atomic Bomb Survivors Relief Law defines hibakusha as people who fall into one or more of the following categories: those within a few kilometers of the hypocenters of the bombs; those within a short distance of the hypocenters within two weeks of the bombings; those exposed to radiation from fallout; or those not yet born but carried by pregnant women in any of the three previously mentioned categories. The Japanese government has recognized about 650,000 people as hibakusha. Of these, 106,825 were still alive as of the most recent count, mostly in Japan, and in 2024 they were expected to surpass the number of surviving US World War II veterans. The government of Japan recognizes about 1% of these as having illnesses caused by radiation. Hibakusha are entitled to government support. They receive a certain amount of allowance per month, and the ones certified as suffering from bomb-related diseases receive a special medical allowance. The memorials in Hiroshima and Nagasaki contain lists of the names of the hibakusha who are known to have died since the bombings. Updated annually on the anniversaries of the bombings, the memorials record the names of more than 540,000 hibakusha: 344,306 in Hiroshima and 198,785 in Nagasaki. In 1957, the Japanese Parliament passed a law providing free medical care for hibakusha. During the 1970s, non-Japanese who suffered from those atomic attacks began to demand the right to free medical care and the right to stay in Japan for that purpose. In 1978, the Japanese Supreme Court ruled that such persons were entitled to free medical care while staying in Japan. Korean survivors During the war, Korea had been under Japanese imperial rule, and many Koreans were forced to go to Hiroshima and Nagasaki as a labor force. According to recent estimates, about 20,000 Koreans were killed in Hiroshima and about 2,000 died in Nagasaki. It is estimated that one in seven of the Hiroshima victims was of Korean ancestry. For many years, Koreans had a difficult time fighting for recognition as atomic bomb victims and were denied health benefits. However, most issues have been addressed in recent years through lawsuits. Japanese-American survivors It was a common practice before the war for American Issei, or first-generation immigrants, to send their children on extended trips to Japan to study or visit relatives. More Japanese immigrated to the U.S. from Hiroshima than any other prefecture, and Nagasaki also sent many immigrants to Hawai'i and the mainland. There was, therefore, a sizable population of American-born Nisei and Kibei living in their parents' hometowns of Hiroshima and Nagasaki at the time of the atomic bombings. 
The actual number of Japanese Americans affected by the bombings is unknown – although estimates put approximately 11,000 in Hiroshima city alone – but some 3,000 of them are known to have survived and returned to the U.S. after the war. A second group counted among Japanese American survivors are those who came to the U.S. in a later wave of Japanese immigration during the 1950s and 1960s. Most in this group were born in Japan and migrated to the U.S. in search of educational and work opportunities that were scarce in post-war Japan. Many were war brides, or Japanese women who had married American men connected with the U.S. military's occupation of Japan. As of 2014, there are about 1,000 recorded Japanese American hibakusha living in the United States. They receive monetary support from the Japanese government and biannual medical checkups with Hiroshima and Nagasaki doctors familiar with the particular concerns of atomic bomb survivors. The U.S. government provides no support to Japanese American hibakusha. Other foreign survivors While one British Commonwealth citizen and seven Dutch POWs (two names known) died in the Nagasaki bombing, at least two POWs reportedly died postwar from cancer thought to have been caused by the atomic bomb. One American POW, the Navajo Joe Kieyoomia, was in Nagasaki at the time of the bombing but survived, reportedly having been shielded from the effects of the bomb by the concrete walls of his cell. Double survivors People who suffered the effects of both bombings are known as double hibakusha in Japan. These people were in Hiroshima on 6 August 1945, and within two days managed to reach Nagasaki. A documentary called Twice Bombed, Twice Survived: The Doubly Atomic Bombed of Hiroshima and Nagasaki was produced in 2006. The producers found 165 people who were victims of both bombings, and the production was screened at the United Nations. On 24 March 2009, the Japanese government officially recognized Tsutomu Yamaguchi (1916–2010) as a double hibakusha. Yamaguchi was confirmed to have been within a few kilometers of ground zero in Hiroshima on a business trip when the bomb was detonated. He was seriously burnt on his left side and spent the night in Hiroshima. He got back to his home city of Nagasaki on 8 August, a day before the bomb in Nagasaki was dropped, and he was exposed to residual radiation while searching for his relatives. He was the first officially recognized survivor of both bombings. Yamaguchi died at the age of 93 on 4 January 2010 of stomach cancer. Discrimination Hibakusha and their children were (and still are) victims of severe discrimination when it comes to prospects of marriage or work due to public ignorance about the consequences of radiation sickness, with much of the public believing it to be hereditary or even contagious. This is despite the fact that no statistically demonstrable increase of birth defects or congenital malformations was found among the later conceived children born to survivors of the nuclear weapons used at Hiroshima and Nagasaki, or found in the later conceived children of cancer survivors who had previously received radiotherapy. The surviving women of Hiroshima and Nagasaki who could conceive and who were exposed to substantial amounts of radiation went on to have children with no higher incidence of abnormalities or birth defects than the rate observed in the Japanese population. Studs Terkel's book The Good War includes a conversation with two hibakusha. 
The postscript observes: The Japan Confederation of A- and H-Bomb Sufferers Organizations (Nihon Hidankyo) is a group formed by hibakusha in 1956 with the goals of pressuring the Japanese government to improve support of the victims and lobbying governments for the abolition of nuclear weapons. Some estimates are that 140,000 people in Hiroshima (38.9% of the population) and 70,000 people in Nagasaki (28.0% of the population) died in 1945, but how many died immediately as a result of exposure to the blast, heat, or due to radiation, is unknown. One Atomic Bomb Casualty Commission (ABCC) report discusses 6,882 people examined in Hiroshima, and 6,621 people examined in Nagasaki, who were largely within 2000 meters from the hypocenter, who suffered injuries from the blast and heat but died from complications frequently compounded by acute radiation syndrome (ARS), all within about 20–30 days. In the rare cases of survival for individuals who were in utero at the time of the bombing and yet who still were close enough to be exposed to less than or equal to 0.57 Gy, no difference in their cognitive abilities was found, suggesting a threshold dose for pregnancies below which there is no danger. In 50 or so children who survived the gestational process and were exposed to more than this dose, putting them within about 1000 meters from the hypocenter, microcephaly was observed; this is the only elevated birth defect issue observed in the hibakusha, occurring in approximately 50 in-utero individuals who were situated less than 1000 meters from the bombings. In a manner dependent on their distance from the hypocenter, in the 1987 Life Span Study, conducted by the Radiation Effects Research Foundation, a statistical excess of 507 cancers, of undefined lethality, was observed in 79,972 hibakusha who were still living between 1958 and 1987 and who took part in the study. An epidemiology study by the RERF estimates that from 1950 to 2000, 46% of leukemia deaths and 11% of solid cancers, of unspecified lethality, could be due to radiation from the bombs, with the statistical excess being estimated at 200 leukemia deaths and 1,700 solid cancers of undeclared lethality. Health Effects of nuclear explosions on human health Radiation poisoning Notable Hiroshima Hiroshima Maidens – 25 young women who had surgery in the US after the war Hubert Schiffer – Jesuit priest at Hiroshima Ikuo Hirayama – of Hiroshima at 15 years old, painter Isao Harimoto – of Hiroshima at 5 years old, ethnic Korean baseball professional player Issey Miyake – of Hiroshima at 7 years old, clothing designer Julia Canny – Irish nun who survived Hiroshima and aided survivors Keiji Nakazawa – of Hiroshima at 6 years old, author of Barefoot Gen and other anti-war manga. Kiyoshi Tanimoto – at 36 years old, Methodist minister, anti-nuclear activist, helped Hiroshima Maidens and to gain social rights. 
Peace prize named after him Koko Kondo – of Hiroshima at 1 year old, notable peace activist and daughter of Reverend Kiyoshi Tanimoto Masaru Kawasaki – of Hiroshima at 19 years old, composer of the dirge performed at every Hiroshima Peace Memorial Ceremony since 1975 Michihiko Hachiya – of Hiroshima at 42 years old, physician specialized in , writer of Hiroshima Diary Sadako Kurihara – of Hiroshima at 32 years old, poet, anti-nuclear activist, founder of () Sadako Sasaki – at 2 years old, well known for her goal to fold a thousand origami cranes in order to cure herself of leukemia and as a symbol of peace Sankichi Tōge – at 28 years old, poet and militant Setsuko Thurlow – of Hiroshima at 13 years old, anti-nuclear activist, ambassador, and keynote speaker at the reception of the Nobel Peace Prize of the International Campaign to Abolish Nuclear Weapons Shigeaki Mori – a historian of allied prisoners of war Shigeko Sasamori – advocate for peace and nuclear disarmament Shinoe Shōda – at 34 years old, writer and poet Shuntaro Hida – of Hiroshima at 28 years old, physician specialized in treating Sunao Tsuboi – of Hiroshima at 20 years old, teacher and activist with Japan Confederation of A- and H-Bomb Sufferers Organizations Tamiki Hara – of Hiroshima at 39 years old, poet, writer, and university professor Tomotaka Tasaka – of Hiroshima at 43 years old, film director and scriptwriter Yoko Ota – of Hiroshima at 38 years old, writer Yoshito Matsushige – of Hiroshima at 32 years old, has taken the only five pictures known the day of the atomic bombing of Hiroshima Shigeru Nakamura – of Hiroshima at 34 years old, supercentenarian, former oldest living Japanese man (11 January 1911 – 15 November 2022). Nagasaki Joe Kieyoomia – an American Navajo prisoner of war who survived both the Bataan Death March and the Nagasaki bombing Kyoko Hayashi – of Nagasaki at 14 years old, writer Osamu Shimomura – organic chemist and marine biologist; Nobel Prize in Chemistry in 2008 Sumiteru Taniguchi – at 16 years old, known for a picture of him with his back skinless taken by a Marine; anti-nuclear peace activist, president of the council of the A-Bomb of Nagasaki, co-president of the Japan Confederation of A- and H-Bomb Sufferers Organizations in 2010 Takashi Nagai – of Nagasaki at 38 years old, doctor and author of The Bells of Nagasaki Terumi Tanaka – of Nagasaki at 13 years old, engineer and associated professor at the University of Tohoku, an activist with Japan Confederation of A- and H-Bomb Sufferers Organizations Yōsuke Yamahata – military photographer, not a direct victim of the Bomb but took pictures of Nagasaki the next day. Died of cancer probably due to radiation. Can be considered a according to the ABCC classification. Hiroshima and Nagasaki Tsutomu Yamaguchi – the first person officially recognized to have survived both the Hiroshima and Nagasaki atomic bombings. 
Artistic representations and documentaries Literature (原爆文学 Genbaku bungaku) literature Summer Flowers (), Tamiki Hara, 1946 From the Ruins (), Tamiki Hara, 1947 Prelude to Annihilation (), Tamiki Hara, 1949 City of Corpses (), Yōko Ōta, 1948 Human Rags (), Yōko Ōta, 1951 Penitence (), Shinoe Shōda, 1947 – collection of tanka poems Bringing Forth New Life (), Sadako Kurihara, 1946 I, A Hiroshima Witness (), Sadako Kurihara, 1967 Documents about Hiroshima Twenty-Four Years Later (), Sadako Kurihara, 1970 Ritual of Death (), Kyōko Hayashi, 1975 Poems of the Atomic Bomb (), Sankichi Tōge, 1951 The bells of Nagasaki (), Takashi Nagai, 1949 Little boy: stories of days in Hiroshima, Shuntaro Hida, 1984 Letters from the end of the world: a firsthand account of the bombing of Hiroshima, Toyofumi Ogura, 1997 The day the sun fell – I was 14 years old in Hiroshima, Hashizume Bun, 2007 Yoko's Diary: The Life of a Young Girl in Hiroshima During World War II, Yoko Hosokawa Hiroshima Diary, Michihiko Hachiya, 1955 One year ago Hiroshima (), Hisashi Tohara, 1946 Non- literature Hiroshima notes (), Kenzaburô Ooe, 1965 Black Rain (), Masuji Ibuse, 1965 Hiroshima, Makoto Oda, 1981 (), Yūichi Seirai, 2007 Sadako and the Thousand Paper Cranes, Eleanor Coerr, 1977 (Ashes of Hiroshima), Othman Puteh and Abdul Razak Abdul Hamid, 1987 Burnt Shadows, Kamila Shamsie, 2009 Nagasaki: Life After Nuclear War, Susan Southard, 2015 Hiroshima, John Hersey, 1946 Hibakusha (2015 short story) Manga and anime Barefoot Gen (), Keiji Nakazawa, 1973–1974, 10 volumes (also adapted in film in 1976, 1983 and a TV drama in 2007) Town of Evening Calm, Country of Cherry Blossoms (), Fumiyo Kōno, 2003–2004 (adapted into novel and film in 2007) Hibakusha, Steve Nguyen and Choz Belen, 2012 Bōshi (), Hiroshi Kurosaki, NHK, 2008, 90 minutes In This Corner of the World (), Masao Maruyama, MAPPA, 2016 Films Children of Hiroshima (), Kaneto Shindo, 1952 Frankenstein vs. 
Baragon (), Ishirō Honda and Eiji Tsuburaya, 1965 Black Rain (), Shohei Imamura, 1989 The bells of Nagasaki (), Hideo Ōba, 1950 Rhapsody in August (), Akira Kurosawa, 1991 Hiroshima mon amour, Alain Resnais, 1959 Hiroshima, Koreyoshi Kurahara and Roger Spottiswoode, 1995 Touch, Baltasar Kormákur, 2024 Music Silent Planet, Darkstrand (), 2013 Masaru Kawazaki, March forward for peace, 1966 Krzysztof Penderecki, Threnody to the Victims of Hiroshima, 1961 Masao Ohki, Symphony no 5 "Hiroshima", 1953 Toshio Hosokawa, Voiceless Voice in Hiroshima, 1989–2001 Fine art painting (), Ikuo Hirayama Carl Randall (UK artist who met and painted portraits of in Hiroshima, 2006–2009) Performing arts characters are featured in several Japanese plays including The Elephant by Minoru Betsuyaku Documentaries No More Hiroshima, Martin Duckworth, 1984 Hiroshima: The real History, Lucy van Beek, Brook Lapping Productions 2015 Hiroshima Witness, Hiroshima Peace Cultural Center and NHK, 1986 Hiroshima, Paul Wilmshurst, BBC, 2005, 89 minutes White Light/Black Rain: The Destruction of Hiroshima and Nagasaki, Steven Okazaki, HBO, 2007, 86 minutes Als die Sonne vom Himmel fiel, Aya Domenig, 2015, 78 minutes Atomic Wounds, Journeyman Pictures, 2008 See also Atomic veteran Atomic People Castle Bravo Doomsday clock Fat Man H Bomb Hibakujumoku Hiroshima Peace memorial park Little Boy Manhattan project Nihon Hidankyo Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) SCOJ 2005 No.1977 Treaty on the Prohibition of Nuclear Weapons – Preamble References Further reading Terkel, Studs, The Good War, Random House:New York, 1984. Hersey, John, Hiroshima, A.A. Knopf: New York, 1985. External links Nagasaki Archive White Light/Black Rain official website (film) Voices of the survivors from Hiroshima and Nagasaki Voice of Hibakusha "Eye-witness accounts of the bombing of Hiroshima" Hibakusha, fifteen years after the bomb (CBC TV news report) Virtual Museum " testimonies, coupled with photographs, memoirs and paintings, give a human face to the tragedy of the A-bombing. Starting in 1986, the Hiroshima Peace Culture Foundation initiated a project to record giving testimonies on video. In each year since, the testimonies of 50 people have been recorded and edited into 20-minute segments per person" The Voice of Hibakusha Atomic Bomb Casualty Commission ABCC Radiation Effects Research Foundation website "Survival in Nagasaki." "Living with a double A-bomb surviving parent." "Fight against the A-bomb." "Contribute actively to peace." Hibakusha Testimonies – Online reprints of published sources including excerpts from the Japan Times. Hibakusha Stories "Initiative of Youth Arts New York in partnership with Peace Boat, the Hiroshima Peace Culture Foundation, the United Nations Office for Disarmament Affairs, and New York Theatre Workshop." A-Bomb Survivors: Women Speak Out for Peace – Online DVD Testimonies of Hiroshima and Nagasaki Hibakusha with subtitles in 6 different languages. Literary Fallout: The legacies of Hiroshima and Nagasaki Three Quarters of A Century After Hiroshima and Nagasaki: The Hibakusha – Brave Survivors Working for a Nuclear-Free World – Online exhibit launched in 2023 by the No More Hiroshima & Nagasaki Museum. Nuclear warfare Atomic bombings of Hiroshima and Nagasaki Radiation health effects Survivors of disasters Anti–nuclear weapons movement Zainichi Korean history
Hibakusha
[ "Chemistry", "Materials_science" ]
3,890
[ "Radiation effects", "Radiation health effects", "Radioactivity", "Nuclear warfare" ]
993,408
https://en.wikipedia.org/wiki/Blinkenlights
In computer jargon, blinkenlights are diagnostic lights on front panels of old mainframe computers. More recently the term applies to status lights of modern network hardware (modems, network hubs, etc.). Blinkenlights disappeared from more recent computers for a number of reasons, the most important being the fact that with faster CPUs a human can no longer interpret the processes in the computer on the fly. Though more sophisticated UI mechanisms have since been developed, blinkenlights may still be present as additional status indicators and familiar skeuomorphs. Etymology The term has its origins in hacker humor and is taken from a famous (often blackletter-Gothic) mock warning sign written in a mangled form of German. Variants of the sign were relatively common in computer rooms in English-speaking countries from the early 1960s. One version read: Some versions of the sign end with the word blinkenlights. The sign dates back as far as 1955 at IBM, and a copy was reported at London University's Atlas computer facility. Although the sign might initially appear to be in German and uses an approximation of German grammar, it is composed largely of words that are either near-homonyms of English words or (in the cases of the longer words) actual English words that are rendered in a faux-German spelling. As such, the sign is generally comprehensible by many English speakers regardless of whether they have any fluency in German, but mostly incomprehensible to German speakers with no knowledge of English. Much of the humor in these signs was their intentionally incorrect language. Michael J. Preston relates the sign as being posted above photocopiers in offices as a warning not to mess with the machine in the first print reference from 1974. The sign is also reported to have been seen on an electron microscope at the Cavendish Laboratory in the 1950s. Such pseudo-German parodies were common in Allied machine shops during and following World War II, and an example photocopy is shown in the Jargon File. The Jargon File also mentions that German hackers had in turn developed their own versions of the blinkenlights poster, in broken English: Actual blinkenlights The bits and digits in the earliest mechanical and vacuum tube-based computers were typically large and few, making it easy to see (and often hear) activity. Afterwards, for decades, computers incorporated arrays of indicator lamps in their control panels, indicating the values carried on the address, data, and other internal buses, and in various registers. These could be used for diagnosing or "single-stepping" a halted machine, but even with the machine operating normally, a skilled operator could interpret the high-speed blur of the lamps to tell which section of a large program was executing, whether the program was caught in an endless loop, and so on. With rising processor clock rates, increased memory sizes, and improved interactive debugging tools, such panel lights gradually lost their usefulness, though today most devices have indicators showing power on/off status, hard disk activity, network activity, and other indicators of "signs of life". The Connection Machine, a -processor parallel computer designed in the mid-1980s, was a black cube with one side covered with a grid of red blinkenlights; the sales demo had them evolving Conway's Game of Life patterns. The two CPU load monitors on the front of BeBoxes were also called "blinkenlights". 
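The way an operator read a front panel, one lamp per bit of a register or bus, can be mimicked in a few lines of Python. This is purely a toy illustration: the lamp glyphs, the 16-bit width and the address range are arbitrary choices, not a model of any particular machine.

```python
def blinkenlights(value, width=16):
    """Render an integer as a row of front-panel lamps, most significant bit first.

    '#' marks a lit lamp (bit set) and '.' an unlit one: a toy stand-in for the
    incandescent indicators that mirrored a machine's address and data buses.
    """
    return " ".join("#" if (value >> bit) & 1 else "."
                    for bit in range(width - 1, -1, -1))


if __name__ == "__main__":
    # A tight loop over consecutive addresses produces the kind of steadily
    # marching pattern a practiced operator could recognize at a glance.
    for address in range(0x1FFC, 0x2003):
        print(f"{address:04X}  {blinkenlights(address)}")
```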
This word gave its name to several projects, including screen savers, hardware gadgets, and other nostalgic endeavours. Notable such enterprises include, but are not limited to, the German Chaos Computer Club's Project Blinkenlights and the Blinkenlights Archaeological Institute. See also Faxlore Macaronic language References Further reading External links DEC indicator panels Computer jargon Computer humour Tech humour Macaronic language
Blinkenlights
[ "Technology" ]
786
[ "Computing terminology", "Computer jargon", "Natural language and computing" ]
993,509
https://en.wikipedia.org/wiki/Science%20wars
The science wars were a series of scholarly and public discussions in the 1990s over the social place of science in making authoritative claims about the world. Encyclopedia.com, citing the Encyclopedia of Science and Religion, describes the science wars as the "complex of discussions about the way the sciences are related to or incarnated in culture, history, and practice. [...] [which] came to be called a 'war' in the mid 1990s because of a strong polarization over questions of legitimacy and authority. One side [...] is concerned with defending the authority of science as rooted in objective evidence and rational procedures. The other side argues that it is legitimate and fruitful to study the sciences as institutions and social-technical networks whose development is influenced by linguistics, economics, politics, and other factors surrounding formally rational procedures and isolated established facts." The science wars took place principally in the United States in the 1990s in the academic and mainstream press. Scientific realists (such as Norman Levitt, Paul R. Gross, Jean Bricmont and Alan Sokal) accused many writers, whom they described as 'postmodernist', of having effectively rejected scientific objectivity, the scientific method, empiricism, and scientific knowledge. Though much of the theory associated with 'postmodernism' (see post-structuralism) did not make any interventions into the natural sciences, the scientific realists took aim at its general influence. The scientific realists argued that large swathes of scholarship, amounting to a rejection of objectivity and realism, had been influenced by major 20th-century post-structuralist philosophers (such as Jacques Derrida, Gilles Deleuze, Jean-François Lyotard and others), whose work they declare to be incomprehensible or meaningless. They implicate a broad range of fields in this trend, including cultural studies, feminist studies, comparative literature, media studies, and especially science and technology studies, which does apply such methods to the study of science. Solid-state physicist N. David Mermin understands the science wars as a series of exchanges between scientists and "sociologists, historians and literary critics" who the scientists "thought ...were ludicrously ignorant of science, making all kinds of nonsensical pronouncements. The other side dismissed these charges as naive, ill informed and self-serving." Sociologist Harry Collins wrote that the "science wars" began "in the early 1990s with attacks by natural scientists or ex-natural scientists who had assumed the role of spokespersons for science. The subject of the attacks was the analysis of science coming out of literary studies and the social sciences." Historical background Until the mid-20th century, the philosophy of science had concentrated on the viability of scientific method and knowledge, proposing justifications for the truth of scientific theories and observations and attempting to discover at a philosophical level why science worked. Karl Popper, an early opponent of logical positivism in the 20th century, repudiated the classical observationalist/inductivist form of scientific method in favour of empirical falsification. He is also known for his opposition to the classical justificationist/verificationist account of knowledge which he replaced with critical rationalism, "the first non justificational philosophy of criticism in the history of philosophy". 
His criticisms of scientific method were adopted by several postmodernist critiques. A number of 20th-century philosophers maintained that logical models of pure science do not apply to actual scientific practice. It was the publication of Thomas Kuhn's The Structure of Scientific Revolutions in 1962, however, which fully opened the study of science to new disciplines by suggesting that the evolution of science was in part socially determined and that it did not operate under the simple logical laws put forward by the logical positivist school of philosophy. Kuhn described the development of scientific knowledge not as a linear increase in truth and understanding, but as a series of periodic revolutions which overturned the old scientific order and replaced it with new orders (what he called "paradigms"). Kuhn attributed much of this process to the interactions and strategies of the human participants in science rather than its own innate logical structure. (See sociology of scientific knowledge). Some interpreted Kuhn's ideas to mean that scientific theories were, either wholly or in part, social constructs, which many interpreted as diminishing the claim of science to representing objective reality, and that reality had a lesser or potentially irrelevant role in the formation of scientific theories. In 1971, Jerome Ravetz published Scientific knowledge and its social problems, a book describing the role that the scientific community, as a social construct, plays in accepting or rejecting objective scientific knowledge. Postmodernism A number of different philosophical and historical schools, often grouped together as "postmodernism", began reinterpreting scientific achievements of the past through the lens of the practitioners, often positing the influence of politics and economics in the development of scientific theories in addition to scientific observations. Rather than being presented as working entirely from positivistic observations, many scientists of the past were scrutinized for their connection to issues of gender, sexual orientation, race, and class. Some more radical philosophers, such as Paul Feyerabend, argued that scientific theories were themselves incoherent and that other forms of knowledge production (such as those used in religion) served the material and spiritual needs of their practitioners with equal validity as did scientific explanations. Imre Lakatos advanced a midway view between the "postmodernist" and "realist" camps. For Lakatos, scientific knowledge is progressive; however, it progresses not by a strict linear path where every new element builds upon and incorporates every other, but by an approach where a "core" of a "research program" is established by auxiliary theories which can themselves be falsified or replaced without compromising the core. Social conditions and attitudes affect how strongly one attempts to resist falsification for the core of a program, but the program has an objective status based on its relative explanatory power. Resisting falsification only becomes ad-hoc and damaging to knowledge when an alternate program with greater explanatory power is rejected in favor of another with less. But because it is changing a theoretical core, which has broad ramifications for other areas of study, accepting a new program is also revolutionary as well as progressive. Thus, for Lakatos the character of science is that of being both revolutionary and progressive; both socially informed and objectively justified. 
The science wars In Higher Superstition: The Academic Left and Its Quarrels With Science (1994), scientists Paul R. Gross and Norman Levitt accused postmodernists of anti-intellectualism, presented the shortcomings of relativism, and suggested that postmodernists knew little about the scientific theories they criticized and practiced poor scholarship for political reasons. The authors insist that the "science critics" misunderstood the theoretical approaches they criticized, given their "caricature, misreading, and condescension, [rather] than argument". The book sparked the so-called science wars. Higher Superstition inspired a New York Academy of Sciences conference titled The Flight from Science and Reason, organized by Gross, Levitt, and Gerald Holton. Attendees of the conference were critical of the polemical approach of Gross and Levitt, yet agreed upon the intellectual inconsistency of how laymen, non-scientist, and social studies intellectuals dealt with science. Social Text In 1996, Social Text, a Duke University publication of postmodern critical theory, compiled a "Science Wars" issue containing brief articles by postmodernist academics in the social sciences and the humanities, that emphasized the roles of society and politics in science. In the introduction to the issue, the Social Text editor, Andrew Ross, said that the attack upon science studies was a conservative reaction to reduced funding for scientific research, characterizing the Flight from Science and Reason conference as an attempted "linking together a host of dangerous threats: scientific creationism, New Age alternatives and cults, astrology, UFO-ism, the radical science movement, postmodernism, and critical science studies, alongside the ready-made historical specters of Aryan-Nazi science and the Soviet error of Lysenkoism" that "degenerated into name-calling". The historian Dorothy Nelkin characterised Gross and Levitt's vigorous response as a "call to arms in response to the failed marriage of Science and the State"—in contrast to the scientists' historical tendency to avoid participating in perceived political threats, such as creation science, the animal rights movement, and anti-abortionists' attempts to curb fetal research. At the end of the Soviet–American Cold War (1945–91), military funding of science declined, while funding agencies demanded accountability, and research became directed by private interests. Nelkin suggested that postmodernist critics were "convenient scapegoats" who diverted attention from problems in science. Also in 1996, physicist Alan Sokal had submitted an article to Social Text titled "Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity", which proposed that quantum gravity is a linguistic and social construct and that quantum physics supports postmodernist criticisms of scientific objectivity. After holding the article back from earlier issues due to Sokal's refusal to consider revisions, the staff published it in the "Science Wars" issue as a relevant contribution. Later, in the May 1996 issue of Lingua Franca, in the article "A Physicist Experiments With Cultural Studies", Sokal exposed his parody-article, "Transgressing the Boundaries" as an experiment testing the intellectual rigor of an academic journal that would "publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors' ideological preconceptions". 
The matter became known as the "Sokal Affair" and brought greater public attention to the wider conflict. Jacques Derrida, a frequent target of "anti-relativist" criticism in the wake of Sokal's article, responded to the hoax in "Sokal and Bricmont Aren't Serious", first published in Le Monde. He called Sokal's action sad (triste) for having overshadowed Sokal's mathematical work and ruined the chance to sort out controversies of scientific objectivity in a careful way. Derrida went on to fault him and co-author Jean Bricmont for what he considered an act of intellectual bad faith: they had accused him of scientific incompetence in the English edition of a follow-up book (an accusation several English reviewers noted), but deleted the accusation from the French edition and denied that it had ever existed. He concluded, as the title indicates, that Sokal was not serious in his approach, but had used the spectacle of a "quick practical joke" to displace the scholarship Derrida believed the public deserved. Continued conflict In the first few years after the 'Science Wars' edition of Social Text, the seriousness and volume of discussion increased significantly, much of it focused on reconciling the 'warring' camps of postmodernists and scientists. One significant event was the 'Science and Its Critics' conference in early 1997; it brought together scientists and scholars who study science and featured Alan Sokal and Steve Fuller as keynote speakers. The conference generated the final wave of substantial press coverage (in both news media and scientific journals), though it by no means resolved the fundamental issues of social construction and objectivity in science. Other attempts have been made to reconcile the two camps. Mike Nauenberg, a physicist at the University of California, Santa Cruz, organized a small conference in May 1997 that was attended by scientists and sociologists of science alike, among them Alan Sokal, N. David Mermin and Harry Collins. In the same year, Collins organized the Southampton Peace Workshop, which again brought together a broad range of scientists and sociologists. The Peace Workshop gave rise to the idea of a book that intended to map out some of the arguments between the disputing parties. The One Culture?: A Conversation about Science, edited by chemist Jay A. Labinger and sociologist Harry Collins, was eventually published in 2001. The book's title is a reference to C. P. Snow's The Two Cultures. It contains contributions from authors such as Alan Sokal, Jean Bricmont, Steven Weinberg, and Steven Shapin. Other significant publications related to the science wars include Fashionable Nonsense by Sokal and Jean Bricmont (1998), The Social Construction of What? by Ian Hacking (1999), and Who Rules in Science? by James Robert Brown (2001). To John C. Baez, the Bogdanov Affair in 2002 served as a bookend to the Sokal controversy: the review, acceptance, and publication of papers, later alleged to be nonsense, in peer-reviewed physics journals. Cornell physics professor Paul Ginsparg argued that the cases are not at all similar and that the fact that some journals and scientific institutions have low standards is "hardly a revelation". The new editor-in-chief of the journal Annals of Physics, who was appointed after the controversy along with a new editorial staff, said that the standards of the journal had been poor in the period leading up to the publication because the previous editor had become ill and died. 
Interest in the science wars has waned considerably in recent years. Though the events of the science wars are still occasionally mentioned in the mainstream press, they have had little effect on either the scientific community or the community of critical theorists. Both sides continue to maintain that the other does not understand their theories, or mistakes constructive criticisms and scholarly investigations for attacks. In 1999 Bruno Latour said, "Scientists always stomp around meetings talking about 'bridging the two-culture gap', but when scores of people from outside the sciences begin to build just that bridge, they recoil in horror and want to impose the strangest of all gags on free speech since Socrates: only scientists should speak about science!" Subsequently, Latour has suggested a re-evaluation of sociology's epistemology based on lessons learned from the Science Wars: "... scientists made us realize that there was not the slightest chance that the type of social forces we use as a cause could have objective facts as their effects". Reviewing Sokal's Beyond the Hoax, Mermin stated that "As a sign that the science wars are over, I cite the 2008 election of Bruno Latour [...] to Foreign Honorary Membership in that bastion of the establishment, the American Academy of Arts and Sciences" and opined that "we are not only beyond Sokal's hoax, but beyond the science wars themselves". However, more recently, some of the leading critical theorists have recognized that their critiques have, at times, been counter-productive and are providing intellectual ammunition for reactionary interests. Writing about these developments in the context of global warming, Latour noted that "dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives. Was I wrong to participate in the invention of this field known as science studies? Is it enough to say that we did not really mean what we said?" Kendrick Frazier notes that Latour is interested in helping to rebuild trust in science and that Latour has said that some of the authority of science needs to be regained. In 2016, Shawn Lawrence Otto argued in his book The War on Science: Who's Waging It, Why It Matters, and What We Can Do About It that the winners of the war on science "will chart the future of power, democracy, and freedom itself." See also Chomsky-Foucault debate Culture war Deconstruction Grievance studies affair Historiography of science Nature versus nurture Normative science Positivism Positivism dispute Science for the People Scientism Searle-Derrida debate Strong programme Suppressed research in the Soviet Union Teissier affair Notes References Ashman, Keith M. and Barringer, Philip S. (ed.) (2001). After the science wars, Routledge, London. Gross, Paul R. and Levitt, Norman (1994). Higher Superstition: The Academic Left and Its Quarrels With Science, Johns Hopkins University Press, Baltimore, Maryland. Sokal, Alan D. (1996). Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity, Social Text 46/47, 217–252. Callon, Michel (1999). Whose Impostures? Physicists at War with the Third Person, Social Studies of Science 29(2), 261–286. Parsons, Keith (ed.) (2003). The Science Wars: Debating Scientific Knowledge and Technology, Prometheus Books, Amherst, NY, US. Labinger, Jay A. and Collins, Harry (eds.) (2001). The One Culture?: A Conversation About Science, University of Chicago Press, Chicago. Brown, James R. (2001). 
Who Rules in Science? An Opinionated Guide to the Wars, Harvard University Press, Cambridge, MA. External links Papers by Alan Sokal on the "Social Text Affair" Science and technology studies Historiography of science Science Wars Criticism of science Scientific controversies Politics of science Philosophical debates Philosophy controversies Criticism of academia
Science wars
[ "Technology" ]
3,589
[ "Science and technology studies" ]
993,533
https://en.wikipedia.org/wiki/Inclusionary%20zoning
Inclusionary zoning (IZ) refers to municipal and county planning ordinances that require, or provide incentives to ensure, that a given percentage of units in a new housing development be affordable to people with low to moderate incomes. Such housing is known as inclusionary housing. The term inclusionary zoning indicates that these ordinances seek to counter exclusionary zoning practices, which exclude low-cost housing from a municipality through the zoning code. (For example, single-family zoning makes it illegal to build multi-family apartment buildings.) Non-profit affordable housing developers build 100% of their units as affordable, but need significant taxpayer subsidies for this model to work. Inclusionary zoning allows municipalities to have new affordable housing constructed without taxpayer subsidies. In order to encourage for-profit developers to build projects that include affordable units, cities often allow developers to build more total units (a "density bonus") than their zoning laws currently allow so that there will be enough profit-generating market-rate units to offset the losses from the below-market-rate units and still allow the project to be financially feasible (a simplified, hypothetical worked example is sketched below). Inclusionary zoning can be mandatory or voluntary, though the great majority of units have been built as a result of mandatory programs. There are variations among the set-aside requirements (percentage of units set aside for low-income residents), affordability levels (what income level is considered "low-income"), and the length of time the unit is deed-restricted as affordable housing. In practice, these policies involve placing deed restrictions on 10–30% of new houses or apartments in order to make the cost of the housing affordable to lower-income households. The mix of "affordable housing" and "market-rate" housing in the same neighborhood is seen as beneficial by city planners and sociologists. Another goal of inclusionary zoning is to build mixed-income communities, rather than having poor households concentrated in specific city neighborhoods. Economists state that IZ functions as a price control on a percentage of units and has negative effects similar to those of other price controls (such as rent control), in that it discourages the supply of new housing. It can also be understood, similarly to impact fees, as an "inclusionary tax" on market-rate units which raises the prices of new non-price-controlled units in that development and thereby diminishes the financial incentive to create new housing. Most inclusionary zoning is enacted at the municipal or county level; when imposed by the state, as in Massachusetts, it has been argued that such laws usurp local control. In such cases, developers can use inclusionary zoning to avoid certain aspects of local zoning laws. Historical background During the mid- to late-20th century, new suburbs grew and expanded around American cities as middle-class house buyers, supported by federal loan programs such as Veterans Administration housing loan guarantees, left established neighborhoods and communities. These newly populated places were generally more economically homogeneous than the cities they encircled. Many suburban communities enacted local ordinances, often in zoning codes, to preserve the character of their municipality. One of the most commonly cited exclusionary practices is the stipulation that lots must be of a certain minimum size and houses must be set back from the street a certain minimum distance. 
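The financial logic behind density bonuses can be made concrete with a small worked example. The following sketch uses entirely hypothetical per-unit profit and loss figures (they are illustrative assumptions, not values drawn from any actual ordinance, study, or development) to show how the extra market-rate units granted by a bonus can partially offset the cost of the deed-restricted units:

```python
# Hypothetical feasibility sketch of how a density bonus can offset losses
# from deed-restricted below-market-rate (BMR) units. All figures are
# illustrative assumptions, not data from any actual ordinance or project.

profit_per_market_unit = 40_000   # assumed developer profit per market-rate unit
loss_per_bmr_unit = 50_000        # assumed loss per BMR unit relative to market price

def project_profit(base_units, set_aside, density_bonus):
    total = int(base_units * (1 + density_bonus))  # units allowed with the bonus
    bmr = int(total * set_aside)                   # deed-restricted affordable units
    market = total - bmr                           # remaining market-rate units
    return market * profit_per_market_unit - bmr * loss_per_bmr_unit

print("No IZ, no bonus:   ", project_profit(100, 0.00, 0.00))  # 4,000,000
print("15% IZ, no bonus:  ", project_profit(100, 0.15, 0.00))  # 2,650,000
print("15% IZ, 20% bonus: ", project_profit(100, 0.15, 0.20))  # 3,180,000
# The extra market-rate units granted by the bonus claw back part of the
# BMR loss; whether the project still "pencils out" depends on the bonus
# size, the set-aside percentage, and the depth of affordability required.
```

Real development pro formas are far more detailed, but the same trade-off between the set-aside percentage and the size of the bonus drives whether an inclusionary project remains financially feasible.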
In many cases, these housing ordinances prevented affordable housing from being built, because the large plots of land required to build within the code restrictions were cost-prohibitive for modestly priced houses. Communities have remained accessible to wealthier citizens because of these ordinances, effectively shutting low-income families out of desirable communities. Such zoning ordinances have not always been enacted with a conscious intent to exclude lower-income households, but exclusion has often been the unintended result of such policies. By denying lower-income families access to suburban communities, many feel that exclusionary zoning has contributed to the maintenance of inner-city ghettos. Supporters of inclusionary zoning point out that low-income households are more likely to become economically successful if they have middle-class neighbors as peers and role models. When effective, inclusionary zoning reduces the concentration of poverty in slum districts where social norms may not provide adequate models of success. Education is one of the largest components in the effort to lift people out of poverty; access to high-quality public schools is another key benefit of reduced segregation. Statistically, a poor child in a school where 80% of the children are poor scores 13–15% lower than in environments where the poor child's peers are 80% middle class. But this poor child, unlike their middle-class peers in market-rate housing, loses out on intergenerational wealth. In many of the communities where inclusionary zoning has been put into practice, income requirements allow households that earn 80–120% of the median income to qualify for the "affordable" housing. This is because in many places high housing prices have prevented even median-income households from buying market-rate properties. This is especially prominent in California, where only 16% of the population could afford the median-priced home during 2005. Potential benefits and limitations of IZ policies Potential benefits Poor and working families would have access to a range of opportunities, including good employment opportunities, good schools, a comprehensive transportation system, and safe streets Alleviating the problem of an inadequate supply of affordable housing Avoiding economic and racial segregation, which helps reduce crime rates and failing schools and improves social stability Relatively small amount of public subsidies required for adopting IZ as a market-based tool Potential limitations Low production of affordable housing, which has yielded approximately 150,000 units over several decades nationwide, compared to other schemes such as Housing Choice Vouchers, which help approximately two million households, and the LIHTC program, which has produced over two million affordable homes Unstable production of affordable housing that is highly affected by local housing-market conditions Very little research on outcomes for participants in these programs. Although these affordable housing programs, by definition, offer lower-cost units that municipalities promote as inclusive, the deed restrictions imposed on participants in these programs result in additional economic disparities and other hardships not faced by market-rate homeowners. Economics Economists state that IZ functions as a price control on a percentage of units and has negative effects similar to those of other price controls (such as rent control), in that it discourages the supply of new housing. 
It can also be understood, similarly to impact fees, as an "inclusionary tax" on market-rate units which raises the prices of new non-price-controlled units in that development and thereby diminishes the financial incentive to create new housing. Differences in ordinances Inclusionary zoning ordinances vary substantially among municipalities. These variables can include: Mandatory or voluntary ordinance. While many cities require inclusionary housing, many more offer zoning bonuses, expedited permits, reduced fees, cash subsidies, or other incentives for developers who voluntarily build affordable housing. Percentage of units to be dedicated as inclusionary housing. This varies quite substantially among jurisdictions, but appears to range from 10 to 30%. Minimum size of development that the ordinance applies to. Most jurisdictions exempt smaller developments, but some require that even developments incurring only a fraction of an inclusionary housing unit pay a fee (see below). Whether inclusionary housing must be built on site. Some programs allow housing to be built nearby, in cases of hardship. Whether fees can be paid in lieu of building inclusionary housing. Fees-in-lieu allow a developer to "buy out" of an inclusionary housing obligation. This may seem to defeat the purpose of inclusionary zoning, but in some cases the cost of building one affordable unit on-site could purchase several affordable units off-site. Income level or price defined as "affordable," and buyer qualification methods. Most ordinances seem to target inclusionary units to low- or moderate-income households which earn approximately the regional median income or somewhat below. Inclusionary housing typically does not create housing for those with very low incomes. Whether inclusionary housing units are limited by price or by size (the City of Johannesburg, for example, provides for both options) Appearance and integration of inclusionary housing units. Many jurisdictions require that inclusionary housing units be indistinguishable from market-rate units, but this can increase costs. Longevity of price restrictions attached to inclusionary housing units, and allowable appreciation. Ordinances that allow the "discount" to expire essentially grant a windfall profit, similar to what market-rate owners would get. Municipalities dislike this because it would mean they would have to create more affordable units. Instead, participants in these programs subsidize themselves, relieving municipalities of the financial burden to keep these programs running. However, placing the brunt of the work and subsidies on the people in these programs raises questions. It can trap individuals in public housing programs, making it nearly impossible for them to move out until they pass away. If they could not afford market-rate housing 15 years ago, staying in a unit that restricts appreciation becomes a significant barrier to leaving public housing. In addition, requiring participants to do maintenance and take on all other homeowner liabilities on a home that is economically similar to a rental (since there is limited appreciation minus HOA fees, interest, taxes, etc.) can add further housing-related stress. Whether housing rehabilitation counts as "construction," either of market-rate or affordable units. Some cities, like New York City, allow developers to count rehabilitation of off-site housing as an inclusionary contribution. Which types of housing construction the ordinance applies to. 
For example, high-rise housing costs more to build per square foot (thus raising compliance costs, perhaps prohibitively), so some ordinances exempt it from compliance. Alternative solutions While many suburban communities include Section 8 housing for low-income households, such properties are generally restricted to concentrated areas. In some cases, counties specify small districts where Section 8 properties are to be rented. In other cases, the market tends to self-segregate property by income. For instance, in Montgomery County, Pennsylvania, a wealthy suburban county bordering Philadelphia, only 5% of the county's population live in the borough of Norristown, yet 50% of the county's Section 8 properties are located there. The large low-income resident population burdens Norristown's local government and school district, while much of the county remains unburdened. Inclusionary zoning aims to reduce residential economic segregation by mandating that a mix of incomes be represented in a single development. Controversy Inclusionary zoning remains a controversial issue. Some affordable housing advocates seek to promote the policies in order to ensure that housing is available for a variety of income levels in more places. These supporters hold that inclusionary zoning produces needed affordable housing and creates income-integrated communities. Yet other affordable housing advocates state the reverse is true: that inclusionary zoning can have the opposite effect and actually reduce affordable housing in a community. For example, in Los Angeles, California, inclusionary zoning apparently accelerated gentrification, as older, unprofitable buildings were razed and replaced with mostly high-rent housing and a small percentage of affordable housing; the net result was less affordable housing. In New York, NY, inclusionary zoning allows for up to a 400% increase in luxury housing for every unit of affordable housing, and for an additional 400% of luxury housing when combined with the liberal use of development rights. Critics have stated that the affordable housing can be directed to those making up to $200,000 through the improper use of area median income figures, and used as a political tool by organizations tied to various politicians. New York City communities such as Harlem, the Lower East Side, Williamsburg, Chelsea and Hell's Kitchen have experienced significant secondary displacement through the use of inclusionary zoning. Real estate industry detractors note that inclusionary zoning levies an indirect tax on developers, which discourages them from building in areas that face supply shortages. Furthermore, to ensure that the affordable units are not resold for profit, deed restrictions generally fix a long-term resale price ceiling, eliminating a potential benefit of home ownership. Free market advocates oppose attempts to fix given social outcomes by government intervention in markets. They argue inclusionary zoning constitutes an onerous land use regulation that exacerbates housing shortages. Homeowners sometimes note that their property values will be reduced if low-income families move into their community. Others counter that such concerns are thinly concealed classism and racism. Some of the most widely publicized inclusionary zoning battles have involved the REIT AvalonBay Communities. According to the company's website, AvalonBay seeks to develop properties in "high barrier-to-entry markets" across the United States. 
In practice, AvalonBay uses inclusionary zoning laws, such as the Massachusetts Comprehensive Permit Act: Chapter 40B, to bypass local zoning laws and build large apartment complexes. In some cases, local residents fight back with a lawsuit. In Connecticut, similar developments by AvalonBay have resulted in attempts to condemn the land or reclaim it by eminent domain. In most cases AvalonBay has won these disputes and built extremely profitable apartments or condominiums. Other legal battles have occurred in California, where many cities have implemented inclusionary zoning policies that typically require 10 percent to 15 percent of units to be affordable housing. The definition of affordable housing includes both low-income housing and moderate-income housing. In California, low-income housing is typically designed for households making 51 percent to 80 percent of the median income, and moderate-income housing is typically for households making 81 percent to 120 percent of the median income. Developers have attempted to fight back against these requirements by challenging local inclusionary zoning ordinances through the courts. In Home Builders Association of Northern California v. City of Napa, the California First District Court of Appeal upheld Napa's inclusionary zoning ordinance, which requires 10 percent of the units in a new development project to be moderate-income housing, against the Home Builders Association's challenge. Cities have also attempted to impose inclusionary requirements on rental units. However, the Costa-Hawkins Rental Housing Act prohibits cities in California from imposing limitations on rental rates for vacant units. Subsequently, developers have won cases, such as Palmer/Sixth Street Properties, L.P. v. City of Los Angeles (2009), against cities that imposed inclusionary requirements on rental units, as the state law supersedes local ordinances. Citizen groups and developers have also sought other ways to strengthen or defeat inclusionary zoning laws. For example, the initiative and referendum process in California allows citizen groups or developers to change local ordinances on affordable housing by popular vote. Any citizens or interest groups can participate in this process by gathering at least the required number of signatures to qualify a proposed measure for the ballot; once enough signatures are submitted and the measure is cleared by election officials, it is typically placed on the ballot for the upcoming election. One recent case is Proposition C in San Francisco, which appeared on the ballot for the June 2016 California primary election. Passed in June 2016, the proposition amends the city's charter to increase the requirement for affordable housing for development projects of 25 units or more. The clash between these various interests is reflected in a study published by the libertarian-leaning Reason Foundation's public policy think tank and in a peer review responding to that research. Local governments reflect and in some cases balance these competing interests. In California, the League of Cities has created a guide to inclusionary zoning which includes a section on the pros and cons of the policies. Failure in improving social integration coupled with increasing social cost It is suggested that IZ policies may not effectively disperse low-income units throughout a region, which contradicts the aim of the policy itself. 
For instance, in Suffolk County, it was found that there is a spatial concentration of IZ units in poor neighborhoods with higher proportions of Black and Hispanic residents, who are considered minorities. Furthermore, 97.7% of the IZ units built from 1980 to 2000 were located in only 10% of the census tracts, which are the lowest-income neighborhoods with the greatest clustering of minorities. It is important to note that housing policy in Suffolk County is controlled by local governments rather than a regional government; therefore, without regional coordination of housing policy, it fails to consider the inter-municipality distribution of low-income households within the county. Besides, density bonuses given to property developers for the provision of IZ units have intensified the concentration of affordable units in poor neighborhoods (Ryan & Enderle as cited in Mukhija, Das, Regus et al., 2012). This shows that IZ policies may fail to disperse low-income units when they are carried out without taking regional coordination into account. Moreover, when density bonuses are allocated to property developers for the provision of IZ units, the community bears the cost of increased population density and added strain on existing infrastructure. In practice Examples from the USA More than 200 communities in the United States have some sort of inclusionary zoning provision. Montgomery County, Maryland, is often held to be a pioneer in establishing inclusionary zoning policies. It is the sixth wealthiest county in the United States, yet it has built more than 10,000 units of affordable housing since 1974, many of them side by side with market-rate housing. All municipalities in the state of Massachusetts are subject to that state's General Laws Chapter 40B, which allows developers to bypass certain municipal zoning restrictions in those municipalities which have fewer than the statutorily defined 10% affordable housing units. Developers taking advantage of Chapter 40B must construct 20% affordable units as defined under the statute. All municipalities in the state of New Jersey are subject to judicially imposed inclusionary zoning as a result of the New Jersey Supreme Court's Mount Laurel Decision and subsequent acts of the New Jersey state legislature. A 2006 study found that 170 jurisdictions in California had some form of inclusionary housing. This was a 59% increase from 2003, when only 107 jurisdictions had inclusionary housing. In addition, state law requires that 15% of the housing units produced in redevelopment project areas must be affordable. At least 20% of revenue generated from a redevelopment project must be contributed to low-income and moderate-income housing. However, Governor Jerry Brown signed AB 1X 26, which dissolved all redevelopment agencies on February 1, 2012. However, Los Angeles, California's inclusionary zoning ordinance for rental housing was invalidated in 2009 by the California Court of Appeal for the Second Appellate District because it directly conflicted with a provision of the state's Costa-Hawkins Rental Housing Act of 1996 which specifically gave all landlords the right to set the "initial rental rate" for new housing units. Madison, Wisconsin's inclusionary zoning ordinance respecting rental housing was struck down by Wisconsin's 4th District Court of Appeals in 2006 because that appellate court construed inclusionary zoning to be rent control, which is prohibited by state statute. 
The Wisconsin Supreme Court declined the city's request to review the case. The ordinance was structured with a sunset in February 2009, unless extended by the Common Council. The Common Council did not extend the inclusionary zoning ordinance and therefore it expired and is no longer in effect. International examples Johannesburg, South Africa On 21 February 2019, the City of Johannesburg Council approved its "Inclusionary Housing Incentives, Regulations and Mechanisms 2019". The policy is the first of its kind in South Africa and provides four options for inclusionary housing (including price-limited, size-limited or negotiated options) under which at least 30% of dwelling units in new developments of 20 units or more must be inclusionary housing. The trend of going mandatory over voluntary While inclusionary zoning can be either mandatory or voluntary, some studies have shown that mandatory approaches are crucial to the success of inclusionary zoning programs in terms of producing a larger number of affordable housing units. Below are some examples showing the greater effect of mandatory practice over voluntary practice: See also Visitability - Social Integration Beyond Independent Living Affordable housing Residential segregation Exclusionary zoning Office of Fair Housing and Equal Opportunity Woodward's building Notes References Business and Professional People for the Public Interest Issue Brief #4 Inclusionary Housing in Montgomery County, MD, Rusk, David; Nine Lessons for Inclusionary Zoning, National Inclusionary Housing Conference Waring, Tom; "Section 8 needs a dose of reform, Hoeffel says" Northeast Times, May 15, 2002 Inclusionary Housing for the City of Chicago: Facts and Myths, North Park University Affordable housing Price controls Zoning
Inclusionary zoning
[ "Engineering" ]
4,149
[ "Construction", "Zoning" ]
993,536
https://en.wikipedia.org/wiki/Strong%20programme
The strong programme or strong sociology is a variety of the sociology of scientific knowledge (SSK) particularly associated with David Bloor, Barry Barnes, Harry Collins, Donald A. MacKenzie, and John Henry. The strong programme's influence on science and technology studies is credited as being unparalleled (Latour 1999). The largely Edinburgh-based school of thought aims to illustrate how the existence of a scientific community, bound together by allegiance to a shared paradigm, is a prerequisite for normal scientific activity. The strong programme is a reaction against "weak" sociologies of science, which restricted the application of sociology to "failed" or "false" theories, such as phrenology. Failed theories would be explained by citing the researchers' biases, such as covert political or economic interests. Sociology would be only marginally relevant to successful theories, which succeeded because they had revealed a fact of nature. The strong programme proposed that both "true" and "false" scientific theories should be treated the same way. Both are caused by social factors or conditions, such as cultural context and self-interest. All human knowledge, as something that exists in human cognition, must contain some social components in its formation process. Characteristics As formulated by David Bloor, the strong programme has four indispensable components: Causality: it examines the conditions (psychological, social, and cultural) that bring about claims to a certain kind of knowledge. Impartiality: it examines successful as well as unsuccessful knowledge claims. Symmetry: the same types of explanations are used for successful and unsuccessful knowledge claims alike. Reflexivity: it must be applicable to sociology itself. History Because the strong programme originated at the 'Science Studies Unit,' University of Edinburgh, it is sometimes termed the Edinburgh School. However, there is also a Bath School associated with Harry Collins that makes similar proposals. In contrast to the Edinburgh School, which emphasizes historical approaches, the Bath School emphasizes microsocial studies of laboratories and experiments. The Bath School, however, does depart from the strong programme on some fundamental issues. In the social construction of technology (SCOT) approach developed by Collins' student Trevor Pinch, as well as by the Dutch sociologist Wiebe Bijker, the strong programme was extended to technology. There are SSK-influenced scholars working in science and technology studies programs throughout the world. Criticism In order to study scientific knowledge from a sociological point of view, the strong programme has adhered to a form of radical relativism. In other words, it argues that – in the social study of institutionalised beliefs about "truth" – it would be unwise to use "truth" as an explanatory resource. To do so would (according to the relativist view) include the answer as part of the question (Barnes 1992), and propound a "whiggish" approach towards the study of history – a narrative of human history as an inevitable march towards truth and enlightenment. Alan Sokal has criticised radical relativism as part of the science wars, on the basis that such an understanding will lead inevitably towards solipsism and postmodernism. Markus Seidel attacks the main arguments – underdetermination and norm-circularity – provided by Strong Programme proponents for their relativism. It has also been argued that the strong programme has incited climate denial. 
Notes See also Bibliography Barnes, B. (1977). Interests and the Growth of Knowledge. London: Routledge & Kegan Paul. Barnes, B. (1982). T. S. Kuhn and Social Science. London: Macmillan. Barnes, B. (1985). About Science. Oxford: Blackwell. Barnes, B. (1987). 'Concept Application as Social Activity', Critica 19: 19–44. Barnes, B. (1992). "Realism, relativism and finitism". Pp. 131–147 in Cognitive Relativism and Social Science, eds. D. Raven, L. van Vucht Tijssen, and J. de Wolf. Barnes, B., D. Bloor, and J. Henry. (1996), Scientific Knowledge: A Sociological Analysis. University of Chicago Press. [An introduction and summary of strong sociology] Bijker, Wiebe E., et al. The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology (MIT Press, 2012) Bloor, D. (1991 [1976]), Knowledge and Social Imagery, 2nd ed. Chicago: University of Chicago Press. [outlines the strong programme] Bloor, D. (1997). Wittgenstein, Rules and Institutions. London: Routledge. Bloor, D. (1999). "Anti-Latour," Studies in History and Philosophy of Science Part A 30(1): 81–112. Collins, Harry, and Trevor Pinch. The Golem at Large: What You Should Know About Technology (Cambridge University Press, 2014) Latour, B. (1999). "For David Bloor and Beyond ... a reply to David Bloor's 'Anti-Latour'," Studies in History and Philosophy of Science Part A 30(1): 113–129. External links STS Wiki WTMC Wiki Historical sociologist Simon Schaffer is interviewed on SSK Historical sociologist Steven Shapin is interviewed on SSK Strong Programme in Sociology of Knowledge and Actor-Network Theory: The Debate within Science Studies (includes questions posed to David Bloor and Bruno Latour, in Appendix) Science and technology studies Sociology of scientific knowledge Historiography of science
Strong programme
[ "Technology" ]
1,162
[ "Science and technology studies" ]
994,028
https://en.wikipedia.org/wiki/Mobile%20%28sculpture%29
A mobile is a type of kinetic sculpture constructed to take advantage of the principle of equilibrium. It consists of a number of rods, from which weighted objects or further rods hang. The objects hanging from the rods balance each other, so that the rods remain more or less horizontal. Each rod hangs from only one string, which gives it the freedom to rotate about the string. An ensemble of these balanced parts hangs freely in space, by design without coming into contact with each other. Mobiles are popular in the nursery, where they hang over cribs to give infants entertainment and visual stimulation. Mobiles have inspired many composers, including Morton Feldman and Earle Brown, who were inspired by Alexander Calder's mobiles to create mobile-like indeterminate pieces. John Cage wrote the music for the short film Works of Calder, which focused on Calder's mobiles. Frank Zappa stated that his compositions employ a principle of balance similar to that of Calder's mobiles. Origin The meaning of the term "mobile" as applied to sculpture has evolved since it was first suggested by Marcel Duchamp in 1931 to describe the early, mechanized creations of Alexander Calder. At this point, "mobile" was synonymous with the term "kinetic art", describing sculptural works in which motion is a defining property. While motor- or crank-driven moving sculptures may have initially prompted it, the word "mobile" later came to refer more specifically to Calder's free-moving creations. Calder in many respects invented an art form where objects (typically brightly coloured, abstract shapes fashioned from sheet metal) are connected by wire much like a balance scale. By the sequential attachment of additional objects, the final creation consists of many balanced parts joined by lengths of wire whose individual elements are capable of moving independently or as a whole when prompted by air movement or direct contact. Thus, "mobile" has become a more well-defined term with its origin in the many such hanging constructs Calder produced in a prolific manner between the 1930s and his death in 1976. Similar works Calder's work is the only one defined by the term "mobile"; however, three other notable artists worked on a similar concept: Man Ray experimented with the idea around 1920, Armando Reverón made a series of movable skeletons during the 1930s, and Bruno Munari created his "Useless Machines" in 1933, made of cardboard in playful colors. Julio Le Parc, Grand Prize winner at the Venice Biennale in 1966, also worked in this vein. See also Dreamcatcher Suncatcher Wind chime Straw mobile Stabile References External links Alexander Calder's Mobiles by Jean-Paul Sartre, Les Temps Modernes, 1963 Types of sculpture 1931 introductions Motion (physics)
Mobile (sculpture)
[ "Physics" ]
546
[ "Physical phenomena", "Motion (physics)", "Space", "Mechanics", "Spacetime" ]
994,039
https://en.wikipedia.org/wiki/Mobile%20office
A mobile office is an office built within a truck, motorhome, trailer or shipping container. The term is also used for people who don't work at a physical office location but instead carry their office materials with them. The mobile office can allow businesses to cut costs and avoid building physical locations where it would be too costly or simply unnecessary. See also Mobile home Virtual office References Office work Construction Portable buildings and shelters
Mobile office
[ "Engineering" ]
86
[ "Construction" ]
994,228
https://en.wikipedia.org/wiki/Environmental%20racism
Environmental racism, ecological racism, or ecological apartheid is a form of racism leading to negative environmental outcomes, such as landfills, incinerators, and hazardous waste disposal, disproportionately impacting communities of color, violating substantive equality. Internationally, it is also associated with extractivism, which places the environmental burdens of mining, oil extraction, and industrial agriculture upon indigenous peoples and poorer nations largely inhabited by people of color. Environmental racism is the disproportionate impact of environmental hazards, pollution, and ecological degradation experienced by marginalized communities, as well as those of people of color. Race, socio-economic status, and environmental injustice directly impact these communities in terms of their health outcomes as well as their quality of life. Communities are not all created equal. In the United States, some communities are continuously polluted while the government pays little to no attention. According to Robert D. Bullard, widely regarded as the father of environmental justice, environmental regulations are not equally benefiting all of society; people of color (African Americans, Latinos, Asians, Pacific Islanders, and Native Americans) are disproportionately harmed by industrial toxins in their jobs and their neighborhoods. Within this context, understanding the intersectionality of race, socio-economic status, and environmental injustice through its history and disproportionate impact is a starting point for moving towards equitable solutions for environmental justice for all segments of society. Exploring the historical roots, impacts of environmental racism, governmental actions, grassroots efforts, and possible remedies can serve as a foundation for addressing this issue effectively. Response to environmental racism has contributed to the environmental justice movement, which developed in the United States and abroad throughout the 1970s and 1980s. Environmental racism may disadvantage minority groups or numerical majorities, as in South Africa where apartheid had debilitating environmental impacts on Black people. Internationally, trade in global waste disadvantages global majorities in poorer countries largely inhabited by people of color. It also applies to the particular vulnerability of indigenous groups to environmental pollution. Environmental racism is a form of institutional racism, which has led to the disproportionate disposal of hazardous waste in communities of color in Russia. Environmental racism is a type of inequality where people in communities of color and other low-income communities face a disproportionate risk of exposure to pollution and related health conditions. History "Environmental racism" was a term coined in 1982 by Benjamin Chavis, previous executive director of the United Church of Christ (UCC) Commission for Racial Justice. 
In a speech opposing the placement of hazardous polychlorinated biphenyl (PCB) waste in the Warren County, North Carolina landfill, Chavis defined the term as: racial discrimination in environmental policy making, the enforcement of regulations and laws, the deliberate targeting of communities of color for toxic waste facilities, the official sanctioning of the life-threatening presence of poisons and pollutants in our communities, and the history of excluding people of color from leadership of the ecology movements. Recognition of environmental racism catalyzed the environmental justice movement that began in the 1970s and 1980s with influence from the earlier civil rights movement. Grassroots organizations and campaigns brought attention to environmental racism in policy making and emphasized the importance of minority input. While environmental racism has been historically tied to the environmental justice movement, throughout the years the term has been increasingly disassociated. Following the events in Warren County, the UCC and US General Accounting Office released reports showing that hazardous waste sites were disproportionately located in poor minority neighborhoods. Chavis and Dr. Robert D. Bullard pointed out institutionalized racism stemming from government and corporate policies that led to environmental racism. These racist practices included redlining, zoning, and colorblind adaptation planning. Residents experienced environmental racism due to their low socioeconomic status, and lack of political representation and mobility. Expanding the definition in "The Legacy of American Apartheid and Environmental Racism", Dr. Bullard said that environmental racism: refers to any policy, practice, or directive that differentially affects or disadvantages (whether intended or unintended) individuals, groups, or communities based on race or color. Institutional racism operates on a large scale within societal norms, policies, and procedures extending to environmental planning and decision-making, reinforcing environmental racism through government, legal, economic, and political institutions. Racism significantly increases exposure to environmental and health risks and reduces access to health care. Government agencies, including the federal Environmental Protection Agency (EPA), have often failed to protect people of color from pollution and industrial infiltrations. This failure is evident in the disproportionate pollution burden borne by communities of color, with African American and Latino neighborhoods experiencing higher levels of pollution compared to predominantly white areas. For instance, in Los Angeles, over 71% of African Americans and 50% of Latinos live in areas with the most polluted air, while only 34% of the white population does. Nationally, a significant portion of whites, African Americans, and Hispanics reside in counties with substandard air quality, with people of color disproportionately affected by pollution-related health issues. Although the term was coined in the US, environmental racism also occurs on the international level. Studies have shown that since environmental laws have become prominent in developed countries, companies have moved their waste towards the Global South. Less developed countries frequently have fewer environmental regulations and become pollution havens. Causes There are four factors which lead to environmental racism: lack of affordable land, lack of political power, lack of mobility, and poverty. 
Cheap land is sought by corporations and governmental bodies. As a result, communities which cannot effectively resist these corporations and governmental bodies and cannot access political power are unable to negotiate just costs. Communities with minimized socio-economic mobility cannot relocate. Lack of financial resources also reduces the communities' ability to act both physically and politically. Chavis defined environmental racism in five categories: racial discrimination in defining environmental policies, discriminatory enforcement of regulations and laws, deliberate targeting of minority communities as hazardous waste dumping sites, official sanctioning of dangerous pollutants in minority communities, and the exclusion of people of color from environmental leadership positions. Minority communities often do not have the financial means, resources, and political representation to oppose hazardous waste sites. Known as locally unwanted land uses (LULUs), these facilities that benefit the whole community often reduce the quality of life of minority communities. These neighborhoods also may depend on the economic opportunities the site brings and are reluctant to oppose its location at the risk of their health. Additionally, controversial projects are less likely to be sited in non-minority areas that are expected to pursue collective action and succeed in opposing the siting of the projects in their area. In cities in the Global North, suburbanization and gentrification lead to patterns of environmental racism. For example, white flight from industrial zones for safer, cleaner, suburban locales leaves minority communities in the inner cities and in close proximity to polluted industrial zones. In these areas, unemployment is high and businesses are less likely to invest in area improvement, creating poor economic conditions for residents and reinforcing a social formation that reproduces racial inequality. Furthermore, the poverty of property owners and residents in a municipality may be taken into consideration by hazardous waste facility developers, since areas with depressed real estate values will save developers money. Socioeconomic aspects Cost–benefit analysis (CBA) is a process that places a monetary value on costs and benefits to evaluate issues. Environmental CBA aims to provide policy solutions for intangible products such as clean air and water by measuring a consumer's willingness to pay for these goods. CBA contributes to environmental racism through the valuing of environmental resources based on their utility to society. When someone is willing and able to pay more for clean water or air, their payment financially benefits society more than when people cannot pay for these goods. This creates a burden on poor communities. Relocating toxic waste to poorer communities is then justified, since poor communities are not able to pay as much as a wealthier area for a clean environment. The placement of toxic waste near poor people lowers the property value of already cheap land. Since the decrease in property value is less than that of a cleaner and wealthier area, the monetary benefits to society are greater when the toxic waste is dumped in a "low-value" area. Fossil fuel racism Fossil fuels are interconnected with crises of climate change, racial injustice, and public health. The stages of the fossil fuel life cycle, including extraction, processing, transport, and combustion, all contribute to harmful pollution and greenhouse gas emissions. 
The impacts of fossil fuel processing are not distributed equally: Black, Brown, Indigenous, and poor communities bear them disproportionately, as opposed to white or wealthy communities. These communities experience health hazards from air and water pollution as well as the risks from climate change. "Sacrifice zones" is the term associated with these communities, where systemic racism intersects with a fossil fuel-based economy. In a perspective published in Energy Research & Social Science, the "fossil fuel racism" phenomenon is framed through the argument that systemic racism effectively subsidizes the fossil fuel industry by allowing it to externalize the costs of pollution onto communities of color. The concept of fossil fuel racism allows for a shift in focus to the systems and structures that perpetuate these injustices. One implication is that climate policy approaches often fail to address racial disparities, focusing instead on broader impacts on public health. There is an urgent need for political and policy solutions revolving around the fossil fuel industry to address systemic injustices perpetuated by fossil fuel production and consumption. Impacts on health Environmental racism affects the health of the communities exposed to poor environments. Various factors that can cause health problems include exposure to hazardous chemical toxins in landfills and rivers. Exposure to these toxins can also weaken or slow brain development. The animal protection organization In Defense of Animals claims intensive animal agriculture negatively affects the health of nearby communities. They believe that associated manure lagoons produce hydrogen sulfide and contaminate local water supplies, leading to higher levels of miscarriages, birth defects, and disease outbreaks. These farms are disproportionately placed in low-income areas and communities of color. Other risks include exposure to pesticides, chemical run-off and particulate matter in the air. Poor cleanliness in facilities and chemical exposure may also affect agricultural workers, who are frequently people of color. Pollution The southeastern part of the United States has experienced a large amount of pollution, and minority populations have borne the brunt of its impacts. There are many cases of people who have died or are chronically ill from coal plants in places such as Detroit, Memphis, and Kansas City. Tennessee and West Virginia residents are frequently subject to breathing toxic ash due to blasting in the mountains for mining. Drought, flooding, and the constant degradation of land and air quality determine the health and safety of the residents surrounding these areas. Communities of color and low-income status most often feel the brunt of these issues firsthand. There are many communities around the world that face the same problems. For example, the work of Desmond D'Sa focused on communities in South Durban where highly polluting industries impact people forcibly relocated during apartheid. Environmental racism limits improvement Environmental racism intensifies existing health disparities among marginalized communities, with BIPOC individuals disproportionately bearing the burden of environmental exposures and their health consequences. Black children, for example, are still more exposed to lead than children of other racial groups, contributing to higher body burdens of toxins such as lead, polychlorinated biphenyls, and phthalates. 
Institutionalized racism in epidemiology and environmental health perpetuates the neglect of BIPOC experiences and contributes to structural barriers in research funding and publication. For instance, studies on sperm health predominantly focus on White men, neglecting the reproductive health experiences of men of color despite their higher exposure to environmental toxins. This lack of inclusion in research perpetuates both health disparities and a lack of trust among BIPOC communities due to historical exploitation in medical research. Structural racism within research contributes to the marginalization of BIPOC communities and limits the development of effective interventions that can address environmental health disparities. Reducing environmental racism Activists have called for "more participatory and citizen-centered conceptions of justice." The environmental justice (EJ) movement and climate justice (CJ) movement address environmental racism by bringing attention to it and enacting change so that marginalized populations are not disproportionately vulnerable to climate change and pollution. According to the United Nations Conference on Environment and Development, one possible solution is the precautionary principle, which states that "where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation." Under this principle, the initiator of the potentially hazardous activity is charged with demonstrating the activity's safety. Environmental justice activists also emphasize the need for waste reduction in general, which would act to reduce the overall burden, as well as reduce methane emissions which in turn reduce climate change. Studies In wartime, environmental racism occurs in ways that the public later learns about through reports. For example, Friends of the Earth International's Environmental Nakba report brings attention to environmental racism that has occurred in the Gaza Strip during the Israeli-Palestinian conflict. Some Israeli practices include cutting off the water supply to Palestinian refugees for three days and destroying farms. Besides studies that point out cases of environmental racism, studies have also provided information on how to go about changing regulations and preventing environmental racism from happening. In a study by Daum, Stoler and Grant on e-waste management in Accra, Ghana, the importance of engaging with different fields and organizations, such as recycling firms, communities, and scrap metal traders, is emphasized over adaptation strategies such as bans on burning and buy-back schemes that have not caused much effect on changing practices. Environmental justice scholars such as Laura Pulido, Department Head of Ethnic Studies and Professor at the University of Oregon, and David Pellow, Dehlsen Chair and Department Chair of Environmental Studies and Director of the Global Environmental Justice Project at the University of California, Santa Barbara, argue that recognizing environmental racism as an element stemming from the entrenched legacies of racial capitalism is crucial to the movement, with white supremacy continuing to shape human relationships with nature and labor. Procedural justice Current political thinking about how to redress environmental racism and achieve environmental justice is shifting towards the idea of employing procedural justice. 
Procedural justice is a concept that dictates the use of fairness in the process of making decisions, especially when said decisions are being made in diplomatic situations such as the allocation of resources or the settling of disagreements. Procedural justice calls for a fair, transparent, impartial decision-making process with equal opportunity for all parties to voice their positions, opinions, and concerns. Rather than just focusing on the outcomes of agreements and the effects those outcomes have on affected populations and interest groups, procedural justice looks to involve all stakeholders throughout the process from planning through implementation. In terms of combating environmental racism, procedural justice helps to reduce the opportunities for powerful actors such as often-corrupt states or private entities to dictate the entire decision-making process and puts some power back into the hands of those who will be directly affected by the decisions being made. Activism Activism takes many forms. One form is collective demonstrations or protests, which can take place on a number of different levels from local to international. Additionally, in places where activists feel as though governmental solutions will work, organizations and individuals alike can pursue direct political action. In many cases, activists and organizations will form partnerships both regionally and internationally to gain more clout in pursuit of their goals. Indigenous women's movements in Canada There have been many resistance movements in Canada initiated by Indigenous women against environmental racism. One that was prominent and had a great impact on the movement was the Native Women's Association of Canada's (NWAC) Sisters in Spirit Initiative. This initiative aims to create reports on the deaths and disappearances of Indigenous women in order to raise awareness and get government and civil society groups to take action. Though the Canadian federal government decided to defund the Sisters in Spirit Initiative in 2010, the NWAC continues to support women, Two-Spirit and LGBTQ+ Indigenous peoples in their fight to be heard. In other Indigenous resistance movements there is an emphasis on healing from trauma by focusing on spirituality and traditional practices in order to fight against the forces of patriarchy and racism that have caused environmental racism. Activists and Indigenous communities have also pursued official state legal routes to voice their concerns, such as discussing treaties, anti-human trafficking laws, anti-violence against women laws and UNDRIP. These have been deemed insufficient solutions by Indigenous groups and communities because there are some voices that are not heard and because the state does not respect or recognize the sovereignty of Indigenous nations. Environmental reparations Some scientists and economists have looked into the prospect of environmental reparations, or forms of payment made to individuals who are affected by industry presence in some way. Potential groups to be impacted include individuals living in close proximity to industry, victims of natural disasters, and climate refugees who flee hazardous living conditions in their own country. Reparations can take many forms, from direct payouts to individuals, to money set aside for waste-site cleanups, to purchasing air monitors for low-income residential neighborhoods, to investing in public transportation, which reduces greenhouse gas emissions. 
As Robert Bullard writes,Environmental Reparations represent a bridge to sustainability and equity... Reparations are both spiritual and environmental medicine for healing and reconciliation. Policies and international agreements The export of hazardous waste to third world countries is another growing concern. Between 1989 and 1994, an estimated 2,611 metric tons of hazardous waste was exported from Organization for Economic Cooperation and Development (OECD) countries to non-OECD countries. Two international agreements were passed in response to the growing exportation of hazardous waste into their borders. The Organization of African Unity (OAU) was concerned that the Basel Convention adopted in March 1989 did not include a total ban on the trans-boundary movement on hazardous waste. In response to their concerns, on 30 January 1991, the Pan-African Conference on Environmental and Sustainable Development adopted the Bamako Convention banning the import of all hazardous waste into Africa and limiting their movement within the continent. In September 1995, the G-77 nations helped amend the Basel Convention to ban the export of all hazardous waste from industrial countries (mainly OECD countries and Lichtenstein) to other countries. A resolution was signed in 1988 by the OAU which declared toxic waste dumping to be a "crime against Africa and the African people". Soon after, the Economic Community of West African States (ECOWAS) passed a resolution that allowed for penalties, such as life imprisonment, to those who were caught dumping toxic wastes. Globalization and the increase in transnational agreements introduce possibilities for cases of environmental racism. For example, the 1994 North American Free Trade Agreement (NAFTA) attracted US-owned factories to Mexico, where toxic waste was abandoned in the Colonia Chilpancingo community and was not cleaned up until activists called for the Mexican government to clean up the waste. Environmental justice movements have grown to become an important part of world summits. This issue is gathering attention and features a wide array of people, workers, and levels of society that are working together. Concerns about globalization can bring together a wide range of stakeholders including workers, academics, and community leaders for whom increased industrial development is a common denominator". Many policies can be expounded based on the state of human welfare. This occurs because environmental justice is aimed at creating safe, fair, and equal opportunity for communities and to ensure things like redlining do not occur. With all of these unique elements in mind, there are serious ramifications for policy makers to consider when they make decisions. United States legislation and policies Relevant laws and regulations aimed to address environmental racism encompass a combination of tort law, civil rights law, and environmental law. Here's a quick breakdown of these laws: Tort law: This law allows individuals or communities to seek compensation for damages caused by the negligence or wrongful actions of others. In the context of environmental racism, plaintiffs can use tort law to claim compensation for health issues, property damage, or loss of quality of life due to pollution or other environmental harms. Civil rights law: Litigation under civil rights statutes focuses on challenging the discriminatory impact of environmental decisions and policies. 
Lawsuits may argue that certain actions or policies have a disparate impact on communities of color, violating their civil rights. Environmental law: Federal environmental statutes, such as the Clean Air Act, Clean Water Act, and the National Environmental Policy Act (NEPA) provide mechanisms for challenging the adequacy of environmental reviews or compliance with regulatory standards. Current initiatives in the United States Most initiatives currently focusing on environmental racism are more focused on the larger topic of environmental justice. They are at both the state and federal levels. On the state level, local politicians focus on their communities to introduce policies that will affect them, including land use policies, improving the environmental health impacts, and involving their community in the planning processes for these policies. Fourteen states have created offices that are specifically focused on environmental justice and advise policymakers on how their policies may impact minority populations. Maryland established their Commission on Environmental Justice and Sustainable Communities in 2001. The most recently formed councils were formed in 2022 by Vermont and Oregon. Federally, the EPA is responsible for environmental justice initiatives including the Environmental Justice Government-to-Government Program (EJG2G). The EJG2G provides a clearer line of communication and funding between all types of governments such as state, local, and tribal to make a strong effort to steer towards a more environmentally equitable society. In April 2023, President Biden affirmed his commitment to environmental justice by introducing the Justice40 Initiative. The Justice40 initiative is a goal to make 40 percent of federal environmental programs go into marginalized communities that have not typically been the target for such programs. This initiative includes things like the Climate and Economic Justice Screening Tool and the training for federal agencies on how to use it to identify communities who may benefit from these programs. This initiative includes several federal agencies including the U.S. Department of Agriculture, the U.S. Department of Commerce, the U.S. Department of Energy, the U.S. Environmental Protection Agency, and the U.S. Department of Housing and Urban Development. It's dedicated to community outreach by involving local governments and encouraging the community to have a say in the programs that may be implemented in their communities. Potential solutions Environmental racism is a crucial aspect that needs to be a part of the climate crisis conversation. Learning more about environmental racism, supporting a green economy that uplifts BIPOC communities, and making environmentalism a communal practice are approaches that can address these injustices. Environmentalism as a communal practice emphasizes the importance of viewing environmentalism as a communal effort rather than a competition between individuals by advocating for the well-being of these marginalized communities as well as supporting efforts that address overarching themes of environmental justice. Following this, understanding environmental racism highlights the concept of environmental racism where BIPOC communities disproportionately bear the burden of pollution and environmental hazards due to discrimination in public policies and industry practice. 
It is also important to understand the impact of environmental racism and to push for discussions that point out disparities imposed on communities of color. Supporting a green economy is also crucial, it's important to advocate for a transition to clean energy as well as uplifting BIPOC communities economically and socially. In addition, being involved within the clean energy sector for marginalized communities is another step to empowering BIPOC communities and leading in environmental protection efforts. Examples by region Africa Nigeria From 1956 to 2006, up to 1.5 million tons of oil were spilled in the Niger Delta, (50 times the volume spilled in the Exxon Valdez disaster). Indigenous people in the region have suffered the loss of their livelihoods as a result of these environmental issues, and they have received no benefits in return for enormous oil revenues extracted from their lands. Environmental conflicts have exacerbated ongoing conflict in the Niger Delta. Burning of toxic waste and urban air pollution are problems in more developed areas. Ogoni people, who are indigenous to Nigeria's oil-rich Delta region have protested the disastrous environmental and economic effects of Shell Oil's drilling and denounced human rights abuses by the Nigerian government and by Shell. Their international appeal intensified dramatically after the execution in 1995 of nine Ogoni activists, including Ken Saro-Wiwa, who was a founder of the nonviolent Movement for the Survival of the Ogoni People (MOSOP). South Africa The linkages between the mining industry and the negative impacts it has on community and individual health has been studied and well-documented by a number of organizations worldwide. Health implications of living in proximity to mining operations include effects such as pregnancy complications, mental health issues, various forms of cancer, and many more. During the Apartheid period in South Africa, the mining industry grew quite rapidly as a result of the lack of environmental regulation. Communities in which mining corporations operate are usually those with high rates of poverty and unemployment. Further, within these communities, there is typically a divide among the citizens on the issue of whether the pros of mining in terms of economic opportunity outweigh the cons in terms of the health of the people in the community. Mining companies often try to use these disagreements to their advantage by magnifying this conflict. Additionally, mining companies in South Africa have close ties with the national government, skewing the balance of power in their favor while simultaneously excluding local people from many decision-making processes. This legacy of exclusion has had lasting effects in the form of impoverished South Africans bearing the brunt of ecological impacts resulting from the actions of, for example, mining companies. Some argue that to effectively fight environmental racism and achieve some semblance of justice, there must also be a reckoning with the factors that form situations of environmental racism such as rooted and institutionalized mechanisms of power, social relations, and cultural elements. The term "energy poverty" is used to refer to "a lack of access to adequate, reliable, affordable and clean energy carriers and technologies for meeting energy service needs for cooking and those activities enabled by electricity to support economic and human development". Numerous communities in South Africa face some sort of energy poverty. 
South African women are typically in charge of taking care of both the home and the community as a whole. Those in economically impoverished areas not only have to take on this responsibility, but there are numerous other challenges they face. Discrimination on the basis of gender, race, and class are all still present in South African culture. Because of this, women, who are the primary users of public resources in their work at home and for the community, are often excluded from any decision-making about control and access to public resources. The resulting energy poverty forces women to use sources of energy that are expensive and may be harmful both to their own health and that of the environment. Consequently, several renewable energy initiatives have emerged in South Africa specifically targeting these communities and women to correct this situation. Asia China From the mid-1990s until about 2001, it is estimated that some 50 to 80 percent of the electronics collected for recycling in the western half of the United States was being exported for dismantling overseas, predominantly to China and Southeast Asia. This scrap processing is quite profitable and preferred due to an abundant workforce, cheap labour, and lax environmental laws. Guiyu, China, is one of the largest recycling sites for e-waste, where heaps of discarded computer parts rise near the riverbanks and compounds, such as cadmium, copper, lead, PBDEs, contaminate the local water supply. Water samples taken by the Basel Action Network in 2001 from the Lianjiang River contained lead levels 190 times higher than WHO safety standards. Despite contaminated drinking water, residents continue to use contaminated water over expensive trucked-in supplies of drinking water. Nearly 80 percent of children in the e-waste hub of Guiyu, China, suffer from lead poisoning, according to recent reports. Before being used as the destination of electronic waste, most of Guiyu was composed of small farmers who made their living in the agriculture business. However, farming has been abandoned for more lucrative work in scrap electronics. "According to the Western press and both Chinese university and NGO researchers, conditions in these workers' rural villages are so poor that even the primitive electronic scrap industry in Guiyu offers an improvement in income". Researchers have found that as rates of hazardous air pollution increase in China, the public has mobilized to implement measures to curb detrimental impacts. Areas with ethnic minorities and western regions of the country tend to carry disproportionate environmental burdens. India Union Carbide Corporation is the parent company of Union Carbide India Limited which outsources its production to an outside country. Located in Bhopal, India, Union Carbide India Limited primarily produced the chemical methyl isocyanate used for pesticide manufacture. On 3 December 1984, a cloud of methyl isocyanate leaked as a result of the toxic chemical mixing with water in the plant in Bhopal. Approximately 520,000 people were exposed to the toxic chemical immediately after the leak. Within the first 3 days after the leak an estimated 8,000 people living within the vicinity of the plant died from exposure to the methyl isocyanate. Some people survived the initial leak from the factory, but due to improper care and improper diagnoses many have died. 
Treatment may have been ineffective as a consequence of improper diagnoses, a problem compounded by Union Carbide refusing to release all the details regarding the leaked gases and lying about certain important information. The delay in supplying medical aid to the victims of the chemical leak made the situation for the survivors even worse. Many today are still experiencing the negative health impacts of the methyl isocyanate leak, such as lung fibrosis, impaired vision, tuberculosis, neurological disorders, and severe body pains. The operations and maintenance of the factory in Bhopal contributed to the hazardous chemical leak. The storage of huge volumes of methyl isocyanate in a densely inhabited area was in contravention of company policies strictly practiced in other plants. The company ignored protests that it was holding too much of the dangerous chemical for one plant and built large tanks to hold it in a crowded community. Methyl isocyanate must be stored at extremely low temperatures, but the company cut spending on the air conditioning system, leading to less than optimal conditions for the chemical. Additionally, Union Carbide India Limited never created disaster management plans for the community surrounding the factory in the event of a leak or spill. State authorities were in the pocket of the company and therefore did not pay attention to company practices or implementation of the law. The company also cut down on preventive maintenance staff to save money. Russia Europe Eastern Europe Predominantly living in Central and Eastern Europe, with pockets of communities in the Americas and Middle East, the ethnic Romani people have been subjected to environmental exclusion. Often referred to as gypsies or the gypsy threat, the Romani people of Eastern Europe mostly live under the poverty line in shanty towns or slums. Facing issues such as long-term exposure to harmful toxins given their proximity to waste dumps and industrial plants, along with being refused environmental assistance like clean water and sanitation, the Romani people have been facing racism via environmental means. Many countries such as Romania, Bulgaria and Hungary have tried to implement environmental protection initiatives across their respective countries; however, most have failed because "addressing the conditions of Roma communities" has been framed through an ethnic lens as a matter of "Roma issues". Only recently has some form of environmental justice for the Romani people come to light. Seeking environmental justice in Europe, the Environmental Justice Program is now working with human rights organizations to help fight environmental racism. It is important to note that in the "Discrimination in the EU in 2009" report, conducted by the European Commission, "64% of citizens with Roma friends believe discrimination is widespread, compared to 61% of citizens without Roma friends." 
United Kingdom In the UK environmental racism (or also climate racism) has been called out by multiple action groups such as the Wretched of the Earth call out letter in 2015 and Black Lives Matter in 2016. North America Canada See more: Environmental racism in Nova Scotia In Canada, progress is being made to address environmental racism (especially in Nova Scotia's Africville community) with the passing of Bill 111, An Act to Address Environmental Racism in the Nova Scotia Legislature. Still, indigenous communities such as the Aamjiwnaang First Nation continue to be harmed by pollution from the Canadian chemical industry centered in Southeast Ontario. Forty percent of Canada's petrochemical industry is packed into a 15-square mile radius of Sarnia, Ontario. Immediately south of the petrochemical plants is the Aamjiwnaang reservation with a population of 850 Aamjiwnaang First Nation members. Since 2002, coalitions of indigenous individuals have fought the disproportionate concentration of pollution in their neighborhood. Environmental racism affects particularly women and especially Indigenous women and women of color. Many of these communities reside in rural areas rich in natural resources that are very attractive to extractive industries. These effects not only pollute the environment but also have detrimental effects on both physical and mental health. Many of these extractive industries such as oil and gas and mining have caused pollution to water sources, food sources as well as effects in air quality. This has started to affect people's bodies, especially those of women. This is because the toxins and poisons from extractive industries affect women's reproductive organs, can cause cancer as well as the health of their children. The harms of this activity last through generations in these communities; for example in the Indigenous community of Grassy Narrows in Northern Ontario, they are still dealing with health effects from high mercury levels that have affected drinking water and fish in the region that occurred from a spill in the 1960s. Mexico The Cucapá are a group of indigenous people that live near the U.S.-Mexico border, mainly in Mexico but some in Arizona as well. For many generations, fishing on the Colorado River was the Cucapá's main means of subsistence. In 1944, the United States and Mexico signed a treaty that effectively awarded the United States rights to about 90% of the water in the Colorado River, leaving Mexico with the remaining 10%. Over the last few decades, the Colorado River has mostly dried up south of the border, presenting many challenges for people such as the Cucapá. Shaylih Meuhlmann, author of the ethnography Where the River Ends: Contested Indigeneity in the Mexican Colorado Delta, gives a first-hand account of the situation from Meuhlmann's point of view as well as many accounts from the Cucapá themselves. In addition to the Mexican portion of the Colorado River being left with a small fraction of the overall available water, the Cucapá are stripped of the right to fish on the river, the act being made illegal by the Mexican government in the interest of preserving the river's ecological health. The Cucapá are, thus, living without access to sufficient natural sources of freshwater as well as without their usual means of subsistence. 
The conclusion drawn in many such cases is that the water rights negotiated under the US-Mexican treaty, which led to the massive disparity in water allotments between the two countries, amount to environmental racism. Some 1,900 maquiladoras are found near the US-Mexico border. Maquiladoras are companies that are usually owned by foreign entities and import raw materials, pay workers in Mexico to assemble them, and ship the finished products overseas to be sold. While maquiladoras provide jobs, they often pay very little. These plants also bring pollution to rural Mexican towns, creating health impacts for the poor families that live nearby. In Mexico, extractivism includes the industrial extraction of oil, minerals, and gas, as well as the mass removal of slowly renewable resources such as aquatic life, forests, and crops. Legally, the state owns natural resources but is able to grant concessions to industry in exchange for taxes paid. In recent decades, there has been a shift towards refocusing the accumulated tax revenue on the communities most affected by the health, social, and economic impacts of extractivism. However, many indigenous and rural community leaders argue that their consent ought to be sought before companies extract and pollute their resources, rather than being paid reparations after the fact. United States A US Government Accountability Office study, completed in response to the 1982 protests of the PCB landfill in Warren County, was among the first studies that drew correlations between the racial and economic background of communities and the location of hazardous waste facilities. Nevertheless, the study was limited in scope by focusing only on off-site hazardous waste landfills in the Southeastern United States. In response to this limitation, in 1987, the United Church of Christ Commission for Racial Justice (CRJ) directed a comprehensive national study on demographic patterns associated with the location of hazardous waste sites. The CRJ national study conducted two examinations of areas surrounding commercial hazardous waste facilities and the location of uncontrolled toxic waste sites. The first study examined the association between race and socio-economic status and the location of commercial hazardous waste treatment, storage, and disposal facilities. After statistical analysis, the first study concluded that "the percentage of community residents that belonged to a racial or ethnic group was a stronger predictor of the level of commercial hazardous waste activity than was household income, the value of the homes, the number of uncontrolled waste sites, or the estimated amount of hazardous wastes generated by industry". A second study examined the presence of uncontrolled toxic waste sites in ethnic and racial minority communities and found that three of every five African and Hispanic Americans lived in communities with uncontrolled waste sites. A separate 1991 study found race to be the most influential variable in predicting where waste facilities were located. In 1994, President Bill Clinton issued Executive Order 12898, which directed agencies to develop a strategy to address environmental justice. In 2002, Faber and Krieg found a correlation between higher air pollution exposure and low performance in schools and found that 92% of children at five Los Angeles public schools with the poorest air quality were of a minority background, disproportionate to Los Angeles' then 70% minority population. 
As a result of the placement of hazardous waste facilities, minority populations experience greater exposure to harmful chemicals and suffer from health outcomes that affect their ability at work and in schools. A comprehensive study of particulate emissions across the United States, published in 2018, found that Black people were exposed to 54% more particulate matter emissions (soot) than the average American. In a study that analyzed exposure to air pollution from vehicles in the American Mid-Atlantic and American North-East, it was found that African Americans were exposed to 61% more particulate matter than whites, with Latinos exposed to 75% more and Asians exposed to 73% more. Overall, minorities experienced 66% more pollution exposure from particulate matter than the white population. Carl Zimring states that environmental racism is often engrained in day-to-day work and living conditions. Examples cited of environmental racism in the US include the Dakota Access Pipeline (where a portion of the proposed 1,172 mile pipeline would pass near to the Standing Rock Indian Reservation), the Flint water crisis (which affected a town that was 55% African American), cancer alley (Louisiana), as well as the government response to hurricane Katrina (where a mandatory evacuation was not ordered in the majority-Black city of New Orleans until 20 hours before Hurricane Katrina made landfall). Overall, the US has worked to reduce environmental racism with municipality changes. These policies help develop further change. Some cities and counties have taken advantage of environmental justice policies and applied it to the public health sector. Native American peoples Native scholars have discussed whether the concept of Environmental Justice make sense in the context of Native Americans and settler colonialism. This is because Native Americans' legal status differs from other marginalized peoples in the United States. As such, Colville scholar Dina Gilio-Whitaker explains that "because Indigenous peoples' relationships to the state (i.e. the United States) are different than those of ethnic minorities, environmental justice must exceed equality and be able to live up to the concepts of tribal sovereignty, treaty rights, and government-to-government relationships." Gilio-Whitaker further argues that the distributive justice model on which environmental racism is based is not helpful to Native communities: "Frameworks for EJ in non-Native communities that rely on distributive justice are built on capitalistic American values of land as commodity — i.e. private property — on lands that were expropriated from Native peoples." In contrast, Native peoples have very different relationships to land beyond the modes of land as commodity. Indigenous studies scholars have argued that environmental racism, however, began in the United States with the arrival of settler colonialism. Potawatomi philosopher Kyle Powys Whyte and Lower Brule Sioux historian Nick Estes explain that Native peoples have already lived through one environmental apocalypse, the coming of colonialism. Métis geographer Zoe Todd and academic Heather Davis have also argued that settler colonialism is "responsible for contemporary environmental crisis." 
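The percentage disparities quoted above are typically derived by population-weighting modeled pollutant concentrations across demographic groups. The sketch below is a minimal, hypothetical illustration of that arithmetic; the tract concentrations, group names, and populations are invented for demonstration and are not data from the studies cited.

```python
# Hypothetical illustration of a population-weighted exposure disparity
# (invented numbers; not data from the studies cited above).

# Each tract: (PM2.5 concentration in ug/m3, population by group)
tracts = [
    (12.0, {"group_a": 800, "group_b": 200}),
    (7.0,  {"group_a": 300, "group_b": 900}),
    (9.5,  {"group_a": 500, "group_b": 500}),
]

def group_exposure(tracts, group):
    """Population-weighted mean concentration experienced by one group."""
    total = sum(pops[group] for _, pops in tracts)
    return sum(conc * pops[group] for conc, pops in tracts) / total

def overall_exposure(tracts):
    """Population-weighted mean concentration for the whole population."""
    total = sum(sum(pops.values()) for _, pops in tracts)
    return sum(conc * sum(pops.values()) for conc, pops in tracts) / total

average = overall_exposure(tracts)
for group in ("group_a", "group_b"):
    exposure = group_exposure(tracts, group)
    print(f"{group}: {exposure:.2f} ug/m3 ({100 * (exposure / average - 1):+.1f}% vs. average)")
```

With these invented numbers, group_a's weighted exposure comes out about 10% above the population average and group_b's about 10% below; published studies apply the same weighting to modeled concentrations and census counts at much finer resolution.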
In that way, it has been shown that climate change has been weaponized against Indigenous American peoples, as Founding Fathers such as Thomas Jefferson and Benjamin Franklin deforested the Americas and welcomed warmer weather, which they thought would displace Native peoples and enrich the United States. Thus, "the United States, from its birth, played a key role in causing catastrophic environmental change." Whyte explains further that "Anthropogenic (human-caused) climate change is an intensification of environmental change imposed on Indigenous peoples by colonialism." Anishinaabe scholar Leanne Betasamosake Simpson has also argued, "We should be thinking of climate change as part of a much longer series of ecological catastrophes caused by colonialism and accumulation-based society." The Indian Removal Act of 1830 and the Trail of Tears may also be considered early examples of environmental racism in the United States. As a result of the former, by 1850, all tribes east of the Mississippi had been removed to western lands and essentially confined to "lands that were too dry, remote, or barren to attract the attention of settlers and corporations." During World War II, military facilities were often located conterminous with Indian reservations, which led to a situation in which "a disproportionate number of the most dangerous military facilities are located near Native American lands." A study analyzing the approximately 3,100 counties in the continental United States found that Native American lands are positively associated with the count of sites with unexploded ordnance deemed extremely dangerous. The study also found that the risk assessment code (RAC), which is used to measure the dangerousness of sites with unexploded ordnance, can sometimes conceal how much of a threat these sites are to Native Americans. The hazard probability, or probability that a hazard will harm people or ecosystems, is sensitive to the proximity of public buildings such as schools and hospitals. Those parameters neglect elements of tribal life such as subsistence consumption, ceremonial use of plants and animals, and low population densities. Because those factors unique to tribal life are not considered, Native American lands can often receive low-risk scores, despite threats to their way of life. The hazard probability does not take Native Americans into account when considering the people or ecosystems that could be harmed. More recently, Native American lands have been used for waste disposal and illegal dumping by the US and multinational corporations. The International Tribunal of Indigenous People and Oppressed Nations convened in 1992 to examine the history of criminal activity against indigenous groups in the United States and published a Significant Bill of Particulars outlining grievances indigenous peoples had with the US. This included allegations that the US "deliberately and systematically permitted, aided, and abetted, solicited and conspired to commit the dumping, transportation, and location of nuclear, toxic, medical, and otherwise hazardous waste materials on Native American territories in North America and has thus created a clear and present danger to the health, safety, and physical and mental well-being of Native American People." 
Oceania Australia The Australian Environmental Justice (AEJ) is a multidisciplinary organization which is closely partnered with Friends of the Earth Australia (FoEA). The AEJ focuses on recording and remedying the effects of environmental injustice throughout Australia. The AEJ has addressed issues which include "production and spread of toxic wastes, pollution of water, soil and air, erosion and ecological damage of landscapes, water systems, plants and animals". The project looks for environmental injustices that disproportionately affect a group of people or impact them in a way they did not agree to. The Western Oil Refinery started operating in Bellevue, Western Australia, in 1954. It was granted the right to operate in Bellevue by the Australian government in order to refine cheap, locally available oil. In the decades following, many residents of Bellevue claimed they felt respiratory burning due to the inhalation of toxic chemicals and nauseating fumes. Lee Bell from Curtin University and Mariann Lloyd-Smith from the National Toxic Network in Australia stated in their article "Toxic Disputes and the Rise of Environmental Justice in Australia" that "residents living close to the site discovered chemical contamination in the groundwater surfacing in their back yards". Under immense civilian pressure, the Western Oil Refinery (now named Omex) stopped refining oil in 1979. Years later, citizens of Bellevue formed the Bellevue Action Group (BAG) and called for the government to give aid towards the remediation of the site. The government agreed and $6.9 million was allocated to clean up the site. Remediation of the site began in April 2000. Micronesia Papua New Guinea Starting production in 1972, the Panguna mine in Papua New Guinea has been a source of environmental racism. Although the mine has been closed since 1989 due to conflict on the island, the indigenous Bougainvillean people have suffered both economically and environmentally from its creation. Terrance Wesley-Smith and Eugene Ogan, of the University of Hawaii and the University of Minnesota respectively, stated that the Bougainvilleans "were grossly disadvantaged from the beginning and no subsequent renegotiation has been able to remedy the situation". These indigenous people faced issues such as the loss of land which could have been used for agriculture by the Dapera and Moroni villages, undervalued payment for the land, poor relocation housing for displaced villagers, and significant environmental degradation in the surrounding areas. Polynesia South America The Andes Extractivism, or the process of humans removing natural, raw resources from land to be used in product manufacturing, can have detrimental environmental and social repercussions. Research analyzing environmental conflicts in four Andean countries (Colombia, Ecuador, Peru, and Bolivia) found that conflicts tend to disproportionately affect indigenous populations, Afro-descendant populations, and peasant communities. These conflicts can arise as a result of shifting economic patterns, land use policies, and social practices driven by extractivist industries. 
When the Spanish went to conquer parts of South America, the Mapuche were one of the few indigenous groups to successfully resist Spanish domination and maintain their sovereignty. In the years that followed, relations between the Mapuche and the Chilean state declined into a condition of malice and resentment. Chile won its independence from Spain in 1818 and, wanting the Mapuche to assimilate into the Chilean state, began crafting harmful legislation that targeted the Mapuche. The Mapuche have based their economy, both historically and presently, on agriculture. By the mid-19th century, the state resorted to outright seizure of Mapuche lands, forcefully appropriating all but 5% of Mapuche lineal lands. An agrarian economy without land essentially meant that the Mapuche no longer had their means of production and subsistence. While some land has since been ceded back to the Mapuche, it is still a fraction of what the Mapuche once owned. Further, as the Chilean state has attempted to rebuild its relationship with the Mapuche community, the connection between the two is still strained by the legacy of this history. Today, the Mapuche people are the largest population of indigenous people in Chile, with 1.5 million people accounting for over 90% of the country's indigenous population. Ecuador Due to their lack of environmental laws, emerging countries like Ecuador have been subjected to environmental pollution, sometimes causing health problems, loss of agriculture, and poverty. In 1993, 30,000 Ecuadorians, who included Cofan, Siona, Huaorani, and Quichua indigenous people, filed a lawsuit against the Texaco oil company for the environmental damage caused by oil extraction activities in the Lago Agrio oil field. After handing control of the oil fields to an Ecuadorian oil company, Texaco did not properly dispose of its hazardous waste, causing great damage to the ecosystem and crippling communities. Additionally, UN experts have said that Afro-Ecuadorians and other people of African descent in Ecuador have faced greater challenges than other groups in accessing clean water, with minimal response from the State. Haiti Legacies of racism exist in Haiti, and affect the way that food grown by peasants domestically is viewed compared to foreign food. Racially coded hierarchies are associated with food that differs in origin – survey respondents reported that food such as millet and root crops is associated with negative connotations, while foreign-made food such as corn flakes and spaghetti is associated with positive connotations. This reliance on imports over domestic products reveals how racism ties to commercial tendencies – a reliance on imports can increase costs and fossil fuel emissions and further social inequality as local farmers lose business. See also Biological inequity Climate change and poverty Electronic waste Environmental determinism Environmental discrimination in the United States Environmental dumping Environmental struggles of the Romani Fenceline community Green Imperialism Health inequality and environmental influence Intergenerational equity Intersectionality Netherlands fallacy NIMBY Pollution haven hypothesis Racial capitalism Sacrifice zone Pollution is Colonialism Toxic colonialism References External links United States Environmental Protection Agency - Environmental Justice Environmental Justice and Environmental Racism Marathon for Justice, 2016 - Film on Environmental Racism Water and Environmental Racism. 
Lesson by Matt Reed and Ursula Wolfe-Rocca Environmental controversies Environmental history of Canada Definition of racism controversy Urban decay Environmental social science concepts Environment and society Apartheid Racism
Environmental racism
[ "Environmental_science" ]
10,577
[ "Environmental social science concepts", "Environmental social science" ]
994,407
https://en.wikipedia.org/wiki/Jesus%20wept
"Jesus wept" (, ) is a phrase famous for being the shortest verse in the King James Version of the Bible, as well as in many other translations. It is not the shortest in the original languages. The phrase is found in the Gospel of John, chapter 11, verse 35. Verse breaks—or versification—were introduced into the Greek text by Robert Estienne in 1551 in order to make the texts easier to cite and compare. Context This verse occurs in John's narrative of the death of Lazarus of Bethany, a follower of Jesus. Lazarus's sisters—Mary and Martha—sent word to Jesus of their brother's illness and impending death, but Jesus arrived four days after Lazarus died. Jesus, after talking to the grieving sisters and seeing Lazarus's friends weeping, was deeply troubled and moved. After asking where Lazarus had been laid and being invited to come see him, Jesus wept. He then went to the tomb and told the people to remove the stone covering it, prayed aloud to his Father, and ordered Lazarus to come out, resurrected. The Gospel of Luke also records that Jesus wept as he entered Jerusalem before his trial and death, anticipating the destruction of the Temple. Text Interpretation Significance has been attributed to Jesus's deep emotional response to his friends' weeping, and his own tears, including the following: Weeping demonstrates that Christ was a true man, with real bodily functions (such as tears, sweat, blood, eating and drinking—note, for comparison, the emphasis laid on Jesus' eating during the post-resurrection appearances). His emotions and reactions were real; Christ was not an illusion or spirit (see the heresy of Docetism). Pope Leo the Great referred to this passage when he discussed the two natures of Jesus: "In His humanity Jesus wept for Lazarus; in His divinity he raised him from the dead." The sorrow, sympathy, and compassion Jesus felt for all mankind. The rage he felt against the tyranny of death over mankind. Although the bystanders interpreted his weeping to mean that Jesus loved Lazarus (verse 36), Witness Lee considered the Jews' opinion to be unreasonable, given Jesus' intention to resurrect Lazarus. Lee argued instead that every person to whom Jesus talked in John 11 (his disciples, Martha, Mary, and the Jews) was blinded by their misconceptions. Thus he "groaned in his spirit" because even those who were closest to him failed to recognize that he was, as he declared in verse 26, "the resurrection and the life". Finally, at the graveside, he "wept in sympathy with their sorrow over Lazarus' death". In history Jesus's tears have figured among the relics attributed to Jesus. Use as an expletive In some parts of the English-speaking world, including Great Britain, Ireland (particularly Dublin and Belfast) and Australia, the phrase "Jesus wept" is an expletive some people use when something goes wrong or to express incredulity. In Christianity, this usage is considered blasphemous and offensive by the devout, as it is seen as violating the second or third of the Ten Commandments. Historically, certain Christian states had laws against profane use of Jesus Christ, among other religious terms. The Harris Poll conducted a 2017 study and found 90% of evangelical Christians would not view a film that disrespectfully used the name of Jesus Christ. In Catholic Christianity, the faithful pray Acts of Reparation to Jesus Christ for abuse of the Holy Name, which constitutes a sin. 
In 1965, broadcaster Richard Dimbleby accidentally used the expletive live on air during the state visit of Elizabeth II to West Germany. It is a common expletive in novels by author Stephen King. Other authors using it as an expletive include Neil Gaiman in the Sandman series, Bernard Cornwell in the Sharpe series, Mick Herron in the Slough House series, David Lodge in Nice Work, Mike Carey in the Hellblazer series and The Devil You Know, Garth Ennis in The Boys (comics), Peter F. Hamilton in The Night's Dawn Trilogy, Mark Haddon in The Curious Incident of the Dog in the Night-Time, Dan Simmons in Hyperion Cantos, Minette Walters in Fox Evil, Elly Griffiths in the Dr Ruth Galloway series, and Jason Matthews in Red Sparrow. This usage is also evidenced in films and television programmes including Lawrence of Arabia (1962), Get Carter (1971), Razorback (1984), Hellraiser (1987), Drop the Dead Donkey (1990), The Stand (1994), Michael Collins (1996), The Long Kiss Goodnight (1996), Dogma (1999), Notes on a Scandal (2006), True Blood, Cranford, The Bank Job (all 2008), Blitz (2011), Call the Midwife (2013), Community (2015), The Magnificent Seven (2016), The Haunting of Hill House, Derry Girls (both 2018), Troop Zero (2019), Silent Witness (2023), and Murder in a Small Town (2024). Car journalist Jeremy Clarkson of the hit show Top Gear used the expletive many times during the show’s 22nd season. The verse is also used in the The's song "Angels of Deception" from the 1986 album Infected. Kanye West uses the verse to end "Bound 2", the last song on his 2013 album Yeezus. See also Dominus Flevit Church (including shortest verses) References External links King James Bible - Book of John, Chapter 11 Crying New Testament words and phrases Sayings of Jesus Gospel episodes Gospel of John Mary of Bethany Lazarus of Bethany
Jesus wept
[ "Biology" ]
1,172
[ "Crying", "Behavior", "Human behavior" ]
994,446
https://en.wikipedia.org/wiki/NGC%202175
NGC 2175 (also known as OCL 476 or Cr 84) is an open cluster in the Orion constellation, embedded in a diffuse nebula. It was discovered by Giovanni Batista Hodierna before 1654 and independently discovered by Karl Christian Bruhns in 1857. NGC 2175 is at a distance of about 6,350 light years from Earth. The nebula surrounding it is Sharpless catalog Sh 2-252, and it is sometimes called the Monkey Head Nebula due to its appearance. There is some equivocation in the use of the identifiers NGC 2174 and NGC 2175. These may apply to the entire nebula, to its brightest knot, or to the star cluster it includes. Burnham's Celestial Handbook lists the entire nebula as 2174/2175 and does not mention the star cluster. The NGC Project (working from the original descriptive notes) assigns NGC 2174 to the prominent knot at J2000 , and NGC 2175 to the entire nebula, and by extension to the star cluster. Simbad uses NGC 2174 for the nebula and NGC 2175 for the star cluster. References External links NGC 2175 @ SEDS NGC objects pages 2175 Open clusters Orion (constellation)
NGC 2175
[ "Astronomy" ]
248
[ "Constellations", "Orion (constellation)" ]
994,459
https://en.wikipedia.org/wiki/NGC%202204
NGC 2204 is an open cluster of stars in the Canis Major constellation. It was discovered by the German-English astronomer William Herschel on 6 February 1785. The cluster has an integrated visual magnitude of 8.6 and spans a diameter of . Resolving the individual member stars is a challenge with a 10 to 12-inch amateur telescope. It is located at a distance of approximately 13,400 light years from the Sun. The cluster shows a mean radial velocity of relative to the Sun, and is orbiting the inner galactic disk region about 1 kpc below the galactic plane. This is a rich but diffuse cluster with a Trumpler class of III 3m, spanning a physical diameter of about . It is an older cluster with an estimated age of . The metallicity is correspondingly poor, showing an abundance of iron about 59% of that in the Sun. There is a prominent giant branch clump on the HR diagram. The cluster has a significant population of blue stragglers, an indicator of past stellar mergers. It has a pair of candidate chemically peculiar stars, and five variable stars have been discovered, including four eclipsing variables. References External links Canis Major Open clusters 2204 Discoveries by William Herschel
NGC 2204
[ "Astronomy" ]
247
[ "Canis Major", "Constellations" ]
994,465
https://en.wikipedia.org/wiki/NGC%202349
NGC 2349 is an open cluster of stars in the Monoceros constellation. It was discovered by Caroline Herschel in 1783. References External links NGC 2349 @ SEDS NGC objects pages Monoceros 2349 Open clusters
NGC 2349
[ "Astronomy" ]
47
[ "Monoceros", "Constellations" ]
994,471
https://en.wikipedia.org/wiki/NGC%202360
NGC 2360 (also known as Caroline's Cluster or Caldwell 58) is an open cluster in the constellation Canis Major. It was discovered on 26 February 1783 by Caroline Herschel, who described it as a "beautiful cluster of pretty compressed stars near 1/2 degree in diameter". Her notes were overlooked until her brother William included the cluster in his 1786 catalogue of 1000 clusters and nebulae and acknowledged her as the discoverer. The cluster lies 3.5 degrees east of Gamma Canis Majoris and less than one degree northwest of the eclipsing binary star R Canis Majoris; it has a combined apparent magnitude of 7.2. It is 13 arc minutes in diameter. By the western edge of the cluster is the unrelated star, 5.5-magnitude HD 56405. American astronomer Olin J. Eggen surveyed the cluster in 1968, concluding that the brightest star in the field, magnitude-8.96 HD 56847, is likely to lie in the field and not a true member of the cluster. He also identified one or possibly two blue stragglers. These are unexpectedly hot and luminous stars that appear younger than surrounding stars, and have likely developed by sucking matter off companion stars. Four are now recognised to be in the cluster. By analysing the masses of the smallest stars that have evolved into red giants—namely, stars of 1.8 or 1.9 solar masses—Swiss astronomers Jean-Claude Mermilliod and Michel Mayor were able to date the age of the cluster at 2.2 billion years. The cluster has a diameter of around 15 light-years and is located 3700 light-years from Earth. Notes External links Canis Major 2360 Open clusters 058b Astronomical objects discovered in 1783
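The age quoted above follows from the fact that more massive stars exhaust their core hydrogen sooner, so the mass of the stars just turning into red giants dates the cluster. A rough textbook scaling, t ≈ 10 Gyr × (M/M☉)^−2.5, reproduces the figure; this is only an illustrative approximation, not the isochrone analysis Mermilliod and Mayor actually performed.

```python
# Rough main-sequence-lifetime scaling t ~ 10 Gyr * (M / M_sun)**-2.5,
# shown here only to illustrate why a 1.8-1.9 solar-mass turnoff implies
# an age of roughly two billion years (not the method used in the cited study).

def main_sequence_lifetime_gyr(mass_in_solar_masses: float) -> float:
    return 10.0 * mass_in_solar_masses ** -2.5

for mass in (1.8, 1.9):
    print(f"M = {mass} M_sun  ->  t ~ {main_sequence_lifetime_gyr(mass):.1f} Gyr")
# Prints roughly 2.3 and 2.0 Gyr, bracketing the 2.2-billion-year age quoted above.
```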
NGC 2360
[ "Astronomy" ]
359
[ "Canis Major", "Constellations" ]
994,484
https://en.wikipedia.org/wiki/NGC%202362
NGC 2362, also known as Caldwell 64, is an open cluster of stars in the southern constellation of Canis Major. It was discovered by the Italian court astronomer Giovanni Batista Hodierna, who published his finding in 1654. William Herschel called it a "beautiful cluster", while William Henry Smyth said it "has a beautiful appearance, the bright white star being surrounded by a rich gathering of minute companions, in a slightly elongated form, and nearly vertical position". In the past it has also been listed as a nebula, but in 1930 Robert J. Trumpler found no evidence of nebulosity. The brightest member star system is Tau Canis Majoris, and therefore it is sometimes called the Tau Canis Majoris Cluster. The cluster is located at a distance of approximately 1.48 kpc from the Sun, and appears associated with the giant nebula Sh2-310 that lies at the same distance, about one degree to the east. This giant H II region is being ionized by the brighter members of the NGC 2362 cluster. NGC 2362 is a relatively young 4–5 million years in age but is devoid of star-forming gas and dust, indicating that the star formation process has come to a halt. It is a massive open cluster, with more than 500 solar masses, an estimated 100-150 member stars, and an additional 500 forming a halo around the cluster. Of these cluster members, only around 35 show evidence of a debris disk. There is one slightly evolved O-type star, Tau Canis Majoris, and around 40 B-type stars still on the main sequence. Only one candidate classical Be star has been found, as of 2005. Gallery References External links Open clusters Canis Major 2362 064b Sh2-310 NGC 2362
NGC 2362
[ "Astronomy" ]
365
[ "Canis Major", "Constellations" ]
994,505
https://en.wikipedia.org/wiki/NGC%202419
NGC 2419 (also known as Caldwell 25) is a globular cluster in the constellation Lynx. It was discovered by William Herschel on December 31, 1788. NGC 2419 is at a distance of about 300,000 light years from the Solar System and at the same distance from the Galactic Center. NGC 2419 bears the nickname "the Intergalactic Wanderer," which was bestowed when it was erroneously thought not to be in orbit around the Milky Way. Its orbit takes it farther away from the galactic center than the Magellanic Clouds, but it can (with qualifications) be considered as part of the Milky Way. At this great distance it takes three billion years to make one trip around the galaxy. The cluster is dim in comparison to more famous globular clusters such as M13. Nonetheless, NGC 2419 is a 9th magnitude object and is readily viewed, in good sky conditions, with good quality telescopes as small as 102mm (four inches) in aperture. Intrinsically it is one of the brightest and most massive globular clusters of our galaxy, having an absolute magnitude of −9.42 and being 900,000 times more massive than the Sun. It was proposed that NGC 2419 could be, as Omega Centauri, the remnant of a dwarf spheroidal galaxy disrupted and accreted by the Milky Way. However, that hypothesis has been disputed. Astronomer Leos Ondra has noted that NGC 2419 would be the "best and brightest" for any observers in the Andromeda Galaxy, looking for globular clusters in our galaxy since it lies outside the obscuring density of the main disk. This is analogous to the way the cluster G1 can be seen orbiting outside of the Andromeda Galaxy from Earth. It was found to be composed of two different populations, one being more helium-rich than the other, which does not fit the current model for globular cluster formation (which leads to a very homogeneous population in the cluster). This raises new questions on how this globular cluster was formed. Gallery References External links SEDS – NGC 2419 perseus.gr – NGC 2419 in a LRGB CCD image based on 2 hrs total exposure APOD (2009-01-23) – NGC 2419 Globular clusters Lynx (constellation) 2419 025b Astronomical objects discovered in 1788
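The figures quoted above (a distance of roughly 300,000 light years, an absolute magnitude of −9.42, and an apparent brightness around 9th magnitude) can be cross-checked with the distance-modulus relation m − M = 5 log10(d / 10 pc). The sketch below uses rounded values and ignores interstellar extinction, so it is an order-of-magnitude consistency check rather than a precise prediction.

```python
import math

# Distance-modulus consistency check for NGC 2419 (rounded values, extinction ignored).
distance_ly = 300_000
distance_pc = distance_ly / 3.2616          # one parsec is about 3.26 light years
absolute_magnitude = -9.42

apparent_magnitude = absolute_magnitude + 5 * math.log10(distance_pc / 10)
print(f"predicted apparent magnitude ~ {apparent_magnitude:.1f}")
# Gives roughly 10.4 for these inputs, the same regime as the ~9th-magnitude
# brightness quoted above; the exact value depends on the adopted distance and
# on extinction, which this sketch ignores.
```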
NGC 2419
[ "Astronomy" ]
487
[ "Lynx (constellation)", "Constellations" ]
994,519
https://en.wikipedia.org/wiki/NGC%202438
NGC 2438 is a planetary nebula in the southern constellation of Puppis. Parallax measurements by Gaia put the central star at a distance of roughly 1,370 light years. It was discovered by William Herschel on March 19, 1786. NGC 2438 appears to lie within the cluster M46, but it is most likely unrelated since it does not share the cluster's radial velocity. The object is a multi-shell planetary nebula with a bright inner nebula with a diameter of , consisting of two somewhat detached shells. It is expanding with a velocity of . The structure is surrounded by a fainter, mostly circular halo that is more visible on the western half, and has a diameter of . The mass of the main nebula is estimated at , while the shell has 0.5–. The main nebula has a temperature of about 10–13,000 K, rising to 15–17,000 K at the inner edge. The nebula consists of material ejected from the central star during the asymptotic giant branch stage, beginning about 8,500 years ago. The main nebula was formed at about half that age. The central star of this planetary nebula is a 17.7-magnitude white dwarf, with a surface temperature of about . References External links NGC 2438 @ SEDS NGC objects pages Planetary nebulae Puppis 2438
NGC 2438
[ "Astronomy" ]
272
[ "Puppis", "Constellations" ]
994,527
https://en.wikipedia.org/wiki/NGC%202451
NGC 2451 is an open cluster in the Puppis constellation, probably discovered by Giovanni Battista Hodierna before 1654 and John Herschel in 1835. In 1994, it was postulated that this was actually two open clusters that lie along the same line of sight. This was confirmed in 1996. The respective clusters are labeled NGC 2451 A and NGC 2451 B, and they are located at distances of 600 and 1,200 light-years, respectively. References External links NGC 2451 @ SEDS NGC objects pages 2451 Open clusters Puppis
NGC 2451
[ "Astronomy" ]
112
[ "Puppis", "Constellations" ]
994,531
https://en.wikipedia.org/wiki/NGC%202477
NGC 2477 (also known as Caldwell 71 or the Termite Hole Cluster) is an open cluster in the constellation Puppis. It contains about 300 stars, and was discovered by Abbé Lacaille in 1751. The cluster's age has been estimated at 700 million years. Visual appearance NGC 2477 is a stunning cluster, almost as extensive in the sky as the full moon. It has been called "one of the top open clusters in the sky", like a highly resolved globular cluster without the dense center characteristic of globular clusters. Burnham notes that several observers have remarked on its richness, and that although it is smaller than M46 (also an open cluster in Puppis), it is richer and more compact. Distance Burnham cites several published distances, ranging from to , where "ly" is the abbreviation for light year. Notes External links 2477 Open clusters Puppis 071b ?
NGC 2477
[ "Astronomy" ]
189
[ "Puppis", "Constellations" ]
994,556
https://en.wikipedia.org/wiki/Born%E2%80%93Haber%20cycle
The Born–Haber cycle is an approach to analyze reaction energies. It was named after two German scientists, Max Born and Fritz Haber, who developed it in 1919. It was also independently formulated by Kasimir Fajans and published concurrently in the same journal. The cycle is concerned with the formation of an ionic compound from the reaction of a metal (often a Group I or Group II element) with a halogen or other non-metallic element such as oxygen. Born–Haber cycles are used primarily as a means of calculating lattice energy (or more precisely enthalpy), which cannot otherwise be measured directly. The lattice enthalpy is the enthalpy change involved in the formation of an ionic compound from gaseous ions (an exothermic process), or sometimes defined as the energy to break the ionic compound into gaseous ions (an endothermic process). A Born–Haber cycle applies Hess's law to calculate the lattice enthalpy by comparing the standard enthalpy change of formation of the ionic compound (from the elements) to the enthalpy required to make gaseous ions from the elements. This lattice calculation is complex. To make gaseous ions from elements it is necessary to atomise the elements (turn each into gaseous atoms) and then to ionise the atoms. If the element is normally a molecule then we first have to consider its bond dissociation enthalpy (see also bond energy). The energy required to remove one or more electrons to make a cation is a sum of successive ionization energies; for example, the energy needed to form Mg2+ is the ionization energy required to remove the first electron from Mg, plus the ionization energy required to remove the second electron from Mg+. Electron affinity is defined as the amount of energy released when an electron is added to a neutral atom or molecule in the gaseous state to form a negative ion. The Born–Haber cycle applies only to fully ionic solids such as certain alkali halides. Most compounds include covalent and ionic contributions to chemical bonding and to the lattice energy, which is represented by an extended Born–Haber thermodynamic cycle. The extended Born–Haber cycle can be used to estimate the polarity and the atomic charges of polar compounds. Examples Formation of LiF The enthalpy of formation of lithium fluoride (LiF) from its elements in their standard states (Li(s) and F2(g)) is modeled in five steps in the diagram: Atomization enthalpy of lithium Ionization enthalpy of lithium Atomization enthalpy of fluorine Electron affinity of fluorine Lattice enthalpy The sum of the energies for each step of the process must equal the enthalpy of formation of lithium fluoride, ΔHf: ΔHf = ΔHsub + IE_M + (1/2)B − EA_X + ΔHlattice. ΔHsub is the enthalpy of sublimation for metal atoms (lithium). B is the bond enthalpy (of F2). The coefficient 1/2 is used because the formation reaction is Li + 1/2 F2 → LiF. IE_M is the ionization energy of the metal atom: M + IE_M → M+ + e−. EA_X is the electron affinity of non-metal atom X (fluorine). ΔHlattice is the lattice enthalpy (defined as exothermic here). The net enthalpy of formation and the first four of the five energies can be determined experimentally, but the lattice enthalpy cannot be measured directly. Instead, the lattice enthalpy is calculated by subtracting the other four energies in the Born–Haber cycle from the net enthalpy of formation. A similar calculation applies for any metal other than lithium and/or any non-metal other than fluorine. 
The word cycle refers to the fact that one can also equate to zero the total enthalpy change for a cyclic process, starting and ending with LiF(s) in the example. This leads to
0 = -\Delta H_\text{f} + \Delta H_\text{sub} + \mathit{IE}_M + \tfrac{1}{2}B - \mathit{EA}_X + \Delta H_\text{lattice}
which is equivalent to the previous equation. Formation of NaBr At ordinary temperatures, Na is solid and Br2 is liquid, so the enthalpy of vaporization of liquid bromine, \Delta H_\text{vap}, is added to the equation:
\Delta H_\text{f} = \Delta H_\text{sub} + \mathit{IE}_M + \tfrac{1}{2}\Delta H_\text{vap} + \tfrac{1}{2}B - \mathit{EA}_X + \Delta H_\text{lattice}
In the above equation, \Delta H_\text{vap} is the enthalpy of vaporization of Br2 at the temperature of interest (usually in kJ/mol). See also Ionic liquids Notes References External links ChemGuy on the Born-Haber Cycle Solid-state chemistry Thermochemistry Fritz Haber 1916 in science 1916 in Germany Max Born
Born–Haber cycle
[ "Physics", "Chemistry", "Materials_science" ]
954
[ "Thermochemistry", "Condensed matter physics", "nan", "Solid-state chemistry" ]
994,649
https://en.wikipedia.org/wiki/Don%27t%20Make%20Me%20Think
Don't Make Me Think is a book by Steve Krug about human–computer interaction and web usability. The book's premise is that a good software program or web site should let users accomplish their intended tasks as easily and directly as possible. Krug points out that people are good at satisficing, or taking the first available solution to their problem, so design should take advantage of this. He frequently cites Amazon.com as an example of a well-designed web site that manages to allow high-quality interaction, even though the web site gets bigger and more complex every day. The book is intended to exemplify brevity and focus. The goal, according to the book's introduction, was to make a text that could be read by an executive on a two-hour airplane flight. Originally published in 2000, the book was revised in 2005, and again in 2013 to add a section about mobile UX, and has sold more than 700,000 copies. In 2010, the author published a sequel, Rocket Surgery Made Easy, which explains how anyone working on a web site, mobile app, or desktop software can do their own usability testing to ensure that what they're building will be usable. The book has been referenced in college courses and online courses on usability. References External links Book description on author's website, www.sensible.com Human–computer interaction 2000 non-fiction books
Don't Make Me Think
[ "Technology", "Engineering" ]
291
[ "Human–computer interaction", "Computing stubs", "Computer book stubs", "Human–machine interaction" ]
994,704
https://en.wikipedia.org/wiki/Mental%20model
A mental model is an internal representation of external reality: that is, a way of representing reality within one's mind. Such models are hypothesized to play a major role in cognition, reasoning and decision-making. The term for this concept was coined in 1943 by Kenneth Craik, who suggested that the mind constructs "small-scale models" of reality that it uses to anticipate events. Mental models can help shape behaviour, including approaches to solving problems and performing tasks. In psychology, the term mental models is sometimes used to refer to mental representations or mental simulation generally. The concepts of schema and conceptual models are cognitively adjacent. Elsewhere, it is used to refer to the "mental model" theory of reasoning developed by Philip Johnson-Laird and Ruth M. J. Byrne. History The term mental model is believed to have originated with Kenneth Craik in his 1943 book The Nature of Explanation. Georges-Henri Luquet in Le dessin enfantin (Children's drawings), published in 1927 by Alcan, Paris, argued that children construct internal models, a view that influenced, among others, child psychologist Jean Piaget. Jay Wright Forrester defined general mental models thus: The image of the world around us, which we carry in our head, is just a model. Nobody in his head imagines all the world, government or country. He has only selected concepts, and relationships between them, and uses those to represent the real system (Forrester, 1971). Philip Johnson-Laird published Mental Models: Towards a Cognitive Science of Language, Inference and Consciousness in 1983. In the same year, Dedre Gentner and Albert Stevens edited a collection of chapters in a book also titled Mental Models. The first line of their book explains the idea further: "One function of this chapter is to belabor the obvious; people's views of the world, of themselves, of their own capabilities, and of the tasks that they are asked to perform, or topics they are asked to learn, depend heavily on the conceptualizations that they bring to the task." (see the book: Mental Models). Since then, there has been much discussion and use of the idea in human-computer interaction and usability by researchers including Donald Norman and Steve Krug (in his book Don't Make Me Think). Walter Kintsch and Teun A. van Dijk, using the term situation model (in their book Strategies of Discourse Comprehension, 1983), showed the relevance of mental models for the production and comprehension of discourse. Charlie Munger popularized the use of multi-disciplinary mental models for making business and investment decisions. Mental models and reasoning One view of human reasoning is that it depends on mental models. In this view, mental models can be constructed from perception, imagination, or the comprehension of discourse (Johnson-Laird, 1983). Such mental models are similar to architects' models or to physicists' diagrams in that their structure is analogous to the structure of the situation that they represent, unlike, say, the structure of logical forms used in formal rule theories of reasoning. In this respect, they are a little like pictures in the picture theory of language described by philosopher Ludwig Wittgenstein in 1922. Philip Johnson-Laird and Ruth M.J. Byrne developed their mental model theory of reasoning which makes the assumption that reasoning depends, not on logical form, but on mental models (Johnson-Laird and Byrne, 1991). 
Principles of mental models Mental models are based on a small set of fundamental assumptions (axioms), which distinguish them from other proposed representations in the psychology of reasoning (Byrne and Johnson-Laird, 2009). Each mental model represents a possibility. A mental model represents one possibility, capturing what is common to all the different ways in which the possibility may occur (Johnson-Laird and Byrne, 2002). Mental models are iconic, i.e., each part of a model corresponds to each part of what it represents (Johnson-Laird, 2006). Mental models are based on a principle of truth: they typically represent only those situations that are possible, and each model of a possibility represents only what is true in that possibility according to the proposition. However, mental models can represent what is false, temporarily assumed to be true, for example, in the case of counterfactual conditionals and counterfactual thinking (Byrne, 2005). Reasoning with mental models People infer that a conclusion is valid if it holds in all the possibilities. Procedures for reasoning with mental models rely on counter-examples to refute invalid inferences; they establish validity by ensuring that a conclusion holds over all the models of the premises. Reasoners focus on a subset of the possible models of multiple-model problems, often just a single model. The ease with which reasoners can make deductions is affected by many factors, including age and working memory (Barrouillet, et al., 2000). They reject a conclusion if they find a counterexample, i.e., a possibility in which the premises hold, but the conclusion does not (Schroyens, et al. 2003; Verschueren, et al., 2005). Criticisms Scientific debate continues about whether human reasoning is based on mental models, versus formal rules of inference (e.g., O'Brien, 2009), domain-specific rules of inference (e.g., Cheng & Holyoak, 2008; Cosmides, 2005), or probabilities (e.g., Oaksford and Chater, 2007). Many empirical comparisons of the different theories have been carried out (e.g., Oberauer, 2006). Mental models of dynamics systems: mental models in system dynamics Characteristics A mental model is generally: founded on unquantifiable, impugnable, obscure, or incomplete facts; flexible – considerably variable in positive as well as in negative sense; an information filter that causes selective perception, perception of only selected parts of information; very limited, compared with the complexities of the world, and even when a scientific model is extensive and in accordance with a certain reality in the derivation of logical consequences of it, it must take into account such restrictions as working memory; i.e., rules on the maximum number of elements that people are able to remember, gestaltisms or failure of the principles of logic, etc.; dependent on sources of information, which one cannot find anywhere else, are available at any time and can be used. Mental models are a fundamental way to understand organizational learning. Mental models, in popular science parlance, have been described as "deeply held images of thinking and acting". Mental models are so basic to understanding the world that people are hardly conscious of them. Expression of mental models of dynamic systems S.N. Groesser and M. 
Schaffernicht (2012) describe three basic methods which are typically used: Causal loop diagrams – displaying tendency and a direction of information connections and the resulting causality and feedback loops System structure diagrams – another way to express the structure of a qualitative dynamic system Stock and flow diagrams - a way to quantify the structure of a dynamic system These methods allow showing a mental model of a dynamic system, as an explicit, written model about a certain system based on internal beliefs. Analyzing these graphical representations has been an increasing area of research across many social science fields. Additionally software tools that attempt to capture and analyze the structural and functional properties of individual mental models such as Mental Modeler, "a participatory modeling tool based in fuzzy-logic cognitive mapping", have recently been developed and used to collect/compare/combine mental model representations collected from individuals for use in social science research, collaborative decision-making, and natural resource planning. Mental model in relation to system dynamics and systemic thinking In the simplification of reality, creating a model can find a sense of reality, seeking to overcome systemic thinking and system dynamics. These two disciplines can help to construct a better coordination with the reality of mental models and simulate it accurately. They increase the probability that the consequences of how to decide and act in accordance with how to plan. System dynamics – extending mental models through the creation of explicit models, which are clear, easily communicated and can be compared with each other. Systemic thinking – seeking the means to improve the mental models and thereby improve the quality of dynamic decisions that are based on mental models. Experimental studies carried out in weightlessness and on Earth using neuroimaging showed that humans are endowed with a mental model of the effects of gravity on object motion. Single and double-loop learning After analyzing the basic characteristics, it is necessary to bring the process of changing the mental models, or the process of learning. Learning is a back-loop process, and feedback loops can be illustrated as: single-loop learning or double-loop learning. Single-loop learning Mental models affect the way that people work with information, and also how they determine the final decision. The decision itself changes, but the mental models remain the same. It is the predominant method of learning, because it is very convenient. Double-loop learning Double-loop learning (see diagram below) is used when it is necessary to change the mental model on which a decision depends. Unlike single loops, this model includes a shift in understanding, from simple and static to broader and more dynamic, such as taking into account the changes in the surroundings and the need for expression changes in mental models. See also All models are wrong Cognitive map Cognitive psychology Conceptual model Educational psychology Folk psychology Internal model (motor control) Knowledge representation Lovemap Macrocognition Map–territory relation Model-dependent realism Neuro-linguistic programming Neuroeconomics Neuroplasticity OODA loop Psyche (psychology) Self-stereotyping Social intuitionism Space mapping System dynamics Text and conversation theory Notes References Barrouillet, P. et al. (2000). Conditional reasoning by mental models: chronometric and developmental evidence. Cognit. 75, 237-266. 
Byrne, R.M.J. (2005). The Rational Imagination: How People Create Counterfactual Alternatives to Reality. Cambridge MA: MIT Press. Byrne, R.M.J. & Johnson-Laird, P.N. (2009). 'If' and the problems of conditional reasoning. Trends in Cognitive Sciences. 13, 282-287 Cheng, P.C. and Holyoak, K.J. (2008) Pragmatic reasoning schemas. In Reasoning: studies of human inference and its foundations (Adler, J.E. and Rips, L.J., eds), pp. 827–842, Cambridge University Press Cosmides, L. et al. (2005) Detecting cheaters. Trends in Cognitive Sciences. 9,505–506 Forrester, J. W. (1971) Counterintuitive behavior of social systems. Technology Review. Oberauer K. (2006) Reasoning with conditionals: A test of formal models of four theories. Cognit. Psychol. 53:238–283. O’Brien, D. (2009). Human reasoning includes a mental logic. Behav. Brain Sci. 32, 96–97 Oaksford, M. and Chater, N. (2007) Bayesian Rationality. Oxford University Press Johnson-Laird, P.N. (1983). Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Cambridge: Cambridge University Press. Johnson-Laird, P.N. (2006) How We Reason. Oxford University Press Johnson-Laird, P.N. and Byrne, R.M.J. (2002) Conditionals: a theory of meaning, inference, and pragmatics. Psychol. Rev. 109, 646–678 Schroyens, W. et al. (2003). In search of counterexamples: Deductive rationality in human reasoning. Quart. J. Exp. Psychol. 56(A), 1129–1145. Verschueren, N. et al. (2005). Everyday conditional reasoning: A working memory-dependent tradeoff between counterexample and likelihood use. Mem. Cognit. 33, 107-119. Further reading Georges-Henri Luquet (2001). Children's Drawings. Free Association Books. Chater, N. et al. (2006) Probabilistic Models of Cognition: Conceptual Foundations. Trends Cogn Sci 10(7):287-91. . Gentner, Dedre; Stevens, Albert L., eds. (1983). Mental Models. Hillsdale: Erlbaum 1983. Groesser, S.N. (2012). Mental model of dynamic systems. In N.M. Seel (Ed.). The encyclopedia of the sciences of learning (Vol. 5, pp. 2195–2200). New York: Springer. Groesser, S.N. & Schaffernicht, M. (2012). Mental Models of Dynamic Systems: Taking Stock and Looking Ahead. System Dynamics Review, 28(1): 46-68, Wiley. Johnson-Laird, P.N. 2005. The History of Mental Models Jones, N. A. et al. (2011). "Mental Models: an interdisciplinary synthesis of theory and methods" Ecology and Society.16 (1): 46. Jones, N. A. et al. (2014). "Eliciting mental models: a comparison of interview procedures in the context of natural resource management" Ecology and Society.19 (1): 13. Prediger, S. (2008). "Discontinuities for mental models - a source for difficulties with the multiplication of fractions" Proceedings of ICME-11, Topic Study Group 10, Research and Development of Number Systems and Arithmetic. (See also Prediger's references to Fischbein 1985 and Fischbein 1989, "Tacit models and mathematical reasoning".) Robles-De-La-Torre, G. & Sekuler, R. (2004). "Numerically Estimating Internal Models of Dynamic Virtual Objects ". In: ACM Transactions on Applied Perception 1(2), pp. 102–117. Sterman, John D. A Skeptic’s Guide to Computer Models, Massachusetts Institute of Technology External links Mental Models and Reasoning Laboratory Systems Analysis, Modelling and Prediction Group, University of Oxford System Dynamics Society Conceptual models Cognitive modeling Cognitive psychology Cognitive science Information Information science
Mental model
[ "Biology" ]
3,003
[ "Behavioural sciences", "Behavior", "Cognitive psychology" ]
994,800
https://en.wikipedia.org/wiki/Titanium%20tetrachloride
Titanium tetrachloride is the inorganic compound with the formula . It is an important intermediate in the production of titanium metal and the pigment titanium dioxide. is a volatile liquid. Upon contact with humid air, it forms thick clouds of titanium dioxide () and hydrochloric acid, a reaction that was formerly exploited for use in smoke machines. It is sometimes referred to as "tickle" or "tickle 4", as a phonetic representation of the symbols of its molecular formula (). Properties and structure is a dense, colourless liquid, although crude samples may be yellow or even red-brown. It is one of the rare transition metal halides that is a liquid at room temperature, being another example. This property reflects the fact that molecules of weakly self-associate. Most metal chlorides are polymers, wherein the chloride atoms bridge between the metals. Its melting point is similar to that of . has a "closed" electronic shell, with the same number of electrons as the noble gas argon. The tetrahedral structure for is consistent with its description as a d0 metal center () surrounded by four identical ligands. This configuration leads to highly symmetrical structures, hence the tetrahedral shape of the molecule. adopts similar structures to and ; the three compounds share many similarities. and react to give mixed halides , where x = 0, 1, 2, 3, 4. Magnetic resonance measurements also indicate that halide exchange is also rapid between and . is soluble in toluene and chlorocarbons. Certain arenes form complexes of the type . reacts exothermically with donor solvents such as THF to give hexacoordinated adducts. Bulkier ligands (L) give pentacoordinated adducts . Production is produced by the chloride process, which involves the reduction of titanium oxide ores, typically ilmenite (), with carbon under flowing chlorine at 900 °C. Impurities are removed by distillation. The coproduction of is undesirable, which has motivated the development of alternative technologies. Instead of directly using ilmenite, "rutile slag" is used. This material, an impure form of , is derived from ilmenite by removal of iron, either using carbon reduction or extraction with sulfuric acid. Crude contains a variety of other volatile halides, including vanadyl chloride (), silicon tetrachloride (), and tin tetrachloride (), which must be separated. Applications Production of titanium metal The world's supply of titanium metal, about 250,000 tons per year, is made from . The conversion involves the reduction of the tetrachloride with magnesium metal. This procedure is known as the Kroll process: In the Hunter process, liquid sodium is the reducing agent instead of magnesium. Production of titanium dioxide Around 90% of the production is used to make the pigment titanium dioxide (). The conversion involves hydrolysis of , a process that forms hydrogen chloride: In some cases, is oxidised directly with oxygen: Smoke screens It has been used to produce smoke screens since it produces a heavy, white smoke that has little tendency to rise. "Tickle" was the standard means of producing on-set smoke effects for motion pictures, before being phased out in the 1980s due to concerns about hydrated HCl's effects on the respiratory system. Chemical reactions Titanium tetrachloride is a versatile reagent that forms diverse derivatives including those illustrated below. Alcoholysis and related reactions A characteristic reaction of is its easy hydrolysis, signaled by the release of HCl vapors and titanium oxides and oxychlorides. 
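The displayed chemical equations referenced in this article (for the Kroll reduction, hydrolysis, and direct oxidation) did not survive extraction; as a sketch, the standard textbook stoichiometries for these well-known reactions are:
2 Mg + TiCl4 → Ti + 2 MgCl2 (Kroll process)
TiCl4 + 2 H2O → TiO2 + 4 HCl (hydrolysis)
TiCl4 + O2 → TiO2 + 2 Cl2 (direct oxidation)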
Titanium tetrachloride has been used to create naval smokescreens, as the hydrochloric acid aerosol and titanium dioxide that is formed scatter light very efficiently. This smoke is corrosive, however. Alcohols react with to give alkoxides with the formula (R = alkyl, n = 1, 2, 4). As indicated by their formula, these alkoxides can adopt complex structures ranging from monomers to tetramers. Such compounds are useful in materials science as well as organic synthesis. A well known derivative is titanium isopropoxide, which is a monomer. Titanium bis(acetylacetonate)dichloride results from treatment of titanium tetrachloride with excess acetylacetone: Organic amines react with to give complexes containing amido (-containing) and imido (-containing) complexes. With ammonia, titanium nitride is formed. An illustrative reaction is the synthesis of tetrakis(dimethylamido)titanium , a yellow, benzene-soluble liquid: This molecule is tetrahedral, with planar nitrogen centers. Complexes with simple ligands is a Lewis acid as implicated by its tendency to hydrolyze. With the ether THF, reacts to give yellow crystals of . With chloride salts, reacts to form sequentially , (see figure above), and . The reaction of chloride ions with depends on the counterion. and gives the pentacoordinate complex , whereas smaller gives . These reactions highlight the influence of electrostatics on the structures of compounds with highly ionic bonding. Redox Reduction of with aluminium results in one-electron reduction. The trichloride () and tetrachloride have contrasting properties: the trichloride is a colored solid, being a coordination polymer, and is paramagnetic. When the reduction is conducted in THF solution, the Ti(III) product converts to the light-blue adduct . Organometallic chemistry The organometallic chemistry of titanium typically starts from . An important reaction involves sodium cyclopentadienyl to give titanocene dichloride, . This compound and many of its derivatives are precursors to Ziegler–Natta catalysts. Tebbe's reagent, useful in organic chemistry, is an aluminium-containing derivative of titanocene that arises from the reaction of titanocene dichloride with trimethylaluminium. It is used for the "olefination" reactions. Arenes, such as react to give the piano-stool complexes (R = H, ; see figure above). This reaction illustrates the high Lewis acidity of the entity, which is generated by abstraction of chloride from by . Reagent in organic synthesis finds occasional use in organic synthesis, capitalizing on its Lewis acidity, its oxophilicity, and the electron-transfer properties of its reduced titanium halides. It is used in the Lewis acid catalysed aldol addition Key to this application is the tendency of to activate aldehydes (RCHO) by formation of adducts such as . Toxicity and safety considerations Hazards posed by titanium tetrachloride generally arise from its reaction with water that releases hydrochloric acid, which is severely corrosive itself and whose vapors are also extremely irritating. is a strong Lewis acid, which exothermically forms adducts with even weak bases such as THF and water. References General reading External links Titanium tetrachloride: Health Hazard Information NIST Standard Reference Database ChemSub Online: Titanium tetrachloride Titanium(IV) compounds Chlorides Titanium halides Reagents for organic chemistry
Titanium tetrachloride
[ "Chemistry" ]
1,523
[ "Reagents for organic chemistry", "Chlorides", "Inorganic compounds", "Salts" ]
994,834
https://en.wikipedia.org/wiki/Korean%20numerals
The Korean language has two regularly used sets of numerals: a native Korean system and Sino-Korean system. The native Korean number system is used for general counting, like counting up to 99. It is also used to count people, hours, objects, ages, and more. Sino-Korean numbers on the other hand are used for purposes such as dates, money, minutes, addresses, phone numbers, and numbers above 99. Construction For both native and Sino- Korean numerals, the teens (11 through 19) are represented by a combination of tens and the ones places. For instance, 15 would be sib-o (), but not usually il-sib-o in the Sino-Korean system, and yeol-daseot () in native Korean. Twenty through ninety are likewise represented in this place-holding manner in the Sino-Korean system, while Native Korean has its own unique set of words, as can be seen in the chart below. The grouping of large numbers in Korean follows the Chinese tradition of myriads (10000) rather than thousands (1000). The Sino-Korean system is nearly entirely based on the Chinese numerals. The distinction between the two numeral systems is very important. Everything that can be counted will use one of the two systems, but seldom both. Sino-Korean words are sometimes used to mark ordinal usage: yeol beon () means "ten times" while sip beon () means "number ten." When denoting the age of a person, one will usually use sal () for the native Korean numerals, and se () for Sino-Korean. For example, seumul-daseot sal () and i-sib-o se () both mean 'twenty-five-year-old'. See also East Asian age reckoning. The Sino-Korean numerals are used to denote the minute of time. For example, sam-sib-o bun () means "__:35" or "thirty-five minutes." The native Korean numerals are used for the hours in the 12-hour system and for the hours 0:00 to 12:00 in the 24-hour system. The hours 13:00 to 24:00 in the 24-hour system are denoted using both the native Korean numerals and the Sino-Korean numerals. For example, se si () means '03:00' or '3:00 a.m./p.m.' and sip-chil si () or yeol-ilgop si () means '17:00'. Some of the native numbers take a different form in front of measure words: The descriptive forms for 1, 2, 3, 4, and 20 are formed by "dropping the last letter" from the original native cardinal, so to speak. Examples: han beon ("once") du gae ("two things") se si ("three o'clock"), in contrast, in North Korea the Sino-Korean numeral "sam" would normally be used; making it "sam si" ne myeong ("four people") seumu mari ("twenty animals") Something similar also occurs in some Sino-Korean cardinals: onyuwol ("May and June") yuwol ("June") siwol ("October") The cardinals for three and four have alternative forms in front of some measure words: seok dal ("three months") neok jan ("four cups") Korean has several words formed with two or three consecutive numbers. Some of them have irregular or alternative forms. 
한둘 handul ("one or two") / 한두 handu ("one or two" in front of measure words) 두셋 duset ("two or three") / 두세 duse ("two or three" in front of measure words) 서넛 seoneot ("three or four") / 서너 seoneo ("three or four" in front of measure words) 두서넛 duseoneot ("two or three or four") / 두서너 duseoneo ("two or three or four" in front of measure words) 너덧 neodeot, 네댓 nedaet, 네다섯 nedaseot, 너더댓 neodeodaet ("four or five") 대여섯 daeyeoseot, 대엿 daeyeot ("five or six") 예닐곱 yenilgop ("six or seven") 일고여덟 ilgoyeodeol, 일여덟 ilyeodeol ("seven or eight") 여덟아홉 yeodeolahop, 엳아홉 yeotahop ("eight or nine") As for counting days in native Korean, another set of unique words are used: 하루 haru ("one day") 이틀 iteul ("two days") 사흘 saheul ("three days") 사나흘 sanaheul, 사날 sanal ("three or four days") 나흘 naheul ("four days") 네댓새 nedaessae, 너댓새 neodaessae, 너더댓새 neodeodaessae, 나달 nadal ("four or five days") 닷새 dassae ("five days") 대엿새 daeyeossae ("five or six days") 엿새 yeossae ("six days") 예니레 yenire ("six or seven days") 이레 ire ("seven days") 일여드레 ilyeodeure ("seven or eight days") 여드레 yeodeure ("eight days") 아흐레 aheure ("nine days") 열흘 yeolheul ("ten days") The native Korean saheul () is often misunderstood as the Sino-Korean sail () due to similar sounds. The two words are different in origin and have different meanings. Cardinal numerals Larger numbers In numbers above 10, elements are combined from largest to smallest, and zeros are implied. Hanja and Hangul numerals are both multiplicative additive rather than positional; to write the number 20 you get the character for two (二/이) and then the character for ten (十/십) to get two tens or twenty (二十/이십). Pronunciation The initial consonants of measure words and numbers following the native cardinals ('eight', only when the is not pronounced) and ('ten') become tensed consonants when possible. Thus for example: (twelve) is pronounced like (eight (books)) is pronounced like Several numerals have long vowels, namely (two), (three) and (four), but these become short when combined with other numerals / nouns (such as in twelve, thirteen, fourteen and so on). The usual liaison and consonant-tensing rules apply, so for example, yesun-yeoseot (sixty-six) is pronounced like (yesun-nyeoseot) and chil-sip (seventy) is pronounced like chil-ssip. Constant suffixes used in Sino-Korean ordinal numerals Beon (), ho (), cha (), and hoe () are always used with Sino-Korean or Arabic ordinal numerals. For example, Yihoseon () is Line Number Two in a metropolitan subway system. Samsipchilbeongukdo () is highway number 37. They cannot be used interchangeably. is 'Apt #906' in a mailing address. 906 without ho () is not used in spoken Korean to imply apartment number or office suite number. The special prefix je () is usually used in combination with suffixes to designate a specific event in sequential things such as the Olympics. Substitution for disambiguation In commerce or the financial sector, some Hanja for each Sino-Korean numbers are replaced by alternative ones to prevent ambiguity or retouching. For verbally communicating number sequences such as phone numbers, ID numbers, etc., especially over the phone, native Korean numbers for 1 and 2 are sometimes substituted for the Sino-Korean numbers. For example, o-o-o hana-dul-hana-dul () instead of o-o-o il-i-il-i () for '555-1212', or sa-o-i-hana () instead of sa-o-i-il () for '4521', because of the potential confusion between the two similar-sounding Sino-Korean numbers. 
For the same reason, military transmissions are known to use mixed native Korean and Sino-Korean numerals: Notes Note 1: Korean assimilation rules apply as if the underlying form were |sip.ryuk|, giving sim-nyuk instead of the expected sib-yuk. Note 2: These names are considered archaic, and are not used. Note 3: The numbers higher than 1020 (hae) are not usually used. Note 4: The names for these numbers are from Buddhist texts; they are not usually used. Dictionaries sometimes disagree on which numbers the names represent. References J.J. Song The Korean language: Structure, Use and Context (2005 Routledge) pp. 81ff. See also Korean language Korean count word Korean language Numerals
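As an illustration of the place-value construction described above, here is a minimal Python sketch (not from the source; it uses a simplified romanization and ignores the sound-change and tensing rules noted in the article, such as sib-o versus sip-o):

# Illustrative sketch: romanized Sino-Korean readings for 1-9999,
# following the multiplicative-additive construction described above.
SINO_DIGITS = ["", "il", "i", "sam", "sa", "o", "yuk", "chil", "pal", "gu"]
UNITS = ["", "sip", "baek", "cheon"]  # 10, 100, 1000; larger numbers group by 10,000 (man)

def sino_korean(n: int) -> str:
    if not 1 <= n <= 9999:
        raise ValueError("sketch handles 1-9999 only")
    parts = []
    for place in range(3, -1, -1):
        digit = (n // 10 ** place) % 10
        if digit == 0:
            continue  # zeros are implied, not spoken
        if digit == 1 and place > 0:
            parts.append(UNITS[place])  # "one" is usually dropped: 15 is sip-o, not il-sip-o
        else:
            parts.append(SINO_DIGITS[digit] + UNITS[place])
    return "-".join(parts)

# sino_korean(35) -> "samsip-o"; sino_korean(20) -> "isip"; sino_korean(1999) -> "cheon-gubaek-gusip-gu"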
Korean numerals
[ "Mathematics" ]
1,997
[ "Numeral systems", "Numerals" ]
994,887
https://en.wikipedia.org/wiki/Petroleum%20industry
The petroleum industry, also known as the oil industry, includes the global processes of exploration, extraction, refining, transportation (often by oil tankers and pipelines), and marketing of petroleum products. The largest volume products of the industry are fuel oil and gasoline (petrol). Petroleum is also the raw material for many chemical products, including pharmaceuticals, solvents, fertilizers, pesticides, synthetic fragrances, and plastics. The industry is usually divided into three major components: upstream, midstream, and downstream. Upstream regards exploration and extraction of crude oil, midstream encompasses transportation and storage of crude, and downstream concerns refining crude oil into various end products. Petroleum is vital to many industries, and is necessary for the maintenance of industrial civilization in its current configuration, making it a critical concern for many nations. Oil accounts for a large percentage of the world's energy consumption, ranging from a low of 32% for Europe and Asia, to a high of 53% for the Middle East. Other geographic regions' consumption patterns are as follows: South and Central America (44%), Africa (41%), and North America (40%). The world consumes 36 billion barrels (5.8 km3) of oil per year, with developed nations being the largest consumers. The United States consumed 18% of the oil produced in 2015. The production, distribution, refining, and retailing of petroleum taken as a whole represents the world's largest industry in terms of dollar value. History Prehistory Petroleum is a naturally occurring liquid found in rock formations. It consists of a complex mixture of hydrocarbons of various molecular weights, plus other organic compounds. It is generally accepted that oil is formed mostly from the carbon rich remains of ancient plankton after exposure to heat and pressure in Earth's crust over hundreds of millions of years. Over time, the decayed residue was covered by layers of mud and silt, sinking further down into Earth's crust and preserved there between hot and pressured layers, gradually transforming into oil reservoirs. Early history Petroleum in an unrefined state has been utilized by humans for over 5000 years. Oil in general has been used since early human history to keep fires ablaze and in warfare. Its importance to the world economy however, evolved slowly, with whale oil being used for lighting in the 19th century and wood and coal used for heating and cooking well into the 20th century. Even though the Industrial Revolution generated an increasing need for energy, this was initially met mainly by coal, and from other sources including whale oil. However, when it was discovered that kerosene could be extracted from crude oil and used as a lighting and heating fuel, the demand for petroleum increased greatly, and by the early twentieth century had become the most valuable commodity traded on world markets. Modern history Imperial Russia produced 3,500 tons of oil in 1825 and doubled its output by mid-century. After oil drilling began in the region of present-day Azerbaijan in 1846, in Baku, the Russian Empire built two large pipelines: the 833 km long pipeline to transport oil from the Caspian to the Black Sea port of Batum (Baku-Batum pipeline), completed in 1906, and the 162 km long pipeline to carry oil from Chechnya to the Caspian. The first drilled oil wells in Baku were built in 1871–1872 by Ivan Mirzoev, an Armenian businessman who is referred to as one of the 'founding fathers' of Baku's oil industry. 
At the turn of the 20th century, Imperial Russia's output of oil, almost entirely from the Apsheron Peninsula, accounted for half of the world's production and dominated international markets. Nearly 200 small refineries operated in the suburbs of Baku by 1884. As a side effect of these early developments, the Apsheron Peninsula emerged as the world's "oldest legacy of oil pollution and environmental negligence". In 1846 Baku (Bibi-Heybat settlement) featured the first ever well drilled with percussion tools to a depth of 21 meters for oil exploration. In 1878 Ludvig Nobel and his Branobel company "revolutionized oil transport" by commissioning the first oil tanker and launching it on the Caspian Sea. Samuel Kier established America's first oil refinery in Pittsburgh on Seventh avenue near Grant Street in 1853. Ignacy Łukasiewicz built one of the first modern oil-refineries near Jasło (then in the Austrian dependent Kingdom of Galicia and Lodomeria in Central European Galicia), present-day Poland, in 1854–56. Galician refineries were initially small, as demand for refined fuel was limited. The refined products were used in artificial asphalt, machine oil and lubricants, in addition to Łukasiewicz's kerosene lamp. As kerosene lamps gained popularity, the refining industry grew in the area. The first commercial oil-well in Canada became operational in 1858 at Oil Springs, Ontario (then Canada West). Businessman James Miller Williams dug several wells between 1855 and 1858 before discovering a rich reserve of oil four metres below ground. Williams extracted 1.5 million litres of crude oil by 1860, refining much of it into kerosene-lamp oil. Some historians challenge Canada's claim to North America's first oil field, arguing that Pennsylvania's famous Drake Well was the continent's first. But there is evidence to support Williams, not least of which is that the Drake well did not come into production until August 28, 1859. The controversial point might be that Williams found oil above bedrock while Edwin Drake's well located oil within a bedrock reservoir. The discovery at Oil Springs touched off an oil boom which brought hundreds of speculators and workers to the area. Canada's first gusher (flowing well) erupted on January 16, 1862, when local oil-man John Shaw struck oil at 158 feet (48 m). For a week the oil gushed unchecked at levels reported as high as 3,000 barrels per day. The first modern oil-drilling in the United States began in West Virginia and Pennsylvania in the 1850s. Edwin Drake's 1859 well near Titusville, Pennsylvania, typically considered the first true modern oil well, touched off a major boom. In the first quarter of the 20th century, the United States overtook Russia as the world's largest oil producer. By the 1920s, oil fields had been established in many countries including Canada, Poland, Sweden, Ukraine, the United States, Peru and Venezuela. The first successful oil tanker, the Zoroaster, was built in 1878 in Sweden, designed by Ludvig Nobel. It operated from Baku to Astrakhan. A number of new tanker designs developed in the 1880s. In the early 1930s the Texas Company developed the first mobile steel barges for drilling in the brackish coastal areas of the Gulf of Mexico. In 1937 Pure Oil Company (now part of Chevron Corporation) and its partner Superior Oil Company (now part of ExxonMobil Corporation) used a fixed platform to develop a field in of water, one mile (1.6 km) offshore of Calcasieu Parish, Louisiana. 
In early 1947 Superior Oil erected a drilling/production oil-platform in of water some 18 miles off Vermilion Parish, Louisiana. Kerr-McGee Oil Industries, as operator for partners Phillips Petroleum (ConocoPhillips) and Stanolind Oil & Gas (BP), completed its historic Ship Shoal Block 32 well in November 1947, months before Superior actually drilled a discovery from their Vermilion platform farther offshore. In any case, that made Kerr-McGee's Gulf of Mexico well, Kermac No. 16, the first oil discovery drilled out of sight of land. Forty-four Gulf of Mexico exploratory wells discovered 11 oil and natural gas fields by the end of 1949. During World War II (1939–1945) control of oil supply from Romania, Baku, the Middle East and the Dutch East Indies played a huge role in the events of the war and the ultimate victory of the Allies. The Anglo-Soviet invasion of Iran (1941) secured Allied control of oil-production in the Middle East. The expansion of Imperial Japan to the south aimed largely at accessing the oil-fields of the Dutch East Indies. Germany, cut off from sea-borne oil supplies by Allied blockade, failed in Operation Edelweiss to secure the Caucasus oil-fields for the Axis military in 1942, while Romania deprived the Wehrmacht of access to Ploesti oilfields – the largest in Europe – from August 1944. Cutting off the East Indies oil-supply (especially via submarine campaigns) considerably weakened Japan in the latter part of the war. After World War II ended in 1945, the countries of the Middle East took the lead in oil production from the United States. Important developments since World War II include deep-water drilling, the introduction of the drillship, and the growth of a global shipping network for petroleum – relying upon oil tankers and pipelines. In 1949 the first offshore oil-drilling at Oil Rocks (Neft Dashlari) in the Caspian Sea off Azerbaijan eventually resulted in a city built on pylons. In the 1960s and 1970s, multi-governmental organizations of oil–producing nations – OPEC and OAPEC – played a major role in setting petroleum prices and policy. Oil spills and their cleanup have become an issue of increasing political, environmental, and economic importance. New fields of hydrocarbon production developed in places such as Siberia, Sakhalin, Venezuela and North and West Africa. With the advent of hydraulic fracturing and other horizontal drilling techniques, shale play has seen an enormous uptick in production. Areas of shale such as the Permian Basin and Eagle-Ford have become huge hotbeds of production for the largest oil corporations in the United States. Structure The American Petroleum Institute divides the petroleum industry into five sectors: upstream (exploration, development and production of crude oil or natural gas) downstream (oil tankers, refiners, retailers and consumers) pipeline marine service and supply Upstream Oil companies used to be classified by sales as "supermajors" (BP, Chevron, ExxonMobil, ConocoPhillips, Shell, Eni and TotalEnergies), "majors", and "independents" or "jobbers". In recent years however, National Oil Companies (NOC, as opposed to IOC, International Oil Companies) have come to control the rights over the largest oil reserves; by this measure the top ten companies all are NOC. The following table shows the ten largest national oil companies ranked by reserves and by production in 2012. Most upstream work in the oil field or on an oil well is contracted out to drilling contractors and oil field service companies. 
Aside from the NOCs which dominate the Upstream sector, there are many international companies that have a market share. For example: BG Group BHP ConocoPhillips Chevron Eni ExxonMobil First Texas Energy Corporation Hess Marathon Oil OMV TotalEnergies Tullow Oil Rosneft Midstream Midstream operations are sometimes classified within the downstream sector, but these operations compose a separate and discrete sector of the petroleum industry. Midstream operations and processes include the following: Gathering: The gathering process employs narrow, low-pressure pipelines to connect oil- and gas-producing wells to larger, long-haul pipelines or processing facilities. Processing/refining: Processing and refining operations turn crude oil and gas into marketable products. In the case of crude oil, these products include heating oil, gasoline for use in vehicles, jet fuel, and diesel oil. Oil refining processes include distillation, vacuum distillation, catalytic reforming, catalytic cracking, alkylation, isomerization and hydrotreating. Natural gas processing includes compression; glycol dehydration; amine treating; separating the product into pipeline-quality natural gas and a stream of mixed natural gas liquids; and fractionation, which separates the stream of mixed natural gas liquids into its components. The fractionation process yields ethane, propane, butane, isobutane, and natural gasoline. Transportation: Oil and gas are transported to processing facilities, and from there to end users, by pipeline, tanker/barge, truck, and rail. Pipelines are the most economical transportation method and are most suited to movement across longer distances, for example, across continents. Tankers and barges are also employed for long-distance, often international transport. Rail and truck can also be used for longer distances but are most cost-effective for shorter routes. Storage: Midstream service providers provide storage facilities at terminals throughout the oil and gas distribution systems. These facilities are most often located near refining and processing facilities and are connected to pipeline systems to facilitate shipment when product demand must be met. While petroleum products are held in storage tanks, natural gas tends to be stored in underground facilities, such as salt dome caverns and depleted reservoirs. Technological applications: Midstream service providers apply technological solutions to improve efficiency during midstream processes. Technology can be used during compression of fuels to ease flow through pipelines; to better detect leaks in pipelines; and to automate communications for better pipeline and equipment monitoring. While some upstream companies carry out certain midstream operations, the midstream sector is dominated by a number of companies that specialize in these services. Midstream companies include: Aux Sable Bridger Group DCP Midstream Partners Enbridge Energy Partners Enterprise Products Partners Genesis Energy Gibson Energy Inergy Midstream Kinder Morgan Energy Partners Oneok Partners Plains All American Sunoco Logistics Targa Midstream Services Targray Natural Gas Liquids TransCanada Williams Companies Social impact The oil and gas industry spends only 0.4% of its net sales on research & development (R&D) which is in comparison with a range of other industries the lowest share. 
Governments such as the United States government provide a heavy public subsidy to petroleum companies, with major tax breaks at various stages of oil exploration and extraction, including the costs of oil field leases and drilling equipment. In recent years, enhanced oil recovery techniques – most notably multi-stage drilling and hydraulic fracturing ("fracking") – have moved to the forefront of the industry as this new technology plays a crucial and controversial role in new methods of oil extraction. Environmental impact Water pollution Some petroleum industry operations have been responsible for water pollution through by-products of refining and oil spills. Though hydraulic fracturing has significantly increased natural gas extraction, there is some belief and evidence to support that consumable water has seen increased in methane contamination due to this gas extraction. Leaks from underground tanks and abandoned refineries may also contaminate groundwater in surrounding areas. Hydrocarbons that comprise refined petroleum are resistant to biodegradation and have been found to remain present in contaminated soils for years. To hasten this process, bioremediation of petroleum hydrocarbon pollutants is often employed by means of aerobic degradation. More recently, other bioremediative methods have been explored such as phytoremediation and thermal remediation. Air pollution The industry is the largest industrial source of emissions of volatile organic compounds (VOCs), a group of chemicals that contribute to the formation of ground-level ozone (smog). The combustion of fossil fuels produces greenhouse gases and other air pollutants as by-products. Pollutants include nitrogen oxides, sulphur dioxide, volatile organic compounds and heavy metals. Researchers have discovered that the petrochemical industry can produce ground-level ozone pollution at higher amounts in winter than in summer. Climate change Greenhouse gases caused by burning fossil fuels drive climate change. In 1959, at a symposium organised by the American Petroleum Institute for the centennial of the American oil industry, the physicist Edward Teller warned of the danger of global climate change. Edward Teller explained that carbon dioxide "in the atmosphere causes a greenhouse effect" and that burning more fossil fuels could "melt the icecaps and submerge New York". The Intergovernmental Panel on Climate Change, founded by the United Nations in 1988, concludes that human-sourced greenhouse gases are responsible for most of the observed temperature increase since the middle of the twentieth century. As a result of climate change concerns, many people have begun using other methods of energy such as solar and wind. This recent shift has some petroleum enthusiasts skeptical about the future of the industry. See also References Further reading Nevins, Alan. John D. Rockefeller The Heroic Age Of American Enterprise (1940); 710pp; favorable scholarly biography; online Ordons Oil & Gas Information & News Robert Sobel The Money Manias: The Eras of Great Speculation in America, 1770–1970 (1973) reprinted (2000). Daniel Yergin, The Prize: The Epic Quest for Oil, Money, and Power, (Simon and Schuster 1991; paperback, 1993), . Matthew R. Simmons, Twilight in the Desert: The Coming Saudi Oil Shock and the World Economy, John Wiley & Sons, 2005, . Matthew Yeomans, Oil: Anatomy of an Industry (New Press, 2004), . 
Smith, GO (1920): Where the World Gets Its Oil: National Geographic, February 1920, pp 181–202 Marius Vassiliou, Historical Dictionary of the Petroleum Industry, 2nd Ed.. Lanham, MD: Rowman & Littlefield, 2018, 621 pp. . Miryusif Mirbabayev, Concise History of Azerbaijani Oil. Baku, Azerneshr, (2008), 340pp. Miryusif Mirbabayev, "Brief history of the first drilled oil well; and the people involved". Oil-Industry History (USA), 2017, v. 18, #1, pp. 25–34. James Douet, The Heritage of the Oil Industry TICCIH Thematic Study , The International Committee for the Conservation of the Industrial Heritage, 2020, 79pp. External links Mir-Yusif Mir-Babayev: Petroleum History. The first Baku oil magazine Mir-Yusif Mir-Babayev: The construction of unique pipeline in the Trans-Caucasus Mir-Yusif Mir-Babayev: Brief history of oil and gas production Fossil fuels Industries (economics)
Petroleum industry
[ "Chemistry" ]
3,735
[ "Chemical process engineering", "Petroleum", "Petroleum industry" ]
995,019
https://en.wikipedia.org/wiki/Mass%20gap
In quantum field theory, the mass gap is the difference in energy between the lowest energy state, the vacuum, and the next lowest energy state. The energy of the vacuum is zero by definition, and assuming that all energy states can be thought of as particles in plane-waves, the mass gap is the mass of the lightest particle. Since the energies of exact (i.e. nonperturbative) energy eigenstates are spread out and therefore are not technically eigenstates, a more precise definition is that the mass gap is the greatest lower bound of the energy of any state which is orthogonal to the vacuum. The analog of a mass gap in many-body physics on a discrete lattice arises from a gapped Hamiltonian. Mathematical definitions For a given real-valued quantum field \phi(x), where x = (t, \vec{x}), we can say that the theory has a mass gap if the two-point function has the property \langle\phi(0,t)\,\phi(0,0)\rangle \sim \sum_n A_n \exp(-\Delta_n t), with \Delta_0 > 0 being the lowest energy value in the spectrum of the Hamiltonian and thus the mass gap. This quantity, easy to generalize to other fields, is what is generally measured in lattice computations. It was proved in this way that Yang–Mills theory develops a mass gap on a lattice. The corresponding time-ordered value, the propagator, will have the property \lim_{p \to 0} \Delta(p) = \mathrm{const}, with the constant being finite. A typical example is offered by a free massive particle and, in this case, the constant has the value 1/m^2. In the same limit, the propagator for a massless particle is singular. Examples from classical theories An example of mass gap arising for massless theories, already at the classical level, can be seen in spontaneous breaking of symmetry or the Higgs mechanism. In the former case, one has to cope with the appearance of massless excitations, Goldstone bosons, that are removed in the latter case due to gauge freedom. Quantization preserves this gauge freedom property. A quartic massless scalar field theory develops a mass gap already at the classical level. Consider the equation \Box\phi + \lambda\phi^3 = 0. This equation has the exact solution \phi(x) = \mu\,(2/\lambda)^{1/4} \operatorname{sn}(p \cdot x + \theta, i) —where \mu and \theta are integration constants, and sn is a Jacobi elliptic function—provided p^2 = \mu^2 \sqrt{\lambda/2}. At the classical level, a mass gap appears while, at the quantum level, one has a tower of excitations, and this property of the theory is preserved after quantization in the limit of momenta going to zero. Yang–Mills theory While lattice computations have suggested that Yang–Mills theory indeed has a mass gap and a tower of excitations, a theoretical proof is still missing. This is one of the Clay Institute Millennium problems and it remains an open problem. Such states for Yang–Mills theory should be physical states, named glueballs, and should be observable in the laboratory. Källén–Lehmann representation If the Källén–Lehmann spectral representation holds (at this stage we exclude gauge theories), the spectral density function can take a very simple form, with a discrete spectrum starting with a mass gap: \rho(\mu^2) = Z\,\delta(\mu^2 - m^2) + \rho_c(\mu^2), with \rho_c(\mu^2) being the contribution from the multi-particle part of the spectrum. In this case, the propagator will take the simple form \Delta(p) \simeq \frac{Z}{p^2 - m^2 + i\varepsilon} + \int_{\mu_0^2}^{\infty} d\mu^2\, \frac{\rho_c(\mu^2)}{p^2 - \mu^2 + i\varepsilon}, with \mu_0^2 being approximately the starting point of the multi-particle sector. Now, using the fact that \int_0^\infty d\mu^2\, \rho(\mu^2) = 1, we arrive at the following conclusion for the constants in the spectral density: Z + \int_{\mu_0^2}^{\infty} d\mu^2\, \rho_c(\mu^2) = 1. This could not be true in a gauge theory. Rather it must be proved that a Källén–Lehmann representation for the propagator holds also for this case. Absence of multi-particle contributions implies that the theory is trivial, as no bound states appear in the theory and so there is no interaction, even if the theory has a mass gap. 
In this case we have the propagator immediately, just by setting the multi-particle contribution \rho_c(\mu^2) = 0 in the formulas above. See also Coleman–Mandula theorem Scalar field theory References External links Sadun, Lorenzo. Yang-Mills and the Mass Gap. Video lecture outlining the nature of the mass gap problem within the Yang-Mills formulation. Mass gaps for scalar field theories on Dispersive Wiki Quantum field theory
Mass gap
[ "Physics" ]
812
[ "Quantum field theory", "Quantum mechanics" ]
995,061
https://en.wikipedia.org/wiki/Micromonosporaceae
Micromonosporaceae is a family of bacteria of the class Actinomycetia. They are gram-positive, spore-forming soil organisms that form a true mycelium. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI). Genera incertae sedis: "Solwaraspora" Magarvey et al. 2004 Notes See also Bacterial taxonomy List of bacterial orders List of bacteria genera References Micromonosporaceae Soil biology
Micromonosporaceae
[ "Biology" ]
121
[ "Soil biology" ]
995,062
https://en.wikipedia.org/wiki/Erik%20Demaine
Erik D. Demaine (born February 28, 1981) is a Canadian-American professor of computer science at the Massachusetts Institute of Technology and a former child prodigy. Early life and education Demaine was born in Halifax, Nova Scotia, to mathematician and sculptor Martin L. Demaine and Judy Anderson. From the age of 7, he was identified as a child prodigy and spent time traveling across North America with his father. He was home-schooled during that time span until entering university at the age of 12. Demaine completed his bachelor's degree at 14 years of age at Dalhousie University in Canada, and completed his PhD at the University of Waterloo by the time he was 20 years old. Demaine's PhD dissertation, a work in the field of computational origami, was completed at the University of Waterloo under the supervision of Anna Lubiw and Ian Munro. This work was awarded the Canadian Governor General's Gold Medal from the University of Waterloo and the NSERC Doctoral Prize (2003) for the best PhD thesis and research in Canada. Some of the work from this thesis was later incorporated into his book Geometric Folding Algorithms on the mathematics of paper folding published with Joseph O'Rourke in 2007. Professional accomplishments Demaine joined the faculty of the Massachusetts Institute of Technology (MIT) in 2001 at age 20, reportedly the youngest professor in the history of MIT, and was promoted to full professorship in 2011. Demaine is a member of the Theory of Computation group at MIT Computer Science and Artificial Intelligence Laboratory. Mathematical origami artwork by Erik and Martin Demaine was part of the Design and the Elastic Mind exhibit at the Museum of Modern Art in 2008, and has been included in the MoMA permanent collection. That same year, he was one of the featured artists in Between the Folds, an international documentary film about origami practitioners which was later broadcast on PBS television. In connection with a 2012 exhibit, three of his curved origami artworks with Martin Demaine are in the permanent collection of the Renwick Gallery of the Smithsonian Museum. Demaine was a fan of Martin Gardner and in 2001 he teamed up with his father Martin Demaine and Gathering 4 Gardner founder Tom M. Rodgers to edit a tribute book for Gardner on his 90th birthday. From 2016 to 2020 he was president of the board of directors of Gathering 4 Gardner. Honours and awards In 2003, Demaine was awarded the MacArthur Fellowship, known colloquially as the "genius grant". In 2013, Demaine received the EATCS Presburger Award for young scientists. The award citation listed accomplishments including his work on the carpenter's rule problem, hinged dissection, prefix sum data structures, competitive analysis of binary search trees, graph minors, and computational origami. That same year, he was awarded a fellowship by the John Simon Guggenheim Memorial Foundation. For his work on bidimensionality, he was the winner of the Nerode Prize in 2015 along with his co-authors Fedor Fomin, Mohammad T. Hajiaghayi, and Dimitrios Thilikos. The work was the study of a general technique for developing both fixed-parameter tractable exact algorithms and approximation algorithms for a class of algorithmic problems on graphs. In 2016, he became a fellow at the Association for Computing Machinery. He was given an honorary doctorate by Bard College in 2017. 
See also List of University of Waterloo people References External links Erik Demaine Biography in MIT News Between the Folds Documentary film featuring Erik Demaine and 14 other international origami practitioners 1981 births Living people MacArthur Fellows Canadian computer scientists Theoretical computer scientists Origami artists Researchers in geometric algorithms Recreational mathematicians People from Halifax, Nova Scotia MIT School of Engineering faculty University of Waterloo alumni Dalhousie University alumni Mathematical artists 2016 fellows of the Association for Computing Machinery MIT Computer Science and Artificial Intelligence Laboratory people
Erik Demaine
[ "Mathematics" ]
780
[ "Recreational mathematics", "Recreational mathematicians" ]
995,064
https://en.wikipedia.org/wiki/Automated%20attendant
In telephony, an automated attendant (also auto attendant, auto-attendant, autoattendant, automatic phone menus, AA, or virtual receptionist) allows callers to be automatically transferred to an extension without the intervention of an operator/receptionist. Many AAs will also offer a simple menu system ("for sales, press 1, for service, press 2," etc.). An auto attendant may also allow a caller to reach a live operator by dialing a number, usually "0". Typically the auto attendant is included in a business's phone system such as a PBX, but some services allow businesses to use an AA without such a system. Modern AA services (which now overlap with more complicated interactive voice response or IVR systems) can route calls to mobile phones, VoIP virtual phones, other AAs/IVRs, or other locations using traditional land-line phones or voice message machines. Feature description Telephone callers will recognize an automated attendant system as one that greets calls incoming to an organization with a recorded greeting of the form, "Thank you for calling .... If you know your party's extension, you may dial it any time during this message." Callers who have a touch-tone (DTMF) phone can dial an extension number or, in most cases, wait for operator ("attendant") assistance. Since the telephone network does not transmit the DC signals from rotary dial telephones (except for audible clicks), callers who have rotary dial phones have to wait for assistance. On a purely technical level it could be argued that an automated attendant is a very simple kind of IVR; however, in the telecom industry the terms IVR and auto attendant are generally considered distinct. An automated attendant serves a very specific purpose (replace live operator and route calls), whereas an IVR can perform all sorts of functions (telephone banking, account inquiries, etc.). An AA will often include a directory which will allow a caller to dial by name in order to find a user on a system. There is no standard format to these directories, and they can use combinations of first name, last name, or both. The following lists common routing steps that are components of an automated attendant: Transfer to extension Transfer to voicemail Play message (i.e., "our address is ...") Go to a sub-menu Repeat choices In addition, an automated attendant would be expected to have values for the following: '0' - where to go when the caller dials '0' Timeout - what to do if the caller does nothing (usually go to the same place as '0') Default mailbox - where to send calls if '0' is not answered (or is not pointing to a live person) Background PBXs (private branch exchanges) or PABXs (private automatic branch exchanges) are telephone systems that serve an organization that has many telephone extensions but fewer telephone lines (sometimes called "trunks") that connect that organization to the rest of the global telecommunications network. While persons within an enterprise served by a PBX can call each other by dialing their extension numbers, incoming calls, i.e., calls originating from a telephone not served by the PBX but intended for a party served by the PBX, required assistance from a switchboard operator (also called a "switchboard attendant") or a telephone service called DID ("direct inward dialing"). Direct inward dialing has advantages such as rapid connection to the destination party and disadvantages including cost, lack of identification of the called organization and use of ten-digit telephone numbers. 
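The routing steps and special values ('0', timeout, default mailbox) described above can be pictured as a small menu data structure. The following sketch is purely illustrative: its extension numbers, field names, and the route function are hypothetical and not part of any PBX product or standard.

```python
# Minimal sketch of an automated-attendant menu as a data structure.
# All names (extensions, fields, the dispatch function) are illustrative
# only and are not tied to any real PBX or vendor API.

MENU = {
    "greeting": "Thank you for calling. For sales, press 1; for service, press 2.",
    "keys": {
        "1": ("transfer", "2001"),               # transfer to a sales extension
        "2": ("transfer", "2002"),               # transfer to a service extension
        "3": ("message", "Our address is ..."),  # play a recorded message
        "0": ("operator", None),                 # reach a live operator
    },
    "timeout": ("operator", None),               # caller did nothing: treat like '0'
    "default_mailbox": "general",                # where unanswered '0' calls are sent
}

def route(key_pressed):
    """Return the routing step for a caller's DTMF key press (None means timeout)."""
    if key_pressed is None:
        return MENU["timeout"]
    # Unknown keys simply repeat the menu choices.
    return MENU["keys"].get(key_pressed, ("repeat", MENU["greeting"]))

if __name__ == "__main__":
    print(route("1"))   # ('transfer', '2001')
    print(route(None))  # ('operator', None)
    print(route("9"))   # ('repeat', 'Thank you for calling. ...')
```

A real auto attendant adds audio prompts, sub-menus, and dial-by-name directories on top of this same routing idea.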
Automated attendants provide, among many other things, a way for an external caller to be directed to an extension or department served by a PBX system without using direct inward dialing or without switchboard attendant assistance. History Automated attendants are not part of voicemail systems. Voice messaging (or voicemail or VM) technology has existed since the late 1970s; in the early 1980s companies provided voice-prompting systems that allowed callers to reach (route the call) to an intended party, not necessarily to leave a message. Automated attendant systems are also referred to as automated menu systems and much early work in this field was done by Michael J. Freeman, Ph.D. Time-based routing Many auto attendants will have options to allow for time-of-day routing, as well as weekend and holiday routing. The specifics of these features will depend entirely on the particular automated attendant, but typically there would be a normal greeting and routing steps that would take place during normal business hours, and a different greeting and routing for non-business hours. See also Call avoidance IVR Line hunting Call whisper References Notes Sources "What is a Phone Tree?" Wisegeek Telephony Automation Customer service
Automated attendant
[ "Engineering" ]
993
[ "Control engineering", "Automation" ]
995,169
https://en.wikipedia.org/wiki/Landau%20theory
Landau theory (also known as Ginzburg–Landau theory, despite the confusing name) in physics is a theory that Lev Landau introduced in an attempt to formulate a general theory of continuous (i.e., second-order) phase transitions. It can also be adapted to systems under externally-applied fields, and used as a quantitative model for discontinuous (i.e., first-order) transitions. Although the theory has now been superseded by the renormalization group and scaling theory formulations, it remains an exceptionally broad and powerful framework for phase transitions, and the associated concept of the order parameter as a descriptor of the essential character of the transition has proven transformative. Mean-field formulation (no long-range correlation) Landau was motivated to suggest that the free energy of any system should obey two conditions: Be analytic in the order parameter and its gradients. Obey the symmetry of the Hamiltonian. Given these two conditions, one can write down (in the vicinity of the critical temperature, Tc) a phenomenological expression for the free energy as a Taylor expansion in the order parameter. Second-order transitions Consider a system that breaks some symmetry below a phase transition, which is characterized by an order parameter . This order parameter is a measure of the order before and after a phase transition; the order parameter is often zero above some critical temperature and non-zero below the critical temperature. In a simple ferromagnetic system like the Ising model, the order parameter is characterized by the net magnetization , which becomes spontaneously non-zero below a critical temperature . In Landau theory, one considers a free energy functional that is an analytic function of the order parameter. In many systems with certain symmetries, the free energy will only be a function of even powers of the order parameter, for which it can be expressed as the series expansion In general, there are higher order terms present in the free energy, but it is a reasonable approximation to consider the series to fourth order in the order parameter, as long as the order parameter is small. For the system to be thermodynamically stable (that is, the system does not seek an infinite order parameter to minimize the energy), the coefficient of the highest even power of the order parameter must be positive, so . For simplicity, one can assume that , a constant, near the critical temperature. Furthermore, since changes sign above and below the critical temperature, one can likewise expand , where it is assumed that for the high-temperature phase while for the low-temperature phase, for a transition to occur. With these assumptions, minimizing the free energy with respect to the order parameter requires The solution to the order parameter that satisfies this condition is either , or It is clear that this solution only exists for , otherwise is the only solution. Indeed, is the minimum solution for , but the solution minimizes the free energy for , and thus is a stable phase. Furthermore, the order parameter follows the relation below the critical temperature, indicating a critical exponent for this Landau mean-theory model. The free-energy will vary as a function of temperature given by From the free energy, one can compute the specific heat, which has a finite jump at the critical temperature of size . This finite jump is therefore not associated with a discontinuity that would occur if the system absorbed latent heat, since . 
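The standard mean-field relations discussed in this section can be summarized as follows, in one conventional choice of symbols (an order parameter η and coefficients a₀, b > 0); the symbols are an assumption of this sketch rather than notation fixed by the text.

```latex
% Sketch of the standard second-order Landau expansion (conventional symbols).
F(\eta,T) \simeq F_0 + a(T)\,\eta^2 + b\,\eta^4,
\qquad a(T) = a_0\,(T - T_c), \quad a_0,\, b > 0 .

% Minimizing, \partial F/\partial\eta = 2a\,\eta + 4b\,\eta^3 = 0, gives
\eta = 0
\quad\text{or}\quad
\eta^2 = \frac{a_0\,(T_c - T)}{2b} \;\; (T < T_c),
\qquad\text{so}\qquad
\eta \propto (T_c - T)^{1/2}, \quad \beta = \tfrac12 .

% Substituting back, F = F_0 - a_0^2 (T_c - T)^2 / (4b) below T_c,
% and the specific heat jumps at T_c by
\Delta c \;=\; \frac{a_0^{2}\, T_c}{2b} .
```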
It is also noteworthy that the discontinuity in the specific heat is related to the discontinuity in the second derivative of the free energy, which is characteristic of a second-order phase transition. Furthermore, the fact that the specific heat has no divergence or cusp at the critical point indicates its critical exponent for is . Irreducible representations Landau expanded his theory to consider the restraints that it imposes on the symmetries before and after a transition of second order. They need to comply with a number of requirements: The distorted (or ordered) symmetry needs to be a subgroup of the higher one. The order parameter that embodies the distortion needs to transform as a single irreducible representation (irrep) of the parent symmetry. The irrep should not contain a third-order invariant. If the irrep allows for more than one fourth-order invariant, the resulting symmetry minimizes a linear combination of these invariants. In the latter case more than one daughter structure should be reachable through a continuous transition. Good examples of this are the structure of MnP (space group Cmca) and the low-temperature structure of NbS (space group P63mc). They are both daughters of the NiAs structure and their distortions transform according to the same irrep of that space group. Applied fields In many systems, one can consider a perturbing field that couples linearly to the order parameter. For example, in the case of a classical dipole moment , the energy of the dipole-field system is . In the general case, one can assume an energy shift of due to the coupling of the order parameter to the applied field , and the Landau free energy will change as a result: In this case, the minimization condition is One immediate consequence of this equation and its solution is that, if the applied field is non-zero, then the magnetization is non-zero at any temperature. This implies there is no longer a spontaneous symmetry breaking that occurs at any temperature. Furthermore, some interesting thermodynamic and universal quantities can be obtained from the above condition. For example, at the critical temperature where , one can find the dependence of the order parameter on the external field: indicating a critical exponent . Furthermore, from the above condition, it is possible to find the zero-field susceptibility , which must satisfy In this case, recalling in the zero-field case that at low temperatures, while for temperatures above the critical temperature, the zero-field susceptibility therefore has the following temperature dependence: which is reminiscent of the Curie-Weiss law for the temperature dependence of magnetic susceptibility in magnetic materials, and yields the mean-field critical exponent . It is noteworthy that although the critical exponents so obtained are incorrect for many models and systems, they correctly satisfy various exponent equalities such as the Rushbrooke equality: . First-order transitions Landau theory can also be used to study first-order transitions. There are two different formulations, depending on whether or not the system is symmetric under a change in sign of the order parameter. I. Symmetric Case Here we consider the case where the system has a symmetry and the energy is invariant when the order parameter changes sign. A first-order transition will arise if the quartic term in is negative.
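Returning briefly to the applied-field case discussed above, a minimal sketch of the standard mean-field relations, with the same assumed conventional symbols as before and a field h coupling linearly as −ηh:

```latex
% Sketch of the applied-field case (assumed conventional symbols).
F(\eta,T,h) \simeq F_0 + a(T)\,\eta^2 + b\,\eta^4 - \eta h,
\qquad
\frac{\partial F}{\partial\eta} = 2a\,\eta + 4b\,\eta^{3} - h = 0 .

% On the critical isotherm, a(T_c) = 0, so
h = 4b\,\eta^{3}
\;\Rightarrow\;
\eta \propto h^{1/3}, \qquad \delta = 3 .

% Differentiating the minimization condition gives the zero-field susceptibility
\chi = \left.\frac{\partial\eta}{\partial h}\right|_{h \to 0}
     = \frac{1}{2a + 12b\,\eta^{2}}
     = \begin{cases}
         \dfrac{1}{2a_0\,(T - T_c)}, & T > T_c, \\[2ex]
         \dfrac{1}{4a_0\,(T_c - T)}, & T < T_c,
       \end{cases}
\qquad \gamma = 1 ,

% consistent with the Rushbrooke equality
\alpha + 2\beta + \gamma = 0 + 2\cdot\tfrac12 + 1 = 2 .
```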
To ensure that the free energy remains positive at large , one must carry the free-energy expansion to sixth-order, where , and is some temperature at which changes sign. We denote this temperature by and not , since it will emerge below that it is not the temperature of the first-order transition, and since there is no critical point, the notion of a "critical temperature" is misleading to begin with. and are positive coefficients. We analyze this free energy functional as follows: (i) For , the and terms are concave upward for all , while the term is concave downward. Thus for sufficiently high temperatures is concave upward for all , and the equilibrium solution is . (ii) For , both the and terms are negative, so is a local maximum, and the minimum of is at some non-zero value , with . (iii) For just above , turns into a local minimum, but the minimum at continues to be the global minimum since it has a lower free energy. It follows that as the temperature is raised above , the global minimum cannot continuously evolve from to 0. Rather, at some intermediate temperature , the minima at and must become degenerate. For , the global minimum will jump discontinuously from to 0. To find , we demand that free energy be zero at (just like the solution), and furthermore that this point should be a local minimum. These two conditions yield two equations, which are satisfied when . The same equations also imply that . That is, From this analysis both points made above can be seen explicitly. First, the order parameter suffers a discontinuous jump from to 0. Second, the transition temperature is not the same as the temperature where vanishes. At temperatures below the transition temperature, , the order parameter is given by which is plotted to the right. This shows the clear discontinuity associated with the order parameter as a function of the temperature. To further demonstrate that the transition is first-order, one can show that the free energy for this order parameter is continuous at the transition temperature , but its first derivative (the entropy) suffers from a discontinuity, reflecting the existence of a non-zero latent heat. II. Nonsymmetric Case Next we consider the case where the system does not have a symmetry. In this case there is no reason to keep only even powers of in the expansion of , and a cubic term must be allowed (The linear term can always be eliminated by a shift + constant.) We thus consider a free energy functional Once again , and are all positive. The sign of the cubic term can always be chosen to be negative as we have done by reversing the sign of if necessary. We analyze this free energy functional as follows: (i) For , we have a local maximum at , and since the free energy is bounded below, there must be two local minima at nonzero values and . The cubic term ensures that is the global minimum since it is deeper. (ii) For just above , the minimum at disappears, the maximum at turns into a local minimum, but the minimum at persists and continues to be the global minimum. As the temperature is further raised, rises until it equals zero at some temperature . At we get a discontinuous jump in the global minimum from to 0. (The minima cannot coalesce for that would require the first three derivatives of to vanish at .) To find , we demand that free energy be zero at (just like the solution), and furthermore that this point should be a local minimum. These two conditions yield two equations, which are satisfied when . The same equations also imply that . 
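The two first-order cases just described can be summarized in one conventional choice of symbols (b, c > 0 for the symmetric sixth-order expansion, w, u > 0 for the cubic case); these symbols, and the label T* for the transition temperature, are assumptions of this sketch.

```latex
% Symmetric case: F = a(T)\eta^2 - b\eta^4 + c\eta^6, with a(T) = a_0 (T - T_0).
% Requiring F(\eta_0) = 0 and F'(\eta_0) = 0 simultaneously gives
\eta_0^{2} = \frac{b}{2c},
\qquad
a(T_*) = \frac{b^{2}}{4c}
\;\Rightarrow\;
T_* = T_0 + \frac{b^{2}}{4\,a_0 c} \;>\; T_0 .

% Nonsymmetric case: F = a(T)\eta^2 - w\eta^3 + u\eta^4.
% The same two conditions give
\eta_0 = \frac{w}{2u},
\qquad
a(T_*) = \frac{w^{2}}{4u} .
```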
That is, As in the symmetric case the order parameter suffers a discontinuous jump from to 0. Second, the transition temperature is not the same as the temperature where vanishes. Applications It was known experimentally that the liquid–gas coexistence curve and the ferromagnet magnetization curve both exhibited a scaling relation of the form , where was mysteriously the same for both systems. This is the phenomenon of universality. It was also known that simple liquid–gas models are exactly mappable to simple magnetic models, which implied that the two systems possess the same symmetries. It then followed from Landau theory why these two apparently disparate systems should have the same critical exponents, despite having different microscopic parameters. It is now known that the phenomenon of universality arises for other reasons (see Renormalization group). In fact, Landau theory predicts the incorrect critical exponents for the Ising and liquid–gas systems. The great virtue of Landau theory is that it makes specific predictions for what kind of non-analytic behavior one should see when the underlying free energy is analytic. Then, all the non-analyticity at the critical point, the critical exponents, are because the equilibrium value of the order parameter changes non-analytically, as a square root, whenever the free energy loses its unique minimum. The extension of Landau theory to include fluctuations in the order parameter shows that Landau theory is only strictly valid near the critical points of ordinary systems with spatial dimensions higher than 4. This is the upper critical dimension, and it can be much higher than four in more finely tuned phase transitions. In Mukamel's analysis of the isotropic Lifshitz point, the critical dimension is 8. This is because Landau theory is a mean field theory, and does not include long-range correlations. This theory does not explain non-analyticity at the critical point, but when applied to the superfluid and superconductor phase transitions, Landau's theory provided inspiration for another theory, the Ginzburg–Landau theory of superconductivity. Including long-range correlations Consider the Ising model free energy above. Assume that the order parameter and external magnetic field, , may have spatial variations. Now, the free energy of the system can be assumed to take the following modified form: where is the total spatial dimensionality. So, Assume that, for a localized external magnetic perturbation , the order parameter takes the form . Then, That is, the fluctuation in the order parameter corresponds to the order-order correlation. Hence, neglecting this fluctuation (like in the earlier mean-field approach) corresponds to neglecting the order-order correlation, which diverges near the critical point. One can also solve for , from which the scaling exponent, , for correlation length can be deduced. From these, the Ginzburg criterion for the upper critical dimension for the validity of the Ising mean-field Landau theory (the one without long-range correlation) can be calculated as: In our current Ising model, mean-field Landau theory gives and so, it (the Ising mean-field Landau theory) is valid only for spatial dimensionality greater than or equal to 4 (at the marginal values of , there are small corrections to the exponents). This modified version of mean-field Landau theory is sometimes also referred to as the Landau–Ginzburg theory of Ising phase transitions.
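A sketch of the gradient-extended functional and the resulting correlation length discussed above, again with assumed conventional symbols (a stiffness coefficient κ > 0):

```latex
% Landau–Ginzburg functional with spatial variation (assumed symbols).
F[\eta] = \int d^{d}x\,
  \Big[\, a(T)\,\eta^{2} + b\,\eta^{4}
        + \kappa\,(\nabla\eta)^{2} - h(\mathbf{x})\,\eta(\mathbf{x}) \Big] .

% Linear response above T_c: (2a - 2\kappa\nabla^{2})\,\eta = h,
% so in Fourier space
\eta(\mathbf{q}) = \frac{h(\mathbf{q})}{2a + 2\kappa q^{2}},
\qquad
\xi = \sqrt{\kappa / a} \;\propto\; |T - T_c|^{-1/2}, \quad \nu = \tfrac12 .

% Fluctuations are negligible (Ginzburg criterion) only for d \ge 4,
% the upper critical dimension of this quartic theory.
```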
As a clarification, there is also a Ginzburg–Landau theory specific to the superconducting phase transition, which also includes fluctuations. See also Ginzburg–Landau theory Landau–de Gennes theory Ginzburg criterion Stuart–Landau equation Footnotes Further reading Landau, L. D., Collected Papers (Nauka, Moscow, 1969) Michael C. Cross, Landau theory of second order phase transitions (Caltech statistical mechanics lecture notes). Yukhnovskii, I. R., Phase Transitions of the Second Order – Collective Variables Method, World Scientific, 1987, Statistical mechanics Phase transitions Lev Landau
Landau theory
[ "Physics", "Chemistry" ]
2,956
[ "Physical phenomena", "Phase transitions", "Phases of matter", "Critical phenomena", "Statistical mechanics", "Matter" ]
995,197
https://en.wikipedia.org/wiki/Promethazine
Promethazine, sold under the brand name Phenergan among others, is a first-generation antihistamine, sedative, and antiemetic used to treat allergies, insomnia, and nausea. It may also help with some symptoms associated with the common cold and may also be used for sedating people who are agitated or anxious, an effect that has led to some recreational use (especially with codeine). Promethazine is taken by mouth (oral), as a rectal suppository, or by injection into a muscle (IM). Common side effects of promethazine include confusion and sleepiness; consumption of alcohol or other sedatives can make these symptoms worse. It is unclear if use of promethazine during pregnancy or breastfeeding is safe for the fetus. Use of promethazine is not recommended in those less than two years old, due to potentially negative effects on breathing. Use of promethazine by injection into a vein is not recommended, due to potential skin damage. Promethazine is in the phenothiazine family of medications. It is also a strong anticholinergic, which produces its sedative effects. This also means high or toxic doses can act as a deliriant. Promethazine was made in the 1940s by a team of scientists from Rhône-Poulenc laboratories. It was approved for medical use in the United States in 1951. It is a generic medication and is available under many brand names globally. In 2022, it was the 198th most commonly prescribed medication in the United States, with more than 2million prescriptions. In 2022, the combination with dextromethorphan was the 260th most commonly prescribed medication in the United States, with more than 1million prescriptions. Medical uses Promethazine has a variety of medical uses, including: Sedation In Germany, it is approved for the treatment of agitation and agitation associated with underlying psychiatric disorders with a maximum daily dose of 200 mg. For nausea and vomiting associated with anesthesia or chemotherapy. It is commonly used postoperatively as an antiemetic. The antiemetic activity increases with increased dosing; however, side effects also increase, which often limits maximal dosing. For moderate to severe morning sickness and hyperemesis gravidarum: In the UK, Promethazine is the drug of first choice. Promethazine is preferred during pregnancy because it is an older drug and there is more data regarding the use of it during pregnancy. Second-choice medications, which are used if Promethazine isn't tolerated or the patient cannot take it, are metoclopramide or prochlorperazine. For allergies such as hay fever and together with other medications in anaphylaxis To aid with symptoms of the common cold Motion sickness, including space sickness Hemolytic disease of the newborn Anxiety before surgery Short-term insomnia Side effects Some documented side effects include: Tardive dyskinesia, pseudoparkinsonism, acute dystonia (effects due to dopamine D2 receptor antagonism) Confusion in the elderly Drowsiness, dizziness, fatigue, more rarely vertigo Known to have effects on serotonin and dopamine receptors. Dry mouth Nausea Respiratory depression in patients under the age of two and those with severely compromised pulmonary function Blurred vision, xerostomia, dry nasal passages, dilated pupils, constipation, and urinary retention. 
(due to its anticholinergic effects) Chest discomfort/pressure (In children less than 2 years old) Akathisia Less frequent: Cardiovascular side effects to include arrhythmias and hypotension Neuroleptic malignant syndrome Liver damage and cholestatic jaundice Bone marrow suppression, potentially resulting in agranulocytosis, thrombocytopenia, and leukopenia Depression of the thermoregulatory mechanism resulting in hypothermia/hyperthermia Rare side effects include: Seizures Because of the potential for more severe side effects, this drug is on the list to avoid in the elderly. In many countries (including the US and UK), promethazine is contraindicated in children less than two years of age, and strongly cautioned against in children between two and six, due to problems with respiratory depression and sleep apnea. Promethazine is listed as one of the drugs with the highest anticholinergic activity in a study of anticholinergic burden, including long-term cognitive impairment. Overdose Promethazine in overdose can produce signs and symptoms including CNS depression, hypotension, respiratory depression, unconsciousness, and sudden death. Other reactions may include hyperreflexia, hypertonia, ataxia, athetosis, and extensor-plantar reflexes. Atypically and/or rarely, stimulation, convulsions, hyperexcitability, and nightmares may occur. Anticholinergic effects like dry mouth, dilated pupils, flushing, gastrointestinal symptoms, and delirium may occur as well. Treatment of overdose is supportive and based on symptoms. Pharmacology Promethazine, a phenothiazine derivative, is structurally different from the neuroleptic phenothiazines, with similar but different effects. Despite structural differences, promethazine exhibits a strikingly similar binding profile to promazine, another phenothiazine compound. Both promethazine and promazine exhibit comparable neuroleptic potency, with a neuroleptic potency of 0.5. However, dosages used therapeutically, such as for sedation or sleep disorders, have no antipsychotic effect. It acts primarily as a strong antagonist of the H1 receptor (antihistamine, Ki = 1.4 nM) and a moderate mACh receptor antagonist (anticholinergic), and also has weak to moderate affinity for the 5-HT2A, 5-HT2C, D2, and α1-adrenergic receptors, where it acts as an antagonist at all sites, as well. New studies have shown that promethazine acts as a strong non-competitive selective NMDA receptor antagonist, with an EC50 of 20 μM; which might promote sedation in addition with the strong antihistaminergic effects of the H1 receptor, but also as a weaker analgesic. It does not, however, affect the AMPA receptors. Another notable use of promethazine is as a local anesthetic, by blockage of sodium channels. Chemistry Solid promethazine hydrochloride is a white to faint-yellow, practically odorless, crystalline powder. Slow oxidation may occur upon prolonged exposure to air, usually causing blue discoloration. Its hydrochloride salt is freely soluble in water and somewhat soluble in alcohol. Promethazine is a chiral compound, occurring as a mixture of enantiomers. History Promethazine was first synthesized by a group at Rhone-Poulenc (which later became part of Sanofi) led by Paul Charpentier in the 1940s. The team was seeking to improve on diphenhydramine; the same line of medical chemistry led to the creation of chlorpromazine. 
Society and culture As of July 2017, it is marketed under many brand names worldwide: Allersoothe, Antiallersin, Anvomin, Atosil, Avomine, Closin N, Codopalm, Diphergan, Farganesse, Fenazil, Fenergan, Fenezal, Frinova, Hiberna, Histabil, Histaloc, Histantil, Histazin, Histazine, Histerzin, Lenazine, Lergigan, Nufapreg, Otosil, Pamergan, Pharmaniaga, Phenadoz, Phenerex, Phenergan, Phénergan, Pipolphen, Polfergan, Proazamine, Progene, Prohist, Promet, Prometal, Prometazin, Prometazina, Promethazin, Prométhazine, Promethazinum, Promethegan, Promezin, Proneurin, Prothazin, Prothiazine, Prozin, Pyrethia, Quitazine, Reactifargan, Receptozine, Romergan, Sominex, Sylomet, Xepagan, Zinmet, and Zoralix. It is also marketed in many combination drug formulations: with carbocisteine as Actithiol Antihistaminico, Mucoease, Mucoprom, Mucotal Prometazine, and Rhinathiol; with paracetamol (acetaminophen) as Algotropyl, Calmkid, Fevril, Phen Plus, and Velpro-P; with paracetamol and dextromethorphan as Choligrip na noc, Coldrex Nočná Liečba, Fedril Night Cold and Flu, Night Nurse, and Tachinotte; with paracetamol, phenylephrine, and salicylamide as Flukit; with dextromethorphan as Axcel Dextrozine and Hosedyl DM; with dextromethorphan and ephedrine as Methorsedyl; with dextromethorphan and pseudoephedrine as Sedilix-DM; with dextromethorphan and phenylephedrine as Sedilix-RX; with pholcodine as Codo-Q Junior and Tixylix; with pholcodine and ephedrine as Phensedyl Dry Cough Linctus; with pholcodine and phenylephedrine as Russedyl Compound Linctus; with pholcodine and phenylpropanolamine as Triple 'P'; with codeine as Kefei and Procodin; with codeine and ephedrine as Dhasedyl, Fendyl, and P.E.C.; with ephedrine and dextromethorphan as Dhasedyl DM; with glutamic acid as Psico-Soma, and Psicosoma; with noscapine as Tussisedal; and with chlorpromazine and phenobarbital as Vegetamin. Recreational use The recreational drug lean, also known as purple drank among other names, often contains a combination of promethazine with codeine-containing cold medication. Product liability lawsuit In 2009, the US Supreme Court ruled on a product liability case involving promethazine. Diana Levine, a woman with a migraine, was administered Wyeth's Phenergan via IV push. The drug was injected improperly, resulting in gangrene and subsequent amputation of her right forearm below the elbow. A state jury awarded her $6 million in punitive damages. The case was appealed to the Supreme Court on grounds of federal preemption and substantive due process. The Supreme Court upheld the lower courts' rulings, stating that "Wyeth could have unilaterally added a stronger warning about IV-push administration" without acting in opposition to federal law. In effect, this means drug manufacturers can be held liable for injuries if warnings of potential adverse effects, approved by the US Food and Drug Administration (FDA), are deemed insufficient by state courts. In September 2009, the FDA required a boxed warning be put on promethazine for injection, stating the contraindication for subcutaneous administration. The preferred administrative route is intramuscular, which reduces the risk of surrounding muscle and tissue damage. 
References Antiemetics Antimigraine drugs CYP2D6 inhibitors Dimethylamino compounds H1 receptor antagonists HERG blocker Hypnotics M1 receptor antagonists M2 receptor antagonists M3 receptor antagonists M4 receptor antagonists M5 receptor antagonists NMDA receptor antagonists Phenothiazines Sodium channel blockers Sigma receptor ligands
Promethazine
[ "Biology" ]
2,562
[ "Hypnotics", "Behavior", "Sleep" ]
995,332
https://en.wikipedia.org/wiki/Teicoplanin
Teicoplanin is a semisynthetic glycopeptide antibiotic with a spectrum of activity similar to vancomycin. Its mechanism of action is to inhibit bacterial cell wall peptidoglycan synthesis. It is used in the prophylaxis and treatment of serious infections caused by Gram-positive bacteria, including methicillin-resistant Staphylococcus aureus and Enterococcus faecalis. Teicoplanin is widely available in many European, Asian, and South American countries; however, it is not currently approved by the US Food and Drug Administration and is not commercially available in the United States. Teicoplanin is marketed by Sanofi-Aventis under the trade name Targocid. Other trade names include Ticocin, marketed by Cipla (India). Its strength is considered to be due to the length of the hydrocarbon chain. History Teicoplanin was first isolated in 1978 from Actinoplanes teichomyceticus (ATCC 31121), a rare species of actinobacteria in the family Micromonosporaceae. The bacteria were obtained from a soil sample collected in Nimodi Village, Indore, India. The chemical structure of teicoplanin was determined and published in 1984. Teicoplanin was first introduced into clinical use in 1984. Following the publication of studies demonstrating its efficacy against infections such as bone and soft tissue infections, endocarditis, pneumonia, and sepsis in 1986, it received regulatory approval in Europe in 1988. The biosynthetic pathway leading to teicoplanin, as well as the regulatory circuit governing the biosynthesis, were studied intensively in recent years, allowing for the creation of an integrated model of its biosynthesis. Indications Teicoplanin treats a wide range of infections with Gram-positive bacteria, including endocarditis, sepsis, soft tissue and skin infections, and venous catheter-associated infections. Studies have investigated the use of oral teicoplanin in the treatment of pseudomembranous colitis and Clostridioides difficile-associated diarrhea, finding it to demonstrate efficacy comparable to that of vancomycin. Susceptible organisms Teicoplanin has demonstrated in vitro efficacy against Gram-positive bacteria including staphylococci (including MRSA), streptococci, enterococci, and against anaerobic Gram-positive bacteria including Clostridium spp. Teicoplanin is ineffective against Gram-negative bacteria as the large, polar molecules of the compound are unable to pass through the outer membrane of these organisms. The following represents MIC susceptibility data for a few medically significant pathogens: Clostridioides difficile: 0.06 μg/ml - 0.5 μg/ml Staphylococcus aureus: ≤0.06 μg/ml - ≥128 μg/ml Staphylococcus epidermidis: ≤0.06 μg/ml - 32 μg/ml Pharmacology Pharmacokinetics Due to poor oral absorption, teicoplanin requires intravenous or intramuscular administration for systemic effect. Intramuscular administration achieves approximately 90% bioavailability. The drug exhibits high protein binding (90-95%) and is primarily eliminated through the kidneys unchanged, with minimal liver metabolism (2-3%) via hydroxylation. Clearance is reduced in patients with kidney impairment and is not significantly removed by hemodialysis. Teicoplanin exhibits a long half-life of 45-70 hours, allowing for once-daily dosing after loading doses. Pharmacodynamics Teicoplanin is a glycopeptide antibiotic that inhibits bacterial cell wall synthesis.
It binds to the D-alanyl-D-alanine (D-Ala-D-Ala) terminus of the peptidoglycan precursor, preventing the transpeptidation reaction necessary for cell wall cross-linking. This binding also interferes with the polymerization of peptidoglycan, ultimately leading to cell death. In addition to its binding to the D-Ala-D-Ala terminus, teicoplanin may also interact with the lipid II substrate in the bacterial cell membrane through its hydrophobic tail. This interaction could facilitate the antibiotic's proximity to the nascent peptidoglycan, enhancing its inhibitory effect. However, this mechanism has not been fully confirmed. Adverse effects Adverse effects of teicoplanin are usually limited to local effects or hypersensitivity reactions. While there is potential for nephrotoxicity and ototoxicity, the incidence of such organ toxicity is rare if recommended serum concentrations are successfully maintained. Considerations Reduced kidney function slows teicoplanin clearance, consequently increasing its elimination half-life. Elimination half-life is longer in the elderly due to the reduced kidney function in this population. Chemistry Teicoplanin (TARGOCID, marketed by Sanofi Aventis Ltd) is actually a mixture of several compounds, five major (named teicoplanin A2-1 through A2-5) and four minor (named teicoplanin RS-1 through RS-4). All teicoplanins share the same glycopeptide core, termed teicoplanin A3-1 — a fused ring structure to which two carbohydrates (mannose and N-acetylglucosamine) are attached. The major and minor components also contain a third carbohydrate moiety — β-D-glucosamine — and differ only by the length and conformation of a side-chain attached to it. Teicoplanin A2-4 and RS-3 have chiral side chains while all other side chains are achiral. Teicoplanin A3 lacks both the side chains as well as the β-D-glucosamine moiety. The structures of the teicoplanin core and the side-chains that characterize the five major as well as four minor teicoplanin compounds are shown below. Teicoplanin refers to a complex of related natural products isolated from the fermentation broth of a strain of Actinoplanes teichomyceticus, consisting of a group of five structures. These structures possess a common aglycone, or core, consisting of seven amino acids bound by peptide and ether bonds to form a four-ring system. These five structures differ by the identity of the fatty acyl side-chain attached to the sugar. The origin of these seven amino acids in the biosynthesis of teicoplanin was studied by 1H and 13C nuclear magnetic resonance. The studies indicate amino acids 4-Hpg, 3-Cl-Tyr, and 3-chloro-β-hydroxytyrosine are derived from tyrosine, and the amino acid 3,5-dihydroxyphenylglycine (3,5-Dpg) is derived from acetate. Teicoplanin contains 6 non-proteinogenic amino acids and three sugar moieties, N-acyl-β-D-glucosamine, N-acetyl-β-D-glucosamine, and D-mannose. Gene cluster The study of the genetic cluster encoding the biosynthesis of teicoplanin identified 49 putative open reading frames (ORFs) involved in the compound's biosynthesis, export, resistance, and regulation. Thirty-five of these ORFs are similar to those found in other glycopeptide gene clusters. The function of each of these genes is described by Li and co-workers. A summary of the gene layout and purpose is shown below. Gene layout. The genes are numbered. The letters L and R designate transcriptional direction. The presence of the * symbol means a gene is found after the NRPS genes, which are represented by A, B, C, and D.
Based on the figure from: Li, T-L.; Huang, F.; Haydock, S. F.; Mironenko, T.; Leadlay, P. F.; Spencer, J. B. Chemistry & Biology. 2004, 11, p. 109. [11-L] [10-L] [9-R] [8-R] [7-R] [6-R] [5-R] [4-L] [3-L] [2-L] [1-R] [A-R] [B-R] [C-R] [D-R] [1*-R] [2*-R] [3*-R] [4*-R] [5*-R] [6*-R] [7*-R] [8*-R] [9*-R] [10*-R] [11*-R] [12*-R] [13*-R] [14*-R] [15*-R] [16*-R] [17*-R] [18*-R] [19*-R] [20*-R] [21*-R] [22*-R] [23*-R] [24*-R] [25*-L] [26*-L] [27*-R] [28*-R] [29*-R] [30*-R] [31*-R] [32*-L] [33*-L] [34*-R] Heptapeptide backbone synthesis The heptapeptide backbone of teicoplanin is assembled by the nonribosomal peptide synthetases (NRPSs) TeiA, TeiB, TeiC and TeiD. Together these comprise seven modules, each containing a number of domains, with each module responsible for the incorporation of a single amino acid. Modules 1, 4, and 5 activate L-4-Hpg as the aminoacyl-AMP, modules 2 and 6 activate L-Tyr, and modules 3 and 7 activate L-3,5-Dpg. The activated amino acids are covalently bound to the NRPS as thioesters by a phosphopantetheine cofactor, which is attached to the peptidyl carrier protein (PCP) domain. The enzyme-bound amino acids are then joined by amide bonds by the action of the condensation (C) domain. The heptapeptide of teicoplanin contains 4 D-amino acids, formed by epimerization of the activated L-amino acids. Modules 2, 4 and 5 each contain an epimerization (E) domain which catalyzes this change. Module 1 does not contain an E domain, and epimerization is proposed to be catalysed by the C domain. In all, six of the seven total amino acids of the teicoplanin backbone are composed of nonproteinogenic or modified amino acids. Eleven enzymes are coordinatively induced to produce these six required residues. Teicoplanin contains two chlorinated positions, 2 (3-Cl-Tyr) and 6 (3-Cl-β-Hty). The halogenase Tei8* has been shown to catalyze the halogenation of both tyrosine residues. Chlorination occurs at the aminoacyl-PCP level during the biosynthesis, prior to phenolic oxidative coupling, with the possibility of tyrosine or β-hydroxytyrosine being the substrate of chlorination. Hydroxylation of the tyrosine residue of module 6 also occurs in trans during the assembly of the heptapeptide backbone. Modification after heptapeptide backbone formation Once the heptapeptide backbone has been formed, the linear enzyme-bound intermediate is cyclized. Gene disruption studies indicate cytochrome P450 oxygenases as the enzymes that perform the coupling reactions. The X-domain in the final NRPS module is required to recruit the oxygenase enzymes. OxyB forms the first ring by coupling residues 4 and 6, and OxyE then couples residues 1 and 3. OxyA couples residues 2 and 4, followed by the formation of a C-C bond between residues 5 and 7 by OxyC. The regioselectivity and atropisomer selectivity of these probable one-electron coupling reactions has been suggested to be due to the folding and orientation requirements of the partially cross-linked substrates in the enzyme active site. The coupling reactions are shown below. Specific glycosylation has been shown to occur after the formation of the heptapeptide aglycone. Three separate glycosyl transferases are required for the glycosylation of the teicoplanin aglycone. Tei10* catalyses the addition of GlcNAc to residue 4, followed by deacetylation by Tei2*. The acyl chain (produced by the action of Tei30* and Tei13*) is then added by Tei11*.
Tei1 then adds a second GlcNAc to the β-hydroxyl group of residue 6, followed by mannosylation of residue 7 catalysed by Tei3*. Research Researchers have explored the potential of teicoplanin as an antiviral agent against various viruses, including SARS-CoV-2. Laboratory studies indicate that teicoplanin inhibits cathepsin L, a host cell protease utilized by SARS-CoV-2 for cell entry via the endocytic pathway. In vitro experiments have demonstrated teicoplanin's ability to reduce SARS-CoV-2 infection, with reported IC50 values in the low micromolar range. This suggests potential efficacy against various SARS-CoV-2 variants due to conserved cathepsin L cleavage sites on the SARS-CoV-2 spike protein. Animal studies have also shown a protective effect against SARS-CoV-2 infection with teicoplanin pre-treatment. References Glycopeptide antibiotics Halogen-containing natural products Sanofi
Teicoplanin
[ "Chemistry" ]
3,023
[ "Glycopeptide antibiotics", "Glycopeptides" ]
995,417
https://en.wikipedia.org/wiki/Geodetic%20datum
A geodetic datum or geodetic system (also: geodetic reference datum, geodetic reference system, or geodetic reference frame, or terrestrial reference frame) is a global datum reference or reference frame for unambiguously representing the position of locations on Earth by means of either geodetic coordinates (and related vertical coordinates) or geocentric coordinates. Datums are crucial to any technology or technique based on spatial location, including geodesy, navigation, surveying, geographic information systems, remote sensing, and cartography. A horizontal datum is used to measure a horizontal position, across the Earth's surface, in latitude and longitude or another related coordinate system. A vertical datum is used to measure the elevation or depth relative to a standard origin, such as mean sea level (MSL). A three-dimensional datum enables the expression of both horizontal and vertical position components in a unified form. The concept can be generalized for other celestial bodies as in planetary datums. Since the rise of the global positioning system (GPS), the ellipsoid and datum WGS 84 it uses has supplanted most others in many applications. The WGS84 is intended for global use, unlike most earlier datums. Before GPS, there was no precise way to measure the position of a location that was far from reference points used in the realization of local datums, such as from the Prime Meridian at the Greenwich Observatory for longitude, from the Equator for latitude, or from the nearest coast for sea level. Astronomical and chronological methods have limited precision and accuracy, especially over long distances. Even GPS requires a predefined framework on which to base its measurements, so WGS84 essentially functions as a datum, even though it is different in some particulars from a traditional standard horizontal or vertical datum. A standard datum specification (whether horizontal, vertical, or 3D) consists of several parts: a model for Earth's shape and dimensions, such as a reference ellipsoid or a geoid; an origin at which the ellipsoid/geoid is tied to a known (often monumented) location on or inside Earth (not necessarily at 0 latitude 0 longitude); and multiple control points or reference points that have been precisely measured from the origin and physically monumented. Then the coordinates of other places are measured from the nearest control point through surveying. Because the ellipsoid or geoid differs between datums, along with their origins and orientation in space, the relationship between coordinates referred to one datum and coordinates referred to another datum is undefined and can only be approximated. Using local datums, the disparity on the ground between a point having the same horizontal coordinates in two different datums could reach kilometers if the point is far from the origin of one or both datums. This phenomenon is called datum shift or, more generally, datum transformation, as it may involve rotation and scaling, in addition to displacement. Because Earth is an imperfect ellipsoid, local datums can give a more accurate representation of some specific area of coverage than WGS84 can. OSGB36, for example, is a better approximation to the geoid covering the British Isles than the global WGS84 ellipsoid. However, as the benefits of a global system outweigh the greater accuracy, the global WGS84 datum has become widely adopted. 
History The spherical nature of Earth was known by the ancient Greeks, who also developed the concepts of latitude and longitude, and the first astronomical methods for measuring them. These methods, preserved and further developed by Muslim and Indian astronomers, were sufficient for the global explorations of the 15th and 16th Centuries. However, the scientific advances of the Age of Enlightenment brought a recognition of errors in these measurements, and a demand for greater precision. This led to technological innovations such as the 1735 Marine chronometer by John Harrison, but also to a reconsideration of the underlying assumptions about the shape of Earth itself. Isaac Newton postulated that the conservation of momentum should make Earth oblate (wider at the equator), while the early surveys of Jacques Cassini (1720) led him to believe Earth was prolate (wider at the poles). The subsequent French geodesic missions (1735-1739) to Lapland and Peru corroborated Newton, but also discovered variations in gravity that would eventually lead to the geoid model. A contemporary development was the use of the trigonometric survey to accurately measure distance and location over great distances. Starting with the surveys of Jacques Cassini (1718) and the Anglo-French Survey (1784–1790), by the end of the 18th century, survey control networks covered France and the United Kingdom. More ambitious undertakings such as the Struve Geodetic Arc across Eastern Europe (1816-1855) and the Great Trigonometrical Survey of India (1802-1871) took much longer, but resulted in more accurate estimations of the shape of the Earth ellipsoid. The first triangulation across the United States was not completed until 1899. The U.S. survey resulted in the North American Datum (horizontal) of 1927 (NAD27) and the Vertical Datum of 1929 (NAVD29), the first standard datums available for public use. This was followed by the release of national and regional datums over the next several decades. Improving measurements, including the use of early satellites, enabled more accurate datums in the later 20th century, such as NAD 83 in North America, ETRS89 in Europe, and GDA94 in Australia. At this time global datums were also first developed for use in satellite navigation systems, especially the World Geodetic System (WGS84) used in the U.S. global positioning system (GPS), and the International Terrestrial Reference System and Frame (ITRF) used in the European Galileo system. Dimensions Horizontal datum A horizontal datum is a model used to precisely measure positions on Earth; it is thus a crucial component of any spatial reference system or map projection. A horizontal datum binds a specified reference ellipsoid, a mathematical model of the shape of the earth, to the physical earth. Thus, the geographic coordinate system on that ellipsoid can be used to measure the latitude and longitude of real-world locations. Regional horizontal datums, such as NAD 27 and NAD 83, usually create this binding with a series of physically monumented geodetic control points of known location. Global datums, such as WGS 84 and ITRF, are typically bound to the center of mass of the Earth (making them useful for tracking satellite orbits and thus for use in satellite navigation systems. A specific point can have substantially different coordinates, depending on the datum used to make the measurement. For example, coordinates in NAD83 can differ from NAD27 by up to several hundred feet. 
There are hundreds of local horizontal datums around the world, usually referenced to some convenient local reference point. Contemporary datums, based on increasingly accurate measurements of the shape of Earth, are intended to cover larger areas. The WGS 84 datum, which is almost identical to the NAD 83 datum used in North America and the ETRS89 datum used in Europe, is a common standard datum. Vertical datum A vertical datum is a reference surface for vertical positions, such as the elevations of Earth features including terrain, bathymetry, water level, and human-made structures. An approximate definition of sea level is the datum WGS 84, an ellipsoid, whereas a more accurate definition is Earth Gravitational Model 2008 (EGM2008), using at least 2,159 spherical harmonics. Other datums are defined for other areas or at other times; ED50 was defined in 1950 over Europe and differs from WGS84 by a few hundred meters depending on where in Europe you look. Mars has no oceans and so no sea level, but at least two martian datums have been used to locate places there. Geodetic coordinates In geodetic coordinates, Earth's surface is approximated by an ellipsoid, and locations near the surface are described in terms of geodetic latitude (), longitude (), and ellipsoidal height (). Earth reference ellipsoid Defining and derived parameters The ellipsoid is completely parameterised by the semi-major axis and the flattening . From and it is possible to derive the semi-minor axis , first eccentricity and second eccentricity of the ellipsoid Parameters for some geodetic systems The two main reference ellipsoids used worldwide are the GRS80 and the WGS84. A more comprehensive list of geodetic systems can be found here. Geodetic Reference System 1980 (GRS80) World Geodetic System 1984 (WGS84) The Global Positioning System (GPS) uses the World Geodetic System 1984 (WGS84) to determine the location of a point near the surface of Earth. Datum transformation The difference in co-ordinates between datums is commonly referred to as datum shift. The datum shift between two particular datums can vary from one place to another within one country or region, and can be anything from zero to hundreds of meters (or several kilometers for some remote islands). The North Pole, South Pole and Equator will be in different positions on different datums, so True North will be slightly different. Different datums use different interpolations for the precise shape and size of Earth (reference ellipsoids). For example, in Sydney there is a 200 metres (700 feet) difference between GPS coordinates configured in GDA (based on global standard WGS84) and AGD (used for most local maps), which is an unacceptably large error for some applications, such as surveying or site location for scuba diving. Datum conversion is the process of converting the coordinates of a point from one datum system to another. Because the survey networks upon which datums were traditionally based are irregular, and the error in early surveys is not evenly distributed, datum conversion cannot be performed using a simple parametric function. For example, converting from NAD 27 to NAD 83 is performed using NADCON (later improved as HARN), a raster grid covering North America, with the value of each cell being the average adjustment distance for that area in latitude and longitude. Datum conversion may frequently be accompanied by a change of map projection. 
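The relations between the defining and derived ellipsoid parameters referred to above are standard; a brief sketch using the usual symbols (semi-major axis a, semi-minor axis b, flattening f, first and second eccentricities e and e′), together with the defining constants of the two main ellipsoids:

```latex
% Derived ellipsoid parameters from the semi-major axis a and flattening f.
b = a\,(1 - f), \qquad
e^{2} = 2f - f^{2} = \frac{a^{2} - b^{2}}{a^{2}}, \qquad
e'^{2} = \frac{e^{2}}{1 - e^{2}} = \frac{a^{2} - b^{2}}{b^{2}} .

% Defining constants (both ellipsoids share the same semi-major axis):
% GRS80:  a = 6378137.0\ \mathrm{m}, \quad 1/f = 298.257222101
% WGS84:  a = 6378137.0\ \mathrm{m}, \quad 1/f = 298.257223563
```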
Discussion and examples A geodetic reference datum is a known and constant surface which is used to describe the location of unknown points on Earth. Since reference datums can have different radii and different center points, a specific point on Earth can have substantially different coordinates depending on the datum used to make the measurement. There are hundreds of locally developed reference datums around the world, usually referenced to some convenient local reference point. Contemporary datums, based on increasingly accurate measurements of the shape of Earth, are intended to cover larger areas. The most common reference Datums in use in North America are NAD27, NAD83, and WGS 84. The North American Datum of 1927 (NAD27) is "the horizontal control datum for the United States that was defined by a location and azimuth on the Clarke spheroid of 1866, with origin at (the survey station) Meades Ranch (Kansas)." ... The geoidal height at Meades Ranch was assumed to be zero, as sufficient gravity data was not available, and this was needed to relate surface measurements to the datum. "Geodetic positions on the North American Datum of 1927 were derived from the (coordinates of and an azimuth at Meades Ranch) through a readjustment of the triangulation of the entire network in which Laplace azimuths were introduced, and the Bowie method was used." NAD27 is a local referencing system covering North America. The North American Datum of 1983 (NAD 83) is "The horizontal control datum for the United States, Canada, Mexico, and Central America, based on a geocentric origin and the Geodetic Reference System 1980 (GRS80). "This datum, designated as NAD83…is based on the adjustment of 250,000 points including 600 satellite Doppler stations which constrain the system to a geocentric origin." NAD83 may be considered a local referencing system. WGS84 is the World Geodetic System of 1984. It is the reference frame used by the U.S. Department of Defense (DoD) and is defined by the National Geospatial-Intelligence Agency (NGA) (formerly the Defense Mapping Agency, then the National Imagery and Mapping Agency). WGS84 is used by the DoD for all its mapping, charting, surveying, and navigation needs, including its GPS "broadcast" and "precise" orbits. WGS84 was defined in January 1987 using Doppler satellite surveying techniques. It was used as the reference frame for broadcast GPS Ephemerides (orbits) beginning January 23, 1987. At 0000 GMT January 2, 1994, WGS84 was upgraded in accuracy using GPS measurements. The formal name then became WGS84 (G730), since the upgrade date coincided with the start of GPS Week 730. It became the reference frame for broadcast orbits on June 28, 1994. At 0000 GMT September 30, 1996 (the start of GPS Week 873), WGS84 was redefined again and was more closely aligned with International Earth Rotation Service (IERS) frame ITRF 94. It was then formally called WGS84 (G873). WGS84 (G873) was adopted as the reference frame for broadcast orbits on January 29, 1997. Another update brought it to WGS84 (G1674). The WGS84 datum, within two meters of the NAD83 datum used in North America, is the only world referencing system in place today. WGS84 is the default standard datum for coordinates stored in recreational and commercial GPS units. Users of GPS are cautioned that they must always check the datum of the maps they are using. To correctly enter, display, and to store map related map coordinates, the datum of the map must be entered into the GPS map datum field. 
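As an illustration of how coordinates are tied to a particular ellipsoid, the following sketch converts geodetic coordinates to geocentric (ECEF) coordinates. The function name is illustrative, only the WGS84 constants are standard values, and real datum conversions such as NAD 27 to NAD 83 use grid methods like NADCON as noted above.

```python
from math import radians, sin, cos, sqrt

# Illustrative sketch: geodetic (latitude, longitude, ellipsoidal height) to
# geocentric ECEF (X, Y, Z) on a given reference ellipsoid.  The constants are
# the WGS84 defining parameters; the function name is not a standard API.
WGS84_A = 6378137.0            # semi-major axis (m)
WGS84_F = 1 / 298.257223563    # flattening

def geodetic_to_ecef(lat_deg, lon_deg, h, a=WGS84_A, f=WGS84_F):
    lat, lon = radians(lat_deg), radians(lon_deg)
    e2 = 2 * f - f * f                      # first eccentricity squared
    n = a / sqrt(1 - e2 * sin(lat) ** 2)    # prime vertical radius of curvature
    x = (n + h) * cos(lat) * cos(lon)
    y = (n + h) * cos(lat) * sin(lon)
    z = (n * (1 - e2) + h) * sin(lat)
    return x, y, z

# The same latitude/longitude referred to a different datum (different ellipsoid
# and origin) yields different ECEF coordinates: the "datum shift" described above.
```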
Examples Examples of map datums are: WGS 84, 72, 66 and 60 of the World Geodetic System NAD 83, the North American Datum which is very similar to WGS84 NAD 27, the older North American Datum, of which NAD83 was basically a readjustment OSGB36 of the Ordnance Survey of Great Britain ETRS89, the European Datum, related to ITRS ED50, the older European Datum GDA94, the Australian Datum JGD2011, the Japanese Datum, adjusted for changes caused by 2011 Tōhoku earthquake and tsunami Tokyo97, the older Japanese Datum KGD2002, the Korean Datum TWD67 and TWD97, different datum currently used in Taiwan. BJS54 and XAS80, old geodetic datum used in China GCJ-02 and BD-09, Chinese encrypted geodetic datum. PZ-90.11, the current geodetic reference used by GLONASS Galileo Terrestrial Reference Frame (GTRF), the geodetic reference used by Galileo; currently defined as ITRF2005 CGCS2000, or CGS-2000, the geodetic reference used by BeiDou Navigation Satellite System; based on ITRF97 International Terrestrial Reference Frames (ITRF88, 89, 90, 91, 92, 93, 94, 96, 97, 2000, 2005, 2008, 2014), different realizations of the ITRS. Hong Kong Principal Datum, a vertical datum used in Hong Kong. SAD69 - South American Datum 1969 Plate movement The Earth's tectonic plates move relative to one another in different directions at speeds on the order of per year. Therefore, locations on different plates are in motion relative to one another. For example, the longitudinal difference between a point on the equator in Uganda, on the African Plate, and a point on the equator in Ecuador, on the South American Plate, increases by about 0.0014 arcseconds per year. These tectonic movements likewise affect latitude. If a global reference frame (such as WGS 84) is used, the coordinates of a place on the surface generally will change from year to year. Most mapping, such as within a single country, does not span plates. To minimize coordinate changes for that case, a different reference frame can be used, one whose coordinates are fixed to that particular plate. Examples of these reference frames are "NAD 83" for North America and "ETRS89" for Europe. See also Axes conventions ECEF ECI (coordinates) Engineering datum Figure of the Earth Geoid Geographic coordinate conversion Grid reference International Terrestrial Reference System Kilometre zero Local tangent plane coordinates Ordnance Datum Milestone Planetary coordinate system Reference frame World Geodetic System Footnotes References Further reading Babcock, Alice K.; Wilkins, George A. (1988) The Earth's Rotation and Reference Frames for Geodesy and Geodynamics Springer List of geodetic parameters for many systems from University of Colorado Gaposchkin, E. M. and Kołaczek, Barbara (1981) Reference Coordinate Systems for Earth Dynamics Taylor & Francis Kaplan, Understanding GPS: principles and applications, 1 ed. Norwood, MA 02062, USA: Artech House, Inc, 1996. GPS Notes P. Misra and P. Enge, Global Positioning System Signals, Measurements, and Performance. Lincoln, Massachusetts: Ganga-Jamuna Press, 2001. Peter H. Dana: Geodetic Datum Overview – Large amount of technical information and discussion. US National Geodetic Survey External links GeographicLib includes a utility CartConvert which converts between geodetic and geocentric (ECEF) or local Cartesian (ENU) coordinates. This provides accurate results for all inputs including points close to the center of Earth. A collection of geodetic functions that solve a variety of problems in geodesy in Matlab. 
NGS FAQ – What is a geodetic datum?
About the surface of the Earth on kartoweb.itc.nl
Geodetic datum
[ "Mathematics" ]
3,866
[ "Geodetic datums", "Coordinate systems" ]
995,455
https://en.wikipedia.org/wiki/Savi%20Technology
Savi Technology was founded in 1989 and is based in Alexandria, Virginia. The company was spun-off from Lockheed Martin in 2012. The company offers a variety of hardware including tags (also called sensors) that enable governments and organizations to access real-time information on the location, condition, and security status of assets and shipments; mobile IoT sensors, fixed and mobile readers; active radio-frequency identification devices and sensors; and portable deployment kits (PDKs). References Bloomberg The Washington Post Radio-frequency identification Supply chain software companies Logistics industry in the United States Supply chain analytics
Savi Technology
[ "Engineering" ]
118
[ "Radio-frequency identification", "Radio electronics" ]
995,636
https://en.wikipedia.org/wiki/Course%20%28navigation%29
In navigation, the course of a watercraft or aircraft is the cardinal direction in which the craft is to be steered. The course is to be distinguished from the heading, which is the direction where the watercraft's bow or the aircraft's nose is pointed. The path that a vessel follows is called a track or, in the case of aircraft, ground track (also known as course made good or course over the ground). The intended track is a route. Discussion For ships and aircraft, routes are typically straight-line segments between waypoints. A navigator determines the bearing (the compass direction from the craft's current position) of the next waypoint. Because water currents or wind can cause a craft to drift off course, a navigator sets a course to steer that compensates for drift. The helmsman or pilot points the craft on a heading that corresponds to the course to steer. If the predicted drift is correct, then the craft's track will correspond to the planned course to the next waypoint. Course directions are specified in degrees from north, either true or magnetic. In aviation, north is usually expressed as 360°. Navigators used ordinal directions, instead of compass degrees, e.g. "northeast" instead of 45° until the mid-20th century when the use of degrees became prevalent. See also Acronyms and abbreviations in avionics Glossary of navigation terms Bearing (navigation) Breton plotter E6B Great circle Ground track Navigation Navigation room Rhumb line References External links Pilot's Handbook of Aeronautical Knowledge glossary Aircraft instruments Marine navigation Tracking Air navigation
Course (navigation)
[ "Technology", "Engineering" ]
327
[ "Tracking", "Wireless locating", "Aircraft instruments", "Measuring instruments" ]
995,719
https://en.wikipedia.org/wiki/Burton%20process
The Burton process is a thermal cracking process invented by William Merriam Burton and Robert E. Humphreys, both of whom held a PhD in chemistry from Johns Hopkins University. The process they developed is commonly referred to as the Burton process. However, it should be recognized as the Burton-Humphreys process, as both individuals played pivotal roles in its development. The legal dispute surrounding this matter was eventually settled, although the decision primarily recognized Burton's contributions. The process involves the destructive distillation of crude oil, which is heated under pressure in a still. The innovative design of this still allows various products to emerge from a bubble tower at different temperatures and pressures. One crucial aspect of the process is that it significantly increased gasoline production from various types of oil, more than doubling the output. The first large-scale implementation of these towers occurred when Standard Oil of Indiana made the decision to construct 120 stills using an authorized budget of $709,000 in 1911. Notably, this decision coincided with the US Supreme Court's ruling to dissolve the Standard Oil Trust. This thermal cracking process was patented on January 7, 1913 (Patent No. 1,049,667). The first thermal cracking method, the Shukhov cracking process, was invented by Vladimir Shukhov (Patent of Russian Empire No. 12926 on November 27, 1891). While the Russians contended that the Burton process was essentially a slight modification of the Shukhov process, Americans refused to concede and the Burton-Humphreys patent remained in use. Ultimately, it contributed to the development of petrochemicals. In 1937 the Burton process was superseded by catalytic cracking, but it is still in use today to produce diesel. See also Cracking (chemistry) William Merriam Burton Robert E. Humphreys Shukhov cracking process References Chemical processes Petroleum technology
Burton process
[ "Chemistry", "Engineering" ]
368
[ "Petroleum engineering", "Petroleum technology", "Chemical processes", "nan", "Chemical process engineering" ]
995,743
https://en.wikipedia.org/wiki/FourCC
A FourCC ("four-character code") is a sequence of four bytes (typically ASCII) used to uniquely identify data formats. It originated from the OSType or ResType metadata system used in classic Mac OS and was adopted for the Amiga/Electronic Arts Interchange File Format and derivatives. The idea was later reused to identify compressed data types in QuickTime and DirectShow. History In 1984, the earliest version of a Macintosh OS, System 1, was released. It used the single-level Macintosh File System with metadata fields including file types, creator (application) information, and forks to store additional resources. It was possible to change this information without changing the data itself, so that they could be interpreted differently. Identical codes were used throughout the system, as type tags for all kinds of data. In 1985, Electronic Arts introduced the Interchange File Format (IFF) meta-format (family of file formats), originally devised for use on the Amiga. These files consisted of a sequence of "chunks", which could contain arbitrary data, each chunk prefixed by a four-byte ID. The IFF specification explicitly mentions that the origins of the FourCC idea lie with Apple. This IFF was adopted by a number of developers including Apple for AIFF files and Microsoft for RIFF files (which were used as the basis for the AVI and WAV file formats). Apple referred to many of these codes as OSTypes. Microsoft and Windows developers refer to their four-byte identifiers as FourCCs or Four-Character Codes. FourCC codes were also adopted by Microsoft to identify data formats used in DirectX, specifically within DirectShow and DirectX Graphics. In Apple systems Since Mac OS X Panther, OSType signatures are one of several sources that may be examined to determine a Uniform Type Identifier and are no longer used as the primary data type signature. Mac OS X (macOS) prefers the more colloquial convention of labelling file types using file name extensions. At the time of the change, the change was a source of great contention among older users, who believed that Apple was reverting to a more primitive way that misplaces metadata in the filename. Filesystem-associated type codes are not readily accessible for users to manipulate, although they can be viewed and changed with certain software, most notably the macOS command line tools GetFileInfo and SetFile which are installed as part of the developer tools into /Developer/Tools, or the ResEdit utility available for older Macs. Technical details The byte sequence is usually restricted to ASCII printable characters, with space characters reserved for padding shorter sequences. Case sensitivity is preserved, unlike in file extensions. FourCCs are sometimes encoded in hexadecimal (e.g., "0x31637661" for 'avc1') and sometimes encoded in a human-readable way (e.g., "mp4a"). Some FourCCs however, do contain non-printable characters, and are not human-readable without special formatting for display; for example, 10bit Y'CbCr 4:2:2 video can have a FourCC of ('Y', '3', 10, 10) which ffmpeg displays as rawvideo (Y3[10] [10] / 0x0A0A3359), yuv422p10le. Four-byte identifiers are useful because they can be made up of four human-readable characters with mnemonic qualities, while still fitting in the four-byte memory space typically allocated for integers in 32-bit systems (although endian issues may make them less readable). Thus, the codes can be used efficiently in program code as integers, as well as giving cues in binary data streams when inspected. 
Compiler support
FourCC is written in big endian relative to the underlying ASCII character sequence, so that it appears in the correct byte order when read as a string. Many C compilers, including GCC, define a multi-character literal behavior of right-aligning to the least significant byte, so that the literal '1234' becomes 0x31323334 in ASCII. This is the conventional way of writing FourCC codes used by Mac OS programmers for OSType. (Classic Mac OS was exclusively big-endian.) On little-endian machines, a byte-swap on the value is required to make the result correct. Taking the avc1 example from above: although the literal 'avc1' already converts to the integer value 0x61766331, a little-endian machine would have reversed the byte order and stored the bytes in memory as 0x31, 0x63, 0x76, 0x61 ('1cva'). To yield the correct byte sequence 0x61, 0x76, 0x63, 0x31 ('avc1') in memory, the pre-swapped value 0x31637661 is used.
Common uses
One of the most well-known uses of FourCCs is to identify the video codec or video coding format in AVI files. Common identifiers include DIVX, XVID, and H264. For audio coding formats, AVI and WAV files use a two-byte identifier, usually written in hexadecimal (such as 0055 for MP3). In QuickTime files, these two-byte identifiers are prefixed with the letters "ms" to form a four-character code. RealMedia files also use four-character codes; however, the actual codes used differ from those found in AVI or QuickTime files.
Other file formats that make important use of the four-byte ID concept are the Standard MIDI File (SMF) format, the PNG image file format, the 3DS (3D Studio Max) mesh file format and the ICC profile format.
Four-character codes are also used in applications other than file formats, for example:
by the UEFI Forum for vendor IDs in the ACPI ID Registry
the ACPI specification defines four-character identifiers in ACPI Source Language (ASL)
by Synopsys to report component IDs via registers of an IP (DesignWare collection)
Other uses for OSTypes include:
as record field IDs and event type and class IDs in AppleEvents
for identifying components in the Component Manager
as “atom” IDs in the QuickTime movie and image file formats
as a localization-independent way of identifying standard folders in the Folder Manager
in QuickDraw GX, they were used as gxTag types and also as types of collection items in the Collection Manager
enumeration constants in Apple APIs (as an integer; host endianness)
"OSStatus" error codes in certain libraries, such as QuickTime (as an integer; host endianness)
See also
Filename extension (also known as "file extension")
Interchange File Format
Magic number
OSType
creator code
type code
References
General references
Official Registration Authority for the ISOBMFF family of standards
Apple Inc. software Apple Inc. file systems Macintosh operating systems Metadata Four character code
FourCC
[ "Technology" ]
1,424
[ "Metadata", "Data" ]
995,862
https://en.wikipedia.org/wiki/Tyranny%20of%20numbers
The tyranny of numbers was a problem faced in the 1960s by computer engineers. Engineers were unable to increase the performance of their designs due to the huge number of components involved. In theory, every component needed to be wired to every other component (or at least many other components), and these connections were typically strung and soldered by hand. In order to improve performance, more components would be needed, and it seemed that future designs would consist almost entirely of wiring.
History
The first known recorded use of the term in this context was made by the Vice President of Bell Labs in an article celebrating the 10th anniversary of the invention of the transistor, for the "Proceedings of the IRE" (Institute of Radio Engineers), June 1958. Referring to the problems many designers were having, he wrote of the "tyranny of numbers".
At the time, computers were typically built up from a series of "modules", each module containing the electronics needed to perform a single function. A complex circuit like an adder would generally require several modules working in concert. The modules were typically built on printed circuit boards of a standardized size, with a connector on one edge that allowed them to be plugged into the power and signaling lines of the machine, and were then wired to other modules using twisted pair or coaxial cable. Since each module was relatively custom, modules were assembled and soldered by hand or with limited automation. As a result, they suffered major reliability problems. Even a single bad component or solder joint could render the entire module inoperative. Even with properly working modules, the mass of wiring connecting them together was another source of construction and reliability problems. As computers grew in complexity, and the number of modules increased, the task of making a machine actually work grew more and more difficult. This was the "tyranny of numbers".
Motivation for the integrated circuit
It was precisely this problem that Jack Kilby was thinking about while working at Texas Instruments. Theorizing that germanium could be used to make all common electronic components (transistors, resistors, capacitors, etc.), he set about building a single-slab component that combined the functionality of an entire module. Although successful in this goal, it was Robert Noyce's silicon version and the associated fabrication techniques that made the integrated circuit (IC) truly practical. Unlike modules, ICs were built using photoetching techniques on an assembly line, greatly reducing their cost. Although any given IC might have the same chance of working or not working as a module, it cost so little that if it didn't work you simply threw it away and tried another. In fact, early IC assembly lines had failure rates around 90% or greater, which kept their prices high. The U.S. Air Force and NASA were major purchasers of early ICs, where their small size and light weight overcame any cost issues. They demanded high reliability, and the industry's response not only provided the desired reliability but meant that the increased yield had the effect of driving down prices. ICs from the early 1960s were not complex enough for general computer use, but as the complexity increased through the 1960s, practically all computers switched to IC-based designs. The result was what are today referred to as the third-generation computers, which became commonplace during the early 1970s.
The progeny of the integrated circuit, the microprocessor, eventually superseded the use of individual ICs as well, placing the entire collection of modules onto one chip. Seymour Cray was particularly well known for making complex designs work in spite of the tyranny of numbers. His attention to detail and ability to fund several attempts at a working design meant that pure engineering effort could overcome the problems they faced. Yet even Cray eventually succumbed to the problem during the CDC 8600 project, which eventually led to him leaving Control Data. References Computer engineering Tyranny of numbers Quotations from science 1950s neologisms
Tyranny of numbers
[ "Technology", "Engineering" ]
795
[ "Electrical engineering", "Computer engineering" ]
995,903
https://en.wikipedia.org/wiki/Mesna
Mesna, sold under the brand name Mesnex among others, is a medication used in those taking cyclophosphamide or ifosfamide to decrease the risk of bleeding from the bladder. It is used either by mouth or injection into a vein. Common side effects include headache, vomiting, sleepiness, loss of appetite, cough, rash, and joint pain. Serious side effects include allergic reactions. Use during pregnancy appears to be safe for the baby but this use has not been well studied. Mesna is an organosulfur compound. It works by altering the breakdown products of cyclophosphamide and ifosfamide found in the urine making them less toxic. Mesna was approved for medical use in the United States in 1988. It is on the World Health Organization's List of Essential Medicines. Medical uses Chemotherapy adjuvant Mesna is used therapeutically to reduce the incidence of haemorrhagic cystitis and haematuria when a patient receives ifosfamide or cyclophosphamide for cancer chemotherapy. These two anticancer agents, in vivo, may be converted to urotoxic metabolites, such as acrolein. Mesna assists to detoxify these metabolites by reaction of its sulfhydryl group with α,β-unsaturated carbonyl containing compounds such as acrolein. This reaction is known as a Michael addition. Mesna also increases urinary excretion of cysteine. Other Outside North America, mesna is also used as a mucolytic agent, working in the same way as acetylcysteine; it is sold for this indication as Mistabron and Mistabronco. Administration It is administered intravenously or orally (through the mouth). The IV mesna infusions would be given with IV ifosfamide, while oral mesna would be given with oral cyclophosphamide. The oral doses must be double the intravenous (IV) mesna dose due to bioavailability issues. The oral preparation allows patients to leave the hospital sooner, instead of staying four to five days for all the IV mesna infusions. Mechanism of action Mesna reduces the toxicity of urotoxic compounds that may form after chemotherapy administration. Mesna is a water-soluble compound with antioxidant properties, and is given concomitantly with the chemotherapeutic agents cyclophosphamide and ifosfamide. Mesna concentrates in the bladder where acrolein accumulates after administration of chemotherapy and through a Michael addition, forms a conjugate with acrolein and other urotoxic metabolites. This conjugation reaction inactivates the urotoxic compounds to harmless metabolites. The metabolites are then excreted in the urine. Names It is marketed by Baxter as Uromitexan and Mesnex. The name of the substance is an acronym for 2-mercaptoethane sulfonate Na (Na being the chemical symbol for sodium). See also Coenzyme M—a coenzyme with the same structure used by methanogenic bacteria References External links BC Cancer Agency NIH/MedlinePlus patient information Chemotherapeutic adjuvants Thiols Expectorants Organic sodium salts World Health Organization essential medicines Wikipedia medicine articles ready to translate Sulfonates
Mesna
[ "Chemistry" ]
733
[ "Organic compounds", "Organic sodium salts", "Thiols", "Salts" ]
995,908
https://en.wikipedia.org/wiki/Tomographic%20reconstruction
Tomographic reconstruction is a type of multidimensional inverse problem where the challenge is to yield an estimate of a specific system from a finite number of projections. The mathematical basis for tomographic imaging was laid down by Johann Radon. A notable example of applications is the reconstruction of computed tomography (CT) where cross-sectional images of patients are obtained in a non-invasive manner. Recent developments have seen the Radon transform and its inverse used for tasks related to realistic object insertion required for testing and evaluating computed tomography use in airport security. This article applies in general to reconstruction methods for all kinds of tomography, but some of the terms and physical descriptions refer directly to the reconstruction of X-ray computed tomography.
Introducing formula
The projection of an object, resulting from the tomographic measurement process at a given angle $\theta$, is made up of a set of line integrals (see Fig. 1). A set of many such projections under different angles organized in 2D is called a sinogram (see Fig. 3). In X-ray CT, the line integral represents the total attenuation of the beam of X-rays as it travels in a straight line through the object. As mentioned above, the resulting image is a 2D (or 3D) model of the attenuation coefficient. That is, we wish to find the image $\mu(x,y)$. The simplest and easiest way to visualise the method of scanning is the system of parallel projection, as used in the first scanners. For this discussion we consider the data to be collected as a series of parallel rays, at position $r$, across a projection at angle $\theta$. This is repeated for various angles. Attenuation occurs exponentially in tissue:
$I = I_0 \exp\left(-\int \mu(x,y)\,ds\right)$
where $\mu(x,y)$ is the attenuation coefficient as a function of position. Therefore, generally the total attenuation of a ray at position $r$, on the projection at angle $\theta$, is given by the line integral:
$p_{\theta}(r) = \ln\frac{I_0}{I} = \int \mu(x,y)\,ds$
Using the coordinate system of Figure 1, the value of $r$ onto which the point $(x,y)$ will be projected at angle $\theta$ is given by:
$r = x\cos\theta + y\sin\theta$
So the equation above can be rewritten as
$p_{\theta}(r) = \iint_{-\infty}^{\infty} f(x,y)\,\delta(x\cos\theta + y\sin\theta - r)\,dx\,dy$
where $f(x,y)$ represents $\mu(x,y)$ and $\delta(\cdot)$ is the Dirac delta function. This function is known as the Radon transform (or sinogram) of the 2D object. The Fourier transform of the projection can be written as
$P_{\theta}(\omega) = F(\omega\cos\theta,\, \omega\sin\theta)$
where $P_{\theta}(\omega)$ represents a slice of the 2D Fourier transform $F$ of $f(x,y)$ at angle $\theta$. Using the inverse Fourier transform, the inverse Radon transform formula can be easily derived:
$f(x,y) = \frac{1}{2\pi}\int_{0}^{\pi} g_{\theta}(x\cos\theta + y\sin\theta)\,d\theta$
where $g_{\theta}(r)$ is the derivative of the Hilbert transform of $p_{\theta}(r)$.
In theory, the inverse Radon transformation would yield the original image. The projection-slice theorem tells us that if we had an infinite number of one-dimensional projections of an object taken at an infinite number of angles, we could perfectly reconstruct the original object, $f(x,y)$. However, there will only be a finite number of projections available in practice. Assuming $f(x,y)$ has effective diameter $d$ and the desired resolution is $R_s$, a rule of thumb for the number of projections needed for reconstruction is $N > \pi d / (2 R_s)$, i.e. comparable to the number of ray samples across the object.
Reconstruction algorithms
Practical reconstruction algorithms have been developed to implement the process of reconstruction of a three-dimensional object from its projections. These algorithms are designed largely based on the mathematics of the X-ray transform, statistical knowledge of the data acquisition process and geometry of the data imaging system.
Fourier-domain reconstruction algorithm
Reconstruction can be made using interpolation. Assume $N$ projections of $f(x,y)$ are generated at equally spaced angles, each sampled at the same rate.
The discrete Fourier transform (DFT) on each projection yields sampling in the frequency domain. Combining all the frequency-sampled projections generates a polar raster in the frequency domain. The polar raster is sparse, so interpolation is used to fill the unknown DFT points, and reconstruction can be done through the inverse discrete Fourier transform. Reconstruction performance may improve by designing methods to change the sparsity of the polar raster, facilitating the effectiveness of interpolation. For instance, a concentric square raster in the frequency domain can be obtained by varying the angular spacing between projections as a function of the highest frequency to be evaluated. The concentric square raster improves computational efficiency by allowing all the interpolation positions to be on a rectangular DFT lattice. Furthermore, it reduces the interpolation error. Yet, the Fourier-transform algorithm has the disadvantage of producing inherently noisy output.
Back projection algorithm
In the practice of tomographic image reconstruction, often a stabilized and discretized version of the inverse Radon transform is used, known as the filtered back projection algorithm. With a sampled discrete system, the inverse Radon transform is
$f(x,y) \approx \frac{\Delta\theta}{2\pi} \sum_{i=1}^{N} \left[p_{\theta_i} * k\right](x\cos\theta_i + y\sin\theta_i)$
where $\Delta\theta$ is the angular spacing between the projections and $k(r)$ is a Radon kernel with frequency response $|\omega|$. The name back-projection comes from the fact that a one-dimensional projection needs to be filtered by a one-dimensional Radon kernel (back-projected) in order to obtain a two-dimensional signal. The filter used does not contain DC gain, so adding DC bias may be desirable. Reconstruction using back-projection allows better resolution than the interpolation method described above. However, it induces greater noise because the filter is prone to amplify high-frequency content.
Iterative reconstruction algorithm
The iterative algorithm is computationally intensive but it allows the inclusion of a priori information about the system $f(x,y)$. Let $N$ be the number of projections and $D_i$ be the distortion operator for the $i$th projection taken at an angle $\theta_i$; $\{\lambda_i\}$ are a set of parameters to optimize the convergence of the iterations. An alternative family of recursive tomographic reconstruction algorithms are the algebraic reconstruction techniques and iterative sparse asymptotic minimum variance.
Fan-beam reconstruction
Use of a noncollimated fan beam is common since a collimated beam of radiation is difficult to obtain. Fan beams will generate a series of line integrals, not parallel to each other, as projections. The fan-beam system requires a 360-degree range of angles, which imposes mechanical constraints, but it allows faster signal acquisition time, which may be advantageous in certain settings such as in the field of medicine. Back projection follows a similar two-step procedure that yields reconstruction by computing weighted sums of back-projections obtained from filtered projections.
Deep learning reconstruction
Deep learning methods are widely applied to image reconstruction nowadays and have achieved impressive results in various image reconstruction tasks, including low-dose denoising, sparse-view reconstruction, limited angle tomography and metal artifact reduction. An excellent overview can be found in the special issue of IEEE Transactions on Medical Imaging. One group of deep learning reconstruction algorithms applies post-processing neural networks to achieve image-to-image reconstruction, where input images are reconstructed by conventional reconstruction methods.
Artifact reduction using the U-Net in limited angle tomography is such an example application. However, incorrect structures may occur in an image reconstructed by such a completely data-driven method, as displayed in the figure. Therefore, integration of known operators into the architecture design of neural networks appears beneficial, as described in the concept of precision learning. For example, direct image reconstruction from projection data can be learnt from the framework of filtered back-projection. Another example is to build neural networks by unrolling iterative reconstruction algorithms. Besides precision learning, using conventional reconstruction methods with a deep learning reconstruction prior is also an alternative approach to improve the image quality of deep learning reconstruction.
Tomographic reconstruction software
Tomographic systems have significant variability in their applications and geometries (locations of sources and detectors). This variability creates the need for very specific, tailored implementations of the processing and reconstruction algorithms. Thus, most CT manufacturers provide their own custom proprietary software. This is done not only to protect intellectual property, but may also be enforced by a government regulatory agency. Regardless, there are a number of general-purpose tomographic reconstruction software packages that have been developed over the last couple of decades, both commercial and open-source.
Most of the commercial software packages that are available for purchase focus on processing data for benchtop cone-beam CT systems. A few of these software packages include Volume Graphics, InstaRecon, iTomography, Livermore Tomography Tools (LTT), and Cone Beam Software Tools (CST). Some noteworthy examples of open-source reconstruction software include: Reconstruction Toolkit (RTK), CONRAD, TomoPy, the ASTRA toolbox, PYRO-NN, ODL, TIGRE, and LEAP.
Gallery
Shown in the gallery is the complete process for a simple object tomography and the following tomographic reconstruction based on ART.
See also
Operation of computed tomography#Tomographic reconstruction
Cone beam reconstruction
Industrial computed tomography
Industrial Tomography Systems plc
References
Further reading
Avinash Kak & Malcolm Slaney (1988), Principles of Computerized Tomographic Imaging, IEEE Press.
Bruyant, P.P. "Analytic and iterative reconstruction algorithms in SPECT", Journal of Nuclear Medicine 43(10):1343–1358, 2002.
External links
Insight ToolKit; open-source tomographic support software
ASTRA (All Scales Tomographic Reconstruction Antwerp) toolbox; very flexible, fast open-source software for computed tomographic reconstruction
NiftyRec; comprehensive open-source tomographic reconstruction software; Matlab and Python scriptable
Open-source tomographic reconstruction and visualization tool
Inverse problems Medical imaging Multidimensional signal processing Radiology Signal processing Tomography
Tomographic reconstruction
[ "Mathematics", "Technology", "Engineering" ]
1,859
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Applied mathematics", "Inverse problems" ]
995,969
https://en.wikipedia.org/wiki/Consolidated%20Mine
The Consolidated Mine was a gold mine in Lumpkin County, Georgia, United States, just east of Dahlonega. Like most of the area around Dahlonega, the placer mining on the land on which the mine is located probably started during the Georgia Gold Rush. By 1880, the placer deposits were exhausted and the land was down to hard rock. Gold was soon discovered in a huge quartz vein system, and mined. "The richest acre" was mined deep into the ground, and the resulting shaft became known as the "Glory Hole.” After an interruption of operations at the mine, a group of investors purchased about 7,000 acres (28 km2) of land around the discovery site and formed the Dahlonega Consolidated Gold Mining Co. in 1895. After constructing the largest stamp mill east of the Mississippi River at the Consolidated Mine property, the Mining Co. folded in 1906. The mine's lower workings became flooded and lay dormant until seventy-five years later, when the site came under new ownership. There is still gold to be mined here – but the cost of extracting the gold from the mine exceeds the value of the gold, at least for the time being. Today, a part of the upper level of the mine remains open for tourists, who can tour portions of the "Glory Hole" underground and pan for gold. Original cart rails, electrical lines and even an operational pneumatic drill recovered from the mine may be viewed. This mine and Crisson Mine are the two mines in the Dahlonega area that remain open for tourists. The Consolidated Mine remains the only mine in the area safe enough to take tourists into. References External links Consolidated Mine - official site Georgia Division of Archives and History Photo - Beginning construction of Consolidated Mine Company, 1899 Georgia Division of Archives and History Photo - Inside of the stamp mill, 1899-1906 'Thar's Gold in Them Thar Hills': Gold and Gold Mining in Georgia, 1830s-1940s from the Digital Library of Georgia Georgia Gold Rush Historic districts on the National Register of Historic Places in Georgia (U.S. state) Gold mines in Georgia Geology of Georgia (U.S. state) Underground mines in the United States Mines in Lumpkin County, Georgia Mining museums in Georgia (U.S. state) Museums in Lumpkin County, Georgia Tourist attractions in Lumpkin County, Georgia National Register of Historic Places in Lumpkin County, Georgia Stamp mills
Consolidated Mine
[ "Chemistry", "Engineering" ]
493
[ "Stamp mills", "Metallurgical facilities", "Mining equipment" ]
995,994
https://en.wikipedia.org/wiki/Electro-galvanic%20oxygen%20sensor
An electro-galvanic fuel cell is an electrochemical device which consumes a fuel to produce an electrical output by a chemical reaction. One form of electro-galvanic fuel cell based on the oxidation of lead is commonly used to measure the concentration of oxygen gas in underwater diving and medical breathing gases. Electronically monitored or controlled diving rebreather systems, saturation diving systems, and many medical life-support systems use galvanic oxygen sensors in their control circuits to directly monitor oxygen partial pressure during operation. They are also used in oxygen analysers in recreational, technical diving and surface supplied mixed gas diving to analyse the proportion of oxygen in a nitrox, heliox or trimix breathing gas before a dive. These cells are lead/oxygen galvanic cells where oxygen molecules are dissociated and reduced to hydroxyl ions at the cathode. The ions diffuse through the electrolyte and oxidize the lead anode. A current proportional to the rate of oxygen consumption is generated when the cathode and anode are electrically connected through a resistor Function The cell reaction for a lead/oxygen cell is: 2Pb + O2 → 2PbO, made up of the cathode reaction: O2 + 2H2O + 4e− → 4OH−, and anode reaction: 2Pb + 4OH− → 2PbO + 2H2O + 4e−. The cell current is proportional to the rate of oxygen reduction at the cathode, but this is not linearly dependent on the partial pressure of oxygen in the gas to which the cell is exposed: Linearity is achieved by placing a diffusion barrier between the gas and the cathode, which limits the amount of gas reaching the cathode to an amount that can be fully reduced without significant delay, making the partial pressure in the immediate vicinity of the electrode close to zero. As a result of this the amount of oxygen reaching the electrode follows Fick's laws of diffusion and is proportional to the partial pressure in the gas beyond the membrane. This makes the current proportional to PO2. The load resistor over the cell allows the electronics to measure a voltage rather than a current. This voltage depends on the construction and age of the sensor, and typically varies between 7 and 28 mV for a PO2 of 0.21 bar Diffusion is linearly dependent on the partial pressure gradient, but is also temperature dependent, and the current rises about two to three percent per kelvin rise in temperature. A negative temperature coefficient resistor is used to compensate, and for this to be effective it must be at the same temperature as the cell. Oxygen cells which may be exposed to relatively large or rapid temperature changes, like rebreathers, generally use thermally conductive paste between the temperature compensating circuit and the cell to speed up the balancing of temperature. Temperature also affects the signal response time, which is generally between 6 and 15 seconds at room temperature for a 90% response to a step change in partial pressure. Cold cells react much slower and hot cells much faster. As the anode material is oxidised the output current drops and eventually will cease altogether. The oxidation rate depends on the oxygen reaching the anode from the sensor membrane. Lifetime is measured in oxygen-hours, and also depends on temperature and humidity Applications Gas mixture analysis The oxygen content of a stored gas mixture can be analysed by passing a small flow of the gas over a recently calibrated cell for long enough that the output stabilises. The stable output represents the fraction of oxygen in the mixture. 
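To make the arithmetic concrete, the fragment below is a minimal sketch (not taken from any particular analyser) of how a single-point calibration in air can turn a cell's millivolt reading into an oxygen fraction, relying on the approximately linear behaviour described above; the 13 mV calibration figure is an assumed example value, not a specification from the text.

```python
# Illustrative sketch only: converting a galvanic cell's millivolt output into
# an oxygen fraction via a single-point calibration in air.
AIR_O2_FRACTION = 0.209          # fraction of oxygen in dry air

def calibrate(mv_in_air: float) -> float:
    """Return a scale factor (oxygen fraction per millivolt) from a reading in air."""
    return AIR_O2_FRACTION / mv_in_air

def o2_fraction(mv_reading: float, scale: float) -> float:
    """Estimate the oxygen fraction of a gas sample from the cell's mV output."""
    return mv_reading * scale

scale = calibrate(mv_in_air=13.0)          # e.g. the cell reads 13.0 mV in air (assumed)
print(round(o2_fraction(20.1, scale), 3))  # a nitrox sample reading, about 0.323
```

A real analyser additionally needs temperature compensation, and the linear assumption breaks down if the cell has become current-limited, as discussed under failure modes below.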
Care must be taken to ensure that the gas flow is not diluted by ambient air, as this would affect the reading. Breathing gas composition monitoring The partial pressure of oxygen in anaesthetic gases is monitored by siting the cell in the gas flow, which is at local atmospheric pressure, and can be calibrated to directly indicate the fraction of oxygen in the mix. The partial pressure of oxygen in diving chambers and surface supplied breathing gas mixtures can also be monitored using these cells. This can either be done by placing the cell directly in the hyperbaric environment, wired through the hull to the monitor, or indirectly, by bleeding off gas from the hyperbaric environment or diver gas supply and analysing at atmospheric pressure, then calculating the partial pressure in the hyperbaric environment. This is frequently required in saturation diving and surface oriented surface supplied mixed gas commercial diving. Diving rebreather control systems The breathing gas mixture in a diving rebreather loop is usually measured using oxygen cells, and the output of the cells is used by either the diver or an electronic control system to control addition of oxygen to increase partial pressure when it is below the chosen lower set-point, or to flush with diluent gas when it is above the upper set-point. When the partial pressure is between the upper and lower set-points, it is suitable for breathing at that depth and is left until it changes as a result of consumption by the diver, or a change in ambient pressure as a result of a depth change. Accuracy and reliability of measurement is important in this application for two basic reasons. Firstly, if the oxygen content is too low, the diver will lose consciousness due to hypoxia and probably die, or if the oxygen content is too high, the risk of central nervous system oxygen toxicity causing convulsions and loss of consciousness, with a high risk of drowning becomes unacceptable. Secondly, decompression obligations cannot be accurately or reliably calculated if the breathing gas composition is not known. Pre-dive calibration of the cells can only check response to partial pressures up to 100% at atmospheric pressure, or 1 bar. As the set points are commonly in the range of 1.2 to 1.6 bar, special hyperbaric calibration equipment would be required to reliably test the response at the set-points. This equipment is available, but is expensive and not in common use, and requires the cells to be removed from the rebreather and installed in the test unit. To compensate for the possibility of a cell failure during a dive, three cells are generally fitted, on the principle that failure of one cell at a time is most likely, and that if two cells indicate the same PO2, they are more likely to be correct than the single cell with a different reading. Voting logic allows the control system to control the circuit for the rest of the dive according to the two cells assumed to be correct. This is not entirely reliable, as it is possible for two cells to fail on the same dive. The sensors should be placed in the rebreather where a temperature gradient between the gas and the electronics in the back of the cells will not occur. Lifespan Oxygen cells behave in a similar way to electrical batteries in that they have a finite lifespan which is dependent upon use. The chemical reaction described above causes the cell to create an electrical output that has a predicted voltage which is dependent on the materials used. 
In theory they should give that voltage from the day they are made until they are exhausted, except that one component of the planned chemical reaction has been left out of the assembly: oxygen. Oxygen is one of the fuels of the cell so the more oxygen there is at the reaction surface, the more electrical current is generated. The chemistry sets the voltage and the oxygen concentration controls the electric current output. If an electrical load is connected across the cell it can draw up to this current but if the cell is overloaded the voltage will drop. When the lead electrode has been substantially oxidised, the maximum current that the cell can produce will drop, and eventually linearity of output current to partial pressure of oxygen at the reactive surface will fail within the required range of measurement, and the cell will no longer be accurate. There are two commonly used ways to specify expected sensor life span: The time in months at room temperature in air, or volume percentage oxygen hours (Vol%O2h). Storage at low oxygen partial pressure when not in use would seem an effective way to extend cell life, but when stored in anoxic conditions the sensor current will cease and the surface of the electrode may be passivated, which can lead to sensor failure. High ambient temperatures will increase sensor current, and reduce cell life. In diving service a cell typically lasts for 12 to 18 months, with perhaps 150 hours service in the diving loop at an oxygen partial pressure of about 1.2 bar and the rest of the time in storage in air at room temperature. Failures in cells can be life-threatening for technical divers and in particular, rebreather divers. The failure modes common to these cells are: failing with a higher than expected output due to electrolyte leaks, which is usually attributable to physical damage, contamination, or other defects in manufacture, or current limitation due to exhausted cell life and non linear output across its range. Shelf life can be maximised by keeping the cell in the sealed bag as supplied by the manufacturer until being put into service, storing the cell before and between use at or below room temperature, - a range of from 10 to 22 °C is recommended by a manufacturer - and avoid storing the cell in warm or dry environments for prolonged periods, particularly areas exposed to direct sunlight. Failure modes When new, a sensor can produce a linear output for over 4 bar partial pressure of oxygen, and as the anode is consumed the linear output range drops, eventually to below the range of partial pressures which may be expected in service, at which stage it is no longer fit to control the system. The maximum output current eventually drops below the amount needed to indicate the full range of partial pressures expected in operation. This state is called current-limited. Current limited cells do not give a high enough output in high concentrations of oxygen. The rebreather control circuit responds as if there is insufficient oxygen in the loop and injects more oxygen in an attempt to reach a setpoint the cell can never indicate, resulting in hyperoxia. When a current limited sensor can no longer reliably activate the control system at the upper set-point in a life support system, there is a severe risk of an excessive oxygen partial pressure occurring which will not be noticed, which can be life-threatening. Other failure modes include mechanical damage, such as broken conductors, corroded contacts and loss of electrolyte due to damaged membranes. 
Failing high – producing an output indicating partial pressure higher than reality – is invariably a result of a manufacturing fault or mechanical damage. In rebreathers, failing high will result in the rebreather assuming that there is more oxygen in the loop than there actually is which can result in hypoxia. Non-linear cells do not perform in the expected manner across the required range of oxygen partial pressures. Two-point calibration against diluent and oxygen at atmospheric pressure will not pick up this fault which results in inaccurate loop contents of a rebreather. This gives the potential for decompression illness if the loop is maintained at a lower partial pressure than indicated by the cell output, or hyperoxia if the loop is maintained at a higher partial pressure than indicated by cell output. Testing cells in the field Preventing accidents in rebreathers from cell failures is possible in most cases by accurately testing the cells before use. Some divers carry out in-water checks by pushing the oxygen content in the loop to a pressure that is above that of pure oxygen at sea level to indicate if the cell is capable of high outputs. This test is only a spot check and does not accurately assess the quality of that cell or predict its failure. The only way to accurately test a cell is with a test chamber which can hold a calibrated static pressure above the upper set-point without deviation and the ability to record the output voltage over the full range of working partial pressures and graph them. Managing cell failure in a life-support system If more than one statistically independent cell is used, it is unlikely that more than one will fail at a time. If one assumes that only one cell will fail, then comparing three or more outputs which have been calibrated at two points is likely to pick up the cell which has failed by assuming that any two cells that produce the same output are correct and the one which produces a different output is defective. This assumption is usually correct in practice, particularly if there is some difference in the history of the cells involved. The concept of comparing the output from three cells at the same place in the loop and controlling the gas mixture based on the average output of the two with the most similar output at any given time is known as voting logic, and is more reliable than control based on a single cell. If the third cell output deviates sufficiently from the other two, an alarm indicates probable cell failure. If this occurs before the dive, the rebreather is deemed unsafe and should not be used. If it occurs during a dive, it indicates an unreliable control system, and the dive should be aborted. Continuing a dive using a rebreather with a failed cell alarm significantly increases the risk of a fatal loop control failure. This system is not totally reliable. There has been at least one case reported where two cells failed similarly and the control system voted out the remaining good cell. If the probability of failure of each cell was statistically independent of the others, and each cell alone was sufficient to allow safe function of the rebreather, the use of three fully redundant cells in parallel would reduce risk of failure by five or six orders of magnitude. The voting logic changes this considerably. A majority of cells must not fail for safe function of the unit. In order to decide whether a cell is functioning correctly, it must be compared with an expected output. 
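A minimal sketch of such a comparison is shown below; it is only an illustration of the voting idea described above, and the 0.1 bar tolerance is an assumed example value rather than a figure used by any actual controller.

```python
# Hypothetical three-cell voting sketch; the 0.1 bar tolerance is an assumed
# example value, not a figure taken from the text above.
def vote(po2_readings, tolerance=0.1):
    """Return (voted PO2, list of suspect cell indices) for three sensor readings."""
    suspects = []
    for i, value in enumerate(po2_readings):
        others = [v for j, v in enumerate(po2_readings) if j != i]
        # A cell is suspect if it agrees with neither of the other cells.
        if all(abs(value - other) > tolerance for other in others):
            suspects.append(i)
    trusted = [v for i, v in enumerate(po2_readings) if i not in suspects]
    if not trusted:
        return None, suspects      # no agreement at all: treat the loop as unreliable
    return sum(trusted) / len(trusted), suspects

print(vote([1.21, 1.19, 0.74]))    # -> (about 1.20, [2]): the third cell is voted out
```

Real controllers differ in how they weight, average and alarm on the readings.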
This is done by comparing it against the outputs of other cells. In the case of two cells, if the outputs differ, then one at least must be wrong, but it is not known which one. In such a case the diver should assume the unit is unsafe and bail out to open circuit. With three cells, if they all differ within an accepted tolerance, they may all be deemed functional. If two differ within tolerance, and the third does not, the two within tolerance may be deemed functional, and the third faulty. If none are within tolerance of each other, they may all be faulty, and if one is not, there is no way of identifying it. Using this logic, the improvement in reliability gained by use of voting logic, where at least two sensors must function for the system to function, is greatly reduced compared to the fully redundant version. Improvements are only in the order of one to two orders of magnitude. This would be a great improvement over the single sensor, but the analysis above has assumed statistical independence of the failure of the sensors, which is generally not realistic.
Factors which make the cell outputs in a rebreather statistically dependent include:
Common calibration gas - they are all calibrated together in the pre-dive check using the same diluent and oxygen supply.
Sensors are often from the same manufacturing batch - components, materials and processes are likely to be very similar.
Sensors are often installed together and have since been exposed to the same PO2 and temperature profile over the subsequent time.
Common working environment, particularly with regard to temperature and relative humidity, as they are usually mounted in very close proximity in the loop, to ensure that they measure similar gas.
Common measurement systems
Common firmware for processing the signals
This statistical dependency can be minimised and mitigated by:
Using sensors from different manufacturers or batches, so that no two are from the same batch
Changing sensors at different times, so they each have a different history
Ensuring that the calibration gases are correct
Adding a statistically independent PO2 measuring system to the loop at a different place, using a different model sensor, and using different electronics and software to process the signal.
Calibrating this sensor using a different gas source to the others
An alternative method of providing redundancy in the control system is to recalibrate the sensors periodically during the dive by exposing them to a flow of either diluent or oxygen or both at different times, and using the output to check whether the cell is reacting appropriately to the known gas at the known depth. This method has the added advantage of allowing calibration at higher oxygen partial pressure than 1 bar. This procedure may be done automatically, where the system has been designed to do it, or the diver can manually perform a diluent flush at any depth at which the diluent is breathable to compare the cell PO2 readings against a known FO2 and absolute pressure to verify the displayed values. This test does not only validate the cell. If the sensor does not display the expected value, it is possible that the oxygen sensor, the pressure sensor (depth), or the gas mixture FO2, or any combination of these may be faulty. As all three of these possible faults could be life-threatening, the test is quite powerful.
Testing
The first commercially available certified oxygen cell checking device was launched in 2005 by Narked at 90, but did not achieve commercial success.
A much revised model was released in 2007 and won the "Gordon Smith Award" for Innovation at the Diving Equipment Manufacturers Exhibition in Florida. Narked at 90 Ltd also won the Innovation Award for "a technical diving product that has made diving safer" at EUROTEK.2010 for their Oxygen Cell Checker. The Cell Checker has been used by organisations such as Teledyne, Vandagraph, National Oceanic and Atmospheric Administration, NURC (NATO Undersea Research Centre), and Diving Diseases Research Centre. A small pressure vessel for hyperbaric testing of cells is also available in which a pressurised oxygen atmosphere of up to 2 bar can be used to check linearity at higher pressures using the electronics of the rebreather.
See also
References
Fuel cells Underwater diving safety equipment Sensors Oxygen
Electro-galvanic oxygen sensor
[ "Technology", "Engineering" ]
3,679
[ "Sensors", "Measuring instruments" ]
996,096
https://en.wikipedia.org/wiki/Sentence%20extraction
Sentence extraction is a technique used for automatic summarization of a text. In this shallow approach, statistical heuristics are used to identify the most salient sentences of a text. Sentence extraction is a low-cost approach compared to more knowledge-intensive deeper approaches which require additional knowledge bases such as ontologies or linguistic knowledge. In short, sentence extraction works as a filter that allows only meaningful sentences to pass. The major downside of applying sentence-extraction techniques to the task of summarization is the loss of coherence in the resulting summary. Nevertheless, sentence extraction summaries can give valuable clues to the main points of a document and are frequently sufficiently intelligible to human readers. Procedure Usually, a combination of heuristics is used to determine the most important sentences within the document. Each heuristic assigns a (positive or negative) score to the sentence. After all heuristics have been applied, the highest-scoring sentences are included in the summary. The individual heuristics are weighted according to their importance. Early approaches and some sample heuristics Seminal papers which laid the foundations for many techniques used today have been published by Hans Peter Luhn in 1958 and H. P Edmundson in 1969. Luhn proposed to assign more weight to sentences at the beginning of the document or a paragraph. Edmundson stressed the importance of title-words for summarization and was the first to employ stop-lists in order to filter uninformative words of low semantic content (e.g. most grammatical words such as of, the, a). He also distinguished between bonus words and stigma words, i.e. words that probably occur together with important (e.g. the word form significant) or unimportant information. His idea of using key-words, i.e. words which occur significantly frequently in the document, is still one of the core heuristics of today's summarizers. With large linguistic corpora available today, the tf–idf value which originated in information retrieval, can be successfully applied to identify the key words of a text: If for example the word cat occurs significantly more often in the text to be summarized (TF = "term frequency") than in the corpus (IDF means "inverse document frequency"; here the corpus is meant by document), then cat is likely to be an important word of the text; the text may in fact be a text about cats. See also Sentence boundary disambiguation Text segmentation References Computational linguistics Natural language processing
Sentence extraction
[ "Technology" ]
526
[ "Natural language processing", "Natural language and computing", "Computational linguistics" ]
996,107
https://en.wikipedia.org/wiki/Universal%20coefficient%20theorem
In algebraic topology, universal coefficient theorems establish relationships between homology groups (or cohomology groups) with different coefficients. For instance, for every topological space $X$, its integral homology groups $H_i(X; \mathbb{Z})$ completely determine its homology groups with coefficients in $A$, for any abelian group $A$:
$H_i(X; A)$
Here $H_i$ might be the simplicial homology, or more generally the singular homology. The usual proof of this result is a pure piece of homological algebra about chain complexes of free abelian groups. The form of the result is that other coefficients $A$ may be used, at the cost of using a Tor functor. For example it is common to take $A$ to be $\mathbb{Z}/2\mathbb{Z}$, so that coefficients are modulo 2. This becomes straightforward in the absence of 2-torsion in the homology. Quite generally, the result indicates the relationship that holds between the Betti numbers $b_i$ of $X$ and the Betti numbers $b_{i,F}$ with coefficients in a field $F$. These can differ, but only when the characteristic of $F$ is a prime number $p$ for which there is some $p$-torsion in the homology.
Statement of the homology case
Consider the tensor product of modules $H_i(X; \mathbb{Z}) \otimes A$. The theorem states there is a short exact sequence involving the Tor functor
$0 \to H_i(X; \mathbb{Z}) \otimes A \xrightarrow{\mu} H_i(X; A) \to \operatorname{Tor}_1(H_{i-1}(X; \mathbb{Z}), A) \to 0.$
Furthermore, this sequence splits, though not naturally. Here $\mu$ is the map induced by the bilinear map $H_i(X; \mathbb{Z}) \times A \to H_i(X; A)$. If the coefficient ring is $\mathbb{Z}/p\mathbb{Z}$, this is a special case of the Bockstein spectral sequence.
Universal coefficient theorem for cohomology
Let $G$ be a module over a principal ideal domain $R$ (e.g., $\mathbb{Z}$ or a field). There is also a universal coefficient theorem for cohomology involving the Ext functor, which asserts that there is a natural short exact sequence
$0 \to \operatorname{Ext}_R^1(H_{i-1}(X; R), G) \to H^i(X; G) \xrightarrow{h} \operatorname{Hom}_R(H_i(X; R), G) \to 0.$
As in the homology case, the sequence splits, though not naturally. In fact, if $H^i(X; G)$ is defined as the cohomology of the cochain complex $\operatorname{Hom}_R(C_\bullet(X), G)$, then $h$ above is the canonical map sending the class of a cocycle $f$ to the homomorphism it induces on homology, $h([f])([x]) = f(x)$.
An alternative point of view can be based on representing cohomology via Eilenberg–MacLane spaces, where the map $h$ takes a homotopy class of maps from $X$ to $K(G, i)$ to the corresponding homomorphism induced in homology. Thus, the Eilenberg–MacLane space is a weak right adjoint to the homology functor.
Example: mod 2 cohomology of the real projective space
Let $X = \mathbb{RP}^n$, the real projective space. We compute the singular cohomology of $X$ with coefficients in $G = \mathbb{Z}/2\mathbb{Z}$ using the integral homology, i.e. with $R = \mathbb{Z}$. Knowing that the integer homology is given by:
$H_i(X; \mathbb{Z}) = \begin{cases} \mathbb{Z} & i = 0 \text{ or } i = n \text{ odd} \\ \mathbb{Z}/2\mathbb{Z} & 0 < i < n,\ i \text{ odd} \\ 0 & \text{otherwise} \end{cases}$
We have $\operatorname{Ext}^1(\mathbb{Z}/2\mathbb{Z}, \mathbb{Z}/2\mathbb{Z}) \cong \mathbb{Z}/2\mathbb{Z}$ and $\operatorname{Ext}^1(\mathbb{Z}, \mathbb{Z}/2\mathbb{Z}) = 0$, so that the above exact sequences yield
$H^i(X; \mathbb{Z}/2\mathbb{Z}) \cong \mathbb{Z}/2\mathbb{Z} \quad \text{for } i = 0, \dots, n.$
In fact the total cohomology ring structure is
$H^*(X; \mathbb{Z}/2\mathbb{Z}) \cong (\mathbb{Z}/2\mathbb{Z})[w]/\langle w^{n+1} \rangle.$
Corollaries
A special case of the theorem is computing integral cohomology. For a finite CW complex $X$, $H_i(X; \mathbb{Z})$ is finitely generated, and so we have the following decomposition:
$H_i(X; \mathbb{Z}) \cong \mathbb{Z}^{\beta_i(X)} \oplus T_i,$
where $\beta_i(X)$ are the Betti numbers of $X$ and $T_i$ is the torsion part of $H_i$. One may check that
$\operatorname{Hom}(H_i(X), \mathbb{Z}) \cong \mathbb{Z}^{\beta_i(X)}$ and $\operatorname{Ext}^1(H_i(X), \mathbb{Z}) \cong T_i.$
This gives the following statement for integral cohomology:
$H^i(X; \mathbb{Z}) \cong \mathbb{Z}^{\beta_i(X)} \oplus T_{i-1}.$
For an orientable, closed, and connected $n$-manifold $X$, this corollary coupled with Poincaré duality gives that $\beta_i(X) = \beta_{n-i}(X)$.
Universal coefficient spectral sequence
There is a generalization of the universal coefficient theorem for (co)homology with twisted coefficients. For cohomology we have
$E_2^{p,q} = \operatorname{Ext}_R^q(H_p(C_\bullet), G) \Rightarrow H^{p+q}(C_\bullet; G),$
where $R$ is a ring with unit, $C_\bullet$ is a chain complex of free modules over $R$, $G$ is any $(R, S)$-bimodule for some ring with a unit $S$, and $\operatorname{Ext}$ is the Ext group. The differential on the $r$-th page has the bidegree determined by $r$ in the usual way for such spectral sequences. Similarly for homology,
$E^2_{p,q} = \operatorname{Tor}_q^R(H_p(C_\bullet), G) \Rightarrow H_{p+q}(C_\bullet; G),$
with Tor the Tor group and the differentials again of the corresponding bidegrees.
Notes
References
Allen Hatcher, Algebraic Topology, Cambridge University Press, Cambridge, 2002. A modern, geometrically flavored introduction to algebraic topology. The book is available free in PDF and PostScript formats on the author's homepage.
Jerome Levine. “Knot Modules.
I.” Transactions of the American Mathematical Society 229 (1977): 1–50. https://doi.org/10.2307/1998498 External links Universal coefficient theorem with ring coefficients Homological algebra Theorems in algebraic topology
Universal coefficient theorem
[ "Mathematics" ]
787
[ "Mathematical structures", "Theorems in topology", "Fields of abstract algebra", "Category theory", "Theorems in algebraic topology", "Homological algebra" ]
996,154
https://en.wikipedia.org/wiki/Timberjack
Timberjack is a manufacturer of forestry machinery for both cut-to-length and whole tree logging, and was a subsidiary of John Deere from 2000 to 2006. History Timberjack was founded in Woodstock, Ontario, in the 1950s by Wes Magill and Robert Simmons, who designed an articulated four-wheel drive tractor with a winch at the back. They produced a prototype and production took off from there. There were affiliations with King Trailer ind. and with Timberland Ellicott Corp. before Eaton Corporation purchased Timberjack and named it the Forestry Equipment Division. The traditional color of all Timberjack products was a reddish orange. In 1992, the color was changed to green with black and yellow trim. John Deere purchased Timberjack and continued the green, black and yellow paint scheme. Timberjack was owned by the Eaton Corporation in the 1960s, 1970s and early 1980s. In 1984 Timberjack made a leveraged buyout from Eaton to become an independent company. Timberjack was acquired by FMG (ForestMachineGroup), owned by Finnish Rauma-Repola. After a short period carrying the double-name FMG-Timberjack, in 1993 Timberjack became the brandname for the group. Other well known forest machine brands, which have been incorporated into FMG-Timberjack were Swedish Kockums, ÖSA and Bruun System as well as Finnish LOKOMO. In December 2000 John Deere bought Timberjack from Metso Corporation (formerly Rauma-Repola) for $570 million which also included the purchase of a separate company, Waratah, a leading manufacturer of heavy-duty harvester heads. As of June, 2006, at the forestry fair "Florence Wood", the Timberjack product line was discontinued, and John Deere, its parent company, became the largest single brand of forestry equipment. Its global market share for both cut-to-length and full tree equipment was very strong shortly after the acquisition. References External links John Deere vehicles Log transport
Timberjack
[ "Engineering" ]
406
[ "Engineering vehicles", "John Deere vehicles" ]
996,230
https://en.wikipedia.org/wiki/Meander%20%28art%29
A meander or meandros is a decorative border constructed from a continuous line, shaped into a repeated motif. Among some Italians, these patterns are known as "Greek Lines". Such a design may also be called the Greek fret or Greek key design, although these terms are modern designations; this decorative motif appears much earlier and among Near and Far eastern cultures that are far from Greece. Usually the term is used for motifs with straight lines and right angles, and the many versions with rounded shapes are called running scrolls or, following the etymological origin of the term, may be identified as water wave motifs. Meaning of the name On the one hand, the name "meander" recalls the twisting and turning path of the Maeander River in Asia Minor (present day Turkey) that is typical of river pathways. On the other hand, as Karl Kerenyi pointed out, "the meander is the figure of a labyrinth in linear form". Decorative uses Meanders are common decorative elements in Greek and Roman art. In ancient Greece they appear in many architectural friezes, and in bands on the pottery of ancient Greece from the Geometric period onward. The design is common to the present day in classicizing architecture, and is adopted frequently as a decorative motif for borders in many modern printed materials. Labyrinthine meanders in China The meander is a fundamental design motif in regions far from a Hellenic orbit: labyrinthine meanders ("thunder" pattern) appear in bands and as infill on Shang bronzes, and many traditional buildings in and around China still bear geometric designs almost identical to meanders. Although space-filling curves have a long history in China in motifs more than 2,000 years earlier, extending back to the Zhukaigou Culture and the Xiajiadian Culture, there is frequent speculation that meanders of Greek origin may have come to China during the time of the Han dynasty by way of trade with the Greco-Bactrian Kingdom. A meander motif also appears in prehistoric Mayan designs in the western hemisphere, centuries before any European contacts. Gallery See also Mezine Vitruvian scroll Citations Sources External links Illustrated Architecture Dictionary: "Fret": a short description, with a list of links to photographs of meander designs in art and architecture Culture of Greece Ornaments Visual motifs Labyrinths
Meander (art)
[ "Mathematics" ]
481
[ "Symbols", "Visual motifs" ]
996,278
https://en.wikipedia.org/wiki/Molecular%20geometry
Molecular geometry is the three-dimensional arrangement of the atoms that constitute a molecule. It includes the general shape of the molecule as well as bond lengths, bond angles, torsional angles and any other geometrical parameters that determine the position of each atom. Molecular geometry influences several properties of a substance including its reactivity, polarity, phase of matter, color, magnetism and biological activity. The angles between bonds that an atom forms depend only weakly on the rest of molecule, i.e. they can be understood as approximately local and hence transferable properties. Determination The molecular geometry can be determined by various spectroscopic methods and diffraction methods. IR, microwave and Raman spectroscopy can give information about the molecule geometry from the details of the vibrational and rotational absorbance detected by these techniques. X-ray crystallography, neutron diffraction and electron diffraction can give molecular structure for crystalline solids based on the distance between nuclei and concentration of electron density. Gas electron diffraction can be used for small molecules in the gas phase. NMR and FRET methods can be used to determine complementary information including relative distances, dihedral angles, angles, and connectivity. Molecular geometries are best determined at low temperature because at higher temperatures the molecular structure is averaged over more accessible geometries (see next section). Larger molecules often exist in multiple stable geometries (conformational isomerism) that are close in energy on the potential energy surface. Geometries can also be computed by ab initio quantum chemistry methods to high accuracy. The molecular geometry can be different as a solid, in solution, and as a gas. The position of each atom is determined by the nature of the chemical bonds by which it is connected to its neighboring atoms. The molecular geometry can be described by the positions of these atoms in space, evoking bond lengths of two joined atoms, bond angles of three connected atoms, and torsion angles (dihedral angles) of three consecutive bonds. Influence of thermal excitation Since the motions of the atoms in a molecule are determined by quantum mechanics, "motion" must be defined in a quantum mechanical way. The overall (external) quantum mechanical motions translation and rotation hardly change the geometry of the molecule. (To some extent rotation influences the geometry via Coriolis forces and centrifugal distortion, but this is negligible for the present discussion.) In addition to translation and rotation, a third type of motion is molecular vibration, which corresponds to internal motions of the atoms such as bond stretching and bond angle variation. The molecular vibrations are harmonic (at least to good approximation), and the atoms oscillate about their equilibrium positions, even at the absolute zero of temperature. At absolute zero all atoms are in their vibrational ground state and show zero point quantum mechanical motion, so that the wavefunction of a single vibrational mode is not a sharp peak, but approximately a Gaussian function (the wavefunction for n = 0 depicted in the article on the quantum harmonic oscillator). At higher temperatures the vibrational modes may be thermally excited (in a classical interpretation one expresses this by stating that "the molecules will vibrate faster"), but they oscillate still around the recognizable geometry of the molecule. 
To get a feeling for the probability that the vibration of a molecule may be thermally excited, we inspect the Boltzmann factor β = exp(−ΔE/kT), where ΔE is the excitation energy of the vibrational mode, k the Boltzmann constant and T the absolute temperature. At 298 K (25 °C), typical values for the Boltzmann factor β are: β = 0.089 for ΔE = 500 cm−1; β = 0.008 for ΔE = 1000 cm−1; β = 0.0007 for ΔE = 1500 cm−1. (The reciprocal centimeter is an energy unit that is commonly used in infrared spectroscopy; 1 cm−1 corresponds to about 1.24 × 10−4 eV, or roughly 0.012 kJ/mol.) When the excitation energy is 500 cm−1, about 8.9 percent of the molecules are thermally excited at room temperature. To put this in perspective: the lowest excitation vibrational energy in water is the bending mode (about 1600 cm−1). Thus, at room temperature less than 0.07 percent of all the molecules of a given amount of water will vibrate faster than at absolute zero. As stated above, rotation hardly influences the molecular geometry. But, as a quantum mechanical motion, it is thermally excited at relatively (as compared to vibration) low temperatures. From a classical point of view it can be stated that at higher temperatures more molecules will rotate faster, which implies that they have higher angular velocity and angular momentum. In quantum mechanical language: more eigenstates of higher angular momentum become thermally populated with rising temperatures. Typical rotational excitation energies are on the order of a few cm−1. The results of many spectroscopic experiments are broadened because they involve an averaging over rotational states. It is often difficult to extract geometries from spectra at high temperatures, because the number of rotational states probed in the experimental averaging increases with increasing temperature. Thus, many spectroscopic observations can only be expected to yield reliable molecular geometries at temperatures close to absolute zero, because at higher temperatures too many higher rotational states are thermally populated. Bonding Molecules, by definition, are most often held together with covalent bonds involving single, double, and/or triple bonds, where a "bond" is a shared pair of electrons (the other method of bonding between atoms is called ionic bonding and involves a positive cation and a negative anion). Molecular geometries can be specified in terms of 'bond lengths', 'bond angles' and 'torsional angles'. The bond length is defined to be the average distance between the nuclei of two atoms bonded together in any given molecule. A bond angle is the angle formed between three atoms across at least two bonds. For four atoms bonded together in a chain, the torsional angle is the angle between the plane formed by the first three atoms and the plane formed by the last three atoms. There exists a mathematical relationship among the bond angles for one central atom and four peripheral atoms (labeled 1 through 4), expressed by the vanishing of the determinant of the 4 × 4 matrix whose entries are cos θij, where θij is the angle between the bonds to peripheral atoms i and j: det[cos θij] = 0. The determinant vanishes because four bond vectors in three-dimensional space are necessarily linearly dependent. This constraint removes one degree of freedom from the choices of (originally) six free bond angles to leave only five choices of bond angles. (The angles θ11, θ22, θ33, and θ44 are always zero, and this relationship can be modified for a different number of peripheral atoms by expanding or contracting the square matrix.) Molecular geometry is determined by the quantum mechanical behavior of the electrons. Using the valence bond approximation this can be understood by the type of bonds between the atoms that make up the molecule.
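The Boltzmann-factor values quoted above for 500, 1000 and 1500 cm−1 can be reproduced with a short calculation; the sketch below is illustrative only (standard physical constants, with the helper name invented for the example) and converts a vibrational excitation energy given in reciprocal centimeters into a Boltzmann factor at a chosen temperature.

```python
import math

# Physical constants (CODATA values, rounded)
H = 6.62607015e-34     # Planck constant, J*s
C = 2.99792458e10      # speed of light in cm/s, so H*C*nu is in joules for nu in cm^-1
KB = 1.380649e-23      # Boltzmann constant, J/K

def boltzmann_factor(wavenumber_cm: float, temperature_k: float = 298.0) -> float:
    """Return exp(-dE/kT) for a mode whose excitation energy is given in cm^-1."""
    delta_e = H * C * wavenumber_cm                  # excitation energy in joules
    return math.exp(-delta_e / (KB * temperature_k))

if __name__ == "__main__":
    for nu in (500.0, 1000.0, 1500.0, 1600.0):
        print(f"dE = {nu:6.0f} cm^-1  ->  beta = {boltzmann_factor(nu):.4f}")
    # Output is close to the values quoted in the text:
    # beta ~ 0.09 at 500 cm^-1, ~ 0.008 at 1000 cm^-1, ~ 0.0007 at 1500 cm^-1,
    # and below 0.0007 for the ~1600 cm^-1 bending mode of water.
```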
When atoms interact to form a chemical bond, the atomic orbitals of each atom are said to combine in a process called orbital hybridisation. The two most common types of bonds are sigma bonds (usually formed by hybrid orbitals) and pi bonds (formed by unhybridized p orbitals for atoms of main group elements). The geometry can also be understood by molecular orbital theory where the electrons are delocalised. An understanding of the wavelike behavior of electrons in atoms and molecules is the subject of quantum chemistry. Isomers Isomers are types of molecules that share a chemical formula but have different geometries, resulting in different properties: A pure substance is composed of only one type of isomer of a molecule (all have the same geometrical structure). Structural isomers have the same chemical formula but different physical arrangements, often forming alternate molecular geometries with very different properties. The atoms are not bonded (connected) together in the same order. Functional isomers are special kinds of structural isomers, where certain groups of atoms exhibit a special kind of behavior, such as an ether or an alcohol. Stereoisomers may have many similar physicochemical properties (melting point, boiling point) and at the same time very different biochemical activities. This is because they exhibit a handedness that is commonly found in living systems. One manifestation of this chirality or handedness is that they have the ability to rotate polarized light in different directions. Protein folding concerns the complex geometries and different isomers that proteins can take. Types of molecular structure A bond angle is the geometric angle between two adjacent bonds. Some common shapes of simple molecules include: Linear: In a linear model, atoms are connected in a straight line. The bond angles are set at 180°. For example, carbon dioxide and nitric oxide have a linear molecular shape. Trigonal planar: Molecules with the trigonal planar shape are somewhat triangular and in one plane (flat). Consequently, the bond angles are set at 120°. For example, boron trifluoride. Angular: Angular molecules (also called bent or V-shaped) have a non-linear shape. For example, water (H2O), which has an angle of about 105°. A water molecule has two pairs of bonded electrons and two unshared lone pairs. Tetrahedral: Tetra- signifies four, and -hedral relates to a face of a solid, so "tetrahedral" literally means "having four faces". This shape is found when there are four bonds all on one central atom, with no extra unshared electron pairs. In accordance with the VSEPR (valence-shell electron pair repulsion) theory, the bond angles between the electron bonds are arccos(−1/3) ≈ 109.47°. For example, methane (CH4) is a tetrahedral molecule. Octahedral: Octa- signifies eight, and -hedral relates to a face of a solid, so "octahedral" means "having eight faces". The bond angle is 90 degrees. For example, sulfur hexafluoride (SF6) is an octahedral molecule. Trigonal pyramidal: A trigonal pyramidal molecule has a pyramid-like shape with a triangular base. Unlike the linear and trigonal planar shapes but similar to the tetrahedral orientation, pyramidal shapes require three dimensions in order to fully separate the electrons. Here, there are only three pairs of bonded electrons, leaving one unshared lone pair. Lone pair – bond pair repulsions change the bond angle from the tetrahedral angle to a slightly lower value. For example, ammonia (NH3).
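Both the tetrahedral angle arccos(−1/3) quoted above and the determinant constraint on bond angles mentioned in the Bonding section can be checked numerically. The sketch below is illustrative only; an ideal methane-like set of bond vectors is assumed for the example.

```python
import math
import numpy as np

# Ideal tetrahedral bond angle: arccos(-1/3) is about 109.47 degrees.
theta = math.degrees(math.acos(-1.0 / 3.0))
print(f"arccos(-1/3) = {theta:.2f} degrees")

# Four unit bond vectors of a regular tetrahedron (methane-like geometry),
# pointing from the central atom toward alternating corners of a cube.
bonds = np.array([[ 1.0,  1.0,  1.0],
                  [ 1.0, -1.0, -1.0],
                  [-1.0,  1.0, -1.0],
                  [-1.0, -1.0,  1.0]])
bonds /= np.linalg.norm(bonds, axis=1, keepdims=True)

# Gram matrix of the four unit vectors: entry (i, j) is cos(theta_ij).
# Four vectors in three-dimensional space are linearly dependent, so the
# determinant of this matrix must vanish, which is the constraint in the text.
gram = bonds @ bonds.T
print("det[cos theta_ij] =", round(float(np.linalg.det(gram)), 12))  # ~ 0.0
```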
VSEPR table The bond angles in the table below are ideal angles from the simple VSEPR theory (pronounced "Vesper Theory"), followed by the actual angle for the example given in the following column where this differs. For many cases, such as trigonal pyramidal and bent, the actual angle for the example differs from the ideal angle, and examples differ by different amounts. For example, the angle in H2S (92°) differs from the tetrahedral angle by much more than the angle for H2O (104.48°) does. The greater the number of lone pairs contained in a molecule, the smaller the angles between the atoms of that molecule. The VSEPR theory predicts that lone pairs repel each other, thus pushing the different atoms away from them. In art Molecule Art is a relatively obscure form of abstract art in which molecular geometry, most often in the form of a skeletal structure, is depicted. 3D representations Line or stick – atomic nuclei are not represented, just the bonds as sticks or lines. As in 2D molecular structures of this type, atoms are implied at each vertex. Electron density plot – shows the electron density determined either crystallographically or using quantum mechanics rather than distinct atoms or bonds. Ball and stick – atomic nuclei are represented by spheres (balls) and the bonds as sticks. Spacefilling models or CPK models (also an atomic coloring scheme in representations) – the molecule is represented by overlapping spheres representing the atoms. Cartoon – a representation used for proteins where loops, beta sheets, and alpha helices are represented diagrammatically and no atoms or bonds are explicitly represented (e.g. the protein backbone is represented as a smooth pipe). See also Jemmis mno rules Lewis structure Molecular design software Molecular graphics Molecular mechanics Molecular modelling Molecular symmetry Molecule editor Polyhedral skeletal electron pair theory Quantum chemistry Ribbon diagram Styx rule (for boranes) Topology (chemistry) References External links Molecular Geometry & Polarity Tutorial 3D visualization of molecules to determine polarity. Molecular Geometry using Crystals 3D structure visualization of molecules using Crystallography.
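The ideal angles that the VSEPR table refers to can be collected into a small lookup keyed by the number of electron domains around the central atom. The sketch below is an illustrative, simplified subset (ideal values only, with no lone-pair corrections), and the dictionary and function names are invented for the example.

```python
# Illustrative subset of ideal VSEPR geometries, keyed by the number of
# electron domains (bonding pairs plus lone pairs) around the central atom.
# Real molecules such as NH3 or H2O show smaller angles because lone pair
# repulsion compresses the bond angles, as discussed in the text.
IDEAL_GEOMETRY = {
    2: ("linear", "180°"),
    3: ("trigonal planar", "120°"),
    4: ("tetrahedral", "109.47°"),
    5: ("trigonal bipyramidal", "90° (axial) / 120° (equatorial)"),
    6: ("octahedral", "90°"),
}

def describe(domains: int) -> str:
    shape, angles = IDEAL_GEOMETRY[domains]
    return f"{domains} electron domains -> {shape}, ideal angle(s): {angles}"

if __name__ == "__main__":
    for n in sorted(IDEAL_GEOMETRY):
        print(describe(n))
```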
Molecular geometry
[ "Physics", "Chemistry" ]
2,577
[ "Molecular geometry", "Molecules", "Stereochemistry", "Matter" ]
996,298
https://en.wikipedia.org/wiki/Exhaust%20manifold
In automotive engineering, an exhaust manifold collects the exhaust gases from multiple cylinders into one pipe. The word manifold comes from the Old English word manigfeald (from the Anglo-Saxon manig [many] and feald [fold]) and refers to the folding together of multiple inputs and outputs (in contrast, an inlet or intake manifold supplies air to the cylinders). Exhaust manifolds are generally simple cast iron or stainless steel units which collect engine exhaust gas from multiple cylinders and deliver it to the exhaust pipe. For many engines, there are aftermarket tubular exhaust manifolds known as headers in American English, as extractor manifolds in British and Australian English, and simply as "tubular manifolds" in British English. These consist of individual exhaust headpipes for each cylinder, which then usually converge into one tube called a collector. Headers that do not have collectors are called zoomie headers. The most common types of aftermarket headers are made of mild steel or stainless steel tubing for the primary tubes along with flat flanges and possibly a larger diameter collector made of a similar material as the primaries. They may be coated with a ceramic-type finish (sometimes both inside and outside), or painted with a heat-resistant finish, or bare. Chrome plated headers are available but these tend to blue after use. Polished stainless steel will also color (usually a yellow tint), but less than chrome in most cases. Another form of modification used is to insulate a standard or aftermarket manifold. This decreases the amount of heat given off into the engine bay, therefore reducing the intake manifold temperature. There are a few types of thermal insulation but three are particularly common: Ceramic paint is sprayed or brushed onto the manifold and then cured in an oven. These are usually thin, so have little insulatory properties; however, they reduce engine bay heating by lessening the heat output via radiation. A ceramic mixture is bonded to the manifold via thermal spraying to give a tough ceramic coating with very good thermal insulation. This is often used on performance production cars and track-only racers. Exhaust wrap is wrapped completely around the manifold. Although this is cheap and fairly simple, it can lead to premature degradation of the manifold. The goal of performance exhaust headers is mainly to decrease flow resistance (back pressure), and to increase the volumetric efficiency of an engine, resulting in a gain in power output. The processes occurring can be explained by the gas laws, specifically the ideal gas law and the combined gas law. Exhaust scavenging When an engine starts its exhaust stroke, the piston moves up the cylinder bore, decreasing the total chamber volume. When the exhaust valve opens, the high pressure exhaust gas escapes into the exhaust manifold or header, creating an "exhaust pulse" comprising three main parts: The high-pressure head is created by the large pressure difference between the exhaust in the combustion chamber and the atmospheric pressure outside of the exhaust system As the exhaust gases equalize between the combustion chamber and the atmosphere, the difference in pressure decreases and the exhaust velocity decreases. This forms the medium-pressure body component of the exhaust pulse The remaining exhaust gas forms the low-pressure tail component. 
This tail component may initially match ambient atmospheric pressure, but the momentum of the high and medium-pressure components reduces the pressure in the combustion chamber to a lower-than-atmospheric level. This relatively low pressure helps to extract all the combustion products from the cylinder and induct the intake charge during the overlap period when both intake and exhaust valves are partially open. The effect is known as "scavenging". Length, cross-sectional area, and shaping of the exhaust ports and pipeworks influences the degree of scavenging effect, and the engine speed range over which scavenging occurs. The magnitude of the exhaust scavenging effect is a direct function of the velocity of the high and medium pressure components of the exhaust pulse. Performance headers work to increase the exhaust velocity as much as possible. One technique is tuned-length primary tubes. This technique attempts to time the occurrence of each exhaust pulse, to occur one after the other in succession while still in the exhaust system. The lower pressure tail of an exhaust pulse then serves to create a greater pressure difference between the high pressure head of the next exhaust pulse, thus increasing the velocity of that exhaust pulse. In V6 and V8 engines where there is more than one exhaust bank, "Y-pipes" and "X-pipes" work on the same principle of using the low pressure component of an exhaust pulse to increase the velocity of the next exhaust pulse. Great care must be used when selecting the length and diameter of the primary tubes. Tubes that are too large will cause the exhaust gas to expand and slow down, decreasing the scavenging effect. Tubes that are too small will create exhaust flow resistance which the engine must work to expel the exhaust gas from the chamber, reducing power and leaving exhaust in the chamber to dilute the incoming intake charge. Since engines produce more exhaust gas at higher speeds, the header(s) are tuned to a particular engine speed range according to the intended application. Typically, wide primary tubes offer the best gains in power and torque at higher engine speeds, while narrow tubes offer the best gains at lower speeds. Many headers are also resonance tuned, to utilize the low-pressure reflected wave rarefaction pulse which can help scavenging the combustion chamber during valve overlap. This pulse is created in all exhaust systems each time a change in density occurs, such as when exhaust merges into the collector. For clarification, the rarefaction pulse is the technical term for the same process that was described above in the "head, body, tail" description. By tuning the length of the primary tubes, usually by means of resonance tuning, the rarefaction pulse can be timed to coincide with the exact moment valve overlap occurs. Typically, long primary tubes resonate at a lower engine speed than short primary tubes. Why a cross plane V8 needs an H or X exhaust pipe Crossplane V8 engines have a left and right bank each containing 4 cylinders. When the engine is running, pistons are firing according to the engine firing order. If a bank has two consecutive piston firings it will create a high pressure area in the exhaust pipe, because two exhaust pulses are moving through it close in time. As the two pulses move in the exhaust pipe they should encounter either an X or H pipe. When they encounter the pipe, part of the pulse diverts into the X-H pipe which lowers the total pressure by a small amount. 
The reason for this decrease in pressure is that the fluid (liquid, air or gas) will travel along a pipe and when it comes at a crossing the fluid will take the path of least resistance and some will bleed off, thus lowering the pressure slightly. Without an X-H pipe the flow of exhaust would be jerky or inconsistent, and the engine would not run at its highest efficiency. The double exhaust pulse would cause part of the next exhaust pulse in that bank to not exit that cylinder completely and cause either a detonation (because of a lean air-fuel ratio (AFR)), or a misfire due to a rich AFR, depending on how much of the double pulse was left and what the mixture of that pulse was. Dynamic exhaust geometry Today's understanding of exhaust systems and fluid dynamics has given rise to a number of mechanical improvements. One such improvement can be seen in the exhaust ultimate power valve ("EXUP") fitted to some Yamaha motorcycles. It constantly adjusts the back pressure within the collector of the exhaust system to enhance pressure wave formation as a function of engine speed. This ensures good low to mid-range performance. At low engine speeds the wave pressure within the pipe network is low. A full oscillation of the Helmholtz resonance occurs before the exhaust valve is closed, and to increase low-speed torque, large amplitude exhaust pressure waves are artificially induced. This is achieved by partial closing of an internal valve within the exhaust—the EXUP valve—at the point where the four primary pipes from the cylinders join. This junction point essentially behaves as an artificial atmosphere, hence the alteration of the pressure at this point controls the behavior of reflected waves at this sudden increase in area discontinuity. Closing the valve increases the local pressure, thus inducing the formation of larger amplitude negative reflected expansion waves. This enhances low speed torque up to a speed at which the loss due to increased back pressure outweighs the EXUP tuning effect. At higher speeds the EXUP valve is fully opened and the exhaust is allowed to flow freely. See also Cylinder head porting Fusible core injection molding Tuned exhaust Thermal spraying Exhaust Heat Management Thermal barrier coating Zircotec References Manifold Engine technology Auto parts
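The timing argument behind tuned-length primaries and resonance tuning can be made concrete with a rough acoustic estimate. The sketch below is illustrative only: the effective speed of sound in hot exhaust gas is an assumed round figure (real values vary strongly with gas temperature), the function name is invented for the example, and practical header design relies on far more detailed gas-dynamics models. It simply estimates how many crankshaft degrees elapse before the wave reflected from the open end of a primary tube returns to the exhaust valve.

```python
def reflection_crank_degrees(primary_length_m: float,
                             rpm: float,
                             sound_speed_m_s: float = 500.0) -> float:
    """Crank degrees elapsed while a pulse travels to the end of the primary
    tube and its reflection returns to the exhaust valve.

    sound_speed_m_s is an assumed effective speed of sound in hot exhaust gas.
    """
    round_trip_s = 2.0 * primary_length_m / sound_speed_m_s
    degrees_per_second = rpm * 360.0 / 60.0      # crank degrees swept per second
    return round_trip_s * degrees_per_second

if __name__ == "__main__":
    # Example: a 0.8 m primary tube at two engine speeds.
    for rpm in (3000.0, 6000.0):
        deg = reflection_crank_degrees(0.8, rpm)
        print(f"{rpm:5.0f} rpm: reflected wave returns about {deg:.0f} crank degrees "
              "after the pulse leaves the valve")
```

For the reflected low-pressure wave to arrive at the same crank angle (for instance during valve overlap), a longer tube requires a lower engine speed, which is consistent with the observation above that long primary tubes resonate at lower engine speeds than short ones.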
Exhaust manifold
[ "Technology" ]
1,790
[ "Engine technology", "Engines" ]
996,315
https://en.wikipedia.org/wiki/Downhill%20creep
Downhill creep, also known as soil creep or commonly just creep, is a type of creep characterized by the slow, downward progression of rock and soil down a low grade slope; it can also refer to slow deformation of such materials as a result of prolonged pressure and stress. Creep may appear to an observer to be continuous, but it really is the sum of numerous minute, discrete movements of slope material caused by the force of gravity. Friction, being the primary force that resists gravity, is produced when one body of material slides past another, offering a mechanical resistance between the two which acts to hold objects (or slopes) in place. As the slope of a hill increases, the gravitational force that is perpendicular to the slope decreases, resulting in less friction between the material and making it easier for the slope to slide. Overview Water is a very important factor when discussing soil deformation and movement. For instance, a sandcastle will only stand up when it is made with damp sand. The water offers cohesion to the sand which binds the sand particles together. However, pouring water over the sandcastle destroys it. This is because the presence of too much water fills the pores between the grains with water, creating a slip plane between the particles and offering no cohesion, causing them to slip and slide away. This holds for hillsides and creep as well. The presence of water may help the hillside stay put and give it cohesion, but in a very wet environment, or during or after a large amount of precipitation, the pores between the grains can become saturated with water and cause the ground to slide along the slip plane it creates. Creep can also be caused by the expansion of materials such as clay when they are exposed to water. Clay expands when wet, then contracts after drying. The expansion portion pushes downhill, then the contraction results in consolidation at the new offset. Objects resting on top of the soil are carried by it as it descends the slope. This can be seen in churchyards, where older headstones are often situated at an angle and several meters away from where they were originally erected. Vegetation plays a role in slope stability and creep. When a hillside contains much flora, the roots create an interlocking network that can strengthen unconsolidated material. They also aid in absorbing the excess water in the soil to help keep the slope stable. However, they do add to the weight of the slope, giving gravity that much more of a driving force to act on in pushing the slope downward. In general, though, slopes without vegetation have a greater chance of movement. Design engineers sometimes need to guard against downhill creep during their planning to prevent building foundations from being undermined. Pilings are planted sufficiently deep into the surface material to guard against this action taking place. Modeling regolith diffusion For shallow to moderate slopes, diffusional sediment flux is modeled linearly as qs = KS (Culling, 1960; McKean et al., 1993), where qs is the sediment flux, K is the diffusion constant, and S is the slope. For steep slopes, diffusional sediment flux is more appropriately modeled as a non-linear function of slope, qs = KS / (1 − (S/Sc)²), where Sc is the critical gradient for sliding of dry soil. On long timescales, diffusive creep in hillslope soils leads to a characteristic rounding of ridges in the landscape. See also Creep (deformation) Colluvium Mass wasting Sediment transport Solifluction References Bibliography Culling, 1960. McKean et al., 1993. Monkhouse, F. J. (University of Southampton).
A Dictionary of Geography. London: Edward Arnold (Publishers) Ltd. 1978. Roering, Kirchner and Dietrich, 1999. Evidence for nonlinear diffusive sediment transport on hillslopes and implications for landscape morphology. Water Resour. Res., 35:853–87. Strahler, Arthur N. Physical Geography. New York: John Wiley & Sons, 1960, 2nd edition, 7th printing, pp. 318–19. Easterbrook, Don J., 1999, Surface Processes and Landforms, Prentice-Hall, Inc. Environmental soil science Geomorphology Soil erosion
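Combining the linear flux law above with conservation of sediment mass gives the classical hillslope diffusion equation ∂z/∂t = K ∂²z/∂x², and the ridge rounding mentioned above can be illustrated with a minimal finite-difference sketch. The parameter values below are arbitrary and chosen only for illustration.

```python
import numpy as np

# Minimal 1-D hillslope diffusion sketch: dz/dt = K * d2z/dx2.
K = 0.01        # diffusion constant (m^2 / yr), arbitrary illustrative value
DX = 1.0        # grid spacing (m)
DT = 10.0       # time step (yr); satisfies the explicit stability limit DT < DX**2 / (2 * K)
STEPS = 5000    # total simulated time = 50,000 yr

x = np.arange(0.0, 101.0, DX)
z = 10.0 - 0.2 * np.abs(x - 50.0)      # initially sharp, triangular ridge

for _ in range(STEPS):
    curvature = (z[:-2] - 2.0 * z[1:-1] + z[2:]) / DX**2
    z[1:-1] += K * DT * curvature      # update interior nodes; ends held fixed

# The crest lowers and the sharp corner at x = 50 m becomes smoothly rounded,
# the "characteristic rounding of ridges" described in the text.
print(f"crest elevation after {STEPS * DT:.0f} yr: {z.max():.2f} m (initially 10.00 m)")
```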
Downhill creep
[ "Environmental_science" ]
823
[ "Environmental soil science" ]
996,317
https://en.wikipedia.org/wiki/Fetal%20position
Fetal position (British English: also foetal) is the positioning of the body of a prenatal fetus as it develops. In this position, the back is curved, the head is bowed, and the limbs are bent and drawn up to the torso. A compact position is typical for fetuses. Many newborn mammals, especially rodents, remain in a fetal position well after birth. This type of compact position is used in the medical profession to minimize injury to the neck and chest. Some people assume a fetal position when sleeping, especially when the body becomes cold. In some cultures bodies have been buried in fetal position. Sometimes, when a person has suffered extreme physical or psychological trauma (including massive stress), they will assume a similar compact position in which the back is curved forward, the legs are brought up as tightly against the abdomen as possible, the head is bowed as close to the abdomen as possible, and the arms are wrapped around the head to prevent further trauma. This type of position has been observed in drug addicts, who enter the position when experiencing withdrawal. Sufferers of anxiety are also known to assume the same type of position during panic attacks. Assuming this type of position and playing dead is often recommended as a strategy to end a bear attack. See also Neutral body posture Position (obstetrics) References Anatomy Infancy Human positions
Fetal position
[ "Biology" ]
273
[ "Behavior", "Human positions", "Human behavior", "Anatomy" ]
996,341
https://en.wikipedia.org/wiki/Spindle%20checkpoint
The spindle checkpoint, also known as the metaphase-to-anaphase transition, the spindle assembly checkpoint (SAC), the metaphase checkpoint, or the mitotic checkpoint, is a cell cycle checkpoint during metaphase of mitosis or meiosis that prevents the separation of the duplicated chromosomes (anaphase) until each chromosome is properly attached to the spindle. To achieve proper segregation, the two kinetochores on the sister chromatids must be attached to opposite spindle poles (bipolar orientation). Only this pattern of attachment will ensure that each daughter cell receives one copy of the chromosome. The defining biochemical feature of this checkpoint is the stimulation of the anaphase-promoting complex by M-phase cyclin-CDK complexes, which in turn causes the proteolytic destruction of cyclins and proteins that hold the sister chromatids together. Overview and importance The beginning of metaphase is characterized by the connection of the microtubules to the kinetochores of the chromosomes, as well as the alignment of the chromosomes in the middle of the cell. Each chromatid has its own kinetochore, and all of the microtubules that are bound to kinetochores of sister chromatids radiate from opposite poles of the cell. These microtubules exert a pulling force on the chromosomes towards the opposite ends of the cells, while the cohesion between the sister chromatids opposes this force. At the metaphase to anaphase transition, this cohesion between sister chromatids is dissolved, and the separated chromatids are pulled to opposite sides of the cell by the spindle microtubules. The chromatids are further separated by the physical movement of the spindle poles themselves. Premature dissociation of the chromatids can lead to chromosome missegregation and aneuploidy in the daughter cells. Thus, the job of the spindle checkpoint is to prevent this transition into anaphase until the chromosomes are properly attached, before the sister chromatids separate. In order to preserve the cell's identity and proper function, it is necessary to maintain the appropriate number of chromosomes after each cell division. An error in generating daughter cells with fewer or greater number of chromosomes than expected (a situation termed aneuploidy), may lead in best case to cell death, or alternatively it may generate catastrophic phenotypic results. Examples include: In cancer cells, aneuploidy is a frequent event, indicating that these cells present a defect in the machinery involved in chromosome segregation, as well as in the mechanism ensuring that segregation is correctly performed. In humans, Down syndrome appears in children carrying in their cells one extra copy of chromosome 21, as a result of a defect in chromosome segregation during meiosis in one of the progenitors. This defect will generate a gamete (spermatozoide or oocyte) with an extra chromosome 21. After fertilisation, this gamete will generate an embryo with three copies of chromosome 21. Discovery of the spindle assembly checkpoint (SAC) Zirkle (in 1970) was one of the first researchers to observe that, when just one chromosome is retarded to arrive at the metaphase plate, anaphase onset is postponed until some minutes after its arrival. This observation, together with similar ones, suggested that a control mechanism exists at the metaphase-to-anaphase transition. Using drugs such as nocodazole and colchicine, the mitotic spindle disassembles and the cell cycle is blocked at the metaphase-to-anaphase transition. 
Using these drugs (see the review from Rieder and Palazzo in 1992), the putative control mechanism was named Spindle Assembly Checkpoint (SAC). This regulatory mechanism has been intensively studied since. Using different types of genetic studies, it has been established that diverse kinds of defects are able to activate the SAC: spindle depolymerization, the presence of dicentric chromosomes (with two centromeres), centromeres segregating in an aberrant way, defects in the spindle pole bodies in S. cerevisiae, defects in the kinetochore proteins, mutations in the centromeric DNA or defects in the molecular motors active during mitosis. A summary of these observations can be found in the article from Hardwick and collaborators in 1999. Using its own observations, Zirkle was the first to propose that "some (…) substance, necessary for the cell to proceed to anaphase, appears some minutes after C (moment of the arrival of the last chromosome to the metaphase plate), or after a drastic change in the cytoplasmic condition, just at C or immediately after C", suggesting that this function is located on kinetochores unattached to the mitotic spindle. McIntosh extended this proposal, suggesting that one enzyme sensitive to tension located at the centromeres produces an inhibitor to the anaphase onset when the two sister kinetochores are not under bipolar tension. Indeed, the available data suggested that the signal "wait to enter in anaphase" is produced mostly on or close to unattached kinetochores. However, the primary event associated to the kinetochore attachment to the spindle, which is able to inactivate the inhibitory signal and release the metaphase arrest, could be either the acquisition of microtubules by the kinetochore (as proposed by Rieder and collaborators in 1995), or the tension stabilizing the anchoring of microtubules to the kinetochores (as suggested by the experiments realized at Nicklas' lab). Subsequent studies in cells containing two independent mitotic spindles in a sole cytoplasm showed that the inhibitor of the metaphase-to-anaphase transition is generated by unattached kinetochores and is not freely diffusible in the cytoplasm. Yet in the same study it was shown that, once the transition from metaphase to anaphase is initiated in one part of the cell, this information is extended all along the cytoplasm, and can overcome the signal "wait to enter in anaphase" associated to a second spindle containing unattached kinetochores. Background on sister chromatid duplication, cohesion, and segregation Cell division: duplication of material and distribution to daughter cells When cells are ready to divide, because cell size is big enough or because they receive the appropriate stimulus, they activate the mechanism to enter into the cell cycle, and they duplicate most organelles during S (synthesis) phase, including their centrosome. Therefore, when the cell division process will end, each daughter cell will receive a complete set of organelles. At the same time, during S phase all cells must duplicate their DNA very precisely, a process termed DNA replication. Once DNA replication has finished, in eukaryotes the DNA molecule is compacted and condensed, to form the mitotic chromosomes, each one constituted by two sister chromatids, which stay held together by the establishment of cohesion between them; each chromatid is a complete DNA molecule, attached via microtubules to one of the two centrosomes of the dividing cell, located at opposed poles of the cell. 
The structure formed by the centrosomes and the microtubules is named mitotic spindle, due to its characteristic shape, holding the chromosomes between the two centrosomes. The sister chromatids stay together until anaphase, when each travels toward the centrosome to which it is attached. In this way, when the two daughter cells separate at the end of the division process, each one will contain a complete set of chromatids. The mechanism responsible for the correct distribution of sister chromatids during cell division is named chromosome segregation. To ensure that chromosome segregation takes place correctly, cells have developed a precise and complex mechanism. In the first place, cells must coordinate centrosome duplication with DNA replication, and a failure in this coordination will generate monopolar or multipolar mitotic spindles, which generally will produce abnormal chromosome segregation, because in this case, chromosome distribution will not take place in a balanced way. Mitosis: anchoring of chromosomes to the spindle and chromosome segregation During S phase, the centrosome starts to duplicate. Just at the beginning of mitosis, both centrioles achieve their maximal length, recruit additional material and their capacity to nucleate microtubules increases. As mitosis progresses, both centrosomes separate to generate the mitotic spindle. In this way, the mitotic spindle has two poles emanating microtubules. Microtubules (MTs) are long proteic filaments, with asymmetric extremities: one end termed "minus" (-) end, relatively stable and close to the centrosome, and an end termed "plus" (+) end, with alternating phases of growth and retraction, exploring the center of the cell searching the chromosomes. Each chromatid has a special region, named the centromere, on top of which is assembled a proteic structure termed kinetochore, which is able to stabilize the microtubule plus end. Therefore, if by chance a microtubule exploring the center of the cell encounters a kinetochore, it may happen that the kinetochore will capture it, so that the chromosome will become attached to the spindle via the kinetochore of one of its sister chromatids. The chromosome plays an active role in the attachment of kinetochores to the spindle. Bound to the chromatin is a Ran guanine nucleotide exchange factor (GEF) that stimulates cytosolic Ran near the chromosome to bind GTP in place of GDP. The activated GTP-bound form of Ran releases microtubule-stabilizing proteins, such as TPX2, from protein complexes in the cytosol, which induces nucleation and polymerization of microtubules around the chromosomes. These kinetochore-derived microtubules, along with kinesin motor proteins in the outer kinetochore, facilitate interactions with the lateral surface of a spindle pole-derived microtubule. These lateral attachments are unstable, however, and must be converted to an end-on attachment. Conversion from lateral to end-on attachments allows the growth and shrinkage of the microtubule plus-ends to be converted into forces that push and pull chromosomes to achieve proper bi-orientation. 
As it happens that sister chromatids are attached together and both kinetochores are located back-to-back on both chromatids, when one kinetochore becomes attached to one centrosome, the sister kinetochore becomes exposed to the centrosome located in the opposed pole; for this reason, in most cases the second kinetochore becomes associated to the centrosome in the opposed pole, via its microtubules, so that the chromosomes become "bi-oriented", a fundamental configuration (also named amphitelic) to ensure that chromosome segregation will take place correctly when the cell will divide. Occasionally, one of the two sister kinetochores may attach simultaneously to MTs generated by both poles, a configuration named merotelic, which is not detected by the spindle checkpoint but that may generate lagging chromosomes during anaphase and, consequently, aneuploidy. Merotelic orientation (characterized by the absence of tension between sister kinetochores) is frequent at the beginning of mitosis, but the protein Aurora B (a kinase conserved from yeast to vertebrates) detects and eliminates this type of anchoring. (Aurora B is frequently overexpressed in various types of tumors and currently is a target for the development of anticancer drugs.) Sister chromatid cohesion during mitosis Cohesin: SMC proteins Sister chromatids stay associated from S phase (when DNA is replicated to generate two identical copies, the two chromatids) until anaphase. At this point, the two sister chromatids separate and travel to opposite poles in the dividing cell. Genetic and biochemical studies in yeast and in egg's extracts in Xenopus laevis identified a polyprotein complex as an essential player in sister chromatids cohesion (see the review from Hirano in 2000). This complex is known as the cohesin complex and in Saccharomyces cerevisiae is composed of at least four subunits: Smc1p, Smc3p, Scc1p (or Mcd1p) and Scc3p. Both Smc1p and Smc3p belong to the family of proteins for the Structural Maintenance of Chromosomes (SMC), which constitute a group of chromosomic ATPases highly conserved, and form an heterodimer (Smc1p/Smc3p). Scc1p is the homolog in S.cerevisiae of Rad21, first identified as a protein involved in DNA repair in S. pombe. These four proteins are essential in yeast, and a mutation in any of them will produce premature sister chromatid separation. In yeast, cohesin binds to preferential sites along chromosome arms, and is very abundant close to the centromeres, as it was shown in a study using chromatin immunoprecipitation. The role of heterochromatin Classical cytologic observations suggested that sister chromatids are more strongly attached at heterochromatic regions, and this suggested that the special structure or composition of heterochromatin might favour cohesin recruitment. In fact, it has been shown that Swi6 (the homolog of HP-1 in S. pombe) binds to methylated Lys 9 of histone H3 and promotes the binding of cohesin to the centromeric repeats in S. pombe. More recent studies indicate that the RNAi machinery regulates heterochromatin establishment, which in turn recruits cohesin to this region, both in S. pombe and in vertebrate cells. However, there must be other mechanisms than heterochromatin to ensure an augmented cohesion at centromeres, because S. cerevisiae lacks heterochromatin next to centromeres, but the presence of a functional centromere induces an increase of cohesin association in a contiguous region, spanning 20-50kb. 
In this direction, Orc2 (one protein included in the origin recognition complex, ORC, implicated in the initiation of DNA replication during S phase) is also located on kinetochores during mitosis in human cells; in agreement with this localization, some observations indicate that Orc2 in yeast is implicated in sister chromatid cohesion, and its removal induces SAC activation. It has also been observed that other components of the ORC complex (such as orc5 in S. pombe) are implicated in cohesion. However, the molecular pathway involving the ORC proteins seems to be additive to the cohesins' pathway, and it is mostly unknown. Function of cohesion and its dissolution Centromeric cohesion resists the forces exerted by spindle microtubules towards the poles, which generate tension between sister kinetochores. In turn, this tension stabilizes the attachment microtubule-kinetochore, through a mechanism implicating the protein Aurora B (a review about this issue : Hauf and Watanabe 2004). Indeed, a decrease in the cellular levels of cohesin generates the premature separation of sister chromatids, as well as defects in chromosome congression at the metaphase plate and delocalization of the proteins in the chromosomal passenger complex, which contains the protein Aurora B. The proposed structure for the cohesin complex suggests that this complex connects directly both sister chromatids. In this proposed structure, the SMC components of cohesin play a structural role, so that the SMC heterodimer may function as a DNA binding protein, whose conformation is regulated by ATP. Scc1p and Scc3p, however, would play a regulatory role. In S. cerevisiae, Pds1p (also known as securin) regulates sister chromatids cohesion, because it binds and inhibits the protease Esp1p (separin or separase). When anaphase onset is triggered, the anaphase-promoting complex (APC/C or Cyclosome) degrades securin. APC/C is a ring E3 ubiquitin ligase that recruits an E2 ubiquitin-conjugating enzyme loaded with ubiquitin. Securin is recognized only if Cdc20, the activator subunit, is bound to the APC/C core. When securin, Cdc20, and E2 are all bound to APC/C E2 ubiquitinates securin and selectively degrades it. Securin degradation releases the protease Esp1p/separase, which degrades the cohesin rings that link the two sister chromatids, therefore promoting sister chromatids separation. It has been also shown that Polo/Cdc5 kinase phosphorylates serine residues next to the cutting site for Scc1, and this phosphorylation would facilitate the cutting activity. Although this machinery is conserved through evolution, in vertebrates most cohesin molecules are released in prophase, independently of the presence of the APC/C, in a process dependent on Polo-like 1 (PLK1) and Aurora B. Yet it has been shown that a small quantity of Scc1 remains associated to centromeres in human cells until metaphase, and a similar amount is cut in anaphase, when it disappears from centromeres. On the other hand, some experiments show that sister chromatids cohesion in the arms is lost gradually after sister centromeres have separated, and sister chromatids move toward the opposite poles of the cell. According to some observations, a fraction of cohesins in the chromosomal arms and the centromeric cohesins are protected by the protein Shugoshin (Sgo1), avoiding their release during prophase. To be able to function as protector for the centromeric cohesion, Sgo1 must be inactivated at the beginning of anaphase, as well as Pds1p. 
In fact, both Pds1p and Sgo1 are substrates of APC/C in vertebrates. Meiosis In mouse oocytes, DNA damage induces meiotic prophase I arrest that is mediated by the spindle assembly checkpoint. Arrested oocytes do not enter the subsequent stage, anaphase I. DNA double strand breaks, UVB and ionizing radiation induced DNA damage cause an effective block to anaphase promoting complex activity. This checkpoint may help prevent oocytes with damaged DNA from progressing to become fertilizable mature eggs. During prophase arrest mouse oocytes appear to use both homologous recombinational repair and non-homologous end joining to repair DNA double-strand breaks. Spindle assembly checkpoint overview The spindle assembly checkpoint (SAC) is an active signal produced by improperly attached kinetochores, which is conserved in all eukaryotes. The SAC stops the cell cycle by negatively regulating CDC20, thereby preventing the activation of the polyubiquitynation activities of anaphase promoting complex (APC). The proteins responsible for the SAC signal compose the mitotic checkpoint complex (MCC), which includes SAC proteins, MAD2/MAD3 (mitotic arrest deficient), BUB3 (budding uninhibited by benzimidazole), and CDC20. Other proteins involved in the SAC include MAD1, BUB1, MPS1, and Aurora B. For higher eukaryotes, additional regulators of the SAC include constituents of the ROD-ZW10 complex, p31comet, MAPK, CDK1-cyclin-B, NEK2, and PLK1. Checkpoint activation The SAC monitors the interaction between improperly connected kinetochores and spindle microtubules, and is maintained until kinetochores are properly attached to the spindle. During prometaphase, CDC20 and the SAC proteins concentrate at the kinetochores before attachment to the spindle assembly. These proteins keep the SAC activated until they are removed and the correct kinetochore-microtubule attachment is made. Even a single unattached kinetochore can maintain the spindle checkpoint. After attachment of microtubule plus-ends and formation of kinetochore microtubules, MAD1 and MAD2 are depleted from the kinetochore assembly. Another regulator of checkpoint activation is kinetochore tension. When sister kinetochores are properly attached to opposite spindle poles, forces in the mitotic spindle generate tension at the kinetochores. Bi-oriented sister kinetochores stabilize the kinetochore-microtubule assembly whereas weak tension has a destabilizing effect. In response to incorrect kinetochore attachments such as syntelic attachment, where both kinetochores becomes attached to one spindle pole, the weak tension generated destabilizes the incorrect attachment and allows the kinetochore to reattach correctly to the spindle body. During this process, kinetochores that are attached to the mitotic spindle but that are not under tension trigger the spindle checkpoint. Aurora-B/Ipl1 kinase of the chromosomal passenger complex functions as the tensions sensor in improper kinetochore attachments. It detects and destabilizes incorrect attachments through control of the microtubule-severing KINI kinesin MCAK, the DASH complex, and the Ndc80/Hec1 complex at the microtubule-kinetochore interface. The Aurora-B/Ipl1 kinase is also critical in correcting merotelic attachments, where one kinetochore is simultaneously attached to both spindle poles. Merotelic attachments generate sufficient tension and are not detected by the SAC, and without correction, may result in chromosome mis-segregation due to slow chromatid migration speed. 
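The wiring just described, in which unattached or tension-free kinetochores keep CDC20 sequestered in the MCC so that the APC/C stays inactive, securin persists, separase stays inhibited and anaphase is blocked, can be caricatured as a boolean toy model. The sketch below is only an illustration of that logic, not a quantitative model of the checkpoint, and all class, function and variable names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Kinetochore:
    attached: bool        # end-on microtubule attachment
    under_tension: bool   # bipolar tension across the sister kinetochore pair

def sac_active(kinetochores: list) -> bool:
    """The checkpoint stays on while any kinetochore is unattached or not under tension."""
    return any(not (k.attached and k.under_tension) for k in kinetochores)

def anaphase_allowed(kinetochores: list) -> bool:
    """Crude wiring: active SAC -> MCC sequesters CDC20 -> APC/C off -> securin stable
    -> separase inhibited -> cohesin intact -> no anaphase."""
    cdc20_free = not sac_active(kinetochores)   # MCC releases CDC20 only once the SAC is off
    apc_active = cdc20_free                     # APC/C needs its CDC20 activator subunit
    separase_active = apc_active                # securin is degraded only by active APC/C
    return separase_active                      # active separase cleaves cohesin -> anaphase

if __name__ == "__main__":
    chromosomes = [Kinetochore(True, True), Kinetochore(False, False)]
    print("anaphase allowed?", anaphase_allowed(chromosomes))   # False: one kinetochore unattached
    chromosomes[1] = Kinetochore(True, True)
    print("anaphase allowed?", anaphase_allowed(chromosomes))   # True: all bi-oriented and under tension
```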
While microtubule attachment is independently required for SAC activation, it is unclear whether tension is an independent regulator of SAC, although it is clear that differing regulatory behaviors arise with tension. Once activated, the spindle checkpoint blocks anaphase entry by inhibiting the anaphase-promoting complex via regulation of the activity of mitotic checkpoint complex. The mechanism of inhibition of APC by the mitotic checkpoint complex is poorly understood, although it is hypothesized that the MCC binds to APC as a pseudosubstrate using the KEN-box motif in BUBR1. At the same time that mitotic checkpoint complex is being activated, the centromere protein CENP-E activates BUBR1, which also blocks anaphase. Mitotic checkpoint complex formation The mitotic checkpoint complex is composed of BUB3 together with MAD2 and MAD3 bound to Cdc20. MAD2 and MAD3 have distinct binding sites on CDC20, and act synergistically to inhibit APC/C. The MAD3 complex is composed of BUB3, which binds to Mad3 and BUB1B through the short linear motif known as the GLEBS motif. The exact order of attachments which must take place in order to form the MCC remains unknown. It is possible that Mad2-Cdc20 form a complex at the same time as BUBR1-BUB3-Cdc20 form another complex, and these two subcomplexes are consequently combined to form the mitotic checkpoint complex. In human cells, binding of BUBR1 to CDC20 requires prior binding of MAD2 to CDC20, so it is possible that the MAD2-CDC20 subcomplex acts as an initiator for MCC formation. BUBR1 depletion leads only to a mild reduction in Mad2-Cdc20 levels while Mad2 is required for the binding of BubR1-Bub3 to Cdc20. Nevertheless, BUBR1 is still required for checkpoint activation. The mechanism of formation for the MCC is unclear and there are competing theories for both kinetochore-dependent and kinetochore-independent formation. In support of the kinetochore-independent theory, MCC is detectable in S. cerevisiae cells in which core kinetocore assembly proteins have been mutated and cells in which the SAC has been deactivated, which suggests that the MCC could be assembled during mitosis without kinetochore localization. In one model, unattached prometaphase kinetochores can 'sensitize' APC to inhibition of MCC by recruiting the APC to kinetochores via a functioning SAC. Furthermore, depletions of various SAC proteins have revealed that MAD2 and BUBR1 depletions affect the timing of mitosis independently of kinetochores, while depletions of other SAC proteins result in a dysfunctional SAC without altering the duration of mitosis. Thus it is possible that the SAC functions through a two-stage timer where MAD2 and BUBR1 control the duration of mitosis in the first stage, which may be extended in the second stage if there are unattached kinetochores as well as other SAC proteins. However, there are lines of evidence which are in disfavor of the kinetochore-independent assembly. MCC has yet to be found during interphase, while MCC does not form from its constituents in X. laevis meiosis II extracts without the addition of sperm of nuclei and nocodazole to prevent spindle assembly. The leading model of MCC formation is the "MAD2-template model", which depends on the kinetochore dynamics of MAD2 to create the MCC. MAD1 localizes to unattached kinetochores while binding strongly to MAD2. The localization of MAD2 and BubR1 to the kinetochore may also be dependent on the Aurora B kinase. 
Cells lacking Aurora B fail to arrest in metaphase even when chromosomes lack microtubule attachment. Unattached kinetochores first bind to a MAD1-C-MAD2-p31comet complex and release p31comet through unknown mechanisms. The resulting MAD1-C-MAD2 complex recruits the open conformer of Mad2 (O-Mad2) to the kinetochores. This O-Mad2 changes its conformation to closed Mad2 (C-Mad2) and binds Mad1. This Mad1/C-Mad2 complex is responsible for the recruitment of more O-Mad2 to the kinetochores, which changes its conformation to C-Mad2 and binds Cdc20 in an auto-amplification reaction. Since MAD1 and CDC20 both contain a similar MAD2-binding motif, the empty O-MAD2 conformation changes to C-MAD2 while binding to CDC20. This positive feedback loop is negatively regulated by p31comet, which competitively binds to C-MAD2 bound to either MAD1 or CDC20 and reduces further O-MAD2 binding to C-MAD2. Further control mechanisms may also exist, considering that p31comet is not present in lower eukaryotes. The 'template model' nomenclature is thus derived from the process whereby MAD1-C-MAD2 acts as a template for the formation of C-MAD2-CDC20 copies. This sequestration of Cdc20 is essential for maintaining the spindle checkpoint. Checkpoint deactivation Several mechanisms exist to deactivate the SAC after correct bi-orientation of sister chromatids. Upon microtubule-kinetochore attachment, a mechanism of stripping via a dynein motor complex transports spindle checkpoint proteins away from the kinetochores. The stripped proteins, which include MAD1, MAD2, MPS1, and CENP-F, are then redistributed to the spindle poles. The stripping process is highly dependent on undamaged microtubule structure as well as dynein motility along microtubules. As well as functioning as a regulator of the C-MAD2 positive feedback loop, p31comet may also act as a deactivator of the SAC. Unattached kinetochores temporarily inactivate p31comet, but attachment reactivates the protein and inhibits MAD2 activation, possibly by inhibitory phosphorylation. Another possible mechanism of SAC inactivation results from the energy-dependent dissociation of the MAD2-CDC20 complex through non-degradative ubiquitylation of CDC20. Conversely, the de-ubiquitylating enzyme protectin is required to maintain the SAC. Thus, unattached kinetochores maintain the checkpoint by continuously recreating the MAD2-CDC20 subcomplex from its components. The SAC may also be deactivated by APC activation-induced proteolysis. Since the SAC is not reactivated by the loss of sister-chromatid cohesion during anaphase, the proteolysis of cyclin B and inactivation of the CDK1-cyclin-B kinase also inhibit SAC activity. Degradation of MPS1 during anaphase prevents the reactivation of the SAC after removal of sister-chromatid cohesion. After checkpoint deactivation, during the normal anaphase of the cell cycle, the anaphase-promoting complex is activated through decreasing MCC activity. When this happens, the enzyme complex polyubiquitinates the anaphase inhibitor securin. The ubiquitination and destruction of securin at the end of metaphase releases the active protease called separase. Separase cleaves the cohesin molecules that hold the sister chromatids together to activate anaphase. 
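Purely as an illustrative aside (not part of the source article), the qualitative behaviour described in the two sections above can be caricatured in a few lines of code: template-driven build-up of a C-MAD2-CDC20 pool while kinetochores are unattached, followed by its decay once attachment shuts the template off and release mechanisms such as p31comet take over. The species names follow the text, but the mass-action form, the rate constants, the time step, and the single on/off 'template activity' switch are illustrative assumptions, not measured checkpoint parameters.

# Toy, purely illustrative simulation of the MAD2 "template model": amplification of
# C-MAD2:CDC20 at unattached kinetochores, and its reversal after attachment.
# Rate constants, time step, and the mass-action form are assumptions made for
# illustration only; they are not measured parameters of the real checkpoint.

def simulate_checkpoint(total_cdc20=1.0, o_mad2=1.0, attach_step=300, steps=900, dt=0.1):
    """Return a time course of free Cdc20 (the pool available to activate APC/C)."""
    k_template = 0.05  # assumed rate of template-driven C-MAD2:CDC20 formation
    k_release = 0.01   # assumed p31comet / disassembly-driven release of CDC20
    sequestered = 0.0  # C-MAD2:CDC20 pool built up by the template reaction
    course = []
    for step in range(steps):
        # MAD1:C-MAD2 template activity is present only while kinetochores are
        # unattached; attachment (stripping, p31comet reactivation) is modelled
        # simply as switching that activity off.
        template_activity = 1.0 if step < attach_step else 0.0
        free_cdc20 = total_cdc20 - sequestered
        capture = k_template * template_activity * o_mad2 * free_cdc20
        release = k_release * sequestered
        sequestered = min(total_cdc20, max(0.0, sequestered + (capture - release) * dt))
        course.append(total_cdc20 - sequestered)
    return course

trace = simulate_checkpoint()
print("free Cdc20 just before attachment:", round(trace[299], 3))  # low: SAC engaged
print("free Cdc20 long after attachment:", round(trace[-1], 3))    # recovering: SAC silenced

In this caricature the free Cdc20 pool, which is what the APC/C needs, stays low while any template activity remains and recovers only after attachment, mirroring the qualitative logic described above.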
New model for SAC deactivation in S. cerevisiae: the mechanical switch A new mechanism has been suggested to explain how end-on microtubule attachment at the kinetochore is able to disrupt specific steps in SAC signaling. In an unattached kinetochore, the first step in the formation of the MCC is phosphorylation of Spc105 by the kinase Mps1. Phosphorylated Spc105 is then able to recruit the downstream signaling proteins Bub1 and Bub3; Mad1, Mad2, and Mad3; and Cdc20. Association with Mad1 at unattached kinetochores causes Mad2 to undergo a conformational change that converts it from an open form (O-Mad2) to a closed form (C-Mad2). The C-Mad2 bound to Mad1 then dimerizes with a second O-Mad2 and catalyzes its closure around Cdc20. This C-Mad2 and Cdc20 complex, the MCC, leaves Mad1 and C-Mad2 at the kinetochore to form another MCC. The MCCs each sequester two Cdc20 molecules to prevent their interaction with the APC/C, thereby maintaining the SAC. Mps1's phosphorylation of Spc105 is both necessary and sufficient to initiate the SAC signaling pathway, but this step can occur only in the absence of microtubule attachment to the kinetochore. Endogenous Mps1 has been shown to associate with the calponin-homology (CH) domain of Ndc80, which is located in the outer kinetochore region, distant from the chromosome. Though Mps1 is docked in the outer kinetochore, it is still able to localize within the inner kinetochore and phosphorylate Spc105 because of flexible hinge regions on Ndc80. However, the mechanical switch model proposes that end-on attachment of a microtubule to the kinetochore deactivates the SAC through two mechanisms. The presence of an attached microtubule increases the distance between the Ndc80 CH domain and Spc105. Additionally, Dam1/DASH, a large complex consisting of 160 proteins that forms a ring around the attached microtubule, acts as a barrier between the two proteins. Separation prevents interactions between Mps1 and Spc105 and thus inhibits the SAC signaling pathway. This model is not applicable to SAC regulation in higher-order organisms, including animals. A main facet of the mechanical switch mechanism is that in S. cerevisiae the structure of the kinetochore allows for the attachment of only one microtubule. Kinetochores in animals, on the other hand, are much more complex meshworks that contain binding sites for a multitude of microtubules. Microtubule attachment at all of the kinetochore binding sites is not necessary for deactivation of the SAC and progression to anaphase. Therefore, microtubule-attached and microtubule-unattached states coexist in the animal kinetochore while the SAC is inhibited. This model does not include a barrier that would prevent Mps1 associated with an attached kinetochore from phosphorylating Spc105 in an adjacent unattached kinetochore. Furthermore, the yeast Dam1/DASH complex is not present in animal cells. 
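As a further illustrative aside that is not part of the source article, the on/off logic of this mechanical switch can be sketched in code. Following the description above, Mps1 docked on the Ndc80 CH domain generates the SAC signal only if it can still reach Spc105; end-on attachment abolishes that reach by increasing the separation and by interposing the Dam1/DASH ring. The distance values, the reach threshold, and the data model below are placeholder assumptions, not measured kinetochore geometries.

# Toy sketch of the S. cerevisiae "mechanical switch" for SAC signalling.
# Distance values and the reach threshold are placeholder assumptions only.
from dataclasses import dataclass

MPS1_REACH_NM = 30.0  # assumed maximum reach of Ndc80-docked Mps1 (placeholder)

@dataclass
class Kinetochore:
    end_on_attached: bool       # is a microtubule attached end-on?
    ndc80_to_spc105_nm: float   # separation between the Ndc80 CH domain and Spc105

def sac_signal(k: Kinetochore) -> bool:
    """True if this kinetochore generates a 'wait anaphase' signal.

    Mps1, docked on the Ndc80 CH domain, must be able to phosphorylate Spc105:
    it has to be within reach and not blocked by the Dam1/DASH ring, which is
    present only around an end-on attached microtubule.
    """
    dam1_dash_barrier = k.end_on_attached
    within_reach = k.ndc80_to_spc105_nm <= MPS1_REACH_NM
    return within_reach and not dam1_dash_barrier

def anaphase_permitted(kinetochores) -> bool:
    """APC/C-Cdc20 stays inhibited while any kinetochore still signals."""
    return not any(sac_signal(k) for k in kinetochores)

unattached = Kinetochore(end_on_attached=False, ndc80_to_spc105_nm=20.0)
attached = Kinetochore(end_on_attached=True, ndc80_to_spc105_nm=60.0)
print(anaphase_permitted([attached, attached]))    # True: all kinetochores satisfied
print(anaphase_permitted([attached, unattached]))  # False: one unattached kinetochore holds the cell

The single-microtubule, single-barrier assumption built into this sketch is precisely the feature that, as noted above, keeps the model from transferring directly to animal kinetochores with their many binding sites.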
Spindle checkpoint defects and cancer When the spindle checkpoint misfunctions, this can lead to chromosome missegregation, aneuploidy and even tumorigenesis. Transformation occurs and is accelerated when maintenance of genomic integrity breaks down, especially at the gross level of whole chromosomes or large portions of them. In fact, aneuploidy is the most common characteristic of human solid tumors, and thus the spindle assembly checkpoint might be regarded as a possible target for anti-tumour therapy. This is a much underappreciated fact, since mutations in specific genes known as oncogenes or tumor suppressors are primarily thought to be behind genetic instability and tumorigenesis. Usually the various checkpoints in the cell cycle take care of genomic integrity via highly conserved redundant mechanisms that are important for maintaining cellular homeostasis and preventing tumorigenesis. Several spindle assembly checkpoint proteins act both as positive and negative regulators to ensure proper chromosome segregation in each cell cycle, preventing chromosome instability (CIN), also known as genome instability. Genomic integrity is now appreciated at several levels: some tumors display instability manifested as base substitutions, insertions, and deletions, while the majority display gains or losses of whole chromosomes. Because alterations in mitotic regulatory proteins can lead to aneuploidy, and this is a frequent event in cancer, it was initially thought that these genes could be mutated in cancerous tissues. Mutated genes in cancers In some cancers the genes that underlie the defects resulting in transformation are well characterized. In hematological cancers such as multiple myeloma, cytogenetic abnormalities are very common due to the inherent nature of the DNA breaks needed for immunoglobulin gene rearrangement. However, defects in proteins such as MAD2 that function predominantly at the SAC are also characterized in multiple myeloma. Most solid tumors are also predominantly aneuploid. In colorectal cancer, BUB1 and BUBR1, together with amplification of STK15, are key regulators that have been implicated in the genomic instability resulting in cancer. In breast cancer, the genetic form characterized by the BRCA-1 gene exhibits greater levels of genomic instability than sporadic forms. Experiments showed that BRCA-1 null mice have decreased expression of the key spindle checkpoint protein MAD2. For other cancers, more work is warranted to identify the causes of aneuploidy. Other genes not traditionally associated with the SAC in cancer Clearly, variations in the physiological levels of these proteins (such as Mad2 or BubR1) are associated with aneuploidy and tumorigenesis, and this has been demonstrated using animal models. However, recent studies indicate that the scenario is more complicated: aneuploidy would drive a high incidence of tumorigenesis only when alterations in the levels of specific mitotic checkpoint components (either reduction or overexpression) in tissues also induce other defects able to predispose them to tumors, that is, defects such as an increase in DNA damage, chromosomal rearrangements, and/or a decreased incidence of cell death. For some mitotic checkpoint components, it is known that they are implicated in functions outside mitosis: nuclear import (Mad1), transcriptional repression (Bub3), and cell death, DNA damage response, aging, and megakaryopoiesis for BubR1. All this supports the conclusion that increased tumorigenesis is associated with defects other than aneuploidy alone. Cancer-associated mutations affecting known checkpoint genes like BUB1 or BUBR1 are actually rare. However, several proteins implicated in cancer intersect with spindle assembly networks. Key tumor suppressors such as p53 also play a role in the spindle checkpoint. Absence of p53, the most commonly mutated gene in human cancer, has a major effect on cell cycle checkpoint regulators; in the past it was shown to act at the G1 checkpoint, but it now appears to be important in regulating the spindle checkpoint as well. Another key aspect of cancer is inhibition of cell death or apoptosis. 
Survivin, a member of the inhibitor of apoptosis (IAP) family, is localized in pools at the microtubules of the mitotic spindle near the centrosomes and at the kinetochores of metaphase chromosomes. Not only does survivin inhibit apoptosis to promote tumorigenesis, but it has also been implicated (through experimental knockout mice) as an important regulator of chromosome segregation and late-stage mitosis, similar to its role in more primitive organisms. Other aspects of the spindle assembly checkpoint, such as kinetochore attachment, microtubule function, and sister chromatid cohesion, are likely to be defective as well, causing aneuploidy. Cancer cells have been observed to divide in multiple directions by evading the spindle assembly checkpoint, resulting in multipolar mitoses. The multipolar metaphase-anaphase transition occurs through an incomplete separase cycle that results in frequent nondisjunction events, which amplify aneuploidy in cancer cells. SAC cancer therapies Advances in this field have led to the development of some therapies targeted at spindle assembly defects. Older treatments such as the vinca alkaloids and taxanes target the microtubules that accompany mitotic spindle formation by disrupting microtubule dynamics; this engages the SAC, arresting the cell and eventually leading to its death. Taxol and docetaxel, which can induce mitotic catastrophe, are both still used in the treatment of breast cancer, ovarian cancer and other types of epithelial cancer. However, these treatments are often characterized by high rates of side effects and drug resistance. Other targets within the network of regulators that influence the SAC are also being pursued; strong interest has shifted towards the Aurora kinase proteins. When amplified, the kinase gene Aurora A acts as an oncogene that overrides the SAC, leading to abnormal initiation of anaphase, subsequent aneuploidy and resistance to Taxol. A small-molecule inhibitor of Aurora A has shown antitumor effects in an in vivo model, suggesting that it might be a good target for further clinical development. Aurora B inhibitors, which are also in clinical development, lead to abnormal kinetochore-to-microtubule attachment and abrogate the mitotic checkpoint as well. Survivin is also an attractive molecular target for clinical therapeutic development, as it acts as a major node in a multitude of pathways, one of which is spindle formation and checkpoint control. Further approaches have included the inhibition of mitotic motor proteins such as KSP. These inhibitors, which have recently entered clinical trials, cause mitotic arrest by engaging the spindle assembly checkpoint and induce apoptosis. References Further reading External links Ted Salmon's lab: dividing cells movies. Andrea Musacchio's lab: spindle checkpoint schemes. http://www.uniprot.org/uniprot/O60566 Cell cycle
Spindle checkpoint
[ "Biology" ]
8,516
[ "Cell cycle", "Cellular processes" ]
996,410
https://en.wikipedia.org/wiki/Forwarder
A forwarder is a forestry vehicle that carries large felled logs cut by a harvester from the stump to a roadside landing for later acquisition. Forwarders can use rubber tires or tracks. Unlike a skidder, a forwarder carries logs clear of the ground, which can reduce soil impacts but tends to limit the size of the logs it can move. Forwarders are typically employed together with harvesters in cut-to-length logging operations. Forwarders originated in Scandinavia. Load capacity Forwarders are commonly categorized by their load-carrying capability. Other classifications include whether they are wheeled or tracked and the axle arrangement. The smallest are trailers designed for towing behind all-terrain vehicles, which can carry a load of between 1 and 3 tonnes. Agricultural self-loading trailers designed to be towed by farm tractors can handle load weights of up to around 12 to 15 tonnes. Lightweight purpose-built machines utilised in commercial logging and early thinning operations can handle payloads of up to 8 tonnes. Medium-sized forwarders used in clearfells and later thinnings carry between 12 and 16 tonnes. The largest class, specialized for clearfells, handles up to 25 tonnes. Forwarders also carry their load at least 2 feet above the ground. Manufacturers Barko Hydraulics, LLC Caterpillar Inc. John Deere (Timberjack) EcoLog Fabtek HSM (Hohenloher Spezial Maschinenbau GmbH, Germany) Komatsu Forest (Valmet) Kronos Logset Malwa Neuson Forest PM Pfanzelt Maschinenbau Ponsse Rottne Strojirna Novotny Tigercat Timber Pro Zanello References External links Engineering vehicles Log transport Forestry equipment
Forwarder
[ "Engineering" ]
347
[ "Engineering vehicles" ]
996,437
https://en.wikipedia.org/wiki/Crisson%20Mine
Crisson Mine was a gold mine in Lumpkin County, Georgia, USA, located just east of Dahlonega. Like many mines in the area, the property probably started as a placer mine during the Georgia Gold Rush. Once the placer deposits had been exhausted, an open pit gold mine was established in 1847, and commercial operations continued until the early 1980s. A small stamp mill was also established here. Much of the gold used for the gold leaf dome of the Georgia State Capitol was mined at this mine, which was among the most productive mines in the Georgia Gold Belt. The mine is located just north of the site of the Consolidated Mine, which is itself north of the Calhoun Mine. In 1969, the owners of Crisson Mine opened it to the public to allow tourists to pan for gold. The ore sold for panning is still crushed by the stamp mill, which is now well over 100 years old. It is likely that panning the ore provided at the mine will yield small amounts of gold (flakes, specks, small nuggets). External links Crisson Mine Website Crisson Mine on Mindat.org Crisson Mine on TopoQuest.com Map to the Crisson Mine "Thar's Gold in Them Thar Hills": Gold and Gold Mining in Georgia, 1830s-1940s from the Digital Library of Georgia Gold mines in Georgia Mines in Lumpkin County, Georgia Surface mines in the United States Tourist attractions in Lumpkin County, Georgia Stamp mills
Crisson Mine
[ "Chemistry", "Engineering" ]
301
[ "Stamp mills", "Metallurgical facilities", "Mining equipment" ]
996,555
https://en.wikipedia.org/wiki/Mincemeat
Mincemeat is a mixture of chopped apples and dried fruit, distilled spirits or vinegar, spices, and optionally, meat and beef suet. Mincemeat is usually used as a pie or pastry filling. Traditional mincemeat recipes contain meat, notably beef or venison, as this was a way of preserving meat prior to modern preservation methods. Modern recipes often replace the suet with vegetable shortening or other oils (e.g., coconut oil) and/or omit the meat. However, many people continue to prepare and serve the traditional meat-based mincemeat for holidays. Etymology The "mince" in mincemeat comes from the Middle English mincen, and the Old French mincier both traceable to the Vulgar Latin minutiare, meaning chop finely. The word mincemeat is an adaptation of an earlier term minced meat, meaning finely chopped meat. Meat was also a term for food in general, not only animal flesh. Variants and history English recipes from the 15th, 16th, and 17th centuries describe a fermented mixture of meat and fruit used as a pie filling. These early recipes included vinegars and wines, but by the 18th century, distilled spirits, frequently brandy, were often substituted. The use of spices like clove, nutmeg, mace and cinnamon was common in late medieval and renaissance meat dishes. The increase of sweetness from added sugar made mincemeat less a savoury dinner course and helped to direct its use toward desserts. 16th-century recipe Pyes of mutton or beif must be fyne mynced & seasoned with pepper and salte and a lytel saffron to colour it / suet or marrow a good quantitie / a lytell vynegre / pruynes / great reasons / and dates / take the fattest of the broath of powdred beefe. And if you will have paest royall / take butter and yolkes of egges & so to temper the floure to make the paest. Pies of mutton or beef must be finely minced and seasoned with pepper and salt, and a little saffron to colour it. [Add] a good quantity of suet or marrow, a little vinegar, prunes, raisins and dates. [Put in] the fattest of the broth of salted beef. And, if you want Royal pastry, take butter and egg yolks and [combine them with] flour to make the paste. In the mid- to late eighteenth century, mincemeat in Europe had become associated with old-fashioned, rural, or homely foods. Victorian England rehabilitated the preparation as a traditional Yuletide treat. 19th-century recipe Ingredients — 2 lb. of raisins, 3 lb. currants, lb. of lean beef, 3 lb. of beef suet, 2 lb. of moist sugar, 2 oz. of citron, 2 oz. of candied lemon-peel, 2 oz. of candied orange-peel, 1 small nutmeg, 1 pottle of apples, the rind of 2 lemons, the juice of 1, pint of brandy. Mode — Stone and cut the raisins once or twice across, but do not chop them; wash, dry, and pick the currants free from stalks and grit, and mince the beef and suet, taking care the latter is chopped very fine; slice the citron and candied peel, grate the nutmeg, and pare, core, and mince the apples; mince the lemon-peel, strain the juice, and when all the ingredients are thus prepared, mix them well together, adding the brandy when the other things are well blended; press the whole into a jar, carefully exclude the air, and the mincemeat will be ready for use in a fortnight. Apple mincemeat By the late 19th century, "apple mincemeat" was recommended as a "hygienic" alternative, using apples, suet, currants, brown sugar, raisins, allspice, orange juice, lemons, mace and apple cider, but no meat. 
A recipe for apple mincemeat appears in a 1910 issue of The Irish Times, made with apples, suet, currants, sugar, raisins, orange juice, lemons, spice and brandy. The 1970s book Putting Food By gives a similar mincemeat recipe that uses green tomatoes instead of apples. 20th century By the mid-twentieth century, most mincemeat recipes did not include meat, but might include animal fat in the form of suet or butter, or alternatively solid vegetable fats; versions made without animal products are suitable for vegans. Some recipes continue to include venison, minced beef sirloin or minced heart, along with dried fruit, spices, chopped apple, fresh citrus peel, Zante currants, candied fruits, citron, and brandy, rum, or other liquor. Mincemeat is aged to deepen flavours, with alcohol changing the overall texture of the mixture by breaking down the meat proteins. Preserved mincemeat may be stored for up to ten years. Mincemeat can be produced at home. Commercial preparations, primarily without meat, packaged in jars, foil-lined boxes, or tins, are commonly available. Mince pies and tarts are frequently consumed during the Christmas holiday season. In the northeast United States, mincemeat pies are also a traditional part of the Thanksgiving holiday. Like other pies, mince pies are sometimes served with cheese, notably cheddar. References Further reading Cunningham, Marion. The Fannie Farmer Cookbook. Alfred A. Knopf: 1979. Kiple, Kenneth F. and Kriemhild Coneè Ornelas. The Cambridge World History of Food. Cambridge University Press: 2000. External links Food ingredients Fruit dishes
Mincemeat
[ "Technology" ]
1,250
[ "Food ingredients", "Components" ]