Dataset columns and observed ranges:
id — int64 (580 to 79M)
url — string (length 31 to 175 characters)
text — string (length 9 to 245k characters)
source — string (length 1 to 109 characters)
categories — string (160 distinct classes)
token_count — int64 (3 to 51.8k)
37,040,123
https://en.wikipedia.org/wiki/Theta%20Sagittae
Theta Sagittae (θ Sagittae) is a double star in the northern constellation of Sagitta. With a combined apparent visual magnitude of +6, it is near the limit of stars that can be seen with the naked eye. According to the Bortle scale the star is visible in dark suburban/rural skies. Based upon an annual parallax shift of as seen from Earth, it is located roughly from the Sun. The binary pair consists of two stars separated by . The primary, component A, is an F-type main sequence star with a stellar classification of F3V. This star is about two billion years old with 52% more mass than the Sun. It forms a double star with a magnitude 8.85 companion, which is located at an angular separation of along a position angle of 331.1°, as of 2011. The star is sometimes described as a triple star, with a 7th magnitude companion away. This is an unrelated giant star much further away than the close pair. A fainter star separated by nearly was also listed as a companion by Struve, again just an accidental optical association. References F-type main-sequence stars Sagittae, Theta Sagitta Durchmusterung objects Sagittae, 17 191570 099351/2 7705 G-type main-sequence stars
Theta Sagittae
Astronomy
280
70,624,318
https://en.wikipedia.org/wiki/Nir%20Ben-Tal
Nir Ben-Tal (Hebrew: ניר בן-טל) is the Abraham E. Kazan Chair of Structural Biology at Tel Aviv University. Early life Nir Ben-Tal is a professor at Tel Aviv University, where he has held the Abraham E. Kazan Chair of Structural Biology since 2018 as a member of the School of Neurobiology, Biochemistry & Biophysics. He received his undergraduate degree in Biology, Chemistry and Physics from the Hebrew University of Jerusalem in 1988 and his DSc in Chemistry from the Technion – Israel Institute of Technology in 1993. Career His research group at Tel Aviv University developed Conservation Surface Mapping (ConSurf) in response to certain algorithmic limitations. He received a 2018 NATO Science for Peace and Security Programme Prize in chemical, biological, radiological and nuclear (CBRN) defence for his project “The Anthrax MntABC Transporter: Structure, Functional Dynamics and Drug Discovery”. In 2022 he co-authored the book From Molecules to Cells: The Origin of Life on Earth, which presents a hypothesis about how life began on Earth. Over his career he has published articles in scientific journals as well as scientific textbooks such as Introduction to Proteins: Structure, Function and Motion. References Year of birth missing (living people) Living people Academic staff of Tel Aviv University Structural biologists Israeli biologists Hebrew University of Jerusalem alumni Technion – Israel Institute of Technology alumni
Nir Ben-Tal
Chemistry
288
14,875,781
https://en.wikipedia.org/wiki/CNBP
Cellular nucleic acid-binding protein is a protein that in humans is encoded by the CNBP gene. Function The ZNF9 protein contains 7 zinc finger domains and is believed to function as an RNA-binding protein. A CCTG expansion in intron 1 of the ZNF9 gene results in myotonic dystrophy type 2 (MIM 602668).[supplied by OMIM] References Further reading External links GeneReviews/NCBI/NIH/UW entry on Myotonic Dystrophy Type 2 Transcription factors
CNBP
Chemistry,Biology
121
303,802
https://en.wikipedia.org/wiki/Phase%20rule
In thermodynamics, the phase rule is a general principle governing multi-component, multi-phase systems in thermodynamic equilibrium. For a system without chemical reactions, it relates the number of freely varying intensive properties (F) to the number of components (C), the number of phases (P), and the number of ways of performing work on the system (N): F = C - P + N + 1. Examples of intensive properties that count toward F are the temperature and pressure. For simple liquids and gases, pressure-volume work is the only type of work, in which case N = 1 and F = C - P + 2. The rule was derived by American physicist Josiah Willard Gibbs in his landmark paper titled On the Equilibrium of Heterogeneous Substances, published in parts between 1875 and 1878. The number of degrees of freedom (also called the variance) is the number of independent intensive properties, i.e., the largest number of thermodynamic parameters such as temperature or pressure that can be varied simultaneously and independently of each other. An example of a one-component system (C = 1) is a pure chemical. A two-component system (C = 2) has two chemically independent components, like a mixture of water and ethanol. Examples of phases that count toward P are solids, liquids and gases. Foundations A phase is a form of matter that is homogeneous in chemical composition and physical state. Typical phases are solid, liquid and gas. Two immiscible liquids (or liquid mixtures with different compositions) separated by a distinct boundary are counted as two different phases, as are two immiscible solids. The number of components (C) is the number of chemically independent constituents of the system, i.e. the minimum number of independent species necessary to define the composition of all phases of the system. The number of degrees of freedom (F) in this context is the number of intensive variables which are independent of each other. The basis for the rule is that equilibrium between phases places a constraint on the intensive variables. More rigorously, since the phases are in thermodynamic equilibrium with each other, the chemical potentials of the phases must be equal. The number of equality relationships determines the number of degrees of freedom. For example, if the chemical potentials of a liquid and of its vapour depend on temperature (T) and pressure (p), the equality of chemical potentials will mean that each of those variables will be dependent on the other. Mathematically, the equation μliq(T, p) = μvap(T, p), where μ is the chemical potential, defines temperature as a function of pressure or vice versa. (Caution: do not confuse p, the pressure, with P, the number of phases.) To be more specific, the composition of each phase is determined by C - 1 intensive variables (such as mole fractions) in each phase. The total number of variables is (C - 1)P + 2, where the extra two are temperature T and pressure p. The number of constraints is C(P - 1), since the chemical potential of each component must be equal in all phases. Subtract the number of constraints from the number of variables to obtain the number of degrees of freedom as F = (C - 1)P + 2 - C(P - 1) = C - P + 2. The rule is valid provided the equilibrium between phases is not influenced by gravitational, electrical or magnetic forces, or by surface area, and only by temperature, pressure, and concentration. Consequences and examples Pure substances (one component) For pure substances C = 1, so that F = 3 - P. In a single-phase (P = 1) condition of a pure-component system, two variables (F = 2), such as temperature and pressure, can be chosen independently to be any pair of values consistent with the phase. 
However, if the temperature and pressure combination ranges to a point where the pure component undergoes a separation into two phases (P = 2), F decreases from 2 to 1. When the system enters the two-phase region, it is no longer possible to independently control temperature and pressure. In the pressure-temperature phase diagram of a pure substance, the boundary curve between the liquid and gas regions maps the constraint between temperature and pressure when the single-component system has separated into liquid and gas phases at equilibrium. The only way to increase the pressure on the two-phase line is by increasing the temperature. If the temperature is decreased by cooling, some of the gas condenses, decreasing the pressure. Throughout both processes, the temperature and pressure stay in the relationship shown by this boundary curve unless one phase is entirely consumed by evaporation or condensation, or unless the critical point is reached. As long as there are two phases, there is only one degree of freedom, which corresponds to the position along the phase boundary curve. The critical point is the end of the liquid–gas boundary curve. As this point is approached, the liquid and gas phases become progressively more similar until, at the critical point, there is no longer a separation into two phases. Above the critical point and away from the phase boundary curve, F = 2 and the temperature and pressure can be controlled independently. Hence there is only one phase, and it has the physical properties of a dense gas, but is also referred to as a supercritical fluid. Of the other two boundary curves, one is the solid–liquid boundary or melting point curve, which indicates the conditions for equilibrium between these two phases, and the other at lower temperature and pressure is the solid–gas boundary. Even for a pure substance, it is possible that three phases, such as solid, liquid and vapour, can exist together in equilibrium (P = 3). If there is only one component, there are no degrees of freedom (F = 0) when there are three phases. Therefore, in a single-component system, this three-phase mixture can only exist at a single temperature and pressure, which is known as a triple point. Here there are two equations, μsol(T, p) = μliq(T, p) = μvap(T, p), which are sufficient to determine the two variables T and p. In the phase diagram for CO2 the triple point is the point at which the solid, liquid and gas phases come together, at 5.2 bar and 217 K. It is also possible for other sets of phases to form a triple point, for example in the water system there is a triple point where ice I, ice III and liquid can coexist. If four phases of a pure substance were in equilibrium (P = 4), the phase rule would give F = -1, which is meaningless, since there cannot be -1 independent variables. This explains the fact that four phases of a pure substance (such as ice I, ice III, liquid water and water vapour) are not found in equilibrium at any temperature and pressure. In terms of chemical potentials there are now three equations, which cannot in general be satisfied by any values of the two variables T and p, although in principle they might be solved in a special case where one equation is mathematically dependent on the other two. In practice, however, the coexistence of more phases than allowed by the phase rule normally means that the phases are not all in true equilibrium. Two-component systems For binary mixtures of two chemically independent components, C = 2 so that F = 4 - P. 
In addition to temperature and pressure, the other degree of freedom is the composition of each phase, often expressed as mole fraction or mass fraction of one component. As an example, consider the system of two completely miscible liquids such as toluene and benzene, in equilibrium with their vapours. This system may be described by a boiling-point diagram which shows the composition (mole fraction) of the two phases in equilibrium as functions of temperature (at a fixed pressure). Four thermodynamic variables which may describe the system include temperature (T), pressure (p), mole fraction of component 1 (toluene) in the liquid phase (x1L), and mole fraction of component 1 in the vapour phase (x1V). However, since two phases are present (P = 2) in equilibrium, only two of these variables can be independent (F = 2). This is because the four variables are constrained by two relations: the equality of the chemical potentials of liquid toluene and toluene vapour, and the corresponding equality for benzene. For given T and p, there will be two phases at equilibrium when the overall composition of the system (system point) lies in between the two curves. A horizontal line (isotherm or tie line) can be drawn through any such system point, and intersects the curve for each phase at its equilibrium composition. The quantity of each phase is given by the lever rule (expressed in the variable corresponding to the x-axis, here mole fraction). For the analysis of fractional distillation, the two independent variables are instead considered to be liquid-phase composition (x1L) and pressure. In that case the phase rule implies that the equilibrium temperature (boiling point) and vapour-phase composition are determined. Liquid–vapour phase diagrams for other systems may have azeotropes (maxima or minima) in the composition curves, but the application of the phase rule is unchanged. The only difference is that the compositions of the two phases are equal exactly at the azeotropic composition. Aqueous solution of 4 kinds of salts Consider an aqueous solution containing sodium chloride (NaCl), potassium chloride (KCl), sodium bromide (NaBr), and potassium bromide (KBr), in equilibrium with their respective solid phases. Each salt, in solid form, is a different phase, because each possesses a distinct crystal structure and composition. The aqueous solution itself is another phase, because it forms a homogeneous liquid phase separate from the solid salts, with its own distinct composition and physical properties. Thus we have P = 5 phases. There are 6 elements present (H, O, Na, K, Cl, Br), but we have 2 constraints: The stoichiometry of water: n(H) = 2n(O). Charge balance in the solution: n(Na) + n(K) = n(Cl) + n(Br). giving C = 6 - 2 = 4 components. The Gibbs phase rule then gives F = C - P + 2 = 4 - 5 + 2 = 1. So, for example, if we plot the P-T phase diagram of the system, there is only one line at which all phases coexist. Any deviation from the line would either cause one of the salts to completely dissolve or one of the ions to completely precipitate from the solution. 
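To make the counting explicit, here is a minimal sketch (not part of the original article) that evaluates the Gibbs phase rule F = C - P + N + 1 for the examples discussed above; the function name and the Python wrapping are illustrative only.

```python
def gibbs_phase_rule(components: int, phases: int, work_terms: int = 1) -> int:
    """Degrees of freedom F = C - P + N + 1 (N = 1 for pressure-volume work only)."""
    return components - phases + work_terms + 1

# Examples discussed in the article:
print(gibbs_phase_rule(1, 1))  # pure substance, single phase: F = 2 (T and p both free)
print(gibbs_phase_rule(1, 3))  # pure substance at the triple point: F = 0
print(gibbs_phase_rule(2, 2))  # binary liquid-vapour mixture: F = 2
print(gibbs_phase_rule(4, 5))  # four salts plus water: C = 6 elements - 2 constraints = 4, P = 5, F = 1
```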
Phase rule at constant pressure For applications in materials science dealing with phase changes between different solid structures, pressure is often imagined to be constant (for example at 1 atmosphere), and is ignored as a degree of freedom, so the formula becomes F = C - P + 1. This is sometimes incorrectly called the "condensed phase rule", but it is not applicable to condensed systems subject to high pressures (for example, in geology), since the effects of these pressures are important. Phase rule in colloidal mixtures In colloidal mixtures, quintuple and sextuple points have been described in violation of the Gibbs phase rule, but it is argued that in these systems the rule can be generalized by adding a term that accounts for additional parameters of interaction among the components, such as the diameter of one type of particle in relation to the diameters of the other particles in the solution. References Further reading Chapter 9. Thermodynamics Aspects of Stability Equilibrium chemistry Laws of thermodynamics
Phase rule
Physics,Chemistry
2,269
11,151,410
https://en.wikipedia.org/wiki/List%20of%20palindromic%20places
A palindromic place is a city or town whose name can be read the same forwards or backwards. An example of this would be Navan in Ireland. Some of the entries on this list are only palindromic if the next administrative division they are a part of is also included in the name, such as Adaven, Nevada. Issues Because the names here come from a variety of languages, several issues arise. Unbalanced diacritics Diacritics are marks placed on or near letters to give them a modified pronunciation. Some languages treat such marked letters as completely different letters; others treat them as variants of the base letter. The latter group is summarized here. When diacritics make for an apparent non-palindrome, only place names from countries whose language is in the latter group are included. Turkic vowels Some Turkic languages (Turkish, Azerbaijani, Kazakh) have two or more vowels that resemble the letter I. They are differentiated by the number of dots above the letter: zero, one, or two. These dots appear on both lower and upper case letters. For places in Turkey, Azerbaijan, and Kazakhstan, only those vowels that have the same number of dots will be considered equal here. ʻOkina in Polynesian languages The ʻokina is a consonant found in several Polynesian languages. It is pronounced as a glottal stop and is often represented by an apostrophe when the correct character ʻ is not available. Because English wordplay generally ignores apostrophes, it is common to ignore ʻokinas in deciding whether a Polynesian name is a palindrome. However, this list does not follow that rule: unbalanced ʻokinas will not be found in this list. That rule has not, however, been applied consistently to the Arabic hamza, which also represents a glottal stop. List Palindromic place names in the Latin alphabet are: 12 letters Adaven, Nevada, United States Adanac, Canada (Nipissing District, Ontario) Adanac, Canada (Parry Sound District, Ontario) Adanac, Canada (Saskatchewan) 11 letters Wassamassaw, South Carolina, United States (with several variant spellings during the colonial era) Anahanahana, Madagascar 10 letters Saxet, Texas (United States) 9 letters Ellemelle, Belgium Kanakanak, Alaska, United States Kinikinik, Alberta, Canada Kinikinik, Colorado, United States Oktahatko, Florida, United States Paraparap, Victoria, Australia 8 letters Burggrub, Germany Idappadi, Tamil Nadu, India Nari, Iran (Razavi Khorasan) Nari, Iran (Dul Rural District, Urmia County, West Azerbaijan) Nari, Iran (Silvaneh District, Urmia County, West Azerbaijan) 7 letters Abiriba, Nigeria Acaiaca, Brazil Akasaka, Japan (Okayama) Akasaka, Japan (Tokyo) Alavala, Andhra Pradesh, India Aramara, Australia Ateleta, Italy (L'Aquila) Aworowa, Ghana Ebenebe, Anambra, Nigeria Etsaste, Estonia Glenelg, Highland, Scotland Glenelg, South Australia, Australia Glenelg, Nova Scotia, Canada Glenelg, Maryland, United States Hadidah, Syria Ikazaki, Ehime, Japan Itamati, Odisha, India Margram, West Bengal, India Noagaon, Bangladesh Neuquén, Argentina Okonoko, West Virginia, United States Planalp, Switzerland Qaanaaq, Greenland Senones, Vosges, France Ubulubu, Delta State, Nigeria 6 letters 5 letters 4 letters 3 letters 2 letters Aa, Estonia Aa, Indonesia Ee, Cook Islands Ee, Netherlands Ii, Finland Oo, Indonesia 1 letter See List of short place names#One-letter place names. These are arguably not palindromes, or perhaps degenerate palindromes. 
Palindromes that include abbreviations Some place names make a palindrome when they include the abbreviation of the state or province they are in. 8 letters Apollo PA (Pennsylvania, United States) 7 letters Omaha MO (Missouri, United States) 6 letters Linn IL (variant name of Orio, Illinois, United States) 5 letters Lis IL (Illinois, United States) Roy OR (Oregon, United States) See also Palindrome Anagram Palindroma List of geographic anagrams and ananyms Sources GotoEuro Information City Name Database (at nona.net) I Love Me, Vol. I, Palindrome Encyclopedia, Michael Donner, Algonquin Books of Chapel Hill, 1996 Geographic Names Information Service Canadian Geographic Names Data base store/MapMarker Palindromes Palindromic
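As a rough illustration of the kind of check involved (a sketch, not the list's official criterion), the snippet below ignores case, spaces and ordinary punctuation, optionally folds diacritics onto their base letters (as Spanish does for Neuquén), and never drops the Hawaiian ʻokina; the function name and the chosen examples are illustrative assumptions.

```python
import re
import unicodedata

def is_palindromic_place(name: str, fold_diacritics: bool = True) -> bool:
    """Rough palindrome test for place names (illustrative sketch only)."""
    # Ignore case, whitespace, commas, periods and hyphens.
    s = re.sub(r"[\s,.\-]", "", name).lower()
    if fold_diacritics:
        # Fold accented letters to their base letters; the ʻokina (U+02BB)
        # has no decomposition and is therefore kept as a real letter.
        s = "".join(c for c in unicodedata.normalize("NFKD", s)
                    if not unicodedata.combining(c))
    return s == s[::-1]

print(is_palindromic_place("Adaven, Nevada"))  # True (12 letters, including the state name)
print(is_palindromic_place("Neuquén"))         # True once é is folded to e
print(is_palindromic_place("Apollo PA"))       # True (name plus state abbreviation)
```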
List of palindromic places
Physics
993
606,411
https://en.wikipedia.org/wiki/Walter%20Bradford%20Cannon
Walter Bradford Cannon (October 19, 1871 – October 1, 1945) was an American physiologist, professor and chairman of the Department of Physiology at Harvard Medical School. He coined the term "fight or flight response", and developed the theory of homeostasis. He popularized his theories in his book The Wisdom of the Body, first published in 1932. Life and career Cannon was born on October 19, 1871, in Prairie du Chien, Wisconsin, the son of Colbert Hanchett Cannon and his wife Wilma Denio. His sister Ida Maud Cannon (1877-1960) became a noted hospital social worker at Massachusetts General Hospital. In his autobiography The Way of an Investigator, Cannon counts himself among the descendants of Jacques de Noyon, a French Canadian explorer and coureur des bois. His Calvinist family was intellectually active, including readings from James Martineau, the philosopher John Fiske, and James Freeman Clarke. Cannon's curiosity also led him to Thomas Henry Huxley, John Tyndall, George Henry Lewes, and William Kingdon Clifford. A high school teacher, Mary Jeannette Newson, became his mentor. "Miss May" Newson motivated him and helped him gain admission to Harvard University in 1892. Upon finishing his undergraduate studies in 1896, he entered Harvard Medical School. He started using X-rays to study the physiology of digestion while working with Henry P. Bowditch. In 1900 he received his medical degree. After graduation, Cannon was hired by William Townsend Porter at Harvard as an instructor in the Department of Physiology while continuing his digestion study. Cannon was promoted to assistant professor of physiology in 1902. He was a close friend of the physicist G. W. Pierce, and together they founded the Wicht Club with other young instructors for social and professional purposes. In 1906, Cannon succeeded Bowditch as the Higginson Professor and chairman of the Department of Physiology at Harvard Medical School, a post he held until 1942. From 1914 to 1916, Cannon was also President of the American Physiological Society. He was married to Cornelia James Cannon, a best-selling author and feminist reformer. On July 19, 1901, during their honeymoon in Montana, they were the first people to reach the summit of the unclimbed southwest peak (2657 m or 8716 ft) of Goat Mountain, between Lake McDonald and Logan Pass. That area is now Glacier National Park. The peak was subsequently named Mount Cannon by the United States Geological Survey. The couple had five children: a son, Dr. Bradford Cannon, a military plastic surgeon and radiation researcher, and four daughters, Wilma Cannon Fairbank (who was married to John K. Fairbank), Linda Cannon Burgess, Helen Cannon Bond, and Marian Cannon Schlesinger, a painter and author living in Cambridge, Massachusetts. His actions and his statements suggest his philosophy of life. Born into a Calvinistic family, he broke away from religious authoritarianism and became independent from his prior dogma. Later in life, he stated that naturally occurring events are what make for a useful end. He took on the role of a naturalist who believed that body and mind are inseparable as an organismic unit. The explanations of his work should enable man to live more wisely, happily, and intelligently without the interjection of supernatural interference. E. Digby Baltzell said that Dr. Cannon was once offered a job at the Mayo Clinic for twice his Harvard salary. Cannon declined, saying "I don't need twice as much money. 
All I need is fifty cents for a haircut once a month, and fifty cents a day to get lunch." Cannon was elected to the American Academy of Arts and Sciences in 1906, the American Philosophical Society in 1908, and the United States National Academy of Sciences in 1914. Cannon supported animal experimentation and opposed the arguments of anti-vivisectionists. In 1911, he authored a booklet for the American Medical Association criticizing the arguments of anti-vivisectionists. Walter Cannon died on October 1, 1945, in Franklin, New Hampshire. Work Walter Cannon began his career in science as a Harvard undergraduate in the year 1892. Henry Pickering Bowditch, who had worked with Claude Bernard, directed the laboratory in physiology at Harvard. Here Cannon began his research: he used the newly discovered x-rays to study the mechanism of swallowing and the motility of the stomach. With his first experiments, he was able to watch the course of a button down a dog's esophagus. He said in his autobiography, The Way of an Investigator, "The whole purpose of my effort was to see the peristaltic waves to learn their effects. Only after some time did I note that the absence of activity was accompanied by signs of perturbation, and when serenity was restored the waves promptly reappeared." He demonstrated deglutition in a goose at the APS meeting in December 1896 and published his first paper on this research in the first issue of the American Journal of Physiology in January 1898. In 1945 Cannon summarized his career in physiology by describing his focus at different ages: Age 26 – 40: digestion and the bismuth meal Age 40 – 46: bodily effects of emotional excitement Age 46 – 51: wound shock investigations Age 51 – 59: stable states of the organism Age 59 – 68: chemical mediation of nerve impulses (collaboration with Arturo Rosenblueth) Age 68 + : chemical sensitivity of nerve-isolated organs Scientific contributions Use of salts of heavy metals in X-rays He was one of the first researchers to mix salts of heavy metals (including bismuth subnitrate, bismuth oxychloride, and barium sulfate) into foodstuffs to improve the contrast of x-ray images of the digestive tract. The barium meal is a modern derivative of this research. Fight or flight In 1915, he coined the term fight or flight to describe an animal's response to threats in Bodily Changes in Pain, Hunger, Fear and Rage: An Account of Recent Researches into the Function of Emotional Excitement. He asserted that not only physical emergencies, such as blood loss from trauma, but also psychological emergencies, such as antagonistic encounters between members of the same species, evoke the release of adrenaline into the bloodstream. According to Cannon, adrenaline exerts several important effects on different body organs, all of which maintain homeostasis in fight-or-flight situations. For example, in the skeletal muscle of the limbs, adrenaline relaxes blood vessels, which increases local blood flow. Adrenaline constricts blood vessels in the skin and minimizes blood loss from physical trauma. Adrenaline also releases the key metabolic fuel, glucose, from the liver into the bloodstream. However, the fact that aggressive attack and fearful escape both involve adrenaline release into the bloodstream does not imply an equivalence of “fight” with “flight” from a physiological or biochemical point of view. Wound shock As a military physician in World War I, he discovered that the blood of shocked men was acidic. 
As a member of the British Medical Research Council's Special Committee on Shock and Allied Conditions, he advocated treating shocked wounded by infusing sodium bicarbonate to neutralize the acid. He and William Bayliss infused acid into an anesthetized cat, which died. However, a second trial done with Bayliss and Henry Dale failed to produce shock. The shock was successfully treated by infusing saline containing some larger molecules. Homeostasis He developed the concept of homeostasis from the earlier idea of Claude Bernard of milieu intérieur, and popularized it in his book The Wisdom of the Body. Cannon presented four tentative propositions to describe the general features of homeostasis: Constancy in an open system, such as the body, requires mechanisms that act to maintain that constancy. Cannon based this proposition on insights into steady states such as glucose concentrations, body temperature, and acid-base balance. Steady-state conditions require that any tendency toward change automatically meets with factors that resist change. An increase in blood sugar results in thirst as the body attempts to dilute the concentration of sugar in the extracellular fluid. The regulating system that determines the homeostatic state consists of many cooperating mechanisms acting simultaneously or successively. Blood sugar is regulated by insulin, glucagon, and other hormones that control its release from the liver or its uptake by the tissues. Homeostasis does not occur by chance, but is the result of organized self-government. The Sympathoadrenal System Cannon proposed the existence and functional unity of the sympathoadrenal (or “sympathoadrenomedullary” or “sympathico-adrenal”) system. He theorized that the sympathetic nervous system and the adrenal gland work together as a unit to maintain homeostasis in emergencies. To identify and quantify adrenaline release during stress, beginning in about 1919 Cannon exploited an ingenious experimental setup. He would surgically excise the nerves supplying the heart of a laboratory animal such as a dog or cat. Then he would subject the animal to a stressor and record the heart rate response. With the nerves to the heart removed, he could deduce that if the heart rate increased in response to the perturbation, then the increase in heart rate must have resulted from the actions of a hormone. Finally, he would compare the results of an animal with intact adrenal glands with those in an animal from which he had removed the adrenal glands. From the difference in the heart rate between the two animals, he could further infer that the hormone responsible for the increase in heart rate came from the adrenal glands. Moreover, the amount of increase in the heart rate provided a measure of the amount of hormone released. Cannon became so convinced that the sympathetic nervous system and adrenal gland functioned as a unit that in the 1930s he formally proposed that the sympathetic nervous system uses the same chemical messenger—adrenaline—as does the adrenal gland. Cannon’s notion of a unitary sympathoadrenal system persists to this day, although researchers in the area have come to question its validity, and clinicians often continue to lump the two components together. Cannon-Bard theory Cannon developed the Cannon-Bard theory with physiologist Philip Bard to try to explain why people feel emotions first and then act upon them. 
Dry mouth He put forward the Dry Mouth Hypothesis, stating that people get thirsty because their mouths get dry. He experimented on two dogs. He made incisions in their throats and inserted small tubes. Any water swallowed would go through their mouths and out by the tubes, never reaching their stomachs. He found out that these dogs would lap up the same amount of water as control dogs. Publications Cannon wrote several books and articles. 1910, A Laboratory Course in Physiology, Harvard University Press 6th ed. 1927. 1910, 'Medical Control of Vivisection' 1911, Some Characteristics of Antivivisection Literature 1911, The Mechanical Factors of Digestion 1915, Bodily Changes in Pain, Hunger, Fear and Rage 1920, Bodily Changes in Pain, Hunger, Fear and Rage (2 ed.) 1923, Traumatic Shock 1926, 'Physiological Regulation of Normal States' 1932, The Wisdom of the Body 1933, Some modern extensions of Beaumont's studies on Alexis St. Martin 1937, Digestion and Health 1937, Autonomic Neuro-effector Systems, with Arturo Rosenblueth 1942, '"Voodoo" Death' 1945, The Way of an Investigator: a scientist's experiences in medical research See also Cannon-Washburn Hunger Experiment (1912) References Further reading Benison, Saul, A. Clifford Barger, Elin L. Wolfe (1987) Walter B. Cannon: The Life and Times of a Young Scientist. Cannon, Bradford. "Walter Bradford Cannon: Reflections on the Man and His Contributions". International Journal of Stress Management, vol. 1, no. 2, 1994. Kuznick, Peter. "The Birth of Scientific Activism". Bulletin of the Atomic Scientists, December 1988 Schlesinger, Marian Cannon. Snatched from Oblivion: A Cambridge Memoir. Boston: Little, Brown and Company, 1979. Wolfe, Elin L., A. Clifford Barger, Saul Benison (2000) Walter B. Cannon, Science and Society. External links 6th APS President at the American Physiological Society Walter Bradford Cannon: Experimental Physiologist: 1871-1945 - biography at Harvard Square Library Chapter 9 of Explorers of the Body, by Steven Lehrer (contains information about X-ray experiments) The Walter Bradford Cannon papers can be found at The Center for the History of Medicine at the Countway Library, Harvard Medical School. Walter Bradford Cannon, Homeostasis (1932) W. B. Cannon (1915), Bodily changes in pain, hunger, fear, and rage, New York: D. Appleton and Company 1871 births 1945 deaths American physiologists Cyberneticists Foreign members of the Royal Society Harvard College alumni Harvard Medical School alumni Harvard Medical School faculty Honorary members of the USSR Academy of Sciences People from Franklin, New Hampshire People from Prairie du Chien, Wisconsin Vivisection activists Writers from Massachusetts Writers from Wisconsin Members of the American Philosophical Society
Walter Bradford Cannon
Chemistry
2,727
52,642,122
https://en.wikipedia.org/wiki/Samarium%28II%29%20bromide
Samarium(II) bromide is an inorganic compound with the chemical formula SmBr2. It is a brown solid that is insoluble in most solvents but degrades readily in air. Structure In the gas phase, SmBr2 is a bent molecule with an Sm–Br distance of 274.5 pm and a bond angle of 131±6°. History Samarium(II) bromide was first synthesized in 1934 by P. W. Selwood, who reduced samarium tribromide (SmBr3) with hydrogen (H2). Kagan also synthesized it by converting samarium(III) oxide (Sm2O3) to SmBr3 and then reducing with a lithium dispersion in THF. Robert A. Flowers synthesized it by adding two equivalents of lithium bromide (LiBr) to samarium diiodide (SmI2) in tetrahydrofuran. Namy managed to synthesize it by mixing tetrabromoethane (C2H2Br4) with samarium metal, and Hilmerson found that heating the tetrabromoethane or samarium greatly improved the production of samarium(II) bromide. Reactions Samarium(II) bromide has reducing properties reminiscent of the more commonly used samarium diiodide. It is an effective reagent for pinacol homocouplings of aldehydes and ketones and for cross-coupling carbonyl compounds. Reports have shown that samarium(II) bromide is capable of selectively reducing ketones if it is in the presence of an alkyl halide. Samarium(II) bromide forms soluble adducts with hexamethylphosphoramide. This species reduces imines to amines and alkyl chlorides to hydrocarbons. For example, SmBr2(hmpa)x converts cyclohexyl chloride to cyclohexane. Samarium(II) bromide will reduce ketones in tetrahydrofuran if an activator is absent. References Bromides Samarium(II) compounds Reducing agents Lanthanide halides
Samarium(II) bromide
Chemistry
438
43,447,946
https://en.wikipedia.org/wiki/Scanning%20Habitable%20Environments%20with%20Raman%20and%20Luminescence%20for%20Organics%20and%20Chemicals
Scanning Habitable Environments with Raman and Luminescence for Organics and Chemicals (SHERLOC) is an ultraviolet Raman spectrometer that uses fine-scale imaging and an ultraviolet (UV) laser to determine fine-scale mineralogy and detect organic compounds; it was designed for the Perseverance rover as part of the Mars 2020 mission. It was constructed at the Jet Propulsion Laboratory, with major subsystems delivered by Malin Space Science Systems and Los Alamos National Laboratory. SHERLOC carries a calibration target that includes possible Mars spacesuit materials, and it will measure how they change over time in the Martian surface environment. Goals According to a 2017 Universities Space Research Association (USRA) report: Construction There are three locations on the rover where SHERLOC components are located. The SHERLOC Turret Assembly (STA) is mounted at the end of the rover arm. The STA contains the spectroscopy and imaging components. The SHERLOC Body Assembly (SBA) is located on the rover chassis and acts as the interface between the STA and the Mars 2020 rover. The SBA handles command and data handling, along with power distribution. The SHERLOC Calibration Target (SCT) is located on the front of the rover chassis and holds spectral standards. SHERLOC consists of both imaging and spectroscopic elements. It has two imaging components consisting of heritage hardware from the MSL MAHLI instrument. The Wide Angle Topographic Sensor for Operations and eNgineering (WATSON) is a built-to-print re-flight that can generate color images over multiple scales. The other, the Autofocus Context Imager (ACI), acts as the mechanism that allows the instrument to obtain a contextual image of a sample and to autofocus the laser spot for the spectroscopic part of the SHERLOC investigation. For spectroscopy, it utilizes a NeCu laser to generate UV photons (248.6 nm) which can generate characteristic Raman and fluorescence photons from a scientifically interesting sample. The deep-UV laser is co-boresighted to a context imager and integrated into an autofocusing/scanning optical system that allows correlation of spectral signatures to surface textures, morphology and visible features. The context imager has a spatial resolution of 30 μm and is currently designed to operate in the 400–500 nm wavelength range. Results from Mars Over the course of three years, SHERLOC and WATSON have been successfully collecting spectra and images of minerals and organics on the surface of Mars. WATSON and ACI images confirmed that the Jezero Crater floor consists of aqueously altered mafic material with various igneous origins. In addition, WATSON has been used to collect selfies of the Perseverance rover and the Ingenuity helicopter. Recently, the rover successfully sealed and stored the first two rock samples from Mars; from these measurements we now know that the rocks derived from a volcanic environment and that liquid water was present in Mars's past, forming salts that SHERLOC has detected. See also Composition of Mars Curiosity rover Exploration of Mars Geology of Mars List of rocks on Mars Mars Science Laboratory MOXIE PIXL Scientific information from the Mars Exploration Rover mission Timeline of Mars Science Laboratory References External links Mars 2020 Mission - Home Page - NASA/JPL Scientific instruments Mars 2020 instruments Raman spectroscopy Spectrometers
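As an illustration of how the laser wavelength relates to the Raman photons mentioned above (a sketch, not flight software), the snippet below converts a scattered wavelength into a Raman shift in wavenumbers; the 258.3 nm scattered wavelength is a hypothetical value chosen only for the example.

```python
def raman_shift_cm1(excitation_nm: float, scattered_nm: float) -> float:
    """Raman shift (cm^-1) between the laser line and a scattered photon."""
    return 1e7 * (1.0 / excitation_nm - 1.0 / scattered_nm)

# SHERLOC's NeCu laser emits at 248.6 nm; a hypothetical photon scattered at
# 258.3 nm corresponds to a shift of roughly 1511 cm^-1.
print(round(raman_shift_cm1(248.6, 258.3)))
```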
Scanning Habitable Environments with Raman and Luminescence for Organics and Chemicals
Physics,Chemistry,Technology,Engineering
688
58,527,422
https://en.wikipedia.org/wiki/Computational%20Spectroscopy%20In%20Natural%20Sciences%20and%20Engineering
COmputational Spectroscopy In Natural Sciences and Engineering (COSINE) is a Marie Skłodowska-Curie Innovative Training Network in the field of theoretical and computational chemistry, focused on computational spectroscopy. The main goal of the project is to develop theoretical tools: computational codes based on electronic structure theory for the investigation of organic photochemistry and for the simulation of spectroscopic experiments. It is part of the European Union's Horizon 2020 research funding framework. Objective The main purpose of COSINE is the development of ab initio research tools to study optical properties and excited electronic states, which are dominated by electron correlation. These tools are developed for the investigation of organic photochemistry, with the aim of accurately simulating spectroscopic experiments on the computer. To this end, a complementary series of tools, rooted in coupled cluster, algebraic diagrammatic construction, density functional theory, as well as selected multi-reference methods, is developed. Nodes The project is divided into 8 different nodes: Node 1, Heidelberg University, is the coordinating node, led by Andreas Dreuw Node 2, KTH Royal Institute of Technology in Stockholm, led by Patrick Norman Node 3, Ludwig Maximilian University of Munich, led by Christian Ochsenfeld Node 4, Scuola Normale Superiore in Pisa, led by Chiara Cappelli Node 5, University of Southern Denmark in Odense, led by Jacob Kongsted Node 6, L'École Nationale Supérieure de Chimie de Paris, led by Ilaria Ciofini Node 7, Norwegian University of Science and Technology in Trondheim, led by Henrik Koch Node 8, Technical University of Denmark in Lyngby, led by Sonia Coriani Partner organisations ELETTRA, Sincrotrone Trieste, Italy; Electromagnetic Geoservices ASA, Norway; EXACT LAB SRL, Italy; Nvidia GmbH, Germany; DELL S.P.A., Italy; Inc., United States; PDC Center for High-Performance Computing, KTH, Sweden; Dipartimento di Scienze Chimiche e Farmaceutiche, Università degli Studi di Trieste, Italy. References External links COSINE homepage ITN Marie Skłodowska-Curie actions CORDIS Community REsearch and Development Information Service College and university associations and consortia in Europe Computational chemistry Engineering university associations and consortia
Computational Spectroscopy In Natural Sciences and Engineering
Chemistry
482
31,913,717
https://en.wikipedia.org/wiki/Two-state%20trajectory
A two-state trajectory (also termed two-state time trajectory or a trajectory with two states) is a dynamical signal that fluctuates between two distinct values: ON and OFF, open and closed, etc. Mathematically, the signal has, at every time t, one of two values (for example, 1 or 0). In most applications, the signal is stochastic; nevertheless, it can have deterministic ON-OFF components. A completely deterministic two-state trajectory is a square wave. There are many ways one can create a two-state signal, e.g. flipping a coin repeatedly. A stochastic two-state trajectory is among the simplest stochastic processes. Extensions include: three-state trajectories, higher discrete state trajectories, and continuous trajectories in any dimension. Two-state trajectories in biophysics and related fields Two-state trajectories are very common. Here, we focus on relevant trajectories in scientific experiments: these are seen in measurements in chemistry, physics, and the biophysics of individual molecules (e.g. measurements of protein dynamics and DNA and RNA dynamics, activity of ion channels, enzyme activity, quantum dots). From these experiments, one aims to find the correct model explaining the measured process. We describe various relevant systems in what follows. Ion channels Since an ion channel is either open or closed, recording the current of ions passing through the channel as time elapses yields a two-state trajectory of the current versus time. Enzymes Several possible experiments on the activity of individual enzymes give a two-state signal. For example, one can use a substrate that emits light only after it has been activated by the enzyme (and excited with a laser pulse), so that each time the enzyme acts we see a burst of photons during the time period that the product molecule is in the laser area. Dynamics of biological molecules Structural changes of molecules are observed in various types of experiments. Förster resonance energy transfer is an example. In many cases one sees a time trajectory that fluctuates among several clearly defined states. Quantum dots Another system that fluctuates between an on state and an off state is a quantum dot. Here, the fluctuations arise because the molecule is either in a state that emits photons or in a dark state that does not emit photons (the dynamics among the states are also influenced by interactions with the surroundings). See also Single-molecule experiment Reduced dimensions form Kinetic scheme Master equation Wave References Statistical mechanics Stochastic processes
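A minimal simulation can make the idea concrete. The sketch below (an illustration under stated assumptions, not taken from the article) generates a stochastic two-state trajectory with exponentially distributed ON and OFF dwell times, i.e. a random telegraph signal; the rate names k_on and k_off and the chosen values are illustrative.

```python
import random

def two_state_trajectory(k_on: float, k_off: float, t_max: float, seed: int = 0):
    """Simulate a stochastic two-state (random telegraph) signal.

    The signal switches OFF->ON with rate k_on and ON->OFF with rate k_off,
    so dwell times in each state are exponentially distributed.
    Returns a list of (switch_time, new_state) events up to t_max.
    """
    rng = random.Random(seed)
    t, state, events = 0.0, 0, [(0.0, 0)]
    while t < t_max:
        rate = k_on if state == 0 else k_off
        t += rng.expovariate(rate)   # exponential dwell time in the current state
        state = 1 - state            # flip between OFF (0) and ON (1)
        if t < t_max:
            events.append((t, state))
    return events

# Example: mean OFF dwell 1/k_on = 2 time units, mean ON dwell 1/k_off = 0.5 time units
for time, state in two_state_trajectory(k_on=0.5, k_off=2.0, t_max=10.0):
    print(f"t = {time:6.3f}  state = {'ON' if state else 'OFF'}")
```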
Two-state trajectory
Physics
527
5,235,137
https://en.wikipedia.org/wiki/Dynamics%20of%20Markovian%20particles
Dynamics of Markovian particles (DMP) is the basis of a theory for kinetics of particles in open heterogeneous systems. It can be looked upon as an application of the notion of stochastic process conceived as a physical entity; e.g. the particle moves because there is a transition probability acting on it. Two particular features of DMP might be noticed: (1) an ergodic-like relation between the motion of particle and the corresponding steady state, and (2) the classic notion of geometric volume appears nowhere (e.g. a concept such as flow of "substance" is not expressed as liters per time unit but as number of particles per time unit). Although primitive, DMP has been applied for solving a classic paradox of the absorption of mercury by fish and by mollusks. The theory has also been applied for a purely probabilistic derivation of the fundamental physical principle: conservation of mass; this might be looked upon as a contribution to the old and ongoing discussion of the relation between physics and probability theory. Sources Bergner—DMP, a kinetics of macroscopic particles in open heterogeneous systems Dynamics (mechanics) Markov models
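As a toy illustration of the two features noted above (a sketch under assumed parameters, not drawn from Bergner's work), the snippet below follows one particle hopping between two compartments under fixed transition probabilities: the long-run fraction of time spent in a compartment approaches the steady-state occupancy (the ergodic-like relation), and flow is counted as transitions per time step rather than as a geometric volume per unit time.

```python
import random

def simulate(p_ab: float, p_ba: float, steps: int, seed: int = 1):
    """One particle hopping between compartments A and B with per-step
    transition probabilities p_ab (A -> B) and p_ba (B -> A)."""
    rng = random.Random(seed)
    state, time_in_a, crossings = "A", 0, 0
    for _ in range(steps):
        if state == "A":
            time_in_a += 1
            if rng.random() < p_ab:
                state, crossings = "B", crossings + 1
        else:
            if rng.random() < p_ba:
                state, crossings = "A", crossings + 1
    return time_in_a / steps, crossings / steps

p_ab, p_ba = 0.02, 0.08
occupancy_a, flow = simulate(p_ab, p_ba, steps=200_000)
steady_state_a = p_ba / (p_ab + p_ba)   # stationary probability of being in A
print(f"fraction of time in A: {occupancy_a:.3f}  (steady state {steady_state_a:.3f})")
print(f"flow across the boundary: {flow:.4f} transitions per time step")
```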
Dynamics of Markovian particles
Physics
245
7,547,202
https://en.wikipedia.org/wiki/Prostration
Prostration is the gesture of placing one's body in a reverentially or submissively prone position. Typically prostration is distinguished from the lesser acts of bowing or kneeling by involving a part of the body above the knee, especially the hands, touching the ground. Major world religions employ prostration as an act of submissiveness or worship to an entity or to the Supreme Being (i.e. God), as in the metanoia in Christian prayer used in the Eastern Orthodox and Oriental Orthodox Churches, and in the sujud of the Islamic prayer, salat. In various cultures and traditions, prostrations are similarly used to show respect to rulers, civil authorities and social elders or superiors, as in the Chinese kowtow or Ancient Greek proskynesis. The act has often traditionally been an important part of religious, civil and traditional rituals and ceremonies, and remains in use in many cultures. Traditional religious practices Many religious institutions (listed alphabetically below) use prostrations to embody the lowering, submitting or relinquishing of the individual ego before a greater spiritual power or presence. Baháʼí Faith In the Baháʼí Faith, prostrations are performed as a part of one of the alternatives of obligatory prayer (the "Long" one) and in the case of traveling, a prostration is performed in place of each missed obligatory prayer in addition to saying "Glorified be God, the Lord of Might and Majesty, of Grace and Bounty". However, if unable to do so, saying "Glorified be God" is sufficient. There are specifics about where the prostration can take place, including "God hath granted you leave to prostrate yourselves on any surface that is clean ..." (note #10) and "He also condemns such practices as prostrating oneself before another person and other forms of behaviour that abase one individual in relation to another". (note #57) Buddhism In Buddhism, prostrations are commonly used and the various stages of the physical movement are traditionally counted in threes and related to the Triple Gem, consisting of: the Awakened One (Sanskrit/Pali: Buddha) (in this meaning, to one's own potential) his teaching (Sanskrit: Dharma; Pali: Dhamma) his community (Sangha) of noble disciples (ariya-savaka). In addition, different schools within Buddhism use prostrations in various ways, such as the Tibetan tantric preliminary practice of a 100,000 prostrations as a means of overcoming pride (see Ngöndro). Tibetan pilgrims often progress by prostrating themselves fully at each step, then moving forward as they get up, in such a way that they have lain on their face on each part of their route. Each three paces involves a full prostration; the number three is taken to refer to the Triple Gem. This is often done round a stupa, and in an extremely arduous pilgrimage, Mount Kailash is circumnavigated entirely by this method, which takes about four weeks to complete the 52-kilometre route. It is also not unusual to see pilgrims prostrating all the way from their home to Lhasa, sometimes a distance of over 2000 km, the process taking up to two years to complete. Christianity In Oriental Orthodox Christianity and Western Orthodox Christianity, believers prostrate during the seven fixed prayer times; prayer rugs are used by some adherents to provide a clean space for believers to offer their Christian prayers to God, e.g. the canonical hours. 
Oriental Orthodox Christians, such as Copts, incorporate prostrations in their prayers that are performed facing eastward in anticipation of the Second Coming of Jesus, "prostrating three times in the name of the Trinity; at the end of each Psalm … while saying the ‘Alleluia’; and multiple times" during the forty-one Kyrie eleisons (cf. Agpeya). Syriac Orthodox and Indian Orthodox Christians, as well as Christians belonging to the Mar Thoma Syrian Church (an Oriental Protestant denomination), make multiple prostrations at the seven fixed prayer times during which the canonical hours are prayed, thrice during the Qauma prayer, at the words "Crucified for us, Have mercy on us!", thrice during the recitation of the Nicene Creed at the words "And was incarnate of the Holy Spirit...", "And was crucified for us...", & "And on the third day rose again...", as well as thrice during the Prayer of the Cherubim while praying the words "Blessed is the glory of the Lord, from His place forever!" (cf. Shehimo). Oriental Catholic and Oriental Protestant rites also use prostrations in a similar way as the Oriental Orthodox Churches. Among Old Ritualists, a prayer rug known as the Podruchnik is used to keep one's face and hands clean during prostrations, as these parts of the body are used to make the sign of the cross. The Catholic, Lutheran, and Anglican Churches use full prostrations, lying flat on the floor face down, during the imposition of Holy Orders, Religious Profession and the Consecration of Virgins. Additionally, in the Roman Catholic Church and United Methodist Church, at the beginning of the Good Friday Liturgy, the celebrating priest and the deacon prostrate themselves in front of the altar. Dominican practice on Good Friday services in priory churches includes prostration by all friars in the aisle of the church. In the Roman Catholic, Lutheran and Anglican churches, partial prostrations ("profound bows") can be used in place of genuflections for those who are unable to genuflect. The prostration is always performed before God, and in the case of holy orders, profession or consecration the candidates prostrate themselves in front of the altar which is a symbol of Christ. In Eastern Orthodox (Byzantine Rite) worship, prostrations are preceded by making the sign of the cross and consist of kneeling and touching the head to the floor. They are commonly performed both at specific moments during the services and when venerating relics or icons. However, prostrations are forbidden on the Lord's Day (Sunday) and during Paschaltide (Easter season) in honour of the Resurrection and are traditionally discouraged on Great Feasts of the Lord. During Great Lent, and Holy Week, frequent prostrations are prescribed (see Prayer of St. Ephraim). Orthodox Christians may also make prostrations in front of people (though in this case without the Sign of the Cross, as it is not an act of veneration or divine worship), such as the bishop, one's spiritual father or one another when asking forgiveness (in particular at the Vespers service which begins Great Lent on the afternoon of the Sunday of Forgiveness). Those who are physically unable to make full prostrations may instead substitute metanias (bows at the waist). Hinduism In Hinduism, eight-limbed (ashtanga pranama, also called dandavat, meaning "like a stick") and five-limbed (panchanga pranama) prostrations are included in the religious ritual of puja. 
Islam In Islam, prostrations (sajadat, plural of sujud or sajda) are used to praise, glorify and humble oneself in front of Allah (The God), and are a vital part of the five obligatory prayers performed daily; this is deemed obligatory for every Muslim whether the prayers are being performed individually or in the congregation. Additionally, the thirty-second chapter (sura) of the Qur'an is titled As-Sajdah ("The Prostration"), while the Arabic word sujud (also meaning prostration) appears about 90 times in the Qur'an, a fact which many Muslim scholars claim to be another example of its significance in Islam. According to a traditional account of the words and deeds of Muhammad as contained in the collection of hadith of Ibn Majah, Muhammad is reported to have said that "The prayer [salah] is a cure for many diseases" and to have advised people to perform prostration gracefully. It is also important to note that in Islam, prostration to anyone but Allah is absolutely forbidden: Muhammad strictly prohibited Muslims from prostrating before him, and regardless of the circumstances no Muslim should request or accept prostration from others. Jainism In Jainism, there is a great importance placed on prostration, especially when a devotee is in the temples or in front of high souls. It represents the surrendering of ego. Judaism In Judaism, the Tanakh and Talmudic texts as well as writings of Gaonim and Rishonim indicate that prostration was very common among Jewish communities until some point during the Middle Ages. In Mishneh Torah, Maimonides states that full prostration (with one's body pressed flat to the earth) should be practiced at the end of the Amidah, recited thrice daily. Members of the Karaite denomination practice full prostrations during prayers. Traditionally, Orthodox Ashkenazi Jews prostrated during Rosh Hashana and Yom Kippur, as did Yemenite Jews during the Tachanun part of daily Jewish prayer. Ethiopian Jews traditionally prostrated during a holiday specific to their community known as Sigd. Sigd comes from a root word meaning prostration in Ge'ez, Aramaic, and Arabic. There is a movement among Talmide haRambam to revive prostration as a regular part of daily Jewish worship. Rabbinical Judaism teaches that when the High Priest spoke the Tetragrammaton in the Holy of Holies of the Temple in Jerusalem on Yom Kippur, the people in the courtyard were to prostrate themselves completely as they heard the name spoken aloud. Judaism forbids prostration directly on a stone surface in order to prevent conflation with similar practices of Canaanite polytheists. Sikhism Sikhs prostrate in front of Guru Granth Sahib, the holy scripture of the Sikhs. Sikhs consider Guru Granth Sahib as their living Guru and the unchanging word of God: thus, by prostrating, Sikhs present their head to their Guru, awaiting command, which is taken in the form of a hukamnama, or a random opening of Guru Granth Sahib to reveal an edict for the individual or congregation (similar to the ancient Roman practice of sortes sanctorum, a form of bibliomancy). Sikhs call the prostration mutha tekna ("lowering the forehead"). Whenever and however many times a Sikh is in the presence of Guru Granth Sahib he will prostrate, usually upon the initial sight of Guru Granth Sahib and again upon leaving the presence of Guru Granth Sahib. 
Sikhs, in their personal worship (morning Nitnem and evening Rehras), will prostrate upon the completion of prayers and the ardās. The direction of prostration is not important as Sikhs place emphasis on the omnipresence of God: however, if it is possible, Sikhs tend to prostrate in the direction in which bani (books containing the word of God, such as the Gutka Sahib or Pothi Sahib) are kept. Other prostrations practiced by Sikhs from an Indian culture are touching of the feet to show respect and great humility (generally done to grandparents and other family elders). Full prostration is reserved for Guru Granth Sahib, as prostration is considered to be the ultimate act of physical humility and veneration. Other contexts Outside of traditional religious institutions, prostrations are used to show deference to worldly power, in the pursuit of general spiritual advancement and as part of a physical-health regimen. Hawaii In ancient Hawaii, a form of prostration known as kapu moe required all to prostrate in the presence of a nīʻaupiʻo or a piʻo chief on pain of death. The only people exempt from this were chiefs of the next grade, the naha and wohi chiefs, who were required to sit in their presence. Other Polynesian groups are known to practice this. Imperial China In Imperial China, a form of prostration known as a kowtow or kētou was used as a sign of respect and reverence. Japan In Japan, a common form of prostration is called dogeza, which was used as a sign of deep respect and submission for the elders of a family, guests, samurai, daimyōs and the Emperor. In modern times, it is generally used only in extreme circumstances, such as when apologizing for very serious transgressions or begging for an incredible favor. To perform dogeza, a person first enters the sitting/kneeling position known as seiza, and then proceeds to touch the head to the ground. This practice may be related to rites of the Shinto religion and culture of Japan dating back centuries. Martial arts In martial arts, particularly the Shōtōkai and Kyokushin styles of karate, shugyo is a form of extreme spiritual discipline. Yoga In modern yoga practice, "sun salutations" (sūrya namaskāra) are a regular part of practitioners' routines. Such a practice may be used for both maintaining physical well-being and spiritual attainment. Yoruba Ìdọ̀bálẹ̀ and Ìkúnlẹ̀ In traditional and contemporary Yoruba culture, younger male family and community members greet elders by assuming a position called "ìdọ̀bálẹ̀". The traditional, full Yoruba prostration involves the prostrator lying down almost prone with his feet extended behind his torso while the rest of his weight is propped up on both hands. This traditional form is being replaced by a more informal bow and touching the fingertips to the floor in front of an elder with one hand, while bending slightly at the knee. The female form of the greeting is the "ìkúnlẹ̀", a form of kneeling where the younger party bows to one or both knees in front of an elder relative or community member. Both gestures are widely practiced; to not perform them would be considered ill-mannered. Modified versions of both greetings are also common in traditional Yoruba religious and cultural contexts in the African diaspora, particularly in Brazil and Cuba. 
See also Bowing Genuflection Kowtow Salat Subordinate Zemnoy poklon Notes and references External links Stand, Bow, Prostrate: The Prayerful Body of Coptic Christianity by Bishoy Dawood - Clarion Review Prostrations in Oriental Orthodox Christianity demonstrated by a Coptic Monk Prostrations in Orthodox Christianity by Fr. Seraphim Holland - St Nicholas Russian Orthodox Church Human positions Prayer Gestures of respect Sacramentals
Prostration
Biology
3,092
919,204
https://en.wikipedia.org/wiki/The%20Baroque%20Cycle
The Baroque Cycle is a series of novels by American writer Neal Stephenson. It was published in three volumes containing eight books in 2003 and 2004. The story follows the adventures of a sizable cast of characters living amidst some of the central events of the late 17th and early 18th centuries in Europe, Africa, Asia, and Central America. Despite featuring a literary treatment consistent with historical fiction, Stephenson has characterized the work as science fiction, because of the presence of some anomalous occurrences and the work's particular emphasis on themes relating to science and technology. The sciences of cryptology and numismatics feature heavily in the series, as they do in some of Stephenson's other works. Books The Baroque Cycle consists of several novels "lumped together into three volumes because it is more convenient from a publishing standpoint"; Stephenson felt calling the works a trilogy would be "bogus". Appearing in print in 2003 and 2004, the cycle contains eight books originally published in three volumes: Quicksilver, Vol. I of the Baroque Cycle – Arthur C. Clarke Award winner, Locus Award nominee, 2004 Book 1 – Quicksilver Book 2 – King of the Vagabonds Book 3 – Odalisque The Confusion, Vol. II of the Baroque Cycle – Locus Award winner Book 4 – Bonanza Book 5 – The Juncto The System of the World, Vol. III of the Baroque Cycle – Locus Award winner, Arthur C. Clarke Award nominee, 2005 Book 6 – Solomon's Gold Book 7 – Currency Book 8 – The System of the World Setting The books travel throughout early modern Europe between the Restoration of the Stuart monarchy and the beginning of the 18th century. Though most of the focus is in Europe, the adventures of one character, Jack Shaftoe, do take him throughout the world, and the fledgling British colonies in North America are important to another (Daniel Waterhouse). Quicksilver takes place mainly in the years between the Restoration of the Stuart monarchy in England (1660) and the Glorious Revolution of 1688. The Confusion follows Quicksilver without temporal interruption, but ranges geographically from Europe and the Mediterranean through India to the Philippines, Japan and Mexico. The System of the World takes place principally in London in 1714, about ten years after the events of The Confusion. Themes A central theme in the series is Europe's transformation away from feudal rule and control toward the rational, scientific, and more merit-based systems of government, finance, and social development that define what is now considered "western" and "modern". Characters include Sir Isaac Newton, Gottfried Leibniz, Nicolas Fatio de Duillier, William of Orange, Louis XIV of France, Oliver Cromwell, Peter the Great, John Churchill, 1st Duke of Marlborough and many other people of note of that time. The fictional characters of Eliza, Jack and Daniel collectively cause real historic effects. The books feature considerable sections concerning alchemy. The principal alchemist of the tale is the mysterious Enoch Root, who, along with the descendants of several characters in this series, is also featured in the Stephenson novels Cryptonomicon and Fall. 
Mercury provides a unifying theme, both in the form of the common name "quicksilver" for the element Mercury, long associated with alchemy and the title of the first volume of the cycle, and the Roman god Mercury, especially the god's various patronages: financial gain, commerce, eloquence, messages, communication, travelers, boundaries, luck, trickery, and thieves, all of which are central themes in the plot. Astronomy is also a significant (although secondary) theme in the cycle; a transit of Mercury was notably observed in London on day of the coronation of King Charles II of England, whose Restoration marks, chronologically, the earliest key historical event in the cycle. Inspiration Stephenson was inspired to write The Baroque Cycle when, while working on Cryptonomicon, he encountered a statement by George Dyson in Darwin among the Machines that suggests Leibniz was "arguably the founder of symbolic logic and he worked with computing machines". He also had heard considerable discussion of the Leibniz–Newton calculus controversy and Newton's work at the treasury during the last 30 years of his life, and in particular the case against Leibniz as summed up in the Commercium Epistolicum of 1712 was a huge inspiration which went on to inform the project. He found "this information striking when [he] was already working on a book about money and a book about computers". Further research into the period excited Stephenson and he embarked on writing the historical piece that became The Baroque Cycle. Characters Main characters Daniel Waterhouse, an English natural philosopher and Dissenter Jack Shaftoe, an illiterate adventurer of great resourcefulness and charisma Eliza, a girl abducted into slavery, and later freed, who becomes a spy and a financier Enoch Root, a mysterious and ageless man who also appears in Cryptonomicon, set in World War II and the 1990s. He also appears in Fall; or, Dodge in Hell. Bob Shaftoe, a soldier in the service of John Churchill, and brother of Jack Shaftoe Minor characters Louis Anglesey, Earl of Upnor, best swordsman in England Thomas More Anglesey, Cavalier, Duke of Gunfleet Duc d'Arcachon, French admiral who dabbles in slavery Etienne d'Arcachon, son of the duke; most polite man in France Henri Arlanc, Huguenot, friend of Jack Shaftoe. Henry Arlanc, Son of Henri Arlanc, porter of the Royal Society Mrs. Arlanc, wife of Henry Gomer Bolstrood, dissident agitator, future legendary furniture maker Clarke, English alchemist, boards young Isaac Newton Charles Comstock, son of John Comstock John Comstock, Earl of Epsom and Lord Chancellor Roger Comstock, Marquis of Ravenscar, Whig Patron of Daniel Waterhouse Will Comstock, Earl of Lostwithiel Moseh de la Cruz, galley slave, Spanish Jew Dappa, Nigerian linguist aboard Minerva Vrej Esphanian, galley slave, Armenian Trader Mr. Foot, galley slave, erstwhile bar-owner from Dunkirk Édouard de Gex, Jesuit fanatic, court priest at Versailles Gabriel Goto, galley slave, Jesuit priest from Japan Lothar von Hacklheber, German banker obsessed with alchemy Thomas Ham, of Ham Bros Goldsmiths, half-brother-in-law of Daniel Waterhouse Otto van Hoek, galley slave, Captain of the Minerva Jeronimo, galley slave, a high-born Spaniard with Tourette's syndrome Mr. Kikin, Russian diplomat in London Nyazi, galley slave, camel-trader of the Upper Nile Norman Orney, London shipbuilder and Dissenter Danny Shaftoe, son of Jack Shaftoe Jimmy Shaftoe, son of Jack Shaftoe Mr. Sluys, Dutch merchant and traitor Mr. 
Threader, Tory money-scrivener Drake Waterhouse, Puritan father of Daniel Waterhouse Faith Waterhouse, wife of Daniel Waterhouse Godfrey Waterhouse, son of Daniel Waterhouse Mayflower Waterhouse, half-sister of Daniel Waterhouse, wife of Thomas Ham Raleigh Waterhouse, half-brother of Daniel Waterhouse Sterling Waterhouse, half-brother of Daniel Waterhouse Charles White, Tory, Captain of the King's Messengers, who has the habit of biting off people's ears Yevgeny the Raskolnik, Russian heretic, whaler and anti-tsarist rebel Peter Hoxton (Saturn), horologist Colonel Barnes, peg-legged commander of dragoons Queen Kottakkal, sovereign of the Malabar pirates Teague Partry, distant relative of the Shaftoes in Connaught, Ireland Historical figures who appear as characters Jean Bart Catherine Barton Henry St John, 1st Viscount Bolingbroke Robert Boyle Henrietta Braithwaite, mistress of George II Caroline of Ansbach Charles II of England John Churchill, later 1st Duke of Marlborough Sir William Curtius, Baron Curtius of Sweden D'Artagnan Nicolas Fatio de Duillier John Flamsteed Benjamin Franklin (as a young boy) Eleanor Erdmuthe Louise, widow of John Frederick Elizabeth Charlotte of the Palatine George I of Great Britain George II of Great Britain, the Prince of Wales Nell Gwyn George Frideric Handel Robert Hooke Christiaan Huygens James Stuart, Duke of York, then James VII and II George Jeffreys Johann Georg IV, Elector of Saxony Arnold Joost van Keppel Jack Ketch Gottfried Leibniz Louis XIV of France Mary II of England Thomas Newcomen Isaac Newton Henry Oldenburg William Penn Samuel Pepys Peter the Great traveling incognito as Peter Romanov Bonaventure Rossignol, a French cryptanalyst James Scott, Duke of Monmouth John III Sobieski, King of Poland Sophia of Hanover Sophia Charlotte of Hanover Edward "Blackbeard" Teach Elizabeth Villiers John Wilkins William III of England, Prince of Orange Christopher Wren John Locke Mary Goose John Keill Ignatius Sancho's life as a freed slave in 18th century London, letters as an abolitionist, and life under the protection of a Duchess bear a strong similarity to the character of Dappa Critical response Robert Wiersem of The Toronto Star called The Baroque Cycle a "sublime, immersive, brain-throttlingly complex marvel of a novel that will keep scholars and critics occupied for the next 100 years". References External links Locus Magazine interview with Neal Stephenson The Source of the Modern World interview by Glenn Reynolds at Tech Central Station Back to the Baroque review by Reynolds in The Weekly Standard "Neal Stephenson – the interview" on Guardian Unlimited, regarding The Baroque Cycle Book series introduced in 2003 Historical novels by series Novels by Neal Stephenson American picaresque novels Cultural depictions of Benjamin Franklin Cultural depictions of Blackbeard Cultural depictions of Isaac Newton Cultural depictions of Charles II of England Cultural depictions of George I of Great Britain Cultural depictions of George II of Great Britain Cultural depictions of Nell Gwyn Cultural depictions of George Frideric Handel Cultural depictions of James II of England Cultural depictions of Louis XIV Cultural depictions of Mary II Cultural depictions of Peter the Great Cultural depictions of William III of England
The Baroque Cycle
Astronomy
2,091
78,097,982
https://en.wikipedia.org/wiki/Erlianhyus
Erlianhyus is a genus of cetancodontamorph artiodactyl that lived during the Middle Eocene in China. It is monotypic, known from a single species, E. primitivus. Taxonomy The holotype of Erlianhyus (IVPP V 28275) was discovered in strata from the Irdin Manha Formation. Its binomial name is derived from the Erlian Basin, in which it was discovered, and the suffix -hyus, which is often applied to bunodont artiodactyls. The species name, primitivus, refers to its relatively primitive features. Erlianhyus was originally regarded as the sister taxon of a clade containing dichobunids, anthracotheriids and suoids. In 2023, Yu and colleagues performed a phylogenetic analysis that recovered it in a polytomy with Andrewsarchus and Achaenodon. Description Erlianhyus is known from a partial maxilla that preserves teeth P3–M3. The molars are bunodont and bear weak cristae, similar to entelodontids. Similarly, the M1 of both Erlianhyus and entelodonts lacks a lingual cingulum. However, the molars have a large and lingually-positioned metaconule, and the postmetaconule cristae on M1 and M2 are distinct, suggesting that it does not belong to that clade. References Cetancodontamorpha Eocene Artiodactyla Eocene mammals of Asia Fossil taxa described in 2021 Monotypic prehistoric Artiodactyla genera
Erlianhyus
Biology
335
65,731,439
https://en.wikipedia.org/wiki/HAT-P-28
HAT-P-28 is the primary of a binary star system about 1320 light-years away. It is a G-type main-sequence star. At 6.1 billion years, the star is older than the Sun. HAT-P-28 is slightly enriched in heavy elements, having an iron concentration of 130% of the solar value. Since 2014, the binary star system has been suspected of being surrounded by a debris disk with a radius of 6.1″ (2500 AU). The red dwarf stellar companion was detected in 2015 at a projected separation of 0.972″ and confirmed in 2016 to be either bound or comoving. Planetary system In 2011, a transiting hot Jupiter planet b was detected on a nearly circular orbit. The planetary equilibrium temperature is 1384 K. No orbital decay had been detected as of 2018, despite the close proximity of the planet to the host star. References Andromeda (constellation) G-type main-sequence stars Binary stars Planetary systems with one confirmed planet Planetary transit variables J00520018+3443422
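The equilibrium temperature quoted above follows from the standard blackbody relation T_eq = T_eff * sqrt(R_star / (2a)) * (1 − A)^(1/4), assuming full heat redistribution. The short Python sketch below illustrates the calculation; the effective temperature, stellar radius, orbital distance and albedo used here are illustrative placeholder values for a roughly Sun-like host with a close-in hot Jupiter, not parameters taken from this article.

import math

R_SUN_M = 6.957e8   # solar radius in metres
AU_M = 1.496e11     # astronomical unit in metres

def equilibrium_temperature(t_eff_k, r_star_rsun, a_au, bond_albedo=0.0):
    # Blackbody equilibrium temperature for a planet with full heat redistribution:
    # T_eq = T_eff * sqrt(R_star / (2 a)) * (1 - A)^(1/4)
    r_star_m = r_star_rsun * R_SUN_M
    a_m = a_au * AU_M
    return t_eff_k * math.sqrt(r_star_m / (2.0 * a_m)) * (1.0 - bond_albedo) ** 0.25

# Placeholder inputs (assumed, not measured values from the article):
# a ~5700 K star of 1.1 solar radii and an orbit of 0.043 AU give roughly 1400 K.
print(round(equilibrium_temperature(t_eff_k=5700.0, r_star_rsun=1.1, a_au=0.043)))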
HAT-P-28
Astronomy
219
45,958
https://en.wikipedia.org/wiki/Mutual%20assured%20destruction
Mutual assured destruction (MAD) is a doctrine of military strategy and national security policy which posits that a full-scale use of nuclear weapons by an attacker on a nuclear-armed defender with second-strike capabilities would result in the complete annihilation of both the attacker and the defender. It is based on the theory of rational deterrence, which holds that the threat of using strong weapons against the enemy prevents the enemy's use of those same weapons. The strategy is a form of Nash equilibrium in which, once armed, neither side has any incentive to initiate a conflict or to disarm. The result may be a nuclear peace, in which the presence of nuclear weapons decreases the risk of crisis escalation, since parties will seek to avoid situations that could lead to the use of nuclear weapons. Proponents of nuclear peace theory therefore believe that controlled nuclear proliferation may be beneficial for global stability. Critics argue that nuclear proliferation increases the chance of nuclear war through either deliberate or inadvertent use of nuclear weapons, as well as the likelihood of nuclear material falling into the hands of violent non-state actors. The term "mutual assured destruction", commonly abbreviated "MAD", was coined by Donald Brennan, a strategist working in Herman Kahn's Hudson Institute in 1962. Brennan conceived the acronym cynically, spelling out the English word "mad" to argue that holding weapons capable of destroying society was irrational. Theory Under MAD, each side has enough nuclear weaponry to destroy the other side. Either side, if attacked for any reason by the other, would retaliate with equal or greater force. The expected result is an immediate, irreversible escalation of hostilities resulting in both combatants' mutual, total, and assured destruction. The doctrine requires that neither side construct shelters on a massive scale. If one side constructed a similar system of shelters, it would violate the MAD doctrine and destabilize the situation, because it would have less to fear from a second strike. The same principle is invoked against missile defense. The doctrine further assumes that neither side will dare to launch a first strike because the other side would launch on warning (also called fail-deadly) or with surviving forces (a second strike), resulting in unacceptable losses for both parties. The payoff of the MAD doctrine was and still is expected to be a tense but stable global peace. However, many have argued that mutually assured destruction is unable to deter conventional war that could later escalate. Emerging domains of cyber-espionage, proxy-state conflict, and high-speed missiles threaten to circumvent MAD as a deterrent strategy. The primary application of this doctrine started during the Cold War (1940s to 1991), in which MAD was seen as helping to prevent any direct full-scale conflicts between the United States and the Soviet Union while they engaged in smaller proxy wars around the world. MAD was also responsible for the arms race, as both nations struggled to keep nuclear parity, or at least retain second-strike capability. Although the Cold War ended in the early 1990s, the MAD doctrine continues to be applied. Proponents of MAD as part of the US and USSR strategic doctrine believed that nuclear war could best be prevented if neither side could expect to survive a full-scale nuclear exchange as a functioning state. 
Since the credibility of the threat is critical to such assurance, each side had to invest substantial capital in its nuclear arsenal even if it was not intended for use. In addition, neither side could be expected or allowed to adequately defend itself against the other's nuclear missiles. This led both to the hardening and diversification of nuclear delivery systems (such as nuclear missile silos, ballistic missile submarines, and nuclear bombers kept at fail-safe points) and to the Anti-Ballistic Missile Treaty. This MAD scenario is often referred to as rational nuclear deterrence. Theory of mutually assured destruction When the possibility of nuclear warfare between the United States and Soviet Union started to become a reality, theorists began to think that mutual assured destruction would be sufficient to deter the other side from launching a nuclear weapon. Kenneth Waltz, an American political scientist, argued that nuclear forces were useful in themselves, but even more useful in that, on the basis of mutual assured destruction, they deterred other nuclear-armed states from using theirs. The idea that mutual assured destruction offered a safe basis for deterrence was taken further by the view that nuclear weapons intended to be used for winning a war were impractical, and too dangerous and risky. Even after the Cold War ended in 1991, deterrence through mutual assured destruction has still been described as the safest course for avoiding nuclear warfare. A study published in the Journal of Conflict Resolution in 2009 quantitatively evaluated the nuclear peace hypothesis and found support for the existence of the stability-instability paradox. The study determined that nuclear weapons promote strategic stability and prevent large-scale wars but simultaneously allow for more low-intensity conflicts. If a nuclear monopoly exists between two states, with one state possessing nuclear weapons and its opponent lacking them, there is a greater chance of war. In contrast, if both states possess nuclear weapons, the odds of war drop precipitously. History Pre-1945 The concept of MAD had been discussed in the literature for nearly a century before the invention of nuclear weapons. One of the earliest references comes from the English author Wilkie Collins, writing at the time of the Franco-Prussian War in 1870: "I begin to believe in only one civilizing influence—the discovery one of these days of a destructive agent so terrible that War shall mean annihilation and men's fears will force them to keep the peace." The concept was also described in 1863 by Jules Verne in his novel Paris in the Twentieth Century, though it was not published until 1994. The book is set in 1960 and describes "the engines of war", which have become so efficient that war is inconceivable and all countries are at a perpetual stalemate. MAD has been invoked by more than one weapons inventor. For example, Richard Jordan Gatling patented his namesake Gatling gun in 1862 with the partial intention of illustrating the futility of war. Likewise, after his 1867 invention of dynamite, Alfred Nobel stated that "the day when two army corps can annihilate each other in one second, all civilized nations, it is to be hoped, will recoil from war and discharge their troops." In 1937, Nikola Tesla published The Art of Projecting Concentrated Non-dispersive Energy through the Natural Media, a treatise concerning charged particle beam weapons.
Tesla described his device as a "superweapon that would put an end to all war." The March 1940 Frisch–Peierls memorandum, the earliest technical exposition of a practical nuclear weapon, anticipated deterrence as the principal means of combating an enemy with nuclear weapons. Early Cold War In August 1945, the United States became the first nuclear power after the nuclear attacks on Hiroshima and Nagasaki. Four years later, on August 29, 1949, the Soviet Union detonated its own nuclear device. At the time, both sides lacked the means to effectively use nuclear devices against each other. However, with the development of aircraft like the American Convair B-36 and the Soviet Tupolev Tu-95, both sides were gaining a greater ability to deliver nuclear weapons into the interior of the opposing country. The official policy of the United States became one of "Instant Retaliation", as coined by Secretary of State John Foster Dulles, which called for massive atomic attack against the Soviet Union if they were to invade Europe, regardless of whether it was a conventional or a nuclear attack. By the time of the 1962 Cuban Missile Crisis, both the United States and the Soviet Union had developed the capability of launching a nuclear-tipped missile from a submerged submarine, which completed the "third leg" of the nuclear triad weapons strategy necessary to fully implement the MAD doctrine. Having a three-branched nuclear capability eliminated the possibility that an enemy could destroy all of a nation's nuclear forces in a first-strike attack; this, in turn, ensured the credible threat of a devastating retaliatory strike against the aggressor, increasing a nation's nuclear deterrence. Campbell Craig and Sergey Radchenko argue that Nikita Khrushchev (Soviet leader 1953 to 1964) decided that policies that facilitated nuclear war were too dangerous to the Soviet Union. His approach did not greatly change his foreign policy or military doctrine but is apparent in his determination to choose options that minimized the risk of war. Strategic Air Command Beginning in 1955, the United States Strategic Air Command (SAC) kept one-third of its bombers on alert, with crews ready to take off within fifteen minutes and fly to designated targets inside the Soviet Union and destroy them with nuclear bombs in the event of a Soviet first-strike attack on the United States. In 1961, President John F. Kennedy increased funding for this program and raised the commitment to 50 percent of SAC aircraft. During periods of increased tension in the early 1960s, SAC kept part of its B-52 fleet airborne at all times, to allow an extremely fast retaliatory strike against the Soviet Union in the event of a surprise attack on the United States. This program continued until 1969. Between 1954 and 1992, bomber wings had approximately one-third to one-half of their assigned aircraft on quick reaction ground alert and were able to take off within a few minutes. SAC also maintained the National Emergency Airborne Command Post (NEACP, pronounced "kneecap"), also known as "Looking Glass", which consisted of several EC-135s, one of which was airborne at all times from 1961 through 1990. During the Cuban Missile Crisis the bombers were dispersed to several different airfields, and sixty-five B-52s were airborne at all times. 
During the height of the tensions between the US and the USSR in the 1960s, two popular films were made dealing with what could go terribly wrong with the policy of keeping nuclear-bomb-carrying airplanes at the ready: Dr. Strangelove (1964) and Fail Safe (1964). Retaliation capability (second strike) The strategy of MAD was fully declared in the early 1960s, primarily by United States Secretary of Defense Robert McNamara. In McNamara's formulation, there was the very real danger that a nation with nuclear weapons could attempt to eliminate another nation's retaliatory forces with a surprise, devastating first strike and theoretically "win" a nuclear war relatively unharmed. The true second-strike capability could be achieved only when a nation had a guaranteed ability to fully retaliate after a first-strike attack. The United States had achieved an early form of second-strike capability by fielding continual patrols of strategic nuclear bombers, with a large number of planes always in the air, on their way to or from fail-safe points close to the borders of the Soviet Union. This meant the United States could still retaliate, even after a devastating first-strike attack. The tactic was expensive and problematic because of the high cost of keeping enough planes in the air at all times and the possibility they would be shot down by Soviet anti-aircraft missiles before reaching their targets. In addition, as the idea of a missile gap existing between the US and the Soviet Union developed, there was increasing priority being given to ICBMs over bombers. It was only with the advent of nuclear-powered ballistic missile submarines, starting with the George Washington class in 1959, that a genuine survivable nuclear force became possible and a retaliatory second strike capability guaranteed. The deployment of fleets of ballistic missile submarines established a guaranteed second-strike capability because of their stealth and by the number fielded by each Cold War adversary—it was highly unlikely that all of them could be targeted and preemptively destroyed (in contrast to, for example, a missile silo with a fixed location that could be targeted during a first strike). Given their long-range, high survivability and ability to carry many medium- and long-range nuclear missiles, submarines were credible and effective means for full-scale retaliation even after a massive first strike. This deterrence strategy and the program have continued into the 21st century, with nuclear submarines carrying Trident II ballistic missiles as one leg of the US strategic nuclear deterrent and as the sole deterrent of the United Kingdom. The other elements of the US deterrent are intercontinental ballistic missiles (ICBMs) on alert in the continental United States, and nuclear-capable bombers. Ballistic missile submarines are also operated by the navies of China, France, India, and Russia. The US Department of Defense anticipates a continued need for a sea-based strategic nuclear force. The first of the current Ohio-class SSBNs are expected to be retired by 2029, meaning that a replacement platform must already be seaworthy by that time. A replacement may cost over $4 billion per unit compared to the USS Ohios $2 billion. The USN's follow-on class of SSBN will be the Columbia class, which began construction in 2021 and enter service in 2031. ABMs threaten MAD In the 1960s both the Soviet Union (A-35 anti-ballistic missile system) and the United States (LIM-49 Nike Zeus) developed anti-ballistic missile systems. 
Had such systems been able to effectively defend against a retaliatory second strike, MAD would have been undermined. However, multiple scientific studies showed technological and logistical problems in these systems, including the inability to distinguish between real and decoy weapons. MIRVs MIRVs as counter against ABM The multiple independently targetable re-entry vehicle (MIRV) was another weapons system designed specifically to aid with the MAD nuclear deterrence doctrine. With a MIRV payload, one ICBM could hold many separate warheads. MIRVs were first created by the United States in order to counterbalance the Soviet A-35 anti-ballistic missile systems around Moscow. Since each defensive missile could be counted on to destroy only one offensive missile, making each offensive missile have, for example, three warheads (as with early MIRV systems) meant that three times as many defensive missiles were needed for each offensive missile. This made defending against missile attacks more costly and difficult. One of the largest US MIRVed missiles, the LGM-118A Peacekeeper, could hold up to 10 warheads, each with a yield of around —all together, an explosive payload equivalent to 230 Hiroshima-type bombs. The multiple warheads made defense untenable with the available technology, leaving the threat of retaliatory attack as the only viable defensive option. MIRVed land-based ICBMs tend to put a premium on striking first. The START II agreement was proposed to ban this type of weapon, but never entered into force. In the event of a Soviet conventional attack on Western Europe, NATO planned to use tactical nuclear weapons. The Soviet Union countered this threat by issuing a statement that any use of nuclear weapons (tactical or otherwise) against Soviet forces would be grounds for a full-scale Soviet retaliatory strike (massive retaliation). Thus it was generally assumed that any combat in Europe would end with apocalyptic conclusions. Land-based MIRVed ICBMs threaten MAD MIRVed land-based ICBMs are generally considered suitable for a first strike (inherently counterforce) or a counterforce second strike, due to: Their high accuracy (low circular error probable), compared to submarine-launched ballistic missiles which used to be less accurate, and more prone to defects; Their fast response time, compared to bombers which are considered too slow; Their ability to carry multiple MIRV warheads at once, useful for destroying a whole missile field or several cities with one missile. Unlike a decapitation strike or a countervalue strike, a counterforce strike might result in a potentially more constrained retaliation. Though the Minuteman III of the mid-1960s was MIRVed with three warheads, heavily MIRVed vehicles threatened to upset the balance; these included the SS-18 Satan which was deployed in 1976, and was considered to threaten Minuteman III silos, which led some neoconservatives to conclude a Soviet first strike was being prepared for. This led to the development of the aforementioned Pershing II, the Trident I and Trident II, as well as the MX missile, and the B-1 Lancer. MIRVed land-based ICBMs are considered destabilizing because they tend to put a premium on striking first. When a missile is MIRVed, it is able to carry many warheads (up to eight in existing US missiles, limited by New START, though Trident II is capable of carrying up to 12) and deliver them to separate targets. 
If it is assumed that each side has 100 missiles, with five warheads each, and further that each side has a 95 percent chance of neutralizing the opponent's missiles in their silos by firing two warheads at each silo, then the attacking side can reduce the enemy ICBM force from 100 missiles to about five by firing 40 missiles with 200 warheads, and keeping the rest of 60 missiles in reserve. As such, this type of weapon was intended to be banned under the START II agreement; however, the START II agreement was never brought into force, and neither Russia nor the United States ratified the agreement. Late Cold War The original US MAD doctrine was modified on July 25, 1980, with US President Jimmy Carter's adoption of countervailing strategy with Presidential Directive 59. According to its architect, Secretary of Defense Harold Brown, "countervailing strategy" stressed that the planned response to a Soviet attack was no longer to bomb Soviet population centers and cities primarily, but first to kill the Soviet leadership, then attack military targets, in the hope of a Soviet surrender before total destruction of the Soviet Union (and the United States). This modified version of MAD was seen as a winnable nuclear war, while still maintaining the possibility of assured destruction for at least one party. This policy was further developed by the Reagan administration with the announcement of the Strategic Defense Initiative (SDI, nicknamed "Star Wars"), the goal of which was to develop space-based technology to destroy Soviet missiles before they reached the United States. SDI was criticized by both the Soviets and many of America's allies (including Prime Minister of the United Kingdom Margaret Thatcher) because, were it ever operational and effective, it would have undermined the "assured destruction" required for MAD. If the United States had a guarantee against Soviet nuclear attacks, its critics argued, it would have first-strike capability, which would have been a politically and militarily destabilizing position. Critics further argued that it could trigger a new arms race, this time to develop countermeasures for SDI. Despite its promise of nuclear safety, SDI was described by many of its critics (including Soviet nuclear physicist and later peace activist Andrei Sakharov) as being even more dangerous than MAD because of these political implications. Supporters also argued that SDI could trigger a new arms race, forcing the USSR to spend an increasing proportion of GDP on defense—something which has been claimed to have been an indirect cause of the eventual collapse of the Soviet Union. Gorbachev himself in 1983 announced that “the continuation of the S.D.I. program will sweep the world into a new stage of the arms race and would destabilize the strategic situation.” Proponents of ballistic missile defense (BMD) argue that MAD is exceptionally dangerous in that it essentially offers a single course of action in the event of a nuclear attack: full retaliatory response. The fact that nuclear proliferation has led to an increase in the number of nations in the "nuclear club", including nations of questionable stability (e.g. North Korea), and that a nuclear nation might be hijacked by a despot or other person or persons who might use nuclear weapons without a sane regard for the consequences, presents a strong case for proponents of BMD who seek a policy which both protect against attack, but also does not require an escalation into what might become global nuclear war. 
Russia continues to have a strong public distaste for Western BMD initiatives, presumably because proprietary operative BMD systems could exceed their technical and financial resources and therefore degrade their larger military standing and sense of security in a post-MAD environment. Russian refusal to accept invitations to participate in NATO BMD may be indicative of the lack of an alternative to MAD in current Russian war-fighting strategy due to the dilapidation of conventional forces after the breakup of the Soviet Union. Proud Prophet Proud Prophet was a series of war games played out by various American military officials. The simulation revealed MAD made the use of nuclear weapons virtually impossible without total nuclear annihilation, regardless of how nuclear weapons were implemented in war plans. These results essentially ruled out the possibility of a limited nuclear strike, as every time this was attempted, it resulted in a complete expenditure of nuclear weapons by both the United States and USSR. Proud Prophet marked a shift in American strategy; following Proud Prophet, American rhetoric of strategies that involved the use of nuclear weapons dissipated and American war plans were changed to emphasize the use of conventional forces. TTAPS Study In 1983, a group of researchers including Carl Sagan released the TTAPS study (named for the respective initials of the authors), which predicted that the large scale use of nuclear weapons would cause a “nuclear winter”. The study predicted that the debris burned in nuclear bombings would be lifted into the atmosphere and diminish sunlight worldwide, thus reducing world temperatures by “-15° to -25°C”. These findings led to theory that MAD would still occur with many fewer weapons than were possessed by either the United States or USSR at the height of the Cold War. As such, nuclear winter was used as an argument for significant reduction of nuclear weapons since MAD would occur anyway. Post-Cold War After the fall of the Soviet Union, the Russian Federation emerged as a sovereign entity encompassing most of the territory of the former USSR. Relations between the United States and Russia were, at least for a time, less tense than they had been with the Soviet Union. While MAD has become less applicable for the US and Russia, it has been argued as a factor behind Israel's acquisition of nuclear weapons. Similarly, diplomats have warned that Japan may be pressured to nuclearize by the presence of North Korean nuclear weapons. The ability to launch a nuclear attack against an enemy city is a relevant deterrent strategy for these powers. The administration of US President George W. Bush withdrew from the Anti-Ballistic Missile Treaty in June 2002, claiming that the limited national missile defense system which they proposed to build was designed only to prevent nuclear blackmail by a state with limited nuclear capability and was not planned to alter the nuclear posture between Russia and the United States. While relations have improved and an intentional nuclear exchange is more unlikely, the decay in Russian nuclear capability in the post–Cold War era may have had an effect on the continued viability of the MAD doctrine. A 2006 article by Keir Lieber and Daryl Press stated that the United States could carry out a nuclear first strike on Russia and would "have a good chance of destroying every Russian bomber base, submarine, and ICBM." 
This was attributed to reductions in Russian nuclear stockpiles and the increasing inefficiency and age of that which remains. Lieber and Press argued that the MAD era is coming to an end and that the United States is on the cusp of global nuclear primacy. However, in a follow-up article in the same publication, others criticized the analysis, including Peter Flory, the US Assistant Secretary of Defense for International Security Policy, who began by writing "The essay by Keir Lieber and Daryl Press contains so many errors, on a topic of such gravity, that a Department of Defense response is required to correct the record." Regarding reductions in Russian stockpiles, another response stated that "a similarly one-sided examination of [reductions in] U.S. forces would have painted a similarly dire portrait". A situation in which the United States might actually be expected to carry out a "successful" attack is perceived as a disadvantage for both countries. The strategic balance between the United States and Russia is becoming less stable, and the objective, the technical possibility of a first strike by the United States is increasing. At a time of crisis, this instability could lead to an accidental nuclear war. For example, if Russia feared a US nuclear attack, Moscow might make rash moves (such as putting its forces on alert) that would provoke a US preemptive strike. An outline of current US nuclear strategy toward both Russia and other nations was published as the document "Essentials of Post–Cold War Deterrence" in 1995. In November 2020, the US successfully destroyed a dummy ICBM outside the atmosphere with another missile. Bloomberg Opinion writes that this defense ability "ends the era of nuclear stability". India and Pakistan MAD does not entirely apply to all nuclear-armed rivals. India and Pakistan are an example of this; because of the superiority of conventional Indian armed forces to their Pakistani counterparts, Pakistan may be forced to use their nuclear weapons on invading Indian forces out of desperation regardless of an Indian retaliatory strike. As such, any large-scale attack on Pakistan by India could precipitate the use of nuclear weapons by Pakistan, thus rendering MAD inapplicable. However, MAD is applicable in that it may deter Pakistan from making a “suicidal” nuclear attack rather than a defensive nuclear strike. North Korea Since the emergence of North Korea as a nuclear state, military action has not been an option in handling the instability surrounding North Korea because of their option of nuclear retaliation in response to any conventional attack on them, thus rendering non-nuclear neighboring states such as South Korea and Japan incapable of resolving the destabilizing effect of North Korea via military force. MAD may not apply to the situation in North Korea because the theory relies on rational consideration of the use and consequences of nuclear weapons, which may not be the case for potential North Korean deployment. Official policy Whether MAD was the officially accepted doctrine of the United States military during the Cold War is largely a matter of interpretation. The United States Air Force, for example, has retrospectively contended that it never advocated MAD as a sole strategy, and that this form of deterrence was seen as one of numerous options in US nuclear policy. 
Former officers have emphasized that they never felt as limited by the logic of MAD (and were prepared to use nuclear weapons in smaller-scale situations than "assured destruction" allowed), and did not deliberately target civilian cities (though they acknowledge that the result of a "purely military" attack would certainly devastate the cities as well). However, according to a declassified 1959 Strategic Air Command study, US nuclear weapons plans specifically targeted the populations of Beijing, Moscow, Leningrad, East Berlin, and Warsaw for systematic destruction. MAD was implied in several US policies and used in the political rhetoric of leaders in both the United States and the USSR during many periods of the Cold War: The doctrine of MAD was officially at odds with that of the USSR, which had, contrary to MAD, insisted survival was possible. The Soviets believed they could win not only a strategic nuclear war, which they planned to absorb with their extensive civil defense planning, but also the conventional war that they predicted would follow after their strategic nuclear arsenal had been depleted. Official Soviet policy, though, may have had internal critics towards the end of the Cold War, including some in the USSR's own leadership: Other evidence of this comes from the Soviet minister of defense, Dmitriy Ustinov, who wrote that "A clear appreciation by the Soviet leadership of what a war under contemporary conditions would mean for mankind determines the active position of the USSR." The Soviet doctrine, although being seen as primarily offensive by Western analysts, fully rejected the possibility of a "limited" nuclear war by 1975. Criticism Deterrence theory has been criticized by numerous scholars for various reasons. A prominent strain of criticism argues that rational deterrence theory is contradicted by frequent deterrence failures, which may be attributed to misperceptions. Critics have also argued that leaders do not behave in ways that are consistent with the predictions of nuclear deterrence theory. For example, it has been argued that it is inconsistent with the logic of rational deterrence theory that states continue to build nuclear arsenals once they have reached the second-strike threshold. Additionally, many scholars have advanced philosophical objections against the principles of deterrence theory on purely ethical grounds. Included in this group is Robert L. Holmes who observes that mankind's reliance upon a system of preventing war which is based exclusively upon the threat of waging war is inherently irrational and must be considered immoral according to fundamental deontological principles. In addition, he questions whether it can be conclusively demonstrated that such a system has in fact served to prevent warfare in the past and may actually serve to increase the probability of waging war in the future due to its reliance upon the continuous development of new generations of technologically advanced nuclear weapons. Challengeable assumptions Second-strike capability A first strike must not be capable of preventing a retaliatory second strike or else mutual destruction is not assured. In this case, a state would have nothing to lose with a first strike or might try to preempt the development of an opponent's second-strike capability with a first strike. 
To avoid this, countries may design their nuclear forces to make decapitation strike almost impossible, by dispersing launchers over wide areas and using a combination of sea-based, air-based, underground, and mobile land-based launchers. Another method of ensuring second strike capability is through the use of dead man's switch or "fail-deadly:" in the absence of ongoing action from a functional command structure—such as would occur after suffering a successful decapitation strike—an automatic system defaults to launching a nuclear strike upon some target. A particular example is the Soviet (now Russian) Dead Hand system, which has been described as a semi-automatic "version of Dr. Strangelove's Doomsday Machine" which, once activated, can launch a second strike without human intervention. The purpose of the Dead Hand system is to ensure a second strike even if Russia were to suffer a decapitation attack, thus maintaining MAD. Perfect detection No false positives (errors) in the equipment and/or procedures that must identify a launch by the other side. The implication of this is that an accident could lead to a full nuclear exchange. During the Cold War there were several instances of false positives, as in the case of Stanislav Petrov. Perfect attribution. If there is a launch from the Sino-Russian border, it could be difficult to distinguish which nation is responsible—both Russia and China have the capability—and, hence, against which nation retaliation should occur. A launch from a nuclear-armed submarine could also be difficult to attribute. Perfect rationality No rogue commanders will have the ability to corrupt the launch decision process. Such an incident very nearly occurred during the Cuban Missile Crisis when an argument broke out aboard a nuclear-armed submarine cut off from radio communication. The second-in-command, Vasili Arkhipov, refused to launch despite an order from Captain Savitsky to do so. All leaders with launch capability seem to care about the survival of their citizens. Winston Churchill is quoted as saying that any strategy will not "cover the case of lunatics or dictators in the mood of Hitler when he found himself in his final dugout." Inability to defend No fallout shelter networks of sufficient capacity to protect large segments of the population and/or industry. No development of anti-missile technology or deployment of remedial protective gear. Inherent instability Another reason is that deterrence has an inherent instability. As Kenneth Boulding said: "If deterrence were really stable... it would cease to deter." If decision-makers were perfectly rational, they would never order the largescale use of nuclear weapons, and the credibility of the nuclear threat would be low. However, that apparent perfect rationality criticism is countered and so is consistent with current deterrence policy. In Essentials of Post-Cold War Deterrence, the authors detail an explicit advocation of ambiguity regarding "what is permitted" for other nations and its endorsement of "irrationality" or, more precisely, the perception thereof as an important tool in deterrence and foreign policy. The document claims that the capacity of the United States, in exercising deterrence, would be hurt by portraying US leaders as fully rational and cool-headed: Terrorism The threat of foreign and domestic nuclear terrorism has been a criticism of MAD as a defensive strategy. Deterrent strategies are ineffective against those who attack without regard for their life. 
Furthermore, the doctrine of MAD has been critiqued in regard to terrorism and asymmetrical warfare. Critics contend that a retaliatory strike would not be possible in this case because of the decentralization of terrorist organizations, which may be operating in several countries and dispersed among civilian populations. A misguided retaliatory strike by the targeted nation could even advance terrorist goals, since a contentious strike could drive support for the cause that instigated the nuclear exchange. However, Robert Gallucci, the president of the John D. and Catherine T. MacArthur Foundation, argues that although traditional deterrence is not an effective approach toward terrorist groups bent on causing a nuclear catastrophe, "the United States should instead consider a policy of expanded deterrence, which focuses not solely on the would-be nuclear terrorists but on those states that may deliberately transfer or inadvertently leak nuclear weapons and materials to them. By threatening retaliation against those states, the United States may be able to deter that which it cannot physically prevent." Graham Allison makes a similar case and argues that the key to expanded deterrence is coming up with ways of tracing nuclear material to the country that forged the fissile material: "After a nuclear bomb detonates, nuclear forensic cops would collect debris samples and send them to a laboratory for radiological analysis. By identifying unique attributes of the fissile material, including its impurities and contaminants, one could trace the path back to its origin." The process is analogous to identifying a criminal by fingerprints: "The goal would be twofold: first, to deter leaders of nuclear states from selling weapons to terrorists by holding them accountable for any use of their own weapons; second, to give leaders every incentive to tightly secure their nuclear weapons and materials." Space weapons Strategic analysts have criticized the doctrine of MAD for its inability to respond to the proliferation of space weaponry. First, military space systems have unequal dependence across countries. This means that less-dependent countries may find it beneficial to attack a more-dependent country's space weapons, which complicates deterrence. This is especially true for countries like North Korea which have extensive ballistic missiles that could strike space-based systems. Second, even across countries with similar dependence, anti-satellite weapons (ASATs) have the ability to remove the command and control of nuclear weapons. This encourages crisis instability and pre-emptive nuclear-disabling strikes. Third, there is a risk of asymmetrical challengers. Countries that fall behind in space weapon advancement may turn to using chemical or biological weapons. This may heighten the risk of escalation, bypassing any deterrent effects of nuclear weapons. Entanglements Cold War bipolarity is no longer applicable to the global power balance. The complex modern alliance system ties allies and enemies to one another. Thus, action by one country to deter another could threaten the safety of a third country. "Security trilemmas" could increase tension during mundane acts of cooperation, complicating MAD. Emerging hypersonic weapons Hypersonic ballistic or cruise missiles threaten the retaliatory backbone of mutual assured destruction.
The high precision and speed of these weapons may allow for the development of "decapitory" strikes that remove the ability of another nation to have a nuclear response. In addition, the secretive nature of these weapons' development can make deterrence more asymmetrical. Failure to retaliate If it was known that a country's leader would not resort to nuclear retaliation, adversaries may be emboldened. Edward Teller, a member of the Manhattan Project, echoed these concerns as early as 1985 when he said that "The MAD policy as a deterrent is totally ineffective if it becomes known that in case of attack, we would not retaliate against the aggressor." See also References External links "The Rise of U.S. Nuclear Primacy" from Foreign Affairs, March/April 2006 First Strike and Mutual Deterrence from the Dean Peter Krogh Foreign Affairs Digital Archives Herman Kahn's Doomsday Machine Robert McNamara's "Mutual Deterrence" speech from 1967 Getting MAD: Nuclear Mutual Assured Destruction Center for Arms Control and Non-Proliferation Council for a Livable World Nuclear Files.org Mutual Assured Destruction John G. Hines et al. Soviet Intentions 1965–1985. BDM, 1995. Cold War policies Nuclear strategy Nuclear weapons Nuclear warfare English phrases Military doctrines Cold War terminology Nuclear doomsday Theories of history
Mutual assured destruction
Chemistry
7,616
483,720
https://en.wikipedia.org/wiki/Fibonacci%20polynomials
In mathematics, the Fibonacci polynomials are a polynomial sequence which can be considered as a generalization of the Fibonacci numbers. The polynomials generated in a similar way from the Lucas numbers are called Lucas polynomials. Definition These Fibonacci polynomials are defined by a recurrence relation: F0(x) = 0, F1(x) = 1, and Fn(x) = x Fn−1(x) + Fn−2(x) for n ≥ 2. The Lucas polynomials use the same recurrence with different starting values: L0(x) = 2, L1(x) = x, and Ln(x) = x Ln−1(x) + Ln−2(x) for n ≥ 2. They can be defined for negative indices by F−n(x) = (−1)^(n−1) Fn(x) and L−n(x) = (−1)^n Ln(x). The Fibonacci polynomials form a sequence of orthogonal polynomials. Examples The first few Fibonacci polynomials are: F0(x) = 0, F1(x) = 1, F2(x) = x, F3(x) = x^2 + 1, F4(x) = x^3 + 2x, F5(x) = x^4 + 3x^2 + 1, F6(x) = x^5 + 4x^3 + 3x. The first few Lucas polynomials are: L0(x) = 2, L1(x) = x, L2(x) = x^2 + 2, L3(x) = x^3 + 3x, L4(x) = x^4 + 4x^2 + 2, L5(x) = x^5 + 5x^3 + 5x. Properties The degree of Fn is n − 1 and the degree of Ln is n. The Fibonacci and Lucas numbers are recovered by evaluating the polynomials at x = 1; Pell numbers are recovered by evaluating Fn at x = 2. The ordinary generating functions for the sequences are: ∑ Fn(x) t^n = t / (1 − xt − t^2) and ∑ Ln(x) t^n = (2 − xt) / (1 − xt − t^2). The polynomials can be expressed in terms of Lucas sequences as Fn(x) = Un(x, −1) and Ln(x) = Vn(x, −1). They can also be expressed in terms of Chebyshev polynomials Tn and Un as Fn+1(x) = i^n Un(−ix/2) and Ln(x) = 2 i^n Tn(−ix/2), where i is the imaginary unit. Identities As particular cases of Lucas sequences, Fibonacci polynomials satisfy a number of identities, such as Fm+n(x) = Fm+1(x) Fn(x) + Fm(x) Fn−1(x) and Fn(x)^2 − Fn+1(x) Fn−1(x) = (−1)^(n−1). Closed form expressions, similar to Binet's formula, are: Fn(x) = (α^n − β^n) / (α − β), where α and β are the solutions (in t) of t^2 − xt − 1 = 0. For Lucas polynomials with n > 0, we have Ln(x) = α^n + β^n. A relationship between the Fibonacci polynomials and the standard basis polynomials is given by x^n = ∑ (−1)^k (C(n,k) − C(n,k−1)) Fn+1−2k(x), where C(n,k) is a binomial coefficient and the sum runs over 0 ≤ k ≤ n/2. For example, x^4 = F5(x) − 3F3(x) + 2F1(x). Combinatorial interpretation If F(n,k) is the coefficient of x^k in Fn(x), namely Fn(x) = ∑ F(n,k) x^k, then F(n,k) is the number of ways an n−1 by 1 rectangle can be tiled with 2 by 1 dominoes and 1 by 1 squares so that exactly k squares are used. Equivalently, F(n,k) is the number of ways of writing n−1 as an ordered sum involving only 1 and 2, so that 1 is used exactly k times. For example, F(6,3) = 4, and 5 can be written in 4 ways, 1+1+1+2, 1+1+2+1, 1+2+1+1, 2+1+1+1, as a sum involving only 1 and 2 with 1 used 3 times. By counting the number of times 1 and 2 are both used in such a sum, it is evident that F(n,k) equals the binomial coefficient C((n+k−1)/2, k) when n and k have opposite parity, and 0 otherwise. This gives a way of reading the coefficients from Pascal's triangle. References Jin, Z. On the Lucas polynomials and some of their new identities. Advances in Differential Equations 2018, 126 (2018). https://doi.org/10.1186/s13662-018-1527-9 Further reading External links Polynomials Fibonacci numbers
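The recurrence above is straightforward to check numerically. The following minimal Python sketch (the function names are my own, chosen for illustration) builds Fibonacci and Lucas polynomials as coefficient lists and confirms that evaluating Fn at x = 1 and x = 2 recovers the Fibonacci and Pell numbers mentioned under Properties.

def poly_add(a, b):
    # Add two polynomials stored as coefficient lists (index = power of x).
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [u + v for u, v in zip(a, b)]

def shift(p):
    # Multiply a polynomial by x: shift every coefficient up one degree.
    return [0] + p

def fibonacci_poly(n):
    # F0 = 0, F1 = 1, Fn = x*Fn-1 + Fn-2.
    prev, cur = [0], [1]
    if n == 0:
        return prev
    for _ in range(n - 1):
        prev, cur = cur, poly_add(shift(cur), prev)
    return cur

def lucas_poly(n):
    # L0 = 2, L1 = x, same recurrence as the Fibonacci polynomials.
    prev, cur = [2], [0, 1]
    if n == 0:
        return prev
    for _ in range(n - 1):
        prev, cur = cur, poly_add(shift(cur), prev)
    return cur

def evaluate(p, x):
    # Horner's rule evaluation of a coefficient list at a numeric x.
    result = 0
    for c in reversed(p):
        result = result * x + c
    return result

print(fibonacci_poly(6))   # [0, 3, 0, 4, 0, 1], i.e. x^5 + 4x^3 + 3x
print(lucas_poly(4))       # [2, 0, 4, 0, 1], i.e. x^4 + 4x^2 + 2
print([evaluate(fibonacci_poly(n), 1) for n in range(8)])   # Fibonacci numbers 0, 1, 1, 2, 3, 5, 8, 13
print([evaluate(fibonacci_poly(n), 2) for n in range(8)])   # Pell numbers 0, 1, 2, 5, 12, 29, 70, 169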
Fibonacci polynomials
Mathematics
553
46,964,570
https://en.wikipedia.org/wiki/IGF-1%20LR3
Long arginine 3-IGF-1, abbreviated as IGF-1 LR3 or LR3-IGF-1, is a synthetic protein and lengthened analogue of human insulin-like growth factor 1 (IGF-1). It differs from native IGF-1 in that it possesses an arginine instead of a glutamic acid at the third position in its amino acid sequence ("arginine 3"), and also has an additional 13 amino acids at its N-terminus (MFPAMPLSSLFVN) ("long"), for a total of 83 amino acids (relative to the 70 of IGF-1). The consequences of these modifications are that IGF-1 LR3 retains the pharmacological activity of IGF-1 as an agonist of the IGF-1 receptor, has very low affinity for the insulin-like growth factor-binding proteins (IGFBPs), and has improved metabolic stability. As a result, it is approximately three times more potent than IGF-1, and possesses a significantly longer half-life of about 20–30 hours (relative to IGF-1's half-life of about 12–15 hours). The amino acid sequence of IGF-1 LR3 is MFPAMPLSSL FVNGPRTLCG AELVDALQFV CGDRGFYFNK PTGYGSSSRR APQTGIVDEC CFRSCDLRRL EMYCAPLKPA KSA. See also des(1-3)IGF-1 Mecasermin Mecasermin rinfabate Insulin-like growth factor 2 Growth hormone therapy References Growth hormones Insulin-like growth factor receptor agonists Recombinant proteins
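As a quick consistency check on the numbers above, the short Python sketch below (variable names are my own) strips the spaces from the quoted sequence and confirms a total length of 83 residues, of which the first 13 form the N-terminal extension and the remaining 70 correspond to the length of native IGF-1.

LR3_SEQUENCE = (
    "MFPAMPLSSL FVNGPRTLCG AELVDALQFV CGDRGFYFNK "
    "PTGYGSSSRR APQTGIVDEC CFRSCDLRRL EMYCAPLKPA KSA"
).replace(" ", "")

# The 13-residue N-terminal extension, taken from the start of the sequence above.
N_TERMINAL_EXTENSION = "MFPAMPLSSLFVN"

print(len(LR3_SEQUENCE))                                  # 83 amino acids in total
print(len(N_TERMINAL_EXTENSION))                          # 13-residue extension
print(LR3_SEQUENCE.startswith(N_TERMINAL_EXTENSION))      # True
print(len(LR3_SEQUENCE) - len(N_TERMINAL_EXTENSION))      # 70, the length of native IGF-1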
IGF-1 LR3
Biology
369
27,000
https://en.wikipedia.org/wiki/Smog
Smog, or smoke fog, is a type of intense air pollution. The word "smog" was coined in the early 20th century, and is a portmanteau of the words smoke and fog to refer to smoky fog due to its opacity, and odor. The word was then intended to refer to what was sometimes known as pea soup fog, a familiar and serious problem in London from the 19th century to the mid-20th century, where it was commonly known as a London particular or London fog. This kind of visible air pollution is composed of nitrogen oxides, sulfur oxide, ozone, smoke and other particulates. Man-made smog is derived from coal combustion emissions, vehicular emissions, industrial emissions, forest and agricultural fires and photochemical reactions of these emissions. Smog is often categorized as being either summer smog or winter smog. Summer smog is primarily associated with the photochemical formation of ozone. During the summer season when the temperatures are warmer and there is more sunlight present, photochemical smog is the dominant type of smog formation. During the winter months when the temperatures are colder, and atmospheric inversions are common, there is an increase in coal and other fossil fuel usage to heat homes and buildings. These combustion emissions, together with the lack of pollutant dispersion under inversions, characterize winter smog formation. Smog formation in general relies on both primary and secondary pollutants. Primary pollutants are emitted directly from a source, such as emissions of sulfur dioxide from coal combustion. Secondary pollutants, such as ozone, are formed when primary pollutants undergo chemical reactions in the atmosphere. Photochemical smog, as found for example in Los Angeles, is a type of air pollution derived from vehicular emission from internal combustion engines and industrial fumes. These pollutants react in the atmosphere with sunlight to form secondary pollutants that also combine with the primary emissions to form photochemical smog. In certain other cities, such as Delhi, smog severity is often aggravated by stubble burning in neighboring agricultural areas since the 1980s. The atmospheric pollution levels of Los Angeles, Beijing, Delhi, Lahore, Mexico City, Tehran and other cities are often increased by an inversion that traps pollution close to the ground. The developing smog is usually toxic to humans and can cause severe sickness, a shortened life span, or premature death. Etymology Coinage of the term "smog" has been attributed to Henry Antoine Des Voeux in his 1905 paper, "Fog and Smoke" for a meeting of the Public Health Congress. The 26 July 1905 edition of the London newspaper Daily Graphic quoted Des Voeux, "He said it required no science to see that there was something produced in great cities which was not found in the country, and that was smoky fog, or what was known as 'smog'." The following day the newspaper stated that "Dr. Des Voeux did a public service in coining a new word for the London fog." However, the term appeared twenty-five years earlier than Voeux's paper, in the Santa Cruz & Monterey Illustrated Handbook published in 1880 and also appears in print in a column quoting from the book in the 3 July 1880, Santa Cruz Weekly Sentinel. On 17 December 1881, in the publication Sporting Times, the author claims to have invented the word: "The 'Smog'a word I have invented, combined of smoke and fog, to designate the London atmosphere..." 
Anthropogenic causes Coal Coal fires can emit significant clouds of smoke that contribute to the formation of winter smog. Coal fires can be used to heat individual buildings or to provide energy in a power-producing plant. Air pollution from this source has been reported in England since the Middle Ages. London, in particular, was notorious up through the mid-20th century for its coal-caused smogs, which were nicknamed "pea-soupers". Air pollution of this type is still a problem in areas that generate significant smoke from burning coal. The emissions from coal combustion are one of the main causes of air pollution in China. Especially during autumn and winter, when coal-fired heating ramps up, the amount of produced smoke at times forces some Chinese cities to close down roads, schools or airports. One prominent example of this was China's northeastern city of Harbin in 2013. Transportation emissions Traffic emissions – such as from trucks, buses, and automobiles – also contribute to the formation of smog. Airborne by-products from vehicle exhaust systems and air conditioning cause air pollution and are a major ingredient in the creation of smog in some large cities. The major culprits from transportation sources are carbon monoxide (CO), nitrogen oxides (NO and NO2) and volatile organic compounds including hydrocarbons (hydrocarbons are the main component of petroleum fuels such as gasoline and diesel fuel). Transportation emissions also include sulfur dioxides and particulate matter, but in much smaller quantities than the pollutants mentioned previously. The nitrogen oxides and volatile organic compounds can undergo a series of chemical reactions with sunlight, heat, ammonia, moisture, and other compounds to form the noxious vapors, ground-level ozone, and particles that comprise smog. Photochemical smog Photochemical smog, often referred to as "summer smog", is the chemical reaction of sunlight, nitrogen oxides and volatile organic compounds in the atmosphere, which leaves airborne particles and ground-level ozone. Photochemical smog depends on primary pollutants as well as the formation of secondary pollutants. These primary pollutants include nitrogen oxides, particularly nitric oxide (NO) and nitrogen dioxide (NO2), and volatile organic compounds. The relevant secondary pollutants include peroxyacyl nitrates (PAN), tropospheric ozone, and aldehydes. Important secondary pollutants in photochemical smog are ozone, which is formed when hydrocarbons (HC) and nitrogen oxides (NOx) combine in the presence of sunlight, and nitrogen dioxide (NO2), which is formed as nitric oxide (NO) combines with oxygen (O2) in the air. In addition, when SO2 and NOx are emitted, they are eventually oxidized in the troposphere to nitric acid and sulfuric acid, which, when mixed with water, form the main components of acid rain. All of these harsh chemicals are usually highly reactive and oxidizing. Photochemical smog is therefore considered to be a problem of modern industrialization. It is present in all modern cities, but it is more common in cities with sunny, warm, dry climates and a large number of motor vehicles. Because it travels with the wind, it can affect sparsely populated areas as well. The composition and chemical reactions involved in photochemical smog were not understood until the 1950s. In 1948, flavor chemist Arie Haagen-Smit adapted some of his equipment to collect chemicals from polluted air, and identified ozone as a component of Los Angeles smog.
Haagen-Smit went on to discover that nitrogen oxides from automotive exhausts and gaseous hydrocarbons from cars and oil refineries, exposed to sunlight, were key ingredients in the formation of ozone and photochemical smog. Haagen-Smit worked with Arnold Beckman, who developed various equipment for detecting smog, ranging from an "Apparatus for recording gas concentrations in the atmosphere" patented on 7 October 1952, to "air quality monitoring vans" for use by government and industry. Formation and reactions During the morning rush hour, high concentrations of nitric oxide and hydrocarbons are emitted to the atmosphere, mostly via on-road traffic but also from industrial sources. Some hydrocarbons are rapidly oxidized by OH· and form peroxy radicals, which convert nitric oxide (NO) to nitrogen dioxide (NO2).
(1) R· + O2 + M → RO2· + M
(2) RO2· + NO → NO2 + RO·
(3) HO2· + NO → NO2 + OH·
Nitrogen dioxide (NO2) and nitric oxide (NO) further react with ozone (O3) in a series of chemical reactions:
(4) NO2 + hν → O(3P) + NO
(5) O(3P) + O2 + M → O3 + M (+ heat)
(6) O3 + NO → NO2 + O2
This series of equations is referred to as the photostationary state (PSS). At this steady state, the ozone concentration is set by the balance between reactions 4 and 6: [O3] = j4[NO2]/(k6[NO]), where j4 is the photolysis rate of reaction 4 and k6 is the rate constant of reaction 6. However, because of the presence of reactions 2 and 3, NOx and ozone are not in a perfectly steady state. When reactions 2 and 3 convert NO to NO2 in place of reaction 6, O3 molecules are no longer consumed. Therefore, the concentration of ozone keeps increasing throughout the day. This mechanism can escalate the formation of ozone in smog. Other reactions, such as the photooxidation of formaldehyde (HCHO), a common secondary pollutant, can also contribute to the increased concentration of ozone and NO2. Photochemical smog is more prevalent during summer days since incident solar radiation fluxes are high, which favors the formation of ozone (reactions 4 and 5). The presence of a temperature inversion layer is another important factor. That is because it prevents the vertical convective mixing of the air and thus allows the pollutants, including ozone, to accumulate near ground level, which again favors the formation of photochemical smog. There are certain reactions that can limit the formation of O3 in smog. The main limiting reaction in polluted areas is:
(7) NO2 + OH· + M → HNO3 + M
This reaction removes NO2, which limits the amount of O3 that can be produced from its photolysis (reaction 4). HNO3, nitric acid, is a sticky compound that can easily be deposited onto surfaces (dry deposition) or dissolved in water and rained out (wet deposition). Both ways are common in the atmosphere and can efficiently remove radicals and nitrogen dioxide. Natural causes Volcanoes An erupting volcano can emit high levels of sulfur dioxide along with a large quantity of particulate matter; two key components in the creation of smog. However, the smog created as a result of a volcanic eruption is often known as vog to distinguish it as a natural occurrence. The chemical reactions that form smog following a volcanic eruption are different from the reactions that form photochemical smog. The term smog encompasses the effect when a large number of gas-phase molecules and particulate matter are emitted to the atmosphere, creating a visible haze. The event causing a large number of emissions can vary but still result in the formation of smog. Plants Plants are another natural source of hydrocarbons that could undergo reactions in the atmosphere and produce smog.
Globally both plants and soil contribute a substantial amount to the production of hydrocarbons, mainly by producing isoprene and terpenes. Hydrocarbons released by plants can often be more reactive than man-made hydrocarbons. For example when plants release isoprene, the isoprene reacts very quickly in the atmosphere with hydroxyl radicals. These reactions produce hydroperoxides which increase ozone formation. Health effects Smog is a serious problem in many cities and continues to harm human health. Ground-level ozone, sulfur dioxide, nitrogen dioxide and carbon monoxide are especially harmful for senior citizens, children, and people with heart and lung conditions such as emphysema, bronchitis, and asthma. It can inflame breathing passages, decrease the lungs' working capacity, cause shortness of breath, pain when inhaling deeply, wheezing, and coughing. It can cause eye and nose irritation and it dries out the protective membranes of the nose and throat and interferes with the body's ability to fight infection, increasing susceptibility to illness. Hospital admissions and respiratory deaths often increase during periods when ozone levels are high. There is a lack of knowledge on the long-term effects of air pollution exposure and the origin of asthma. An experiment was carried out using intense air pollution similar to that of the 1952 Great Smog of London. The results from this experiment concluded that there is a link between early-life pollution exposure that leads to the development of asthma, proposing the ongoing effect of the Great Smog. Modern studies continue to find links between mortality and the presence of smog. One study, published in Nature magazine, found that smog episodes in the city of Jinan, a large city in eastern China, during 2011–15, were associated with a 5.87% (95% CI 0.16–11.58%) increase in the rate of overall mortality. This study highlights the effect of exposure to air pollution on the rate of mortality in China. A similar study in Xi'an found an association between ambient air pollution and increased mortality associated with respiratory diseases. Levels of unhealthy exposure The U.S. EPA has developed an air quality index to help explain air pollution levels to the general public. 8 hour average ozone concentrations of 85 to 104 ppbv are described as "Unhealthy for Sensitive Groups", 105 ppbv to 124 ppbv as "unhealthy" and 125 ppb to 404 ppb as "very unhealthy". The "very unhealthy" range for some other pollutants are: 355 μg m−3 – 424 μg m−3 for PM10; 15.5 ppm – 30.4ppm for CO and 0.65 ppm – 1.24 ppm for NO2. Premature deaths due to cancer and respiratory disease In 2016, the Ontario Medical Association announced that smog is responsible for an estimated 9,500 premature deaths in the province each year. A 20-year American Cancer Society study found that cumulative exposure also increases the likelihood of premature death from respiratory disease, implying the 8-hour standard may be insufficient. Alzheimer risk Tiny magnetic particles from air pollution have for the first time been discovered to be lodged in human brains– and researchers think they could be a possible cause of Alzheimer's disease. Researchers at Lancaster University found abundant magnetite nanoparticles in the brain tissue from 37 individuals aged three to 92-years-old who lived in Mexico City and Manchester. 
This strongly magnetic mineral is toxic and has been implicated in the production of reactive oxygen species (free radicals) in the human brain, which is associated with neurodegenerative diseases including Alzheimer's disease. Risk of certain birth defects A study examining 806 women who had babies with birth defects between 1997 and 2006, and 849 women who had healthy babies, found that smog in the San Joaquin Valley area of California was linked to two types of neural tube defects: spina bifida (a condition involving, among other manifestations, certain malformations of the spinal column), and anencephaly (the underdevelopment or absence of part or all of the brain, which if not fatal usually results in profound impairment). An emerging cohort study in China linked early-life smog exposure to an increased risk for adverse pregnancy outcomes, in particular oxidative stress. Low birth weight According to a study published in The Lancet, even a very small (5 μg) change in PM2.5 exposure was associated with an increase (18%) in risk of a low birth weight at delivery, and this relationship held even below the current accepted safe levels. Other negative effects Although severe health effects caused by smog are the chief issue, intense haze from air pollution, dust storm particles, and bush fire smoke also causes a reduction in irradiance that hurts both solar photovoltaic production and agricultural yield. Areas affected Smog can form in almost any climate where industries or cities release large amounts of air pollution, such as smoke or gases. However, it is worse during periods of warmer, sunnier weather when the upper air is warm enough to inhibit vertical circulation. It is especially prevalent in geologic basins encircled by hills or mountains. It often stays for an extended period of time over densely populated cities or urban areas and can build up to dangerous levels. Asia India For the past few years, cities in northern India have been covered in a thick layer of winter smog. The situation has turned quite drastic in the national capital, Delhi. This smog is caused by the collection of particulate matter (a very fine type of dust and toxic gases) in the air due to the stagnant movement of air during winter. Moreover, during the post-monsoon to winter transition, air quality in the Indo-Gangetic Plain (IGP) worsens significantly due to shifts in weather patterns, such as changes in wind, temperature, and boundary layer mixing. The impact of emissions from both biomass burning and urban activities has intensified, leading to a rise in aerosols, mainly particulate matter. The nearby Himalayan region is also affected, where mountainous topography traps air pollutants and worsens air quality problems, particularly in northern India. Delhi is the most polluted city in the world and, according to one estimate, air pollution causes the death of about 10,500 people in Delhi every year. During 2013–14, peak levels of fine particulate matter (PM) in Delhi increased by about 44%, primarily due to high vehicular and industrial emissions, construction work and crop burning in adjoining states. Delhi has the highest level of the airborne particulate matter PM2.5, considered most harmful to health, at 153 micrograms per cubic metre. Rising air pollution levels have significantly increased lung-related ailments (especially asthma and lung cancer) among Delhi's children and women. The dense smog in Delhi during the winter season results in major air and rail traffic disruptions every year.
According to Indian meteorologists, the average maximum temperature in Delhi during winters has declined notably since 1998 due to rising air pollution. Environmentalists have criticized the Delhi government for not doing enough to curb air pollution and to inform people about air quality issues. Most of Delhi's residents are unaware of alarming levels of air pollution in the city and the health risks associated with it. Since the mid-1990s, Delhi has undertaken some measures to curb air pollution – Delhi has the third highest quantity of trees among Indian cities and the Delhi Transport Corporation operates the world's largest fleet of environmentally friendly compressed natural gas (CNG) buses. In 1996, the Centre for Science and Environment (CSE) started public interest litigation in the Supreme Court of India, which ordered the conversion of Delhi's fleet of buses and taxis to run on CNG and banned the use of leaded petrol in 1998. In 2003, Delhi won the United States Department of Energy's first 'Clean Cities International Partner of the Year' award for its "bold efforts to curb air pollution and support alternative fuel initiatives". The Delhi Metro has also been credited for significantly reducing air pollutants in the city. However, according to several authors, most of these gains have been lost, especially due to stubble burning, a rise in the market share of diesel cars and a considerable decline in bus ridership. According to the CSE and the System of Air Quality and Weather Forecasting and Research (SAFAR), the burning of agricultural waste in the nearby Punjab, Haryana and Uttar Pradesh regions results in severe intensification of smog over Delhi. The state government of adjoining Uttar Pradesh is considering imposing a ban on crop burning to reduce pollution in Delhi NCR, and an environmental panel has appealed to India's Supreme Court to impose a 30% cess on diesel cars. China Joint research between American and Chinese researchers in 2006 concluded that much of Beijing's pollution comes from surrounding cities and provinces. On average, 35–60% of the ozone can be traced to sources outside the city. Shandong Province and Tianjin Municipality have a "significant influence on Beijing's air quality", partly due to the prevailing south/southeasterly flow during the summer and the mountains to the north and northwest. Iran In December 2005, schools and public offices were forced to close in Tehran and 1,600 people were taken to hospital, in a severe smog blamed largely on unfiltered car exhaust. Mongolia In the late 1990s, massive immigration to Ulaanbaatar from the countryside began. An estimated 150,000 households, mainly living in traditional Mongolian gers on the outskirts of Ulaanbaatar, burn wood and coal (some poor families even burn car tires and trash) to heat themselves during the harsh winter, which lasts from October to April, since these outskirts are not connected to the city's central heating system. A temporary solution to decrease smog was proposed in the form of stoves with improved efficiency, although with no visible results. Coal-fired ger stoves release high levels of ash and other particulate matter (PM). When inhaled, these particles can settle in the lungs and respiratory tract and cause health problems. At two to 10 times above Mongolian and international air quality standards, Ulaanbaatar's PM rates are among the worst in the world, according to a December 2009 World Bank report.
The Asian Development Bank (ADB) estimates that health costs related to this air pollution account for as much as 4 percent of Mongolia's GDP. Southeast Asia Smog is a regular problem in Southeast Asia caused by land and forest fires in Indonesia, especially Sumatra and Kalimantan, although the term haze is preferred in describing the problem. Farmers and plantation owners are usually responsible for the fires, which they use to clear tracts of land for further plantings. Those fires mainly affect Brunei, Indonesia, Philippines, Malaysia, Singapore and Thailand, and occasionally Guam and Saipan. The economic losses of the fires in 1997 have been estimated at more than US$9 billion. This includes damage to agricultural production, destruction of forest lands, health, transportation, tourism, and other economic endeavours. Not included are social, environmental, and psychological problems and long-term health effects. The second-latest bout of haze to occur in Malaysia, Singapore and the Malacca Straits occurred in October 2006, and was caused by smoke from fires in Indonesia being blown across the Straits of Malacca by south-westerly winds. A similar haze occurred in June 2013, with the PSI setting a new record in Singapore on 21 June at 12pm with a reading of 401, which is in the "Hazardous" range. The Association of Southeast Asian Nations (ASEAN) reacted. In 2002, the Agreement on Transboundary Haze Pollution was signed between all ASEAN nations. ASEAN formed a Regional Haze Action Plan (RHAP) and established a co-ordination and support unit (CSU). RHAP, with the help of Canada, established a monitoring and warning system for forest/vegetation fires and implemented a Fire Danger Rating System (FDRS). The Malaysian Meteorological Department (MMD) has issued a daily rating of fire danger since September 2003. Indonesia has been ineffective at enforcing legal policies on errant farmers. Pakistan Since the start of the winter season, heavy smog loaded with pollutants has covered major parts of Punjab, especially the city of Lahore, causing breathing problems and disrupting normal traffic. A 2022 study shows that the primary cause of pollution in Lahore is traffic-related PM (both exhaust and non-exhaust sources). Air quality in the Punjab, Pakistan deteriorates markedly during the post-monsoon to winter transition, driven by shifts in weather patterns like alterations in wind, temperature, and boundary layer mixing. In the post-monsoon period, anthropogenic emissions from sources like vehicle exhaust, industrial activities, and crop burning impact air quality across Punjab, Pakistan, affecting the region by 90–100%. Doctors advised residents to stay indoors and wear facemasks outside. United Kingdom London In 1306, concerns over air pollution were sufficient for Edward I to (briefly) ban coal fires in London. In 1661, John Evelyn's Fumifugium suggested burning fragrant wood instead of mineral coal, which he believed would reduce coughing. The "Ballad of Gresham College" the same year describes how the smoke "does our lungs and spirits choke, Our hanging spoil, and rust our iron." Severe episodes of smog continued in the 19th and 20th centuries, mainly in the winter, and were nicknamed "pea-soupers," from the phrase "as thick as pea soup". The Great Smog of 1952 darkened the streets of London and killed approximately 4,000 people in the short time of four days (a further 8,000 died from its effects in the following weeks and months).
Initially, a flu epidemic was blamed for the loss of life. In 1956, the Clean Air Act started legally enforcing smokeless zones in the capital. There were areas where no soft coal was allowed to be burned in homes or in businesses, only coke, which produces no smoke. Because of the smokeless zones, reduced levels of sooty particulates eliminated the intense and persistent London smog. It was after this that the great clean-up of London began. One by one, historical buildings which had gradually blackened externally during the previous two centuries had their stone facades cleaned and restored to their original appearance. Victorian buildings whose appearance changed dramatically after cleaning included the British Museum of Natural History. A more recent example was the Palace of Westminster, which was cleaned in the 1980s. A notable exception to the restoration trend was 10 Downing Street, whose bricks upon cleaning in the late 1950s proved to be naturally yellow; the smog-derived black color of the façade was considered so iconic that the bricks were painted black to preserve the image. Smog caused by traffic pollution, however, does still occur in modern London. Other areas Other areas of the United Kingdom were affected by smog, especially heavily industrialised areas. The cities of Glasgow and Edinburgh, in Scotland, suffered smoke-laden fogs in 1909. Des Voeux, commonly credited with creating the "smog" moniker, presented a paper in 1911 to the Manchester Conference of the Smoke Abatement League of Great Britain about the fogs and resulting deaths. One Birmingham resident described near black-out conditions in the 1900s before the Clean Air Act, with visibility so poor that cyclists had to dismount and walk to stay on the road. On 29 April 2015, the UK Supreme Court ruled that the government must take immediate action to cut air pollution, following a case brought by environmental lawyers at ClientEarth. Latin America Mexico Due to its location in a highland "bowl", cold air sinks down onto the urban area of Mexico City, trapping industrial and vehicle pollution underneath, and turning it into the most infamously smog-plagued city of Latin America. Within one generation, the city has changed from being known for some of the cleanest air in the world into one with some of the worst pollution, with pollutants like nitrogen dioxide being double or even triple international standards. Chile Similar to Mexico City, the air pollution of the Santiago valley in Chile, located between the Andes and the Chilean Coast Range, turns it into the most infamously smog-plagued city of South America. Other aggravating factors are its latitude (31 degrees South) and its dry weather during most of the year. North America Canada According to the Canadian Science Smog Assessment published in 2012, smog is responsible for detrimental effects on human and ecosystem health, as well as socioeconomic well-being across the country. It was estimated that the province of Ontario sustains $201 million in damages to selected crops annually, and that decreased visibility degrades tourism revenue by an estimated $7.5 million in Vancouver and $1.32 million in the Fraser Valley. Air pollution in British Columbia is of particular concern, especially in the Fraser Valley, because of a meteorological effect called inversion, which decreases air dispersion and leads to smog concentration. United States Smog was brought to the attention of the general U.S.
public in 1933 with the publication of the book "Stop That Smoke", by Henry Obermeyer, a New York public utility official, in which he pointed out the effect on human life and even the destruction of a farmer's spinach crop. Since then, the United States Environmental Protection Agency has designated over 300 U.S. counties as non-attainment areas for one or more pollutants tracked as part of the National Ambient Air Quality Standards. These counties are largely clustered around major metropolitan areas, with the largest contiguous non-attainment zones in California and the Northeast. Various U.S. and Canadian government agencies collaborate to produce real-time air quality maps and forecasts. To combat smog conditions, localities may declare "smog alert" days, such as in the Spare the Air program in the San Francisco Bay Area. In 1970, Congress enacted the Clean Air Act to regulate air pollutant emissions. In the United States, smog pollution kills 24,000 Americans every year. The U.S. is among the dirtier countries in terms of smog, ranked 123 out of 195 countries measured, where 1 is cleanest and 195 is most smog polluted. Los Angeles and the San Joaquin Valley Because of their locations in low basins surrounded by mountains, Los Angeles and the San Joaquin Valley are notorious for their smog. Heavy automobile traffic, combined with the additional effects of the San Francisco Bay and Los Angeles/Long Beach port complexes, frequently contributes to further air pollution. Los Angeles, in particular, is strongly predisposed to the accumulation of smog, because of the peculiarities of its geography and weather patterns. Los Angeles is situated in a flat basin with the ocean on one side and mountain ranges on three sides. A nearby cold ocean current depresses surface air temperatures in the area, resulting in an inversion layer: a phenomenon where air temperature increases, instead of decreasing, with altitude, suppressing thermals and restricting vertical convection. All taken together, this results in a relatively thin, enclosed layer of air above the city that cannot easily escape out of the basin and tends to accumulate pollution. Los Angeles was one of the best-known cities suffering from transportation smog for much of the 20th century, so much so that it was sometimes said that Los Angeles was a synonym for smog. In 1970, when the Clean Air Act was passed, Los Angeles was the most polluted basin in the country, and California was unable to create a State Implementation Plan that would enable it to meet the new air quality standards. However, ensuing strict regulations by state and federal government agencies overseeing this problem (such as the California Air Resources Board and the United States Environmental Protection Agency), including tight restrictions on allowed emissions levels for all new cars sold in California and mandatory regular emission tests of older vehicles, resulted in significant improvements in air quality. For example, air concentrations of volatile organic compounds declined by a factor of 50 between 1962 and 2012. Concentrations of air pollutants such as nitrogen oxides and ozone declined by 70% to 80% over the same period of time. Major incidents in the U.S. 26 July 1943, Los Angeles, California: A smog so sudden and severe that "Los Angeles residents believe the Japanese are attacking them with chemical warfare." 30–31 October 1948, Donora, Pennsylvania: 20 died, 600 hospitalized, thousands more stricken. Lawsuits were not settled until 1951.
24 November 1966, New York City, New York: Smog kills at least 169 people. Pollution index The severity of smog is often measured using automated optical instruments such as nephelometers, as haze is associated with visibility and traffic control in ports. Haze, however, can also be an indication of poor air quality, though this is often better reflected using accurate purpose-built air quality indexes such as the American Air Quality Index, the Malaysian API (Air Pollution Index), and the Singaporean Pollutant Standards Index. In hazy conditions, it is likely that the index will report the suspended particulate level. The disclosure of the responsible pollutant is mandated in some jurisdictions. The Malaysian API does not have a capped value. Hence, its most hazardous readings can go above 500. When the reading goes above 500, a state of emergency is declared in the affected area. Usually, this means that non-essential government services are suspended, and all ports in the affected area are closed. There may also be prohibitions on private sector commercial and industrial activities in the affected area, excluding the food sector. So far, state of emergency rulings due to hazardous API levels have been applied to the Malaysian towns of Port Klang and Kuala Selangor, and to the state of Sarawak, during the 1997 Southeast Asian haze and the 2005 Malaysian haze. Cultural references The London "pea-soupers" earned the capital the nickname of "The Smoke". Similarly, Edinburgh was known as "Auld Reekie". The smogs feature in many London novels as a motif indicating hidden danger or a mystery, perhaps most overtly in Margery Allingham's The Tiger in the Smoke (1952), but also in Dickens's Bleak House (1852) and T.S. Eliot's "The Love Song of J. Alfred Prufrock". In the 1957 Warner Brothers cartoon What's Opera, Doc?, Elmer Fudd called for various calamities to befall Bugs Bunny, ending in a screamed "SMOG!!" The 1970 made-for-TV movie A Clear and Present Danger was one of the first American television network entertainment programs to warn about the problem of smog and air pollution, as it dramatized a man's efforts toward clean air after emphysema killed his friend. The history of smog in LA is detailed in Smogtown by Chip Jacobs and William J. Kelly. See also Smog tower Asian brown cloud 1997 Southeast Asian haze 2005 Malaysian haze 2006 Southeast Asian haze 2013 Eastern China smog 2013 Northeastern China smog 2013 Southeast Asian haze 2015 Southeast Asian haze Atmospheric chemistry CityTrees Contrail Criteria air contaminants Emission standard Great Smog of London Haze Inversion (meteorology) List of least polluted cities by particulate matter concentration Nitric oxide Ozone Umweltzone Vog References Upadhyay, Harikrishna (2016-11-07). "All You Need To Know About Delhi Smog / Air Pollution – 10 Questions Answered", Dainik Bhaskar. Retrieved 7 November 2016. Further reading Brimblecombe, Peter. "History of air pollution", in Composition, Chemistry and Climate of the Atmosphere (Van Nostrand Reinhold, 1995): 1–18. Brimblecombe, Peter, and László Makra. "Selections from the history of environmental pollution, with special attention to air pollution. Part 2: From medieval times to the 19th century." International Journal of Environment and Pollution 23.4 (2005): 351–367. Corton, Christine L. London Fog: The Biography (2015). 1900s neologisms Pollution Air pollution
Smog
Physics
7,200
8,828,449
https://en.wikipedia.org/wiki/Public%20health%20laboratory
Public health laboratories (PHLs) or National Public Health Laboratories (NPHL) are governmental reference laboratories that protect the public against diseases and other health hazards. The 2005 International Health Regulations came into force in June 2007 and are binding on 196 countries, which recognised that certain public health incidents, extending beyond disease, ought to be designated as a Public Health Emergency of International Concern (PHEIC) because they pose a significant global threat. The PHLs serve as national hazard detection centres and forward these concerns to the World Health Organization. International accreditation In 2007, Haim Hacham et al. published a paper addressing the need for and the process of international standardised accreditation for laboratory proficiency in Israel. In similar efforts, both the Japan Accreditation Board for Conformity Assessment (JAB) and the European Communities Confederation of Clinical Chemistry and Laboratory Medicine (EC4) have validated and adopted ISO 15189 (Medical laboratories — Requirements for quality and competence). In 2006, Spitzenberger and Edelhäuser expressed concerns that ISO accreditation may involve obstacles arising from newly emerging medical devices and new approaches to assessment; in so doing, they pointed to the time dependence of standards. Africa WHO-Afro HIV/AIDS Laboratory Network East African Laboratory Network African Society for Laboratory Medicine National Public Health Laboratory (Sudan) Canada Canadian Public Health Laboratory Network Europe European Union Reference Laboratories cf. Commission Regulation (EC) No 776/2006 and Commission Regulation (EC) No 882/2004 EpiSouth Network United Kingdom The Public Health Laboratory Service (PHLS) was established as part of the National Health Service in 1946. An Emergency Public Health Laboratory Service was established in 1940 as a response to the threat of bacteriological warfare. There was originally a central laboratory at Colindale and a network of regional and local laboratories. By 1955 there were about 1000 staff. These laboratories were primarily preventive with an epidemiological focus. They were, however, in some places located with hospital laboratories which had a diagnostic focus. The PHLS was replaced by the Health Protection Agency in 2003; the HPA was in turn disbanded and replaced by Public Health England, which became the UK Health Security Agency in 2021. United States United States laboratory networks and organizations Association of Public Health Laboratories Laboratory Response Network (CDC) PulseNet (CDC) Integrated Consortium of Laboratory Networks Food Emergency Response Network Environmental Laboratory Response Network Council to Improve Foodborne Outbreak Response US State Public Health Laboratories US City and County Public Health Laboratories US State Environmental and Agriculture Laboratories Other international laboratory networks WHO Global Influenza Surveillance and Response System WHO H5 Reference Laboratories WHO Emerging and Dangerous Pathogens Laboratory Network See also Association of Public Health Laboratories ISO 9000 ISO 15189 ISO/IEC 17025 References Clinical pathology Laboratory types Public health organizations Public health emergencies of international concern
Public health laboratory
Chemistry
563
683,436
https://en.wikipedia.org/wiki/Space%20elevator%20economics
Space elevator economics compares the cost of sending a payload into Earth orbit via a space elevator with the cost of doing so with alternatives, like rockets. Costs of current systems (rockets) The costs of using a well-tested system to launch payloads are high. The main cost comes from the components of the launch system that are not intended to be reused, which normally burn up in the atmosphere or are sent to graveyard orbits. Even when reusing components, there is often a high refurbishment cost. For geostationary transfer orbits, prices are as low as about US$11,300/kg for a Falcon Heavy or Falcon 9 launch. Costs of low Earth orbit launches are significantly less, but this is not the intended orbit for a space elevator. Proposed cost reductions Various adaptations of the conventional rocket design have been proposed to reduce the cost. Several are currently in development, like the SpaceX Starship. The aspirational price for this fully reusable launch vehicle is significantly cheaper than that of most proposed space elevators. New Glenn, a partially reusable rocket that promises to reduce prices, is also currently in development. However, an exact cost per launch has not been specified. Others, like the Sea Dragon and Roton, have failed to get sufficient funding. The Space Shuttle promised a large cost reduction, but financially underperformed due to the extensive refurbishment costs needed after every launch. Cost estimates for a space elevator For a space elevator, the cost varies according to the design. Bradley C. Edwards received funding from NIAC from 2001 to 2003 to write a paper describing a space elevator design. In it he stated: "The first space elevator would reduce lift costs immediately to $100 per pound" ($220/kg). The gravitational potential energy of any object in geosynchronous orbit (GEO), relative to Earth's surface, is about 50 MJ (15 kWh) of energy per kilogram (see geosynchronous orbit for details). Using wholesale electricity prices for 2008 to 2009, and the current 0.5% efficiency of power beaming, a space elevator would require US$220/kg just in electrical costs. Dr. Edwards expects technical advances to increase the efficiency to 2%. However, because space elevators would have limited throughput (only a few payloads could climb the tether at any one time), the launch price may be subject to market forces. Funding of capital costs According to a paper presented at the 55th International Astronautical Congress in Vancouver in October 2004, the space elevator can be considered a prestige megaproject whose current estimated cost (US$6.2 billion) is favourable compared to other megaprojects, e.g. bridges, pipelines, tunnels, tall towers, high-speed rail links and maglevs. Costs are also favourable compared to those of other aerospace systems and launch vehicles. Total cost of a privately funded Edwards' Space Elevator A space elevator built according to the Edwards proposal is estimated to have a total cost of about $40 billion (that figure includes $1.56 billion in operational costs for the first 10 years). Subsequent space elevators are estimated to cost only $14.3 billion each. For comparison, in potentially the same time frame as the elevator: the Skylon, a single-stage-to-orbit spaceplane (not a conventional rocket) with a 12,000 kg cargo capacity, is estimated to have an R&D and production cost of about $15 billion. The vehicle has a price tag of about $3,000/kg.
Skylon would be suitable for launching cargo and particularly people to low/medium Earth orbit (targeting a maximum of 30 people per flight). Early space elevator designs move only cargo but could move people as well, to a much wider range of destinations. Another alternative project to get large numbers of people and cargo to orbit inexpensively during this time frame is the SpaceX Starship, which, like Skylon, is not a conventional rocket design, as it will be fully reusable. The system is estimated to have an R&D cost of $10 billion and a production cost of about $200 million for the Starship crew variant, $130 million for the Starship tanker and $230 million for the Super Heavy booster. The system has a price tag of less than $140/kg, possibly as low as $47/kg. It will be capable of transporting 100 people comfortably to Mars (and therefore significantly more to low/medium Earth orbit). See also Commercialization of space Elevator:2010 Lunar space elevator Megaproject Non-rocket spacelaunch Orbital ring Skyhook (structure) Space elevator construction Space elevator safety Space elevators in fiction Space tether Tether propulsion References Economics Spaceflight economics Transport economics
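The US$220/kg electricity figure quoted above can be reproduced from the article's own numbers: roughly 15 kWh of potential energy per kilogram lifted to GEO and about 0.5% end-to-end power-beaming efficiency. The wholesale electricity price used in the sketch below is an assumed value (about $0.073/kWh) chosen to reproduce the article's figure, not a number stated in the text.

```python
# Back-of-the-envelope reproduction of the electricity cost per kilogram quoted above.
# The electricity price is an assumed illustrative value, not taken from the article.
energy_per_kg_kwh = 15.0      # ~50 MJ (15 kWh) of potential energy per kg to GEO, as stated above
beaming_efficiency = 0.005    # 0.5% power-beaming efficiency, as stated above
electricity_price = 0.073     # USD per kWh, assumed wholesale price

delivered_energy = energy_per_kg_kwh / beaming_efficiency   # kWh drawn per kg lifted
cost_per_kg = delivered_energy * electricity_price
print(f"{delivered_energy:.0f} kWh drawn per kg, about ${cost_per_kg:.0f}/kg in electricity")
# -> roughly 3000 kWh/kg and about $220/kg, consistent with the figure quoted above
```

Under the same assumed price, the 2% beaming efficiency anticipated by Edwards would give about 750 kWh drawn per kilogram, or roughly $55/kg in electricity.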
Space elevator economics
Astronomy,Technology
961
48,254,824
https://en.wikipedia.org/wiki/Reflex%20%28building%20design%20software%29
Reflex was a 3D building design software application developed in the mid-1990s and, along with its predecessor Sonata, is now regarded as a forerunner to today's building information modelling applications. History The application was developed by two former GMW Computers employees who had been involved with Sonata. After Sonata had "disappeared in a mysterious, corporate black hole, somewhere in eastern Canada in 1992," Jonathan Ingram and colleague Gerard Gartside went on to develop Reflex, which was bought for $30 million by Parametric Technology Corporation (PTC) in July 1996. PTC had identified the architecture, engineering and construction market as a target for its parametric modelling solutions, and bought Reflex to expand into the sector. However, the fit between Reflex and PTC's existing solutions was poor, and PTC's Pro/Reflex gained little market traction; PTC then sold the product to another US company, The Beck Group, in 1997, where it formed the kernel of a parametric estimating package called DESTINI. Around the same time, several people from PTC set up a new company, Charles River Software (renamed Revit Technology Corporation in 2000 and bought by Autodesk in 2002). Leonid Raiz and Irwin Jungreis obtained from PTC a non-exclusive source-code development license for Reflex as part of their severance package. In the words of Jerry Laiserin: "While Autodesk Revit may not contain genomic snippets of Reflex code, Revit clearly is spiritual heir to a lineage of BIM 'begats' — RUCAPS begat Sonata, Sonata begat Reflex, and Reflex begat Revit." In a 2017 letter to AEC Magazine, Jungreis said: "After receiving several hours of instruction in the software architecture of Reflex from Reflex developers, we decided not to use it as our starting point because of several important differences at the very foundations of the software. At that point, we put it aside and never looked at it again. ... Revit was not based on Reflex. No code from Reflex was used...." However, Ingram, in his 2020 book Understanding BIM: The Past, Present and Future, shows that much of the functionality of Reflex is duplicated in Revit. A 2022 account of the history of BIM by Kasper Miller asserts: "Reflex and Revit shared a myriad of features — so much so that it is fairly clear where the Revit team found much of its inspiration". References Sources Data modeling Computer-aided design Computer-aided design software Building information modeling
Reflex (building design software)
Engineering
528
32,580,947
https://en.wikipedia.org/wiki/Frobenius%20splitting
In mathematics, a Frobenius splitting, introduced by Mehta and Ramanathan (1985), is a splitting of the injective morphism OX → F*OX from the structure sheaf OX of a variety X of characteristic p > 0 to its pushforward F*OX under the Frobenius endomorphism F. Brion and Kumar (2005) give a detailed discussion of Frobenius splittings. A fundamental property of Frobenius-split projective schemes X is that the higher cohomology H^i(X, L) (i > 0) of ample line bundles L vanishes. References External links Conference on Frobenius splitting in algebraic geometry, commutative algebra, and representation theory at Michigan, 2010. Algebraic geometry
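Stated in symbols, and as a summary of the definition above rather than a quotation from it, the splitting condition reads as follows; the notation F for the absolute Frobenius endomorphism and F# for the induced p-th power map on the structure sheaf is a standard convention, not taken from the article.

```latex
% Summary of the definition above in standard notation (not verbatim from the article).
% F is the absolute Frobenius endomorphism of X; F^{\#} is the p-th power map on O_X.
\[
  F^{\#}\colon \mathcal{O}_X \longrightarrow F_{*}\mathcal{O}_X, \qquad f \longmapsto f^{p}.
\]
% A Frobenius splitting is an O_X-linear map going back the other way,
\[
  \varphi\colon F_{*}\mathcal{O}_X \longrightarrow \mathcal{O}_X,
  \qquad \varphi \circ F^{\#} = \mathrm{id}_{\mathcal{O}_X},
\]
% i.e. a left inverse of the injective morphism O_X -> F_* O_X described above.
```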
Frobenius splitting
Mathematics
139
20,693,354
https://en.wikipedia.org/wiki/Deployment%20cost%E2%80%93benefit%20selection%20in%20physiology
Deployment cost–benefit selection in physiology concerns the costs and benefits of physiological processes that can be deployed and selected according to whether or not they will increase an animal's survival and biological fitness. Variably deployable physiological processes relate mostly to processes that defend against or clear infections, as these are optional while also having high costs and circumstance-linked benefits. They include immune system responses, fever, antioxidants and the plasma level of iron. Notable determining factors are life history stage and resource availability. Immunity Activating the immune system has the present and future benefit of clearing infections, but it is also expensive, both in its present high metabolic energy consumption and in the risk of resulting in a future immune-related disorder. Therefore, an adaptive advantage exists if an animal can control its deployment according to actuary-like evaluations of the future benefits and costs to its biological fitness. In many circumstances, such trade-off calculations explain why immune responses are suppressed and infections are tolerated. Circumstances where immunity is not activated due to lack of an actuarial benefit include: Malnutrition Old age Hibernation Parasitism (low or high risk) Sexually transmitted diseases (low or high risk) Light patterns associated with winter (probable resource shortage) Fever Cost–benefit trade-off and actuarial issues likewise apply to the antibacterial and antiviral effects of fever (increased body temperature). Fever has the future benefit of clearing infections since it reduces the replication of bacteria and viruses. But it also has a great present metabolic (BMR) cost and carries the risk of hyperpyrexia. Where it is achieved internally, each degree rise in blood temperature raises BMR by 10–15%. For example, 90% of the total cost of fighting pneumonia goes on energy devoted to raising body temperature. During sepsis, the resulting fever can raise BMR by 55% and cause a 15% to 30% loss of body mass. Circumstances in which fever deployment is not selected or is reduced include: Aged individuals: the burden of tolerating infection will exist for only a short time, which reduces the actuarial future benefit of clearing an infection compared to the cost of its removal. This change favors reduced or no deployment of fever. When internal resources are limited (such as in winter), the ability to afford high expenditure on increased metabolism is reduced. This increases the risk of activating fever relative to its potential benefit, and animals are less likely to use fever to fight infections. Late pregnancy Antioxidants Antioxidants such as carotenoids, vitamin C, vitamin E, and enzymes such as superoxide dismutase (SOD) and glutathione peroxidase (GPx) can protect against reactive oxygen species that damage DNA, proteins and lipids, and result in cell senescence and death. A cost exists in creating or obtaining these antioxidants. This creates a conflict between the biological fitness benefits of future survival and the use of these antioxidants to advance present reproductive success. In some birds, antioxidants are diverted from maintaining the body to reproduction for this reason, with the result that they show accelerated senescence. Related to this, birds can show their biological capacity to afford the cost of diverting antioxidants (such as carotenoids) in the form of pigments into plumage as a costly signal.
Hypoferremia Iron is vital to biological processes, not only those of a host but also those of bacteria infecting the host. A biological fitness advantage can exist for a host that reduces the availability of its iron to such bacteria (hypoferremia), even though this happens at the cost of the host impairing itself with anemia. The potential benefit of such self-impairment is illustrated by the paradoxical effect that providing iron supplements to those with iron deficiency (which removes the antibacterial action of the deficiency) can result in an individual being cured of anemia but suffering increased bacterial illness. See also Adaptation Cost–benefit analysis Evolutionary medicine Notes Evolutionary biology Physiology
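As a rough worked illustration of the fever figures quoted in the Fever section above (a 10–15% rise in basal metabolic rate per degree of internally generated fever), the sketch below computes the extra energetic load of a fever of a given size. The baseline metabolic rate and the fever magnitudes are hypothetical example inputs, and the per-degree effect is treated as simply additive; none of these modelling choices come from the article.

```python
# Illustrative arithmetic only: the 10-15% per-degree range is quoted above;
# the baseline BMR and fever sizes are hypothetical inputs, and the per-degree
# effect is treated as additive for simplicity.
def fever_metabolic_cost(baseline_bmr_kcal, degrees_c, per_degree_fraction=0.125):
    """Extra daily energy (kcal) spent sustaining an internally generated fever."""
    return baseline_bmr_kcal * per_degree_fraction * degrees_c

baseline = 1600.0  # hypothetical resting metabolic rate, kcal/day
for rise in (1.0, 2.0, 3.0):
    extra = fever_metabolic_cost(baseline, rise)
    print(f"+{rise:.0f} degC fever: about {extra:.0f} extra kcal/day "
          f"({100 * extra / baseline:.0f}% above baseline)")
# A 3 degC fever at 12.5% per degree adds roughly 38% to baseline expenditure,
# illustrating why fever deployment is costly when resources are limited.
```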
Deployment cost–benefit selection in physiology
Biology
831
58,573,773
https://en.wikipedia.org/wiki/Bioinspiration
Bioinspiration refers to the human development of novel materials, devices, structures, and behaviors inspired by solutions found in biological organisms, where they have evolved and been refined over millions of years. The goal is to improve modeling and simulation of the biological system to attain a better understanding of nature's critical structural features, such as a wing, for use in future bioinspired designs. Bioinspiration differs from biomimicry in that the latter aims to precisely replicate the designs of biological materials. Bioinspired research is a return to the classical origins of science: it is a field based on observing the remarkable functions that characterize living organisms and trying to abstract and imitate those functions. History Ideas in science and technology often arise from studying nature. In the 16th and 17th centuries, G. Galilei, J. Kepler and I. Newton studied the motion of the sun and the planets and developed the first empirical equation describing gravity. Later, M. Faraday and J. C. Maxwell derived the fundamentals of electromagnetism by examining interactions between electrical currents and magnets. The studies of heat transfer and mechanical work led to the understanding of thermodynamics. Quantum mechanics, in turn, originated from the spectroscopic study of light. Current objects of attention have originated in chemistry, but the most abundant of them are found in biology, e.g. the study of genetics, the characteristics of cells, and the development of higher animals and of disease. The current field of research Bioinspiration is a solidly established strategy in the field of chemistry, but it is not a mainstream approach. In particular, this research is still developing its scientific and technological systems at both academic and industrial levels. In recent years, it has also been considered for developing composites for aerospace and military applications. The field dates back to the 1980s, but as of the 2010s many natural phenomena had still not been studied. Typical characteristics of Bioinspiration Function Bio-inspired research is a form of study that takes inspiration from the natural world. Unlike traditional chemistry research, it does not delve into the microscopic details of molecules. Instead, it focuses on understanding the functions and behaviors of living organisms. By observing nature's solutions, researchers can find innovative ideas for technology and problem-solving. A limitless source of ideas There are various kinds of organisms and many different strategies that have proved successful in biology at solving functional problems. Some kinds of high-level bio functions may seem simple, but they are supported by many layers of underlying structures, processes, molecules and their elaborate interactions. There is no danger of running out of phenomena for bio-inspired research. Simplicity Often, bio-inspired research into a function can be much easier than precisely replicating the source of inspiration. For example, researchers do not have to know how a bird flies to make an airplane. Transcultural field Bioinspiration returns to the observation of nature as a source of inspiration for problem-solving, making it part of a grand tradition. Many of the solutions that emerge from a bio-inspired strategy are simple, and different geographical and cultural regions have different types of contact with animals, fish, plants, birds and even microorganisms.
This means that different regions will have intrinsic advantages in areas in which their natural landscape is rich, so bio-inspired research is a trans-cultural field. Technical applications There are many technical applications available nowadays that are bioinspired. However, this term should not be confused with biomimicry. For example, an airplane in general is inspired by birds. The wing tips of an airplane, however, are biomimetic, because their original function of minimizing turbulence, and therefore of needing less energy to fly, is not changed or improved compared to nature's original. Nano 3D printing is another novel method for bioinspiration. Plants and animals have particular properties which are often related to their nano- and micro-scale surface structures. For example, research has been conducted to mimic the superhydrophobicity of Salvinia molesta leaves, the adhesiveness of gecko toes on slippery surfaces, and moth antennae, which inspire new approaches to detecting chemical leaks, drugs and explosives. References https://www.researchgate.net/publication/330246880_Biomimicry_Exploring_Research_Challenges_Gaps_and_Tools_Proceedings_of_ICoRD_2019_Volume_1/ See also Bio-inspired computing Bio-inspired engineering Bio-inspired photonics Bio-inspired robotics Paleo-inspiration
Bioinspiration
Engineering,Biology
921
2,974,898
https://en.wikipedia.org/wiki/Spiritual%20successor
A spiritual successor (sometimes called a spiritual sequel) is a product or fictional work that is similar to, or directly inspired by, another previous product or work, but (unlike a traditional prequel or sequel) does not explicitly continue the product line or media franchise of its predecessor, and is thus only a successor "in spirit". Spiritual successors often have similar themes and styles to their preceding material, but are generally a distinct intellectual property. In fiction, the term generally refers to a work by a creator that shares similarities with one of their earlier works, but is set in a different continuity and features distinct characters and settings. Such works may arise when licensing issues prevent a creator from releasing a direct sequel using the same copyrighted characters and names as the original. The term is also used more broadly to describe a pastiche work that intentionally evokes similarities to pay homage to other influential works, but is also distinct enough to avoid copyright infringement. In literature Arthur Conan Doyle's Sherlock Holmes stories, published between 1887 and 1927, drew a large number of pastiches from other authors, as early as the 1900s, seeking to capture the same mystery and spirit as Doyle's writings. Subsequently, Doyle and his publishers, and since then Doyle's estate, have aggressively enforced copyright on the Holmes character, often requiring authors publishing stories to change any use of Holmes' name to something else. The name "Herlock Sholmes" became one of the more common variations on this, notably in Maurice Leblanc's Arsène Lupin versus Herlock Sholmes, with the Sholmes character having a personality similar to, but not exactly like, Holmes's, to further head off potential copyright issues. In and around the 1950s, the character Solar Pons, a pastiche of Holmes, appeared in several books not authorized by the estate of Conan Doyle. These copyright issues have continued into contemporary times: in the case Klinger v. Conan Doyle Estate, Ltd. (2014), it was determined that the characters of Holmes and Watson were in the public domain. However, certain story elements were under copyright until 2023. In films and television In films and television shows, spiritual successor often describes similar works by the same creator, or starring the same cast. For example, the show Parks and Recreation is a spiritual successor to The Office. Both are workplace mockumentaries developed by Greg Daniels, featuring satirical humor and characters being filmed by an in-universe documentary film crew. The film 10 Cloverfield Lane was not originally scripted with any connection to Cloverfield. When the film was acquired by Bad Robot, producer J. J. Abrams recognized a common element of a giant monster attack between the two films, and chose to market 10 Cloverfield Lane as a spiritual successor to Cloverfield to help bring interest to the newer film, which allowed him to establish a franchise he could build upon in the future. Spiritual successors are common in Indian film industries, particularly Bollywood, where films marketed as sequels do not share continuity with their predecessors. The 2006 film Superman Returns was created as a spiritual sequel to Superman: The Movie and Superman II, with no references to Superman III or Superman IV: The Quest for Peace, though the Arrowverse's Crisis on Infinite Earths would later confirm that the latter two sequels had occurred within the timeline established in the 2006 film.
The 2022 film Chip 'n Dale: Rescue Rangers was created as a spiritual sequel to the 1988 film Who Framed Roger Rabbit; both films showcase worlds where cartoon characters coexist with humans. The 2022 miniseries We Own This City was described as a spiritual successor to the 2002–08 series The Wire in that both are street-level crime dramas set in Baltimore and both are produced by David Simon for HBO. In video games Games by the same studio Spiritual successor games are sometimes made by the same studio as the original, but with a new title due to licensing issues. Some examples of these include: The Dark Souls series by FromSoftware was inspired by the studio's earlier game, Demon's Souls, an exclusive title for the PlayStation 3. Because Sony Interactive Entertainment held the rights to Demon's Souls, the studio was unable to produce a direct sequel on other platforms, leading them to create a new property with similar gameplay mechanics. Demon's Souls itself was a spiritual successor to King's Field. Irrational Games' BioShock is a spiritual successor to their earlier System Shock 2. While System Shock 2 was met with critical acclaim, it was considered a commercial failure, and publisher Electronic Arts would not allow a third title in the series. After several years and other projects at Irrational, as well as the studio's acquisition by a new publisher, 2K Games, the studio developed BioShock, with a similar free-form narrative structure. Shadow of the Colossus was considered a spiritual successor to Ico by Fumito Ueda, who directed both games as leader of Team Ico. Ueda expressed that he did not necessarily want a direct canonical connection between the games, but that both had similar narrative themes and elements that he wanted players to interpret on their own. Created by Facepunch Studios, Sandbox (stylized as s&box) is an upcoming spiritual successor to Garry's Mod. Unlike the latter, which originated as a sandbox mod of the Source engine, s&box is a game development platform built on top of Source 2. Games by the same staff Alternatively, a successor may be developed by some of the staff who worked on the preceding game, under a new studio name. Examples of these include: Yooka-Laylee is a spiritual successor evoking the style and gameplay of Rare's Banjo-Kazooie. It was developed by Playtonic Games, which consisted of many former Rare staff members, including composer Grant Kirkhope. Yooka and Laylee, the game's animal protagonists, serve as direct stand-ins for the original game's Banjo and Kazooie. Mighty No. 9 closely resembles the gameplay and character design of the Mega Man series, which project lead Keiji Inafune worked on before leaving Capcom, and is considered a spiritual successor. Bloodstained: Ritual of the Night is considered a spiritual successor to the Castlevania series, created by Koji Igarashi, who had led development of several Castlevania games before leaving Konami. A number of games from Bullfrog Productions have spawned spiritual successors in the years after the studio was closed by Electronic Arts in 2001, with these projects typically led by former Bullfrog staff who have founded their own studios. These include Godus by Peter Molyneux's studio 22cans, succeeding Populous; 5 Lives Studios' Satellite Reign, succeeding Syndicate Wars; and Two Point Hospital by Mark Webley and Gary Carr's Two Point Studios, succeeding Theme Hospital.
P.N.03 has been called the spiritual predecessor of Bayonetta for its "combat...with stylish dance-inspired movements" and "flashy, energetic, intense" gameplay and character design. P.N.03 director Shinji Mikami later co-founded PlatinumGames, the studio that developed Bayonetta, and Bayonetta director and PlatinumGames co-founder Hideki Kamiya also directed Resident Evil 2, Devil May Cry, and Viewtiful Joe, the last of which was part of the Capcom Five with P.N.03. Common themes only The term is also more broadly applied to video games developed by a different studio with no connection to the original, and simply inspired by the gameplay, aesthetics or other elements of the preceding work. Examples of such games include: The game Cities: Skylines (along with other city-builder games) is considered a spiritual successor to the SimCity series, both focusing on constructing and managing a simulated city. Axiom Verge is a side-scrolling Metroidvania game that succeeds the Metroid series. The Mother series (known as EarthBound outside Japan) has directly inspired a number of pixel-art, role-playing indie games featuring children in playable character roles as spiritual successors to the series. These include Undertale and Citizens of Earth. War for the Overworld (succeeding Dungeon Keeper) crossed through several of these categories over the course of the development. Originating as a fan-made direct sequel to Dungeon Keeper 2, the game then became a spiritual successor with only thematic connection after moving away from the Dungeon Keeper IP. Finally, the hiring of returning voice actor Richard Ridings presented a direct staff connection to the original. In sports In sports, the Ravens–Steelers rivalry is considered the spiritual successor to the older Browns–Steelers rivalry due to the original Cleveland Browns relocation to Baltimore, as well as the reactivated Browns having a 6–30 record against the Steelers since returning to the league in 1999. In other industries The Honda CR-Z is regarded as the spiritual successor to the second generation Honda CR-X in both name and exterior design, despite a nearly two decade time difference in production. The Toyota Fortuner SUV is a spiritual successor to the Toyota 4Runner SUV mainly because they both share the same platform as the Hilux pickup truck. The Canon Cat computer was Jef Raskin's spiritual successor to the Apple Macintosh. See also Canon (fiction) Continuation novel Phoenix club (sports) Reboot (fiction) Remake Revisionism (fictional) Sequel Spin-off (media) Gaiden Digression References Spiritual successor Sequel, spiritual Film and video terminology Video game terminology
Spiritual successor
Technology
1,917
10,070,974
https://en.wikipedia.org/wiki/Minimal%20prime%20%28recreational%20mathematics%29
In recreational number theory, a minimal prime is a prime number for which there is no shorter subsequence of its digits in a given base that forms a prime. In base 10 there are exactly 26 minimal primes: 2, 3, 5, 7, 11, 19, 41, 61, 89, 409, 449, 499, 881, 991, 6469, 6949, 9001, 9049, 9649, 9949, 60649, 666649, 946669, 60000049, 66000049, 66600049 . For example, 409 is a minimal prime because there is no prime among the shorter subsequences of the digits: 4, 0, 9, 40, 49, 09. The subsequence does not have to consist of consecutive digits, so 109 is not a minimal prime (because 19 is prime). But it does have to be in the same order; so, for example, 991 is still a minimal prime even though a subset of the digits can form the shorter prime 19 by changing the order. A short computational sketch of this subsequence check is given after the references below. Similarly, there are exactly 32 composite numbers which have no shorter composite subsequence: 4, 6, 8, 9, 10, 12, 15, 20, 21, 22, 25, 27, 30, 32, 33, 35, 50, 51, 52, 55, 57, 70, 72, 75, 77, 111, 117, 171, 371, 711, 713, 731 . There are 146 primes congruent to 1 mod 4 which have no shorter subsequence that is a prime congruent to 1 mod 4: 5, 13, 17, 29, 37, 41, 61, 73, 89, 97, 101, 109, 149, 181, 233, 277, 281, 349, 409, 433, 449, 677, 701, 709, 769, 821, 877, 881, 1669, 2221, 3001, 3121, 3169, 3221, 3301, 3833, 4969, 4993, 6469, 6833, 6949, 7121, 7477, 7949, 9001, 9049, 9221, 9649, 9833, 9901, 9949, ... There are 113 primes congruent to 3 mod 4 which have no shorter subsequence that is a prime congruent to 3 mod 4: 3, 7, 11, 19, 59, 251, 491, 499, 691, 991, 2099, 2699, 2999, 4051, 4451, 4651, 5051, 5651, 5851, 6299, 6451, 6551, 6899, 8291, 8699, 8951, 8999, 9551, 9851, ... Other bases Minimal primes can be generalized to other bases. It can be shown that there are only a finite number of minimal primes in every base. Equivalently, every sufficiently large prime contains a shorter subsequence that forms a prime. The base 12 minimal primes written in base 10 are listed in . The number of minimal (probable) primes in base n is 1, 2, 3, 3, 8, 7, 9, 15, 12, 26, 152, 17, 228, 240, 100, 483, 1280, 50, 3463, 651, 2601, 1242, 6021, 306, (17608 or 17609), 5664, 17215, 5784, (57296 or 57297), 220, ... The length of the largest minimal (probable) prime in base n is 2, 2, 3, 2, 5, 5, 5, 9, 4, 8, 45, 8, 32021, 86, 107, 3545, (≥111334), 33, (≥110986), 449, (≥479150), 764, 800874, 100, (≥136967), (≥8773), (≥109006), (≥94538), (≥174240), 1024, ... The largest minimal (probable) prime in base n (written in base 10) is 2, 3, 13, 5, 3121, 5209, 2801, 76695841, 811, 66600049, 29156193474041220857161146715104735751776055777, 388177921, ... (next term has 35670 digits) The number of minimal composites in base n is 1, 3, 4, 9, 10, 19, 18, 26, 28, 32, 32, 46, 43, 52, 54, 60, 60, 95, 77, 87, 90, 94, 97, 137, 117, 111, 115, 131, 123, 207, ... The length of the largest minimal composite in base n is 4, 4, 3, 3, 3, 4, 3, 3, 2, 3, 3, 4, 3, 3, 2, 3, 3, 4, 3, 3, 2, 3, 3, 4, 2, 3, 2, 3, 3, 4, ... Notes References Chris Caldwell, The Prime Glossary: minimal prime, from the Prime Pages A research of minimal primes in bases 2 to 30 Minimal primes and unsolved families in bases 2 to 30 Minimal primes and unsolved families in bases 28 to 50 J. Shallit, Minimal primes, Journal of Recreational Mathematics, 30:2, pp. 113–117, 1999-2000.
PRP records, search by form 8*13^n+183 (primes of the form 8{0}111 in base 13), n=32020 PRP records, search by form (51*21^n-1243)/4 (primes of the form C{F}0K in base 21), n=479149 PRP records, search by form (106*23^n-7)/11 (primes of the form 9{E} in base 23), n=800873 Classes of prime numbers Base-dependent integer sequences
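As a rough illustration of the base-10 definition above, the following short Python sketch tests whether a prime has any proper digit subsequence that is itself prime, treating a subsequence with a leading zero such as "09" as the number 9, as in the 409 example. It is illustrative only; the function names and the 10,000 search limit are arbitrary choices, not taken from any cited source.

from itertools import combinations

def is_prime(n):
    # Simple trial division; adequate for the small numbers searched here.
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def is_minimal_prime(p):
    # p is minimal if it is prime and no proper subsequence of its digits,
    # kept in order but not necessarily contiguous, forms a prime.
    if not is_prime(p):
        return False
    digits = str(p)
    for length in range(1, len(digits)):
        for idx in combinations(range(len(digits)), length):
            if is_prime(int("".join(digits[i] for i in idx))):
                return False
    return True

# Searching below 10,000 reproduces the first 20 entries of the base-10 list:
# 2, 3, 5, 7, 11, 19, 41, 61, 89, 409, 449, 499, 881, 991, 6469, 6949, 9001, 9049, 9649, 9949
print([n for n in range(2, 10000) if is_minimal_prime(n)])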
Minimal prime (recreational mathematics)
Mathematics
1,349
636,913
https://en.wikipedia.org/wiki/Goi%C3%A2nia%20accident
The Goiânia accident was a radioactive contamination accident that occurred on September 13, 1987, in Goiânia, Goiás, Brazil, after an unsecured radiotherapy source was stolen from an abandoned hospital site in the city. It was subsequently handled by many people, resulting in four deaths. About 112,000 people were examined for radioactive contamination and 249 of them were found to have been contaminated. In the consequent cleanup operation, topsoil had to be removed from several sites, and several houses were demolished. All the objects from within those houses, including personal possessions, were seized and incinerated. Time magazine has identified the accident as one of the world's "worst nuclear disasters" and the International Atomic Energy Agency (IAEA) called it "one of the world's worst radiological incidents". Description of the source The radiation source in the Goiânia accident was a small capsule containing about of highly radioactive caesium chloride (a caesium salt made with a radioisotope, caesium-137) encased in a shielding canister made of lead and steel. The source was positioned in a container of the wheel type, where the wheel turns inside the casing to move the source between the storage and irradiation positions. The activity of the source was 74 terabecquerels (TBq) in 1971. The International Atomic Energy Agency (IAEA) describes the container as an "international standard capsule". It was 51 millimeters (2 inches) in diameter and 48 mm (1.8 inches) long. The specific activity of the active solid was about 814 TBq·kg−1 of caesium-137, an isotope whose half life is 30 years. The dose rate at one meter from the source was 4.56 grays per hour (456 rad·h−1). While the serial number of the device was unknown, hindering the ability to verify its identity, the device was thought to have been made in the U.S. at Oak Ridge National Laboratory as a radiation source for radiation therapy at the Goiânia hospital. The IAEA states that the source contained when it was taken and that about of contamination had been recovered during the cleanup operation. This means that remained in the environment; it would have decayed to about by 2016. Events Hospital abandonment The (IGR), a private radiotherapy institute in Goiânia, was northwest of , the administrative center of the city. When IGR moved to its new premises in 1985, it left behind a caesium-137-based teletherapy unit purchased in 1977. The fate of the abandoned site was disputed in court between IGR and the Society of Saint Vincent de Paul, then owner of the premises. On September 11, 1986, the Court of Goiás stated it had knowledge of the abandoned radioactive material in the building. Four months before the theft, on May 4, 1987, Saura Taniguti, then director of Ipasgo, the institute of insurance for civil servants, used police force to prevent one of the owners of IGR, Carlos Figueiredo Bezerril, from removing the radioactive material that had been left behind. Figueiredo then warned the president of Ipasgo, Lício Teixeira Borges, that he should take responsibility "for what would happen with the caesium bomb". The Court of Goiás posted a security guard to protect the site. Meanwhile, the owners of IGR wrote several letters to the National Nuclear Energy Commission (CNEN), warning them about the danger of keeping a teletherapy unit at an abandoned site, but they could not remove the equipment on their own once a court order prevented them from doing so. 
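A rough back-of-the-envelope check of the figures above: assuming simple exponential decay of caesium-137 (half-life about 30 years) from the stated activity of 74 TBq in 1971, the total activity of the source at the time of the 1987 theft and in 2016 can be estimated as follows. This is illustrative only; the IAEA report remains the authoritative account, and the calculation ignores the split between recovered and unrecovered material.

# Python sketch: activity A(t) = A0 * (1/2)**(t / half_life)
a0, half_life = 74.0, 30.0                   # TBq in 1971, years
for year in (1987, 2016):
    activity = a0 * 0.5 ** ((year - 1971) / half_life)
    print(year, round(activity, 1), "TBq")   # roughly 51 TBq in 1987 and 26 TBq in 2016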
Theft of the source On September 13, 1987, the guard tasked with protecting the site did not show up for work. Roberto dos Santos Alves and Wagner Mota Pereira illegally entered the partially demolished IGR site. They partially disassembled the teletherapy unit and placed the source assembly in a wheelbarrow to later take to Roberto's home. They thought they might get some scrap value for the unit. They began dismantling the equipment. That same evening, they both began to vomit due to radiation sickness. The following day, Pereira began to experience diarrhea and dizziness, and his left hand began to swell. He later developed a burn on his hand in the same size and shape as the aperture, and he underwent partial amputation of several fingers. On September 15, Pereira visited a local clinic, where he was diagnosed with a foodborne illness; he was told to return home and rest. Roberto, however, continued with his efforts to dismantle the equipment and eventually freed the caesium capsule from its protective rotating head. His prolonged exposure to the radioactive material led to his right forearm becoming ulcerated, requiring amputation on October 14. Opening the capsule On September 16, Roberto punctured the capsule's aperture window with a screwdriver, allowing him to see a deep blue light coming from the tiny opening he had created. He inserted the screwdriver and successfully scooped out some of the glowing substance. Thinking it was perhaps a type of gunpowder, he tried to light it, but the powder would not ignite. The exact mechanism by which the blue light was generated was not known at the time the IAEA report of the incident was written, though it was thought to be either ionized air glow, fluorescence, or Cherenkov radiation associated with the absorption of moisture by the source; a similar blue light was observed in 1988 at Oak Ridge National Laboratory in the United States during the disencapsulation of a caesium-137 source. Source is sold and dismantled On September 18, Roberto sold the items to a nearby scrapyard. That night, Devair Alves Ferreira, the owner of the scrapyard, noticed the blue glow from the punctured capsule. Thinking the capsule's contents were valuable or supernatural, he immediately brought it into his house. Over the next three days, he invited friends and family to view the strange glowing powder. On September 21, at the scrapyard, one of Ferreira's friends (identified as "EF1" in the IAEA report) freed several rice-sized grains of the glowing material from the capsule using a screwdriver. Ferreira began to share some of them with various friends and family members. That same day, his wife, 37-year-old Maria Gabriela Ferreira, began to fall ill. On September 25, 1987, Devair Ferreira sold the scrap metal to a third scrapyard. Ivo and his daughter The day before the sale to the third scrapyard, on September 24, Ivo, Devair's brother, successfully scraped some additional dust out of the source and took it to his house a short distance away. There he spread some of it on the concrete floor. His six-year-old daughter, Leide das Neves Ferreira, later ate an egg sandwich while sitting on the floor. She was also fascinated by the blue glow of the powder, applying it to her body and showing it off to her mother. The egg sandwich was also exposed to dust from the powder; Leide absorbed 1.0 GBq and received a total dose of 6.0 Gy, a fatal dose for which medical intervention was ineffective. 
Maria Gabriela Ferreira notifies authorities Maria Gabriela Ferreira had been the first to notice that many people around her had become severely ill at the same time. On September 28, 1987 – fifteen days after the item was found – she reclaimed the materials from the rival scrapyard and transported them to a hospital. Source's radioactivity is detected On the morning of September 29, a visiting medical physicist used a scintillation counter to confirm the presence of radioactivity and persuaded the authorities to take immediate action. The city, state, and national governments were all aware of the incident by the end of the day. Health outcomes News of the radiation incident was broadcast on local, national, and international media. Within days, nearly 130,000 people in Goiânia flooded local hospitals, concerned that they might have been exposed. Of those, 249 were indeed found to be contaminated – some with radioactive residue still on their skin – through the use of Geiger counters. Eventually, twenty people showed signs of radiation sickness and required treatment. Fatalities Ages in years are given, with dosages listed in grays (Gy). Admilson Alves de Souza, aged 18 (5.3 Gy), was an employee of Devair Ferreira who worked on the radioactive source. He developed lung damage, internal bleeding, and heart damage, and died October 28, 1987. Leide das Neves Ferreira, aged 6 (6.0 Gy), was the daughter of Ivo Ferreira. When an international team arrived to treat her, she was discovered confined to an isolated room in the hospital because the staff were afraid to go near her. She gradually experienced swelling in the upper body, hair loss, kidney and lung damage, and internal bleeding. She died on October 23, 1987, of "septicemia and generalized infection" at the Marcilio Dias Navy Hospital, in Rio de Janeiro. She was buried in a common cemetery in Goiânia, in a special fiberglass coffin lined with lead to prevent the spread of radiation. Despite these measures, news of her impending burial caused a riot of more than 2,000 people in the cemetery on the day of her burial, all fearing that her corpse would poison the surrounding land. Rioters tried to prevent her burial by using stones and bricks to block the cemetery roadway. She was buried despite this interference. Maria Gabriela Ferreira, a 37-year-old woman (exposed to 5.7 Gy), was the wife of scrapyard owner Devair Ferreira and the person who turned the material over to the authorities. She became sick about three days after coming into contact with the substance. Her condition worsened, and she developed hair loss and internal bleeding, especially of the limbs, eyes, and digestive tract. She suffered mental confusion, diarrhea, and acute renal insufficiency before dying on October 23, 1987, the same day as her niece, of "septicemia and generalized infection", about a month after exposure. Israel Batista dos Santos, aged 22 (4.5 Gy), was also an employee of Devair Ferreira who worked on the radioactive source primarily to extract the lead. He developed serious respiratory and lymphatic complications, was eventually admitted to the hospital, and died six days later on October 27, 1987. Devair Ferreira survived despite receiving 7 Gy of radiation. He died in 1994 of cirrhosis aggravated by depression and binge drinking. Ivo Ferreira died of emphysema in 2003. Other individuals The outcomes for the 46 most contaminated people are shown in the bar chart below. Several people survived high doses of radiation.
This is thought in some cases to be because the dose was fractionated. Given time, the body's repair mechanisms will reverse cell damage caused by radiation. If the dose is spread over a long time period, these mechanisms can mitigate the effects of radiation poisoning. Other affected people Afterwards, about 112,000 people were examined for radioactive contamination; 249 were found to have significant levels of radioactive material in or on their body. Of this group, 129 people had internal contamination. The majority of the internally contaminated people only suffered small doses (, corresponding to less than about 1 in 200 excess risk of developing cancer later in life). A thousand people were identified as having suffered a dose which was greater than one year of background radiation; it is thought that 97% of these people had a dose of between 10 and 200 mSv (between 1 in 1,000 and 1 in 50 excess risk of developing cancer as a result). In 2007, the Oswaldo Cruz Foundation determined that the rate of caesium-137-related diseases is the same in Goiânia accident survivors as in the population at large. Nevertheless, compensation is still distributed to survivors, who suffer radiation-related prejudice in everyday life. Legal matters In addition to a public civil action for damages to the environment that was brought in September 1995 by the Federal Public Prosecution Service (Department of Justice), together with the State of Goiás’ Public Prosecution Service, before the 8th Federal Court of Goiânia, legal proceedings were also brought against the Federal Union; the National Nuclear Energy Commission; the State of Goiás (through its Health Department); the Social Security Institute for Civil Servants in the State of Goiás – IPASGO, which at the time of the accident was the private owner of the land where the IGR was located; the four medical doctors who owned IGR; and the clinic’s physicist, who was also the supervisor. On March 17, 2000, the 8th Federal Court of Goiás ordered the defendants to pay compensation of R$1.3 million (nearly US$750,000) to the Defence of the Diffused Rights Fund, a federal fund for the compensation of damage to the environment, consumers, property and rights of artistic, historic, or cultural value and other collective rights. In his ruling, the judge excluded the state of Goiás and the Federal Union from the payment of compensation. The CNEN was ordered to pay compensation of R$1 million, to guarantee medical and psychological treatment for the direct and indirect victims of the accident and their descendants down to the third generation, to provide transportation to medical exams for the most serious victims, and was made responsible for the medical follow-up of the people of the city of Abadia de Goiás. The Social Security Institute for Civil Servants in the State of Goiás, IPASGO, was ordered to pay a fine of R$100 000, plus interest as of 13 September 1987, the date of removal of the caesium-137 capsule. As the accident occurred before the promulgation of the Federal Constitution of 1988 and because the substance was acquired by the clinic and not by the individual owners, the court could not declare the owners of IGR liable. However, one of the owners was fined R$100 000 because he was found liable for the abandoned state of the IGR building where the caesium source was kept, including the removal of gates, windows, timberwork and the roof in May 1987.
The clinic’s physicist was also fined R$100 000 because he was the technician responsible for the control of the medical manipulation of the radiological device. Although the two thieves were not included as defendants in the public civil suit, the judgement of the court found them directly responsible for the accident. If they had been arraigned as defendants, they certainly would have been convicted, as their actions led to strict (no-fault) liability. However, in terms of criminal intent, they were not aware of the seriousness of their actions in removing the caesium source from its location, and they had no knowledge of the dangers of the radiological device; moreover, there was no danger sign erected in the abandoned clinic in order to ward off intruders. Cleanup Objects and places Topsoil had to be removed from several sites, and several houses were demolished. All the objects from within those houses were removed and examined. Those that were found to be free of radioactivity were wrapped in plastic bags, while those that were contaminated were either decontaminated or disposed of as waste. In industry, the choice between decontaminating or disposing of objects is based only on the economic value of the object and the ease of decontamination. In this case, the IAEA recognized that to reduce the psychological impact of the event, greater effort should have been made to clean up items of personal value, such as jewelry and photographs. It is not clear from the IAEA report to what degree this was practised. Means and methods After the houses were emptied, vacuum cleaners were used to remove dust, and plumbing was examined for radioactivity. Painted surfaces could be scraped, while floors were treated with acid and Prussian blue mixtures. Roofs were vacuumed and hosed, but two houses had to have their roofs removed. The waste from the cleanup was moved out of the city to a remote place for storage. An aeroradiometric survey was carried out at low altitude over Goiânia. The radiometric equipment and materials available at the IRD were quickly transported and mounted on a Eurocopter AS350 Écureuil helicopter provided by the police of the state of Goiás. Potassium alum dissolved in hydrochloric acid was used on clay, concrete, soil, and roofs. Caesium has a high affinity for many clays. Organic solvents, followed by potassium alum dissolved in hydrochloric acid, were used to treat waxed/greased floors and tables. Sodium hydroxide solutions, also followed by dissolved potassium alum, were used to treat synthetic floors, machines and typewriters. Prussian blue was used to internally decontaminate many people, although by the time it was applied, much of the radioactive material had already migrated from the bloodstream to muscle tissue, greatly hampering its effectiveness. Urine from victims was treated with ion-exchange resin to compact the waste for ease of storage. Recovery considerations The cleanup operation was much harder for this event than it could have been because the source was opened and the active material was water-soluble. A sealed source need only be picked up, placed in a lead container, and transported to the radioactive waste storage. In the recovery of lost sources, the IAEA recommends careful planning and using a crane or other device to place shielding (such as a pallet of bricks or a concrete block) near the source to protect recovery workers.
Contamination locations The Goiânia accident spread significant radioactive contamination throughout the Aeroporto, Central, and Ferroviários districts. Even after the cleanup, 7 TBq of radioactivity remained unaccounted for. Some of the key contamination sites: Goiânia's (IGR) (), despite being the origin of the radiation source, suffered no actual exposure or breach of radioactive contents. IGR moved its clinic to another location in the city, with the previous site having been replaced around 2000 with the modernized (Goiânia Convention Center). Roberto dos Santos' house () on Rua 57. The radioactive source was here for about six days, and it was partially broken open here. Devair Ferreira's scrapyard (), on Rua 15A ("Junkyard I") in the Aeroporto section of the city, had possession of the items for seven days. The caesium container was entirely dismantled, spreading significant contamination. Extreme radiation levels of up to 1.5 Sv·h−1 were found by investigators in the middle of the scrapyard. Ivo Ferreira's house () ("Junkyard II"), at 1F Rua 6. Some of the contamination was spread about the house, fatally poisoning Leide das Neves Ferreira and Maria Gabriela Ferreira. The adjacent junkyard scavenged the remainder of parts from the IGR facility. The premises were heavily contaminated, with radiation dose rates up to 2 Sv·h−1. "Junkyard III" (). This junkyard had possession of the items for three days until they were sent away. (). Here, the substance was quarantined, and an official cleanup response began. Other contamination was also found in or on: Three buses 42 houses fourteen cars five pigs 50,000 rolls of toilet paper Legacy Disposal of the capsule The original teletherapy capsule was seized by the Brazilian military as soon as it was discovered, and since then the empty capsule has been on display at the ("School of Specialized Instruction") in Rio de Janeiro as a memento to those who participated in the cleanup of the contaminated area. Research In 1991, a group of researchers collected blood samples from highly exposed survivors of the incident. Subsequent analysis resulted in the publication of numerous scientific articles. In popular culture A 1990 film, (Caesium-137 – The Nightmare of Goiânia), a dramatisation of the incident, was made by Roberto Pires. It won several awards at the 1990 Festival de Brasília. An episode of Star Trek: The Next Generation, "Thine Own Self," was partially inspired by the accident. Economic implications Much of the radioactive material was cleared after testing. However, a gloom hung over the local residents, who were asked for certificates stating that they were free of radioactivity. Bans on products from Goiânia also created a public outcry, with residents citing unjust discrimination. Foundation The state government of Goiás established the in February 1988, both to study the extent of contamination of the population as a result of the incident and to render aid to those affected.
See also List of civilian radiation accidents Ciudad Juárez cobalt-60 contamination incident, similar disaster in Mexico Radioactive scrap metal List of orphan source incidents 1990 Clinic of Zaragoza radiotherapy accident 1962 Mexico City radiation accident Nuclear and radiation accidents and incidents Samut Prakan radiation accident Therac-25 References External links Detailed Report from the International Atomic Energy Agency, Vienna, 1988 Similar accidents over the world (short overview) The Goiânia Radiation Incident Health disasters in Brazil Disasters in Brazil 1987 industrial disasters 1987 health disasters 1987 in Brazil Radiation accidents and incidents Radioactive waste Waste disposal incidents Caesium Goiânia Radioactively contaminated areas 1987 in the environment September 1987 events in South America INES Level 5 accidents Civilian nuclear power accidents 1987 disasters in Brazil
Goiânia accident
Chemistry,Technology
4,426
47,747,907
https://en.wikipedia.org/wiki/Penicillium%20subarcticum
Penicillium subarcticum is a species of fungus in the genus Penicillium. References Further reading subarcticum Fungi described in 2002 Fungus species
Penicillium subarcticum
Biology
36
617,475
https://en.wikipedia.org/wiki/Fenobucarb
Fenobucarb is a carbamate insecticide, also widely known as BPMC. It is a pale yellow or pale red liquid that is insoluble in water; it is used as an agricultural insecticide, especially for the control of hemipteran pests on rice and cotton, and is moderately toxic to humans. Synonyms 2-(1-methylpropyl)phenol methylcarbamate; 2-(1-methylpropyl)phenyl methylcarbamate; 2-sec-Butylphenyl N-methylcarbamate; BPMC; fenocarb; N-methyl o-sec-butylphenyl carbamate Tradenames Fenobucarb, Osbac, Bassa, Bipvin, Baycarb, etc. LD50 Male Mouse 340 mg/kg Male Rat 410 mg/kg References Acetylcholinesterase inhibitors Carbamate insecticides Phenol esters Aromatic carbamates Sec-Butyl compounds
Fenobucarb
Chemistry
205
16,277,372
https://en.wikipedia.org/wiki/Name%20calling
Name-calling is a form of argument in which insulting or demeaning labels are directed at an individual or group. This phenomenon is studied in a variety of academic disciplines, including anthropology, child psychology, political science, and rhetoric. In politics and public opinion Politicians sometimes resort to name-calling during political campaigns or public events with the intention of gaining advantage over, or defending themselves from, an opponent or critic. Often such name-calling takes the form of labelling an opponent as an unreliable and untrustworthy source, such as use of the term "flip-flopper". Common misconceptions Gratuitous verbal abuse or "name-calling" is not on its own an example of the abusive argumentum ad hominem logical fallacy. The fallacy occurs only if personal attacks are employed to devalue a speaker's argument by attacking the speaker; personal insults in the middle of an otherwise sound argument are not fallacious ad hominem attacks. References Harassment and bullying Informal fallacies Names Pejorative terms
Name calling
Biology
224
44,301,349
https://en.wikipedia.org/wiki/Point%20of%20care%20medical%20information%20summary
Point of care medical information summaries are defined as "web-based medical compendia specifically designed to deliver predigested, rapidly accessible, comprehensive, periodically updated, and evidence-based information" to healthcare providers. Products BMJ Best Practice DynaMed UpToDate See also Clinical decision support system References Evidence-based medicine Medical databases Medical websites Online databases Health informatics
Point of care medical information summary
Biology
80
39,738,199
https://en.wikipedia.org/wiki/H.%20Narayan%20Murthy
Hosur Narayan Murthy (H. N. Murthy) (; 1924–2011) was an Indian clinical psychologist, writer, philosopher, Sanskrit scholar and teacher who headed the department of clinical psychology at the National Institute of Mental Health and Neuro Sciences (NIMHANS) in Bangalore. He was born in the city of Bangalore in 1924 to Brahmin parents Hosur Ramaswamaiah Subba Rao and Smt. Rajamma. Murthy's father was an official at the Iron and Steel Plant in Bhadravathi town in Karnataka. Education Murthy finished his basic schooling at Bhadravathi before coming to Mysore for his college education. While in Mysore, Murthy enrolled in Maharaja's College, Mysore, to pursue his bachelor's degree (B.A.) in psychology under the professorship of Dr M.V.Gopalaswamy. His dissertation for the bachelor's degree at Maharaja's College, Mysore, was "National Stereotypes" – a comparative study of how Indians perceive foreigners and how foreigners perceive Indians (the stereotyped impressions of each other). On completion in 1952, Murthy was awarded the "Bhabha Memorial Gold Medal" as the best scholar in psychology and philosophy. M.V.Gopalaswamy Dr M.V.Gopalaswamy, mentor and professor to Murthy, was among the founding fathers of the Department of Psychology at the University of Mysore (1924). A student of Dr Charles Spearman, under whom he secured his PhD in London, M.V.Gopalaswamy returned to India with a stand-alone transponder with which he started the first amateur radio station. He is credited with coining the term "Akashvani" for All India Radio. An avid reader and intellectual, Gopalaswamy had interests in "Tantra Philosophy" and "Modern Psychology" that saw him spend hours with another distinguished scholar at the University of Mysore, the historian S. Srikanta Sastri. Incidentally, Murthy was S. Srikanta Sastri's nephew. Murthy would secure his master's degree (M.A.) in psychology under Dr M.V.Gopalaswamy in 1954 at the University of Mysore, after which he held positions at the "Ranchi European Lunatic Asylum" and the "Mysore State Mental Hospital" before pursuing doctoral studies abroad. Doctoral studies After securing his master's degree in psychology from the University of Mysore, Murthy gained admission to the Katholieke Universiteit Leuven (Catholic University of Leuven) in Belgium to pursue doctoral studies in psychology. His chosen subject was "Causality in Experimental Psychology". In addition to the award of a PhD, the title of "Professor Excelsior" was conferred on Murthy. While in Europe, Murthy absorbed the nuances of behavioural therapy and would later work on adapting it to Indian conditions back home. His keen interest in manic depressive psychosis and schizophrenia probably took root while he was at Leuven. Contributions On his return to India from Belgium, Murthy joined the National Institute of Mental Health and Neuro Sciences (NIMHANS), Bangalore, where for the next two decades he dedicated his efforts to introducing the concept of behavioural therapy in the Indian setting. He was responsible for the introduction of clinical neuropsychology and behavioural medicine to India, and developed a number of diagnostic scales for classifying mental disorders. Behavioural therapy was a novel approach in India in the early 1970s and, under Murthy's guidance, it embodied a holistic view of the psychiatric patient, taking into account not only the patient but also family members in the effort to provide effective counselling.
The success of this approach saw admissions to the mental health facility decline for the first time in years. Murthy also drew up various questionnaires (Multiphasic Questionnaires) to better assess and quantify the psychological state of the patient, and many of these are still in use. Some of the diagnostic scales incorporated in the "Multiphasic Personality Questionnaire" formulated by Murthy are shown here: Depressive Scale Paranoid Scale Schizophrenia Scale Manic Scale Depressive Anxiety Scale Hysteria Scale K Scale His work on "The relation of cyclothymia-schizothymia to extroversion-introversion" is significant and finds a place in the Kyoto University Psychology Department syllabus. His contribution towards "Organic Brain Dysfunction" is acknowledged in an article concerning a "battery of tests to detect organic brain dysfunction" which appeared in the January issue of the Journal of Clinical Psychology. Another somewhat controversial yet intriguing subject which Murthy took time to examine was claims of reincarnation made while in a psychotic state. A brief report of the same appeared in the September 1978 issue of the Indian Journal of Clinical Psychology. His comparative study of "suicides" with "attempted suicides" in women was published in 1983 in the Indian Journal of Psychological Medicine and assumed significance in the Asian setting against the backdrop of newer economic realities. Dr H.N.Murthy aided a treatise examining the etiology (Nidana) of mental diseases in Ayurveda and was successful in bridging allopathic medicine and its integral concepts with the age-old system of Indian medicine (for example, "Apasmara", or epilepsy). A comparison of "Yogis" and "Control subjects" with regard to their voluntary control of personality traits and psychological adjustment patterns appeared in the Indian Journal of Physiology and Pharmacology in 1987 and was a turning point in the scientific analysis of yogic claims of self-control of personality and self-actuation. Legacy Murthy remained a bachelor. His legacy today stems from the scores of students who trained under him. Murthy is best remembered for guiding Dr Padma Murthy's doctoral work on "Psychology in Music". Another of his students, Dr M.S.Thimmappa, would later occupy the chair of Vice Chancellor of Bangalore University. His personal library, with a collection of books running into the thousands, was his treasured possession. Deeply spiritual and philosophical, he was an ardent devotee and follower of the "Ramakrishna Mutt". Murthy died on 22 August 2011, aged 87, in Bangalore. In his memory, the "Dr H.N.Murthy Oration" is arranged every year by the Indian Journal of Clinical Psychology, at which budding psychologists deliver papers in his honour on subjects concerning behavioural medicine and biofeedback. On Murthy's death, his student Dr M.S.Thimmappa (ex-vice chancellor of Bangalore University) dedicated a tribute to his mentor. References External links A Tribute to a Mentor: Dr H.N.Murthy by Dr M.S.Thimmappa Dr H. N. Murthy – A Remembrance Indian psychologists Behavior therapy Biofeedback Scientists from Bengaluru 1924 births 2011 deaths Maharaja's College, Mysore alumni Catholic University of Leuven (1834–1968) alumni Indian medical academics
H. Narayan Murthy
Biology
1,474
10,016,001
https://en.wikipedia.org/wiki/List%20of%20organisms%20named%20after%20famous%20people
In biological nomenclature, organisms often receive scientific names that honor a person. A taxon (e.g., species or genus; plural: taxa) named in honor of another entity is an eponymous taxon, and names specifically honoring a person or persons are known as patronyms. Scientific names are generally formally published in peer-reviewed journal articles or larger monographs along with descriptions of the named taxa and ways to distinguish them from other taxa. Following rules of Latin grammar, species or subspecies names derived from a man's name often end in -i or -ii if named for an individual, and -orum if named for a group of men or mixed-sex group, such as a family. Similarly, those named for a woman often end in -ae, or -arum for two or more women. There are exceptions such as Strumigenys ayersthey. This list includes organisms named after famous individuals or ensembles (including bands and comedy troupes), but excludes companies, institutions, ethnic groups or nationalities, and populated places. It does not include organisms named for fictional entities (which can be found in the List of organisms named after works of fiction), for biologists or other natural scientists, nor for associates or family members of researchers who are not otherwise notable. The scientific names are given as originally described (their basionyms): subsequent research may have placed species in different genera, or rendered them taxonomic synonyms of previously described taxa. Some of these names are unavailable in the zoological sense or illegitimate in the botanical sense due to senior homonyms already having the same name. Lists See also List of bacterial genera named after personal names List of rose cultivars named after people List of taxa named by anagrams List of unusual biological names List of organisms named after works of fiction Notes Named after celebrities Taxonomy (biology) Organisms Organisms Organisms Taxonomic lists
List of organisms named after famous people
Biology
381
1,655,028
https://en.wikipedia.org/wiki/Apple%20community
The Apple community consists of the users, media, and third-party companies interested in Apple Inc. and its products. They discuss rumors, future products, news stories, and support of Apple's products. Apple has a devoted following, especially for the Apple II, Mac, iPod, and iPhone, as well as for luminary staff members. The personal computer revolution, combined with Apple's vertical integration of its products and services, has increased the company's popularity. Apple's corporate policy of extreme secrecy about future products intensifies interest in the company's activities. Magazines Before the popular use of the internet, early Apple-related publications were available in traditional print media form, often but not always moving later to online publication. MacLife (stylized as Mac|Life) is a San Francisco-based American publication, originally known as MacAddict between September 1996 and February 2007. Published by Future US, it started as a monthly magazine, focusing on the Macintosh personal computer and other related Apple products. While originally a print publication, it is now exclusively a digital product, available through its app, which can be obtained via the App Store. MacUser was a print magazine published biweekly and then monthly by Dennis Publishing Ltd. and licensed by Felden in the UK. Its content was aimed at Mac users in the design sector, with its Masterclass tutorials and technical advice. It began publishing in 1985, ceasing publication in 2015. In 1985, Felix Dennis's Dennis Publishing, the creators of MacUser in the UK, licensed its name and mouse-rating symbol to Ziff-Davis Publishing for use worldwide as a completely separate publication, later consolidated into Macworld. Macworld is one of the oldest magazine publications focused on Apple products and software, starting in 1984. It received competition with the launch of the US version of MacUser magazine the following year. The two magazines merged as Macworld in 1997. In September 2014, it discontinued its print edition, instead focusing on its website and YouTube coverage. Online publishers 9to5Mac was founded in 2007 by Seth Weintraub as an Apple news website originally focused on Macs in the enterprise. Since then, the website has expanded to cover all things Apple. 9to5Mac is known as the leading website within the Apple news community in terms of breaking impactful news. The site gained fame in its earlier years for publishing the first photos of the third-generation iPod nano, the original iPod touch, early photos of the first iPhone, and details about Apple's still-in-use aluminum manufacturing process for laptops. In recent years, 9to5Mac published the first accurate details about the iPhone 4S, Siri, Apple's move from Google Maps to Apple Maps, new health and fitness applications, OS X/macOS updates, and the Apple Watch. The site also published the first photos of the white iPad 2, iPhone 5, and the iPad Air. AppleInsider launched in 1997 as a news and rumor website for Apple products and services at appleinsider.com. It includes a forum for discussion of news stories and other community news. In the late 1990s, Apple successfully sued a John Doe from AppleInsider's boards with the username "Worker Bee" for revealing information on what became the Apple Pro Mouse. It is a rare case of Apple following through on threats of a suit. The case was settled out of court.
iMore was an Apple-enthusiast website founded in 2008, previously as Phonedifferent, with its main focus on all aspects of Apple devices (also featuring sections on several other platforms). Gerald Lynch was the final editor in chief. It was run by editor-in-chief Rene Ritchie with a small editing staff until 2020; Joseph Keller was the editor until mid-2022. Along with the usual news and rumors, iMore often featured in-depth technical details of Apple software and operating systems, aimed at explaining to readers how and why certain things have been done by Apple, in their wider context of achieving better usability and design goals. It ceased publication and closed its member forums on November 1, 2024. Low End Mac is an Apple-centric website founded in 1997 to support Mac users with early Mac hardware and growing over time to cover the entire range of Macs, as each line eventually had model years falling into the “vintage and obsolete” category. Low End Mac's primary focus is on aging Apple gear, primarily Macs, but touching on iPhone, iPad, iPod, Apple TV, and other devices as well. It is published by its founder Daniel Knight with a small volunteer writing staff. MacDailyNews launched in September 2002. MacDailyNews was cited by CNet as its source for the launch of the first Verizon (CDMA-capable) iPhone after Christmas, 2010; the phone was announced by Verizon in early 2011. It was cited by MacRumors with a forecast for the second generation Mac Pro in April 2013; Apple announced it in June. MacOS Rumors was founded by Ethan C. Allen in 1995 as the first known "Apple rumors" website on the early web. His early work was noticed and referenced by other print media including CNET, Forbes, and Mac the Knife in MacWEEK. Allen was only 16 at the time but had developed extensive source contacts. Apple was unhappy with some of the releases on the site which proved to be early and accurate. Apple requested several times that he stop releasing data from his sources. After a brief shutdown of the site at the request of Apple, MacOS Rumors was obtained by Ryan Meader after a domain expiration within two years of its creation. Originally with Ethan, the site posted most of its rumors based on screenshots and info sent via email from followers. With Ryan at the helm, MacOS Rumors collected content from message boards and Usenet posts but later claimed (unsubstantiated) to have developed contacts inside Apple. After several successful years, MacOS Rumors gained a reputation for being inaccurate. After the MacOS Rumors site was obtained by Ryan in 1997, Ethan tried to briefly return to Apple rumors with his sources by creating a new website titled Mac Rumor Mill. Apple quickly caught onto the new site and was able to shut it down with threatened legal action. MacRumors was launched in February 2000 by Arnold Kim, as an aggregator of Mac-related rumors and reports around the web. MacRumors attempts to keep track of the rumor community by consolidating reports and cross-referencing claims, along with having extensive online forums for most Apple products and services. SecureMac was founded in 1999 as a Mac-oriented security news portal. The site has expanded to cover a wide range of digital security and privacy topics, but has retained its focus on Apple products and software. In 2016, SecureMac launched The Checklist, a weekly security-themed podcast aimed at iOS and macOS users. 
SecureMac has been credited with discovering several significant macOS threats, including the Boonana Trojan and a new variant of the rogue security program Mac Defender. Think Secret launched in 1999. Apple filed a lawsuit against the company alleging it printed stories containing Apple trade secrets. In December 2007, the lawsuit was settled with no sources being disclosed; however, the site was shut down, finally closing on February 14, 2008. In the year leading up to the closing of the site, Think Secret correctly predicted an aluminum shell iMac, development of a touchscreen-based iPod starting in 2006, and the relatively BlackBerry-esque form factor of the new iPod Nano. However, there were still some reports that turned out to be false, such as its prediction of the demise of the Mac Mini, which instead received an upgrade in mid-2007. TidBITS was founded by Adam Engst and Tonya Engst in April 1990, making it the oldest online Apple publication and the second-oldest Internet publication. TidBITS covers Apple news and publishes detailed technical advice for users. It started as an email newsletter before the rise of the Web, began publishing on the Web in 1994, and continues to provide information via both the Web and weekly email distribution. The Unofficial Apple Weblog (TUAW) was founded in 2004, and claimed to be "a resource for all things Apple and beyond". TUAW published news stories, credible rumors, and how-tos covering a variety of topics daily. TUAW was known for its rumor roundups, seeking to dispel false Apple rumors from around the web. On February 3, 2015, TUAW was shut down by its owners, Weblogs, Inc. In July 2024, its domain name was sold to ad agency Web Orange Limited (WOL) and was reused as an AI-generated content farm. The Mac Observer publishes Mac, iPhone, and Apple-related news, reviews, tips, and podcasts. The site was launched on December 29, 1998, by Dave Hamilton and Bryan Chaffin. The site has evolved from just providing news and reviews to now hosting popular podcasts, columns, and more. Macintosh User Groups Macintosh User Groups (MUGs) are groups of Macintosh users that started after the 1985 creation of the Apple User Group Connection (AUGC). France Former Macintosh division lead Jean-Louis Gassée, a Frenchman, was an advocate in France for personal computing, and contributed to Apple's "remarkable" success in that country. Until 2007, the Apple Expo trade show was held yearly in Paris and attended by Apple, which held several keynotes there. French Apple news sites include Mac4Ever, MacBidouille, MacGeneration, and MacPlus. In 1996, Macworld bought Golden magazine, and renamed it Macworld France. Two years later, it was renamed after merging with the magazine; in 2003, the French version of the magazine changed its name to Macworld. Bernard Le Du, a French Macworld journalist, later started his own magazine, . is another notable French magazine, which went online-only in 2017. Apple evangelists An Apple evangelist is a technology evangelist for Apple products. The term "software evangelist" was coined by Mike Murray of the Macintosh division. Apple's first evangelist was Mike Boich, a member of the original Macintosh development team. Alain Rossmann succeeded him. Their job was to promote Apple products, primarily by working with third-party developers. Boich and Rossmann later cofounded Radius. One prominent Apple evangelist is Apple Fellow Guy Kawasaki. He is credited as one of the first to carry out evangelism marketing of a computer platform through a weblog.
Apple formerly had a "Why Mac?" evangelist site. The company subsequently ran Get a Mac, which gave numerous reasons why "PC users" should switch to Macs. Several third-parties still host and maintain Apple evangelism websites, many of which are listed above. The AppleMasters program was a similar endeavor in the late nineties. In the early days of the Macintosh computer, the primary function of an evangelist was to convince software developers to write software products for the Macintosh. When software developers need help from within Apple, evangelists will often act as go-betweens, helping the developers to find the right people at Apple to talk to. This role is now filled by the Apple Developer program, led by Phil Schiller. Apple's response Apple's official stance on speculation around future product releases is to refrain from discussing any products or outside speculation until release. Historically, Apple has often used legal means, such as cease and desist orders, in order to retain trade secrets, intellectual property, or confidential corporate information, when needed. Typically, Apple has primarily pursued the leakers of information themselves, rather than any sites containing rumors on their products. However, Apple's suit against Think Secret in 2005 targeted whether these sites have the right to knowingly publish this protected information. Staff are also required to sign non-disclosure clauses within the company. During his January 10, 2006, keynote address to the Macworld Conference & Expo in San Francisco, Apple CEO Steve Jobs lampooned the rumor community by pretending to create a "Super Secret Apple Rumors" podcast during his demonstration of new features in GarageBand. On October 16, 2014, at an Apple Special Event keynote, Craig Federighi pretended to "triple down on secrecy" by hiring Stephen Colbert as Supreme Commander of Secrecy. He lampooned the "spaceship" rumors. References Community Macintosh websites Apple Inc. user groups Fandom
Apple community
Technology
2,537
16,174,922
https://en.wikipedia.org/wiki/Energy%20subsidy
Energy subsidies are measures that keep prices for customers below market levels, or for suppliers above market levels, or reduce costs for customers and suppliers. Energy subsidies may be direct cash transfers to suppliers, customers, or related bodies, as well as indirect support mechanisms, such as tax exemptions and rebates, price controls, trade restrictions, and limits on market access. During FY 2016–22, most US federal subsidies were for renewable energy producers (primarily biofuels, wind, and solar), low-income households, and energy-efficiency improvements. During FY 2016–22, nearly half (46%) of federal energy subsidies were associated with renewable energy, and 35% were associated with energy end uses. Federal support for renewable energy of all types more than doubled, from $7.4 billion in FY 2016 to $15.6 billion in FY 2022. The International Renewable Energy Agency tracked some $634 billion in energy-sector subsidies in 2020, and found that around 70% were fossil fuel subsidies. About 20% went to renewable power generation, 6% to biofuels and just over 3% to nuclear. Overview of all sources of energy If governments choose to subsidize one particular source of energy more than another, that choice can impact the environment. That distinction informs the following discussion, which covers energy subsidies for all sources of energy in general. Main arguments for energy subsidies are: Security of supply – subsidies are used to ensure adequate domestic supply by supporting indigenous fuel production in order to reduce import dependency, or supporting overseas activities of national energy companies, or to secure the electricity grid. Environmental and health improvement – subsidies are used to improve health by reducing air pollution, and to fulfill international climate pledges. For example, the IEA says the purchase price of heat pumps should be subsidized. Economic benefits – subsidies in the form of reduced prices are used to stimulate particular economic sectors or segments of the population, e.g. alleviating poverty and increasing access to energy in developing countries. With regard to fossil fuel prices in particular, Ian Parry, the lead author of a 2021 IMF report, said, "Some countries are reluctant to raise energy prices because they think it will harm the poor. But holding down fossil fuel prices is a highly inefficient way to help the poor, because most of the benefits accrue to wealthier households. It would be better to target resources towards helping poor and vulnerable people directly." Employment and social benefits – subsidies are used to maintain employment, especially in periods of economic transition. In 2021, with regard to fossil fuel prices in particular, Ipek Gençsü, at the Overseas Development Institute, said: "[Subsidy reform] requires support for vulnerable consumers who will be impacted by rising costs, as well for workers in industries which simply have to shut down. It also requires information campaigns, showing how the savings will be redistributed to society in the form of healthcare, education and other social services. Many people oppose subsidy reform because they see it solely as governments taking something away, and not giving back."
Main arguments against energy subsidies are: Some energy subsidies, such as fossil fuel subsidies (oil, coal, and gas subsidies), counter the goal of sustainable development: they may lead to higher consumption and waste, exacerbate the harmful effects of energy use on the environment, create a heavy burden on government finances, weaken the potential for economies to grow, and undermine private and public investment in the energy sector. Also, most benefits from fossil fuel subsidies in developing countries go to the richest 20% of households. Such subsidies can also impede the expansion of distribution networks and the development of more environmentally benign energy technologies, and they do not always help the people who need them most. A study conducted by the World Bank finds that subsidies to the large commercial businesses that dominate the energy sector are not justified. However, under some circumstances it is reasonable to use subsidies to promote access to energy for the poorest households in developing countries. Energy subsidies should encourage access to modern energy sources, not cover the operating costs of companies. A study conducted by the World Resources Institute finds that energy subsidies often go to capital-intensive projects at the expense of smaller or distributed alternatives. Types of energy subsidies are below. ("Fossil-fuel subsidies generally take two forms. Production subsidies...[and]...consumption subsidies."): Direct financial transfers – grants to suppliers; grants to customers; low-interest or preferential loans to suppliers. Preferential tax treatments – rebates or exemption on royalties, duties, supplier levies and tariffs; tax credit; accelerated depreciation allowances on energy supply equipment. Trade restrictions – quotas, technical restrictions and trade embargoes. Energy-related services provided by government at less than full cost – direct investment in energy infrastructure; public research and development. Regulation of the energy sector – demand guarantees and mandated deployment rates; price controls; market-access restrictions; preferential planning consent and controls over access to resources. Failure to impose external costs – environmental externality costs; energy security risks and price volatility costs. Depletion Allowance – allows a deduction from gross income of up to ~27% for the depletion of exhaustible resources (oil, gas, minerals). Overall, energy subsidies require coordination and integrated implementation, especially in light of globalization and increased interconnectedness of energy policies, and thus their regulation at the World Trade Organization is often seen as necessary. Support for new technology Early support of solar power by the United States and Germany greatly helped renewable energy commercialization to reduce greenhouse gas emissions worldwide, but may not have helped local manufacturing. Support for nuclear fusion continues, although it is not expected to be commercially viable in time to contribute to countries' net zero targets. Energy storage research is also supported. 
Fossil fuel subsidies See also Fossil fuel subsidies Corporate welfare Building-integrated photovoltaics Government subsidies Feed-in tariff Gasoline subsidies Renewable Energy Certificates Renewable energy commercialization Renewable energy payments Stranded assets Financial incentives for photovoltaics References Bibliography External links Fossil Fuel Subsidy Tracker- a collaboration between the Organisation for Economic Co-operation and Development (OECD) and the International Institute for Sustainable Development (IISD) Global Subsidies Initiative - a project of the International Institute for Sustainable Development OECD-IEA analysis of fossil fuels and other support - OECD European countries spend billions a year on fossil fuel subsidies, survey shows (2017) Energy economics Renewable energy commercialization Subsidies
Energy subsidy
Environmental_science
1,318
334,955
https://en.wikipedia.org/wiki/Therapeutic%20index
The therapeutic index (TI; also referred to as therapeutic ratio) is a quantitative measurement of the relative safety of a drug with regard to risk of overdose. It is a comparison of the amount of a therapeutic agent that causes toxicity to the amount that causes the therapeutic effect. The related terms therapeutic window or safety window refer to a range of doses optimized between efficacy and toxicity, achieving the greatest therapeutic benefit without resulting in unacceptable side-effects or toxicity. Classically, for clinical indications of an approved drug, TI refers to the ratio of the dose of the drug that causes adverse effects at an incidence/severity not compatible with the targeted indication (e.g. toxic dose in 50% of subjects, TD50) to the dose that leads to the desired pharmacological effect (e.g. efficacious dose in 50% of subjects, ED50). In contrast, in a drug development setting TI is calculated based on plasma exposure levels. In the early days of pharmaceutical toxicology, TI was frequently determined in animals as the lethal dose of a drug for 50% of the population (LD50) divided by the minimum effective dose for 50% of the population (ED50). In modern settings, more sophisticated toxicity endpoints are used. For many drugs, severe toxicities in humans occur at sublethal doses, which limit their maximum dose. A higher safety-based therapeutic index is preferable to a lower one; an individual would have to take a much higher dose of the drug to reach the lethal threshold than the dose taken to induce the therapeutic effect. However, a lower efficacy-based therapeutic index is preferable to a higher one; an individual would have to take a higher dose of the drug to reach the toxic threshold than the dose taken to induce the therapeutic effect. Generally, a drug or other therapeutic agent with a narrow therapeutic range (i.e. having little difference between toxic and therapeutic doses) may have its dosage adjusted according to measurements of its blood levels in the person taking it. This may be achieved through therapeutic drug monitoring (TDM) protocols. TDM is recommended for use in the treatment of psychiatric disorders with lithium due to its narrow therapeutic range. Types Based on the efficacy and safety of drugs, there are two types of therapeutic index: Safety-based therapeutic index The safety-based index is calculated as TI = LD50/ED50. It is desirable for the value of LD50 to be as large as possible, to decrease the risk of lethal effects and increase the therapeutic window. In the above formula, TI increases as the difference between LD50 and ED50 increases—hence, a higher safety-based therapeutic index indicates a larger therapeutic window, and vice versa. Efficacy-based therapeutic index The efficacy-based index is calculated as TI = ED50/TD50. Ideally the ED50 is as low as possible for faster drug response and a larger therapeutic window, whereas a drug's TD50 is ideally as large as possible to decrease the risk of toxic effects. In the above equation, the greater the difference between ED50 and TD50, the smaller the value of TI. Hence, a lower efficacy-based therapeutic index indicates a larger therapeutic window. Protective index Similar to the safety-based therapeutic index, the protective index uses TD50 (median toxic dose) in place of LD50. For many substances, toxicity can occur at levels far below lethal effects (that cause death), and thus, if toxicity is properly specified, the protective index is often more informative about a substance's relative safety. 
Nevertheless, the safety-based therapeutic index (LD50/ED50) is still useful as it can be considered an upper bound of the protective index, and the former also has the advantages of objectivity and easier comprehension. Since the protective index (PI) is calculated as TD50 divided by ED50, the efficacy-based therapeutic index (ED50/TD50) is mathematically the reciprocal of the protective index. All the above types of therapeutic index can be used in both pre-clinical trials and clinical trials. Drug development A low efficacy-based therapeutic index (ED50/TD50) and a high safety-based therapeutic index (LD50/ED50) are preferable for a drug to have a favorable efficacy vs safety profile. At the early discovery/development stage, the clinical TI of a drug candidate is unknown. However, understanding the preliminary TI of a drug candidate is of utmost importance as early as possible since TI is an important indicator of the probability of successful development. Recognizing drug candidates with potentially suboptimal TI at the earliest possible stage helps to initiate mitigation or potentially re-deploy resources. TI is the quantitative relationship between the pharmacological efficacy and toxicological safety of a drug, without considering the nature of the pharmacological or toxicological endpoints themselves. However, to convert a calculated TI into something useful, the nature and limitations of pharmacological and/or toxicological endpoints must be considered. Depending on the intended clinical indication, the associated unmet medical need and/or the competitive situation, more or less weight can be given to either the safety or efficacy of a drug candidate in order to create a well-balanced indication-specific efficacy vs safety profile. In general, it is the exposure of a given tissue to drug (i.e. drug concentration over time), rather than dose, that drives the pharmacological and toxicological effects. For example, at the same dose there may be marked inter-individual variability in exposure due to polymorphisms in metabolism, drug–drug interactions (DDIs) or differences in body weight or environmental factors. These considerations emphasize the importance of using exposure instead of dose to calculate TI. To account for delays between exposure and toxicity, the TI for toxicities that occur after multiple dose administrations should be calculated using the exposure to drug at steady state rather than after administration of a single dose. A review published by Muller and Milton in Nature Reviews Drug Discovery critically discusses TI determination and interpretation in a translational drug development setting for both small molecules and biotherapeutics. Range of therapeutic indices The therapeutic index varies widely among substances, even within a related group. For instance, the opioid painkiller remifentanil is very forgiving, offering a therapeutic index of 33,000:1, while diazepam, a benzodiazepine sedative-hypnotic and skeletal muscle relaxant, has a less forgiving therapeutic index of 100:1. Morphine is even less so with a therapeutic index of 70. Less safe are cocaine (a stimulant and local anaesthetic) and ethanol (colloquially, the "alcohol" in alcoholic beverages, a widely available sedative consumed worldwide): the therapeutic indices for these substances are 15:1 and 10:1, respectively. Paracetamol, alternatively known by its trade names Tylenol or Panadol, also has a therapeutic index of 10. Even less safe are drugs such as digoxin, a cardiac glycoside; its therapeutic index is approximately 2:1. 
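The ratios above can be made concrete with a short numerical sketch. The doses below are hypothetical illustration values rather than data for any real drug, chosen only so that the safety-based index comes out near the morphine figure of about 70 quoted above; the helper functions are likewise invented for the example.

```python
def safety_ti(ld50: float, ed50: float) -> float:
    """Classical, safety-based therapeutic index: median lethal dose over median effective dose."""
    return ld50 / ed50

def efficacy_ti(ed50: float, td50: float) -> float:
    """Efficacy-based therapeutic index: median effective dose over median toxic dose."""
    return ed50 / td50

def protective_index(td50: float, ed50: float) -> float:
    """Protective index: median toxic dose over median effective dose."""
    return td50 / ed50

# Hypothetical doses in mg/kg, for illustration only.
ed50, td50, ld50 = 2.0, 40.0, 140.0
print(safety_ti(ld50, ed50))         # 70.0 -> a wide window, comparable to the morphine figure above
print(efficacy_ti(ed50, td50))       # 0.05 -> low values indicate a wide window
print(protective_index(td50, ed50))  # 20.0 -> the reciprocal of the efficacy-based index (1 / 0.05)
```

The printout also shows the reciprocal relationship between the protective index and the efficacy-based index described above.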
Other examples of drugs with a narrow therapeutic range, which may require drug monitoring both to achieve therapeutic levels and to minimize toxicity, include dimercaprol, theophylline, warfarin and lithium carbonate. Some antibiotics and antifungals require monitoring to balance efficacy with minimizing adverse effects, including gentamicin, vancomycin, amphotericin B (nicknamed 'amphoterrible' for this very reason), and polymyxin B. Cancer radiotherapy Radiotherapy aims to shrink tumors and kill cancer cells using high-energy radiation. The energy arises from x-rays, gamma rays, or charged or heavy particles. The therapeutic ratio in radiotherapy for cancer treatment is determined by the maximum radiation dose for killing cancer cells and the minimum radiation dose causing acute or late morbidity in cells of normal tissues. Both of these parameters have sigmoidal dose–response curves. A favorable outcome occurs when the dose–response of tumor tissue is greater than that of normal tissue at the same dose, meaning that the treatment is effective against the tumor and does not cause serious morbidity to normal tissue. Conversely, when the responses of the two tissues overlap, the treatment is highly likely to cause serious morbidity to normal tissue while treating the tumor ineffectively. The mechanism of radiation therapy is categorized as direct or indirect radiation. Both direct and indirect radiation induce DNA mutation or chromosomal rearrangement during the repair process. Direct radiation creates a DNA free radical from radiation energy deposition that damages DNA. Indirect radiation occurs from radiolysis of water, creating a free hydroxyl radical, hydronium and an electron. The hydroxyl radical transfers its radical to DNA. Alternatively, together with hydronium and an electron, a free hydroxyl radical can damage the base region of DNA. Cancer cells cause an imbalance of signals in the cell cycle. G1 and G2/M arrest were found to be major checkpoints in irradiated human cells. G1 arrest delays the repair mechanism before synthesis of DNA in S phase and mitosis in M phase, suggesting it is a key checkpoint for survival of cells. G2/M arrest occurs when cells need to repair after S phase but before mitotic entry. It is known that S phase is the most resistant to radiation and M phase is the most sensitive to radiation. p53, a tumor suppressor protein that plays a role in G1 and G2/M arrest, has enabled much of the understanding of the cell cycle response to radiation. For example, irradiation of myeloid leukemia cells leads to an increase in p53 and a decrease in the level of DNA synthesis. Patients with ataxia telangiectasia have hypersensitivity to radiation due to the delayed accumulation of p53. In this case, cells are able to replicate without repair of their DNA, becoming prone to the incidence of cancer. Most cells are in G1 and S phase. Irradiation at G2 phase showed increased radiosensitivity, and thus G1 arrest has been a focus for therapeutic treatment. Irradiation of a tissue induces a response in both irradiated and non-irradiated cells. It was found that even cells up to 50–75 cell diameters distant from irradiated cells exhibit a phenotype of enhanced genetic instability such as micronucleation. This suggests an effect on cell-to-cell communication such as paracrine and juxtacrine signaling. Normal cells do not lose their DNA repair mechanism whereas cancer cells often lose it during radiotherapy. However, the high-energy radiation can override the ability of damaged normal cells to repair, leading to additional risk of carcinogenesis. 
This suggests a significant risk associated with radiation therapy. Thus, it is desirable to improve the therapeutic ratio during radiotherapy. Employing IG-IMRT, protons and heavy ions are likely to minimize the dose to normal tissues by altered fractionation. Molecular targeting of the DNA repair pathway can lead to radiosensitization or radioprotection. Examples are direct and indirect inhibitors of DNA double-strand break repair. Direct inhibitors target proteins (PARP family) and kinases (ATM, DNA-PKcs) that are involved in DNA repair. Indirect inhibitors target tumor cell signaling proteins such as EGFR and insulin-like growth factor. The effective therapeutic index can be affected by targeting, in which the therapeutic agent is concentrated in its desirable area of effect. For example, in radiation therapy for cancerous tumors, shaping the radiation beam precisely to the profile of a tumor in the "beam's eye view" can increase the delivered dose without increasing toxic effects, though such shaping might not change the therapeutic index. Similarly, chemotherapy or radiotherapy with infused or injected agents can be made more efficacious by attaching the agent to an oncophilic substance, as in peptide receptor radionuclide therapy for neuroendocrine tumors and in chemoembolization or radioactive microspheres therapy for liver tumors and metastases. This concentrates the agent in the targeted tissues and lowers its concentration in others, increasing efficacy and lowering toxicity. Safety ratio Sometimes the term safety ratio is used, particularly when referring to psychoactive drugs used for non-therapeutic purposes, e.g. recreational use. In such cases, the effective dose is the amount and frequency that produces the desired effect, which can vary, and can be greater or less than the therapeutically effective dose. The Certain Safety Factor, also referred to as the Margin of Safety (MOS), is the ratio of the dose that is lethal to 1% of the population to the dose that is effective in 99% of the population (LD1/ED99). This is a better safety index than the LD50 for materials that have both desirable and undesirable effects, because it factors in the ends of the spectrum, where doses may be necessary to produce a response in one person but can, at the same dose, be lethal in another. Synergistic effect A therapeutic index does not consider drug interactions or synergistic effects. For example, the risk associated with benzodiazepines increases significantly when they are taken with alcohol, opiates, or stimulants compared with being taken alone. The therapeutic index also does not take into account the ease or difficulty of reaching a toxic or lethal dose. This is more of a consideration for recreational drug users, as the purity can be highly variable. Therapeutic window The therapeutic window (or pharmaceutical window) of a drug is the range of drug dosages which can treat disease effectively without having toxic effects. Medication with a small therapeutic window must be administered with care and control, frequently measuring the blood concentration of the drug, to avoid harm. Medications with narrow therapeutic windows include theophylline, digoxin, lithium, and warfarin. Optimal biological dose Optimal biological dose (OBD) is the quantity of a drug that will most effectively produce the desired effect while remaining in the range of acceptable toxicity. 
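Because the margin of safety compares the tails of two dose–response curves rather than their midpoints, it can help to see how LD1 and ED99 might be derived. The sketch below is illustrative only: it assumes both the efficacy and the lethality curves follow a log-logistic (Hill-type) shape, and the median doses and the shared slope are made-up values, not measurements of any real substance.

```python
def dose_at_quantile(d50: float, hill_slope: float, p: float) -> float:
    """Dose at which a fraction p of subjects respond, on a log-logistic dose-response curve."""
    return d50 * (p / (1.0 - p)) ** (1.0 / hill_slope)

# Hypothetical medians (mg/kg) and a shared Hill slope, for illustration only.
ed50, ld50, hill_slope = 5.0, 500.0, 4.0
ed99 = dose_at_quantile(ed50, hill_slope, 0.99)  # dose effective in 99% of subjects (~15.8)
ld01 = dose_at_quantile(ld50, hill_slope, 0.01)  # dose lethal to 1% of subjects (~158.5)
margin_of_safety = ld01 / ed99                   # ~10, versus LD50/ED50 = 100 for the same curves
print(round(ed99, 1), round(ld01, 1), margin_of_safety)
```

With steep curves the margin of safety stays close to LD50/ED50; with shallow curves the tails spread apart and the margin shrinks, which is exactly what the index is meant to capture.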
Maximum tolerated dose The maximum tolerated dose (MTD) refers to the highest dose of a radiological or pharmacological treatment that will produce the desired effect without unacceptable toxicity. The purpose of administering MTD is to determine whether long-term exposure to a chemical might lead to unacceptable adverse health effects in a population, when the level of exposure is not sufficient to cause premature mortality due to short-term toxic effects. The maximum dose is used, rather than a lower dose, to reduce the number of test subjects (and, among other things, the cost of testing), to detect an effect that might occur only rarely. This type of analysis is also used in establishing chemical residue tolerances in foods. Maximum tolerated dose studies are also done in clinical trials. MTD is an essential aspect of a drug's profile. All modern healthcare systems dictate a maximum safe dose for each drug, and generally have numerous safeguards (e.g. insurance quantity limits and government-enforced maximum quantity/time-frame limits) to prevent the prescription and dispensing of quantities exceeding the highest dosage which has been demonstrated to be safe for members of the general patient population. Patients are often unable to tolerate the theoretical MTD of a drug due to the occurrence of side-effects which are not innately a manifestation of toxicity (not considered to severely threaten a patient's health) but cause the patient sufficient distress and/or discomfort to result in non-compliance with treatment. Such examples include emotional "blunting" with antidepressants, pruritus with opiates, and blurred vision with anticholinergics. See also Drug titration – process of finding the correct dose of a drug Effective dose EC50 IC50 LD50 Hormesis References Pharmacokinetics Life sciences industry
Therapeutic index
Chemistry,Biology
3,160
40,946,450
https://en.wikipedia.org/wiki/Pluteus%20brunneosquamulosus
Pluteus brunneosquamulosus is a species of agaric fungus in the family Pluteaceae. It is found in India. See also List of Pluteus species References External links brunneosquamulosus Fungi described in 2012 Fungi of Asia Fungus species
Pluteus brunneosquamulosus
Biology
62
14,946
https://en.wikipedia.org/wiki/Ice
Ice is water that is frozen into a solid state, typically forming at or below temperatures of 0 °C, 32 °F, or 273.15 K. It occurs naturally on Earth, on other planets, in Oort cloud objects, and as interstellar ice. As a naturally occurring crystalline inorganic solid with an ordered structure, ice is considered to be a mineral. Depending on the presence of impurities such as particles of soil or bubbles of air, it can appear transparent or a more or less opaque bluish-white color. Virtually all of the ice on Earth is of a hexagonal crystalline structure denoted as ice Ih (spoken as "ice one h"). Depending on temperature and pressure, at least nineteen phases (packing geometries) can exist. The most common phase transition to ice Ih occurs when liquid water is cooled below 0 °C (32 °F, 273.15 K) at standard atmospheric pressure. When water is cooled rapidly (quenching), up to three types of amorphous ice can form. Interstellar ice is overwhelmingly low-density amorphous ice (LDA), which likely makes LDA ice the most abundant type in the universe. When cooled slowly, correlated proton tunneling occurs below (, ) giving rise to macroscopic quantum phenomena. Ice is abundant on the Earth's surface, particularly in the polar regions and above the snow line, where it can aggregate from snow to form glaciers and ice sheets. As snowflakes and hail, ice is a common form of precipitation, and it may also be deposited directly by water vapor as frost. The transition from ice to water is melting and from ice directly to water vapor is sublimation. These processes play a key role in Earth's water cycle and climate. In recent decades, ice volume on Earth has been decreasing due to climate change. The largest declines have occurred in the Arctic and in the mountains located outside of the polar regions. The loss of grounded ice (as opposed to floating sea ice) is the primary contributor to sea level rise. Humans have been using ice for various purposes for thousands of years. Some historic structures designed to hold ice to provide cooling are over 2,000 years old. Before the invention of refrigeration technology, the only way to safely store food without modifying it through preservatives was to use ice. Sufficiently solid surface ice makes waterways accessible to land transport during winter, and dedicated ice roads may be maintained. Ice also plays a major role in winter sports. Physical properties Ice possesses a regular crystalline structure based on the molecule of water, which consists of a single oxygen atom covalently bonded to two hydrogen atoms, or H–O–H. However, many of the physical properties of water and ice are controlled by the formation of hydrogen bonds between adjacent oxygen and hydrogen atoms; while it is a weak bond, it is nonetheless critical in controlling the structure of both water and ice. An unusual property of water is that its solid form—ice frozen at atmospheric pressure—is approximately 8.3% less dense than its liquid form; this is equivalent to a volumetric expansion of 9%. The density of ice is 0.9167–0.9168 g/cm3 at 0 °C and standard atmospheric pressure (101,325 Pa), whereas water has a density of 0.9998–0.999863 g/cm3 at the same temperature and pressure. Liquid water is densest, essentially 1.00 g/cm3, at 4 °C and begins to lose its density as the water molecules begin to form the hexagonal crystals of ice when the freezing point is reached. This is due to hydrogen bonding dominating the intermolecular forces, which results in a less compact packing of molecules in the solid. 
The density of ice increases slightly with decreasing temperature and has a value of 0.9340 g/cm3 at −180 °C (93 K). When water freezes, it increases in volume (about 9% for fresh water). The effect of expansion during freezing can be dramatic, and ice expansion is a basic cause of freeze-thaw weathering of rock in nature and damage to building foundations and roadways from frost heaving. It is also a common cause of the flooding of houses when water pipes burst due to the pressure of expanding water when it freezes. Because ice is less dense than liquid water, it floats, and this prevents bottom-up freezing of bodies of water. Instead, a sheltered environment for animal and plant life is formed beneath the floating ice, which protects the underside from short-term weather extremes such as wind chill. Sufficiently thin floating ice allows light to pass through, supporting the photosynthesis of bacterial and algal colonies. When sea water freezes, the ice is riddled with brine-filled channels which sustain sympagic organisms such as bacteria, algae, copepods and annelids. In turn, they provide food for animals such as krill and specialized fish like the bald notothen, fed upon in turn by larger animals such as emperor penguins and minke whales. When ice melts, it absorbs as much energy as it would take to heat an equivalent mass of water by 80 °C. During the melting process, the temperature remains constant at 0 °C. While melting, any energy added breaks the hydrogen bonds between ice (water) molecules. Energy becomes available to increase the thermal energy (temperature) only after enough hydrogen bonds are broken that the ice can be considered liquid water. The amount of energy consumed in breaking hydrogen bonds in the transition from ice to water is known as the heat of fusion. As with water, ice absorbs light at the red end of the spectrum preferentially as the result of an overtone of an oxygen–hydrogen (O–H) bond stretch. Compared with water, this absorption is shifted toward slightly lower energies. Thus, ice appears blue, with a slightly greener tint than liquid water. Since absorption is cumulative, the color effect intensifies with increasing thickness or if internal reflections cause the light to take a longer path through the ice. Other colors can appear in the presence of light absorbing impurities, where the impurity is dictating the color rather than the ice itself. For instance, icebergs containing impurities (e.g., sediments, algae, air bubbles) can appear brown, grey or green. Because ice in natural environments is usually close to its melting temperature, its hardness shows pronounced temperature variations. At its melting point, ice has a Mohs hardness of 2 or less, but the hardness increases to about 4 at a temperature of and to 6 at a temperature of , the vaporization point of solid carbon dioxide (dry ice). Phases Most liquids under increased pressure freeze at higher temperatures because the pressure helps to hold the molecules together. However, the strong hydrogen bonds in water make it different: for some pressures higher than , water freezes at a temperature below . Ice, water, and water vapour can coexist at the triple point, which is exactly 273.16 K (0.01 °C) at a pressure of 611.657 Pa. The kelvin was defined as 1/273.16 of the difference between this triple point and absolute zero, though this definition changed in May 2019. Unlike most other solids, ice is difficult to superheat. In an experiment, ice at −3 °C was superheated to about 17 °C for about 250 picoseconds. 
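Two of the figures in the physical-properties discussion above can be checked with simple arithmetic. The sketch below uses the densities quoted in this article together with approximate standard values for the specific heat of liquid water (about 4.18 kJ/(kg·K)) and the latent heat of fusion of ice (about 334 kJ/kg); those two constants are assumptions, not taken from the article.

```python
# Densities at 0 degrees C quoted in the text (g/cm^3).
rho_ice, rho_water = 0.9167, 0.9998
expansion = rho_water / rho_ice - 1.0
print(f"volume expansion on freezing: {expansion:.1%}")   # roughly 9%

# Approximate standard values for liquid water (assumed, not from the article).
latent_heat_fusion = 334.0   # kJ/kg
specific_heat_water = 4.18   # kJ/(kg*K)
equivalent_warming = latent_heat_fusion / specific_heat_water
print(f"melting 1 kg of ice absorbs as much energy as warming 1 kg of water by {equivalent_warming:.0f} K")  # about 80 K
```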
Subjected to higher pressures and varying temperatures, ice can form in nineteen separate known crystalline phases at various densities, along with hypothetical proposed phases of ice that have not been observed. With care, at least fifteen of these phases (one of the known exceptions being ice X) can be recovered at ambient pressure and low temperature in metastable form. The types are differentiated by their crystalline structure, proton ordering, and density. There are also two metastable phases of ice under pressure, both fully hydrogen-disordered; these are Ice IV and Ice XII. Ice XII was discovered in 1996. In 2006, Ice XIII and Ice XIV were discovered. Ices XI, XIII, and XIV are hydrogen-ordered forms of ices I, V, and XII respectively. In 2009, ice XV was found at extremely high pressures and −143 °C. At even higher pressures, ice is predicted to become a metal; this has been variously estimated to occur at 1.55 TPa or 5.62 TPa. As well as crystalline forms, solid water can exist in amorphous states as amorphous solid water (ASW) of varying densities. In outer space, hexagonal crystalline ice is present in the ice volcanoes, but is extremely rare otherwise. Even icy moons like Ganymede are expected to mainly consist of other crystalline forms of ice. Water in the interstellar medium is dominated by amorphous ice, making it likely the most common form of water in the universe. Low-density ASW (LDA), also known as hyperquenched glassy water, may be responsible for noctilucent clouds on Earth and is usually formed by deposition of water vapor in cold or vacuum conditions. High-density ASW (HDA) is formed by compression of ordinary ice I or LDA at GPa pressures. Very-high-density ASW (VHDA) is HDA slightly warmed to 160 K under 1–2 GPa pressures. Ice from a theorized superionic water may possess two crystalline structures. At pressures in excess of such superionic ice would take on a body-centered cubic structure. However, at pressures in excess of the structure may shift to a more stable face-centered cubic lattice. It is speculated that superionic ice could compose the interior of ice giants such as Uranus and Neptune. Friction properties Ice is "slippery" because it has a low coefficient of friction. This subject was first scientifically investigated in the 19th century. The preferred explanation at the time was "pressure melting" -i.e. the blade of an ice skate, upon exerting pressure on the ice, would melt a thin layer, providing sufficient lubrication for the blade to glide across the ice. Yet, 1939 research by Frank P. Bowden and T. P. Hughes found that skaters would experience a lot more friction than they actually do if it were the only explanation. Further, the optimum temperature for figure skating is and for hockey; yet, according to pressure melting theory, skating below would be outright impossible. Instead, Bowden and Hughes argued that heating and melting of the ice layer is caused by friction. However, this theory does not sufficiently explain why ice is slippery when standing still even at below-zero temperatures. Subsequent research suggested that ice molecules at the interface cannot properly bond with the molecules of the mass of ice beneath (and thus are free to move like molecules of liquid water). These molecules remain in a semi-liquid state, providing lubrication regardless of pressure against the ice exerted by any object. 
However, the significance of this hypothesis is disputed by experiments showing a high coefficient of friction for ice using atomic force microscopy. Thus, the mechanism controlling the frictional properties of ice is still an active area of scientific study. A comprehensive theory of ice friction must take into account all of the aforementioned mechanisms to estimate the friction coefficient of ice against various materials as a function of temperature and sliding speed. Research from 2014 suggests that frictional heating is the most important process under most typical conditions. Natural formation The term that collectively describes all of the parts of the Earth's surface where water is in frozen form is the cryosphere. Ice is an important component of the global climate, particularly in regard to the water cycle. Glaciers and snowpacks are an important storage mechanism for fresh water; over time, they may sublimate or melt. Snowmelt is an important source of seasonal fresh water. The World Meteorological Organization defines several kinds of ice depending on origin, size, shape, influence and so on. Clathrate hydrates are forms of ice that contain gas molecules trapped within their crystal lattice. In the oceans Ice that is found at sea may be in the form of drift ice floating in the water, fast ice fixed to a shoreline or anchor ice if attached to the seafloor. Ice which calves (breaks off) from an ice shelf or a coastal glacier may become an iceberg. The aftermath of calving events produces a loose mixture of snow and ice known as ice mélange. Sea ice forms in several stages. At first, small, millimeter-scale crystals accumulate on the water surface in what is known as frazil ice. As they become somewhat larger and more consistent in shape and cover, the water surface begins to look "oily" from above, so this stage is called grease ice. Then, ice continues to clump together and solidify into flat cohesive pieces known as ice floes. Ice floes are the basic building blocks of sea ice cover, and their horizontal size (defined as half of their diameter) varies dramatically, with the smallest measured in centimeters and the largest in hundreds of kilometers. An area which is over 70% ice on its surface is said to be covered by pack ice. Fully formed sea ice can be forced together by currents and winds to form pressure ridges up to tall. On the other hand, strong wave activity can reduce sea ice to small, regularly shaped pieces, known as pancake ice. Sometimes, wind and wave activity "polishes" sea ice to perfectly spherical pieces known as ice eggs. On land The largest ice formations on Earth are the two ice sheets which almost completely cover the world's largest island, Greenland, and the continent of Antarctica. These ice sheets have an average thickness of over and have existed for millions of years. Other major ice formations on land include ice caps, ice fields, ice streams and glaciers. In particular, the Hindu Kush region is known as the Earth's "Third Pole" due to the large number of glaciers it contains. They cover an area of around , and have a combined volume of between 3,000 and 4,700 km3. These glaciers are nicknamed "Asian water towers", because their meltwater run-off feeds into rivers which provide water for an estimated two billion people. Permafrost refers to soil or underwater sediment which continuously remains below 0 °C for two years or more. 
The ice within permafrost is divided into four categories: pore ice, vein ice (also known as ice wedges), buried surface ice and intrasedimental ice (from the freezing of underground waters). One example of ice formation in permafrost areas is aufeis, layered ice that forms in Arctic and subarctic stream valleys. Ice, frozen in the stream bed, blocks normal groundwater discharge, and causes the local water table to rise, resulting in water discharge on top of the frozen layer. This water then freezes, causing the water table to rise further and repeat the cycle. The result is a stratified ice deposit, often several meters thick. Snow line and snow fields are two related concepts, in that snow fields accumulate on top of and ablate away to the equilibrium point (the snow line) in an ice deposit. On rivers and streams Ice which forms on moving water tends to be less uniform and stable than ice which forms on calm water. Ice jams (sometimes called "ice dams"), when broken chunks of ice pile up, are the greatest ice hazard on rivers. Ice jams can cause flooding, damage structures in or near the river, and damage vessels on the river. Ice jams can cause some hydropower industrial facilities to completely shut down. An ice dam is a blockage from the movement of a glacier which may produce a proglacial lake. Heavy ice flows in rivers can also damage vessels and require the use of an icebreaker vessel to keep navigation possible. Ice discs are circular formations of ice floating on river water. They form within eddy currents, and their position results in asymmetric melting, which makes them continuously rotate at a low speed. On lakes Ice forms on calm water from the shores, a thin layer spreading across the surface, and then downward. Ice on lakes is generally of four types: primary, secondary, superimposed and agglomerate. Primary ice forms first. Secondary ice forms below the primary ice in a direction parallel to the direction of the heat flow. Superimposed ice forms on top of the ice surface from rain or from water which seeps up through cracks in the ice, which often settles when loaded with snow. An ice shove occurs when ice movement, caused by ice expansion and/or wind action, occurs to the extent that ice pushes onto the shores of lakes, often displacing sediment that makes up the shoreline. Shelf ice is formed when floating pieces of ice are driven by the wind piling up on the windward shore. This kind of ice may contain large air pockets under a thin surface layer, which makes it particularly hazardous to walk across. Another dangerous form of rotten ice to traverse on foot is candle ice, which develops in columns perpendicular to the surface of a lake. Because it lacks a firm horizontal structure, a person who has fallen through has nothing to hold onto to pull themselves out. As precipitation Snow and freezing rain Snow crystals form when tiny supercooled cloud droplets (about 10 μm in diameter) freeze. These droplets are able to remain liquid at temperatures lower than , because to freeze, a few molecules in the droplet need to get together by chance to form an arrangement similar to that in an ice lattice; then the droplet freezes around this "nucleus". Experiments show that this "homogeneous" nucleation of cloud droplets only occurs at temperatures lower than . In warmer clouds an aerosol particle or "ice nucleus" must be present in (or in contact with) the droplet to act as a nucleus. 
Our understanding of what particles make efficient ice nuclei is poor – what we do know is that they are very rare compared to the cloud condensation nuclei on which liquid droplets form. Clays, desert dust and biological particles may be effective, although to what extent is unclear. Artificial nuclei are used in cloud seeding. The droplet then grows by condensation of water vapor onto the ice surfaces. An ice storm is a type of winter storm characterized by freezing rain, which produces a glaze of ice on surfaces, including roads and power lines. In the United States, a quarter of winter weather events produce glaze ice, and utilities need to be prepared to minimize damages. Hard forms Hail forms in storm clouds when supercooled water droplets freeze on contact with condensation nuclei, such as dust or dirt. The storm's updraft blows the hailstones to the upper part of the cloud. The updraft dissipates and the hailstones fall down, back into the updraft, and are lifted up again. Hail has a diameter of or more. Within METAR code, GR is used to indicate larger hail, of a diameter of at least and GS for smaller. Stones of , and are the most frequently reported hail sizes in North America. Hailstones can grow to and weigh more than . In large hailstones, latent heat released by further freezing may melt the outer shell of the hailstone. The hailstone then may undergo 'wet growth', where the liquid outer shell collects other smaller hailstones. The hailstone gains an ice layer and grows increasingly larger with each ascent. Once a hailstone becomes too heavy to be supported by the storm's updraft, it falls from the cloud. Hail forms in strong thunderstorm clouds, particularly those with intense updrafts, high liquid water content, great vertical extent, large water droplets, and where a good portion of the cloud layer is below freezing. Hail-producing clouds are often identifiable by their green coloration. The growth rate is maximized at about , and becomes vanishingly small much below as supercooled water droplets become rare. For this reason, hail is most common within continental interiors of the mid-latitudes, as hail formation is considerably more likely when the freezing level is below the altitude of . Entrainment of dry air into strong thunderstorms over continents can increase the frequency of hail by promoting evaporative cooling, which lowers the freezing level of thunderstorm clouds, giving hail a larger volume to grow in. Accordingly, hail is actually less common in the tropics despite a much higher frequency of thunderstorms than in the mid-latitudes because the atmosphere over the tropics tends to be warmer over a much greater depth. Hail in the tropics occurs mainly at higher elevations. Ice pellets (METAR code PL) are a form of precipitation consisting of small, translucent balls of ice, which are usually smaller than hailstones. This form of precipitation is also referred to as "sleet" by the United States National Weather Service. (In British English "sleet" refers to a mixture of rain and snow.) Ice pellets typically form alongside freezing rain, when a wet warm front ends up between colder and drier atmospheric layers. There, raindrops would both freeze and shrink in size due to evaporative cooling. So-called snow pellets, or graupel, form when multiple water droplets freeze onto snowflakes until a soft ball-like shape is formed. 
So-called "diamond dust", (METAR code IC) also known as ice needles or ice crystals, forms at temperatures approaching due to air with slightly higher moisture from aloft mixing with colder, surface-based air. On surfaces As water drips and re-freezes, it can form hanging icicles, or stalagmite-like structures on the ground. On sloped roofs, buildup of ice can produce an ice dam, which stops melt water from draining properly and potentially leads to damaging leaks. More generally, water vapor depositing onto surfaces due to high relative humidity and then freezing results in various forms of atmospheric icing, or frost. Inside buildings, this can be seen as ice on the surface of un-insulated windows. Hoar frost is common in the environment, particularly in the low-lying areas such as valleys. In Antarctica, the temperatures can be so low that electrostatic attraction is increased to the point hoarfrost on snow sticks together when blown by wind into tumbleweed-like balls known as yukimarimo. Sometimes, drops of water crystallize on cold objects as rime instead of glaze. Soft rime has a density between a quarter and two thirds that of pure ice, due to a high proportion of trapped air, which also makes soft rime appear white. Hard rime is denser, more transparent, and more likely to appear on ships and aircraft. Cold wind specifically causes what is known as advection frost when it collides with objects. When it occurs on plants, it often causes damage to them. Various methods exist to protect agricultural crops from frost - from simply covering them to using wind machines. In recent decades, irrigation sprinklers have been calibrated to spray just enough water to preemptively create a layer of ice that would form slowly and so avoid a sudden temperature shock to the plant, and not be so thick as to cause damage with its weight. Ablation Ablation of ice refers to both its melting and its dissolution. The melting of ice entails the breaking of hydrogen bonds between the water molecules. The ordering of the molecules in the solid breaks down to a less ordered state and the solid melts to become a liquid. This is achieved by increasing the internal energy of the ice beyond the melting point. When ice melts it absorbs as much energy as would be required to heat an equivalent amount of water by 80 °C. While melting, the temperature of the ice surface remains constant at 0 °C. The rate of the melting process depends on the efficiency of the energy exchange process. An ice surface in fresh water melts solely by free convection with a rate that depends linearly on the water temperature, T∞, when T∞ is less than 3.98 °C, and superlinearly when T∞ is equal to or greater than 3.98 °C, with the rate being proportional to (T∞ − 3.98 °C)α, with α =  for T∞ much greater than 8 °C, and α =  for in between temperatures T∞. In salty ambient conditions, dissolution rather than melting often causes the ablation of ice. For example, the temperature of the Arctic Ocean is generally below the melting point of ablating sea ice. The phase transition from solid to liquid is achieved by mixing salt and water molecules, similar to the dissolution of sugar in water, even though the water temperature is far below the melting point of the sugar. However, the dissolution rate is limited by salt concentration and is therefore slower than melting. Role in human activities Cooling Ice has long been valued as a means of cooling. 
In 400 BC Iran, Persian engineers had already developed techniques for ice storage in the desert through the summer months. During the winter, ice was transported from harvesting pools and nearby mountains in large quantities to be stored in specially designed, naturally cooled refrigerators, called yakhchal (meaning ice storage). Yakhchals were large underground spaces (up to 5000 m3) that had thick walls (at least two meters at the base) made of a specific type of mortar called sarooj, made from sand, clay, egg whites, lime, goat hair, and ash. The mortar was resistant to heat transfer, helping to keep the ice cool enough not to melt; it was also impenetrable by water. Yakhchals often included a qanat and a system of windcatchers that could lower internal temperatures to frigid levels, even during the heat of the summer. One use for the ice was to create chilled treats for royalty. Harvesting There were thriving industries in 16th–17th century England whereby low-lying areas along the Thames Estuary were flooded during the winter, and ice was harvested in carts and stored inter-seasonally in insulated wooden houses as a provision to an icehouse often located in large country houses, and widely used to keep fish fresh when caught in distant waters. This was allegedly copied by an Englishman who had seen the same activity in China. Ice was imported into England from Norway on a considerable scale as early as 1823. In the United States, the first cargo of ice was sent from New York City to Charleston, South Carolina, in 1799, and by the first half of the 19th century, ice harvesting had become a big business. Frederic Tudor, who became known as the "Ice King", worked on developing better insulation products for long distance shipments of ice, especially to the tropics; this became known as the ice trade. Between 1812 and 1822, under Lloyd Hesketh Bamford Hesketh's instruction, Gwrych Castle was built with 18 large towers; one of those towers is called the 'Ice Tower', and its sole purpose was to store ice. Trieste sent ice to Egypt, Corfu, and Zante; Switzerland, to France; and Germany sometimes was supplied from Bavarian lakes. From the 1930s until 1994, the Hungarian Parliament building used ice harvested in the winter from Lake Balaton for air conditioning. Ice houses were used to store ice formed in the winter, to make ice available all year long, and an early type of refrigerator known as an icebox was cooled using a block of ice placed inside it. Many cities had a regular ice delivery service during the summer. The advent of artificial refrigeration technology made the delivery of ice obsolete. Ice is still harvested for ice and snow sculpture events. For example, a swing saw is used to get ice for the Harbin International Ice and Snow Sculpture Festival each year from the frozen surface of the Songhua River. Artificial production The earliest known written process to artificially make ice is found in the 13th-century writings of Arab historian Ibn Abu Usaybia in his book Kitab Uyun al-anba fi tabaqat-al-atibba concerning medicine, in which Ibn Abu Usaybia attributes the process to an even older author, Ibn Bakhtawayhi, of whom nothing is known. Ice is now produced on an industrial scale, for uses including food storage and processing, chemical manufacturing, concrete mixing and curing, and consumer or packaged ice. Most commercial icemakers produce three basic types of fragmentary ice: flake, tubular and plate, using a variety of techniques. 
Large batch ice makers can produce up to 75 tons of ice per day. In 2002, there were 426 commercial ice-making companies in the United States, with a combined value of shipments of $595,487,000. Home refrigerators can also make ice with a built-in icemaker, which will typically make ice cubes or crushed ice. The first such device was presented in 1965 by Frigidaire. Land travel Ice forming on roads is a common winter hazard, and black ice is particularly dangerous because it is very difficult to see. It is very transparent, and often forms specifically in shaded (and therefore cooler and darker) areas, e.g. beneath overpasses. Whenever there is freezing rain or snow which occurs at a temperature near the melting point, it is common for ice to build up on the windows of vehicles. Often, snow melts, re-freezes, and forms a fragmented layer of ice which effectively "glues" snow to the window. In this case, the frozen mass is commonly removed with ice scrapers. A thin layer of ice crystals can also form on the inside surface of car windows during sufficiently cold weather. In the 1970s and 1980s, some vehicles such as the Ford Thunderbird could be upgraded with heated windshields as a result. This technology fell out of style as it was too expensive and prone to damage, but rear-window defrosters are cheaper to maintain and so are more widespread. In sufficiently cold places, the layers of ice on water surfaces can get thick enough for ice roads to be built. Some regulations specify that the minimum safe thickness is for a person, for a snowmobile and for an automobile lighter than 5 tonnes. For trucks, the effective thickness varies with load; for example, a vehicle with a 9-ton total weight requires a thickness of . Notably, the speed limit for a vehicle moving on a road which meets its minimum safe thickness is 25 km/h (15 mph), going up to 35 km/h (25 mph) if the road's thickness is two or more times the minimum safe value. There is a known instance where a railroad has been built on ice. The most famous ice road was the Road of Life across Lake Ladoga. It operated in the winters of 1941–1942 and 1942–1943, when it was the only land route available to the Soviet Union to relieve the Siege of Leningrad by the German Army Group North. The trucks moved hundreds of thousands of tonnes of supplies into the city, and hundreds of thousands of civilians were evacuated. It is now a World Heritage Site. Water-borne travel For ships, ice presents two distinct hazards. Firstly, spray and freezing rain can produce an ice build-up on the superstructure of a vessel sufficient to make it unstable, potentially to the point of capsizing. Earlier, crewmembers were regularly forced to manually hack off ice build-up. After the 1980s, spraying de-icing chemicals or melting the ice through hot water/steam hoses became more common. Secondly, icebergs – large masses of ice floating in water (typically created when glaciers reach the sea) – can be dangerous if struck by a ship when underway. Icebergs have been responsible for the sinking of many ships, the most famous being the Titanic. For harbors near the poles, being ice-free, ideally all year long, is an important advantage. Examples are Murmansk (Russia), Petsamo (Russia, formerly Finland), and Vardø (Norway). Harbors which are not ice-free are opened up using specialized vessels, called icebreakers. Icebreakers are also used to open routes through the sea ice for other vessels, as the only alternative is to find the openings called "polynyas" or "leads". 
Widespread production of icebreakers began during the 19th century. Earlier designs simply had reinforced bows in a spoon-like or diagonal shape to effectively crush the ice. Later designs attached a forward propeller underneath the protruding bow, as the typical rear propellers were incapable of effectively steering the ship through the ice. Air travel For aircraft, ice can cause a number of dangers. As an aircraft climbs, it passes through air layers of different temperature and humidity, some of which may be conducive to ice formation. If ice forms on the wings or control surfaces, this may adversely affect the flying qualities of the aircraft. In 1919, during the first non-stop flight across the Atlantic, the British aviators Captain John Alcock and Lieutenant Arthur Whitten Brown encountered such icing conditions – Brown left the cockpit and climbed onto the wing several times to remove ice which was covering the engine air intakes of the Vickers Vimy aircraft they were flying. One vulnerability affected by icing that is associated with reciprocating internal combustion engines is the carburetor. As air is sucked through the carburetor into the engine, the local air pressure is lowered, which causes adiabatic cooling. Thus, in humid near-freezing conditions, the carburetor will be colder, and tend to ice up. This will block the supply of air to the engine, and cause it to fail. Between 1969 and 1975, 468 such instances were recorded, causing 75 aircraft losses, 44 fatalities and 202 serious injuries. Thus, carburetor air intake heaters were developed. Further, reciprocating engines with fuel injection do not require carburetors in the first place. Jet engines do not experience carburetor icing, but they can be affected by the moisture inherently present in jet fuel freezing and forming ice crystals, which can potentially clog up the fuel intake to the engine. Fuel heaters and/or de-icing additives are used to address the issue. Recreation and sports Ice plays a central role in winter recreation and in many sports such as ice skating, tour skating, ice hockey, bandy, ice fishing, ice climbing, curling, broomball and sled racing on bobsled, luge and skeleton. Many of the different sports played on ice get international attention every four years during the Winter Olympic Games. Small boat-like craft can be mounted on blades and be driven across the ice by sails. This sport is known as ice yachting, and it has been practiced for centuries. Another vehicular sport is ice racing, where drivers must speed on lake ice, while also controlling the skid of their vehicle (similar in some ways to dirt track racing). The sport has even been modified for ice rinks. Other uses As thermal ballast Ice is still used to cool and preserve food in portable coolers. Ice cubes or crushed ice can be used to cool drinks. As the ice melts, it absorbs heat and keeps the drink near 0 °C. Ice can be used as part of an air conditioning system, using battery- or solar-powered fans to blow hot air over the ice. This is especially useful during heat waves when power is out and standard (electrically powered) air conditioners do not work. Ice can be used (like other cold packs) to reduce swelling (by decreasing blood flow) and pain by pressing it against an area of the body. As structural material Engineers used the substantial strength of pack ice when they constructed Antarctica's first floating ice pier in 1973. Such ice piers are used during cargo operations to load and offload ships. 
Fleet operations personnel make the floating pier during the winter. They build upon naturally occurring frozen seawater in McMurdo Sound until the dock reaches a depth of about . Ice piers are inherently temporary structures, although some can last as long as 10 years. Once a pier is no longer usable, it is towed to sea with an icebreaker. Structures and ice sculptures are built out of large chunks of ice or by spraying water. The structures are mostly ornamental (as is the case with ice castles), and not practical for long-term habitation. Ice hotels exist on a seasonal basis in a few cold areas. Igloos are another example of a temporary structure, made primarily from snow. Engineers can also use ice destructively. In mining, drilling holes in rock structures and then pouring water during cold weather is an accepted alternative to using dynamite, as the rock cracks when the water expands as ice. During World War II, Project Habbakuk was an Allied programme which investigated the use of pykrete (wood fibers mixed with ice) as a possible material for warships, especially aircraft carriers, due to the ease with which a vessel with a large deck, immune to torpedoes, could be constructed from ice. A small-scale prototype was built, but it soon turned out the project would cost far more than a conventional aircraft carrier while being many times slower and also vulnerable to melting. Ice has even been used as the material for a variety of musical instruments, for example by percussionist Terje Isungset. Impacts of climate change Historical Greenhouse gas emissions from human activities unbalance the Earth's energy budget and so cause an accumulation of heat. About 90% of that heat is added to ocean heat content, 1% is retained in the atmosphere and 3-4% goes to melt major parts of the cryosphere. Between 1994 and 2017, 28 trillion tonnes of ice were lost around the globe as a result. Arctic sea ice decline accounted for the single largest loss (7.6 trillion tonnes), followed by the melting of Antarctica's ice shelves (6.5 trillion tonnes), the retreat of mountain glaciers (6.1 trillion tonnes), the melting of the Greenland ice sheet (3.8 trillion tonnes) and finally the melting of the Antarctic ice sheet (2.5 trillion tonnes) and the limited losses of the sea ice in the Southern Ocean (0.9 trillion tonnes). Other than the sea ice (which already displaces water due to Archimedes' principle), these losses are a major cause of sea level rise (SLR) and they are expected to intensify in the future. In particular, the melting of the West Antarctic ice sheet may accelerate substantially as the floating ice shelves are lost and can no longer buttress the glaciers. This would trigger poorly understood marine ice sheet instability processes, which could then increase the SLR expected for the end of the century (between and , depending on future warming) by tens of centimeters more. Ice loss in Greenland and Antarctica also produces large quantities of fresh meltwater, which disrupts the Atlantic meridional overturning circulation (AMOC) and the Southern Ocean overturning circulation, respectively. These two halves of the thermohaline circulation are very important for the global climate. A continuation of high meltwater flows may cause a severe disruption (up to the point of a "collapse") of either circulation, or even both of them. Either event would be considered an example of a tipping point in the climate system, because it would be extremely difficult to reverse. 
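For a sense of scale, the grounded-ice losses quoted above can be converted into millimetres of global-mean sea level equivalent with a rough calculation. The sketch below assumes the meltwater spreads evenly over a global ocean area of roughly 3.6 × 10^8 km² (an approximate standard figure, not taken from this article), and it leaves out sea ice and ice shelves because they already float.

```python
OCEAN_AREA_M2 = 3.6e14     # roughly 3.6e8 km^2 of global ocean surface (assumed value)
WATER_DENSITY = 1000.0     # kg per m^3 of meltwater

def sea_level_rise_mm(ice_loss_tonnes: float) -> float:
    """Millimetres of global-mean sea level rise from a given mass of melted grounded ice."""
    meltwater_volume_m3 = ice_loss_tonnes * 1000.0 / WATER_DENSITY   # tonnes -> kg -> m^3
    return meltwater_volume_m3 / OCEAN_AREA_M2 * 1000.0              # metres -> millimetres

# 1994-2017 grounded-ice losses quoted in the text, in tonnes.
for source, tonnes in [("mountain glaciers", 6.1e12),
                       ("Greenland ice sheet", 3.8e12),
                       ("Antarctic ice sheet", 2.5e12)]:
    print(f"{source}: about {sea_level_rise_mm(tonnes):.0f} mm")   # roughly 17, 11 and 7 mm
```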
AMOC is generally not expected to collapse during the 21st century, while there is only limited knowledge about the Southern Ocean circulation. Another example of an ice-related tipping point is permafrost thaw. While the organic content in the permafrost causes carbon dioxide and methane emissions once it thaws and begins to decompose, the melting of ice liquefies the ground, causing anything built above the former permafrost to collapse. By 2050, the economic damages from such infrastructure loss are expected to cost tens of billions of dollars. Predictions In the future, the Arctic Ocean is likely to lose effectively all of its sea ice during at least some Septembers (the end of the ice melting season), although some of the ice would refreeze during the winter. For instance, an ice-free September is likely to occur once in every 40 years if global warming is at , but would occur once in every 8 years at and once in every 1.5 years at . This would affect the regional and global climate due to the ice-albedo feedback. Because ice is highly reflective of solar energy, persistent sea ice cover lowers local temperatures. Once that ice cover melts, the darker ocean waters begin to absorb more heat, which also helps to melt the remaining ice. Global losses of sea ice between 1992 and 2018, almost all of them in the Arctic, have already had the same impact as 10% of greenhouse gas emissions over the same period. If all the Arctic sea ice were gone every year between June and September (polar day, when the Sun is constantly shining), temperatures in the Arctic would increase by over , while the global temperatures would increase by around . By 2100, at least a quarter of mountain glaciers outside of Greenland and Antarctica would melt, and effectively all ice caps on non-polar mountains are likely to be lost around 200 years after global warming reaches . The West Antarctic ice sheet is highly vulnerable and will likely disappear even if the warming does not progress further, although it could take around 2,000 years before its loss is complete. The Greenland ice sheet will most likely be lost with sustained warming between and , although its total loss requires around 10,000 years. Finally, the East Antarctic ice sheet will take at least 10,000 years to melt entirely, which requires a warming of between and . If all the ice on Earth melted, it would result in about of sea level rise, with some coming from East Antarctica. Due to isostatic rebound, the ice-free land would eventually become higher in Greenland and  in Antarctica, on average. Areas in the center of each landmass would become up to and  higher, respectively. The impact on global temperatures from losing West Antarctica, mountain glaciers and the Greenland ice sheet is estimated at , and , respectively, while the lack of the East Antarctic ice sheet would increase the temperatures by . Non-water The solid phases of several other volatile substances are also referred to as ices; generally a volatile is classed as an ice if its melting or sublimation point lies above or around (assuming standard atmospheric pressure). The best known example is dry ice, the solid form of carbon dioxide. Its sublimation/deposition point occurs at . A "magnetic analogue" of ice is also realized in some insulating magnetic materials in which the magnetic moments mimic the position of protons in water ice and obey energetic constraints similar to the Bernal-Fowler ice rules arising from the geometrical frustration of the proton configuration in water ice. 
These materials are called spin ice. See also References Further reading Brady, Amy. Ice: From Mixed Drinks to Skating Rinks--A Cool History of a Hot Commodity (G. P. Putnam's Sons, 2023). Hogge, Fred. Of Ice and Men: How We've Used Cold to Transform Humanity (Pegasus Books, 2022) Leonard, Max. A Cold Spell: A Human History of Ice (Bloomsbury, 2023) online review of this book External links Webmineral listing for Ice MinDat.org listing and location data for Ice Estimating the bearing capacity of ice High-temperature, high-pressure ice The Surprisingly Cool History of Ice Glaciology Minerals Transparent materials Articles containing video clips Limnology Oceanography Cryosphere
Ice
Physics,Environmental_science
8,904
27,327,850
https://en.wikipedia.org/wiki/Terbogrel
Terbogrel (INN) is an experimental drug that has been studied for its potential to prevent the vasoconstricting and platelet-aggregating action of thromboxanes. Terbogrel is an orally available thromboxane A2 receptor antagonist and a thromboxane A synthase inhibitor. The drug was developed by Boehringer Ingelheim. A phase 2 clinical trial of terbogrel was discontinued due to its induction of leg pain. See also Ramatroban References Antiplatelet drugs 3-Pyridyl compounds Guanidines Nitriles Tert-butyl compounds
Terbogrel
Chemistry
133
50,523,514
https://en.wikipedia.org/wiki/Gboard
Gboard is a virtual keyboard app developed by Google for Android and iOS devices. It was first released on iOS in May 2016, followed by a release on Android in December 2016, debuting as a major update to the already-established Google Keyboard app on Android. Gboard features Google Search, including web results (removed since April 2020) and predictive answers, easy searching and sharing of GIF and emoji content, a predictive typing engine suggesting the next word depending on context, and multilingual language support. Updates to the keyboard have enabled additional functionality, including GIF suggestions, options for a dark color theme or adding a personal image as the keyboard background, support for voice dictation, next-phrase prediction, and hand-drawn emoji recognition. At the time of its launch on iOS, the keyboard only offered support for the English language, with more languages being gradually added in the following months, whereas on Android, the keyboard supported more than 100 languages at the time of release. In August 2018, Gboard passed 1 billion installs on the Google Play Store, making it one of the most popular Android apps. This is measured by the Google Play Store and includes downloads by users as well as pre-installed instances of the app. Features Gboard is a virtual keyboard app. It features Google Search, including web results (removed for Android version of the app) and predictive answers, easy searching and sharing of GIF and emoji content, and a predictive typing engine suggesting the next word depending on context. At its May 2016 launch on iOS, Gboard only supported the English language, while it supported "more than 100 languages" at the time of its launch on the Android platform. Google states that Gboard will add more languages "over the coming months". As of October 2019, 916 languages are supported on Android. Gboard features Floating Keyboard and Google Translate in Gboard itself. Gboard supports one-handed mode on Android after its May 2016 update. This functionality was added to the app when it was branded as Google Keyboard. Gboard supports a variety of different keyboard layouts including QWERTY, QWERTZ, AZERTY, Dvorak and Colemak. An update for the iOS app released in August 2016 added French, German, Italian, Portuguese, and Spanish languages, as well as offering "smart GIF suggestions", where the keyboard will suggest GIFs relevant to text written. The keyboard also offers new options for a dark theme or adding a personal image from the camera roll as the keyboard's background. Another new update in March 2018 added Croatian, Czech, Danish, Dutch, Finnish, Greek, Polish, Romanian, Balochi, Swedish, Catalan, Hungarian, Malay, Russian, Latin American Spanish, and Turkish languages, along with support for voice dictation, enabling users to "long press the mic button on the space bar and talk". In April 2017, Google significantly increased the amount of Indian languages supported on Gboard, adding 11 new languages, bringing the total number of supported Indian languages to 22. In June 2017, the Android app was updated to support recognition of hand-drawn emoji and the ability to predict whole phrases rather than single words. The functionality is expected to come to the iOS app at a later time. Offline voice recognition was added in March 2019. On February 12, 2020, a new feature "Emoji Kitchen" was introduced that allowed users to mash up different emoji and use them as stickers when messaging. 
Grammar correction was introduced in October 2021, first on the Pixel 6 series. Reception In 2016, The Wall Street Journal praised the keyboard, particularly the integrated Google search feature. However, it was noted that the app did not, at the time, support integration with other apps on the device, meaning that queries such as "Buy Captain America movie tickets" send users to the web browser rather than to an app for movie tickets installed on their phone. The Wall Street Journal also praised the predictive typing engine, stating that it "blows past most competitors" and "it gets smarter with use". They also discovered that Gboard "cleverly suggests emojis as you type words". The review further noted the lack of a one-handed mode (a feature added for Android in May 2016), as well as a lack of options for changing the color or size of keys, writing that "If you're looking to customize a keyboard, Gboard isn't for you." References External links List of supported languages Google software 2016 software Android (operating system) software Android virtual keyboards Input methods for handheld devices IOS software
Gboard
Technology
959
57,706,434
https://en.wikipedia.org/wiki/Galeas%20per%20montes
Galeas per montes (galleys across mountains) is the name given to a feat of military engineering carried out between December 1438 and April 1439 by the Republic of Venice, when several Venetian ships, including galleys and frigates, were transported from the Adriatic Sea to Lake Garda. The operation required towing the ships upstream on the river Adige as far as Rovereto, then transporting the fleet by land to Torbole, on the Northern shores of the lake. The second leg of the journey was the most remarkable achievement, requiring a land journey of about 20 km through the Loppio Lake and the narrow . Context The Republic of Venice was at the time a power in the Mediterranean and, in the 15th century, it began an expansion phase towards the mainland of the current Lombardia and Veneto regions, both through military conquest (e.g. Padua) and through spontaneous "dedication", as in the case of Vicenza. The city of Brescia, located West of Lake Garda, allied with the Republic of Venice to escape the Duchy of Milan on November 20, 1426. In 1438, the Duke of Milan Filippo Maria Visconti waged war against the Republic of Venice and, through a series of lucky victories, took control of Lombard lands up to the southern shores of Lake Garda. At the same time, the city of Brescia was under siege by the mercenary condottiero Niccolò Piccinino, on the Duke of Milan's payroll, and called on the Venetian Senate for assistance. Piccinino took control of the entire Southern sector of the lake, so the Venetian warlord Gattamelata (Erasmo da Narni) could only access the lake from its Northern shores, namely Torbole or Riva. The Milanese army was also fortified in the castles of Peschiera del Garda and Desenzano, making a head-on clash too expensive. To avoid this problem, the Republic of Venice decided to prepare a military plan that would allow its troops (and navy) to surprise the Visconti army by entering the lake from its Northern shore. On December 1, 1438, after a very long session, the Republic's Minor Council approved a plan formulated by Blasio de Arboribus, Niccolò Carcavilla, and Niccolò Sorbolo that would become the galeas per montes. The plan The plan foresaw moving a fleet of warships by dragging it up the Adige river, then beaching it, and dragging it on wooden rollers along the Loppio valley to the Northern shores of Lake Garda, near Torbole. From there, the Venetian fleet would have unleashed a surprise attack on the Milanese army, which was anchored at Desenzano, cutting supplies to the Visconti militia guarding Peschiera del Garda, and gaining a foothold to free Brescia and potentially threaten Milan. The fleet, which included 25 large ships, 6 galleys and 2 frigates, set sail from Venice in January 1439, entering the mouth of the Adige river near Sottomarina. The fleet went upstream as far as Verona where, since the river was drier than usual, the Venetians had to fit the ships with devices to increase their buoyancy in order to reduce their draught. The fleet was then dragged further upstream to the village of , where it was beached. The Venetians designed and built special devices for the operations, and hired hundreds of workers including diggers, carpenters, sailors, and local craftsmen. The workers flattened the road that would be used by the fleet, and used around 2000 oxen divided into groups, since the largest ships could require more than 200 oxen to be dragged. 
In order to facilitate the passage of the fleet, the workers leveled natural and man-made obstacles, and built several bridges and infrastructural aids. The main road for the ships was built by laying down wooden planks, so that the massively heavy ships could be slid over the planks using wooden rollers. The fleet's passage was made easier by having the ships sail through the Lago di Loppio, reducing the length of the land passage. After the lake, the fleet was once again beached, and dragged along the steep and narrow slope from Passo San Giovanni to Torbole. As the ships would gather velocity during the downhill segment (potentially crashing against rocks), they were slowed down by tying their masts to large boulders using winches and thick ropes. To further slow down the ships' descent, the Venetians unfurled the ships' sails and made use of a strong local wind, the so-called . The complex operation was completed in only 15 days, but cost the staggeringly high amount of 16,000 ducats. It was one of the most remarkable feats of military engineering of its time, becoming famous throughout Europe. Gallery Consequences The fleet's presence on the lake allowed the Venetians to resupply Brescia, though these operations were soon noticed and contested by the Milanese navy. The two navies faced each other in two battles, on April 12 and September 26, 1439, both ending in Venetian defeats. The Venetians finally managed to recapture Lake Garda and Brescia only in 1440. An instrumental step in this victory was the naval battle of April 1440, where the Venetian fleet inflicted a major defeat on the Milanese navy in the waters off the Ponale pass. A painting by Tintoretto in the Doge's Palace's Sala del Maggior Consiglio celebrates this victory. Notes Bibliography Paolo Renier Testimonianze sul trasporto delle navi da Venezia al Garda eseguito dai veneziani nel 1439, Venezia 1967 Paolo D. Malvinni La magnifica intrapresa. Galeas per montes conducendo, Curcu & Genovese, Trento 2010 Samuel Romanin Storia documentata di Venezia – Tomo 4, 1853–1861. David Sanderson Chambers The Imperial Age of Venice 1380–1580 – (History of European Civilization Library), Harcourt Brace Jovanovich, 1970 Clemente Cavalcabo Idea della storia e delle cossuetudini antiche della valle Lagarina ed...del Roveretano, 1776 Eugenio Musatti, Storia di Venezia, 1880, tomo I, p. 270 e seg. Fabio Romanoni La guerra d’acqua dolce. Navi e conflitti medievali nell’Italia settentrionale, Clueb, Bologna 2023 Military engineering Military history of the Republic of Venice 15th century in the Republic of Venice 1438 in Europe 1439 in Europe
Galeas per montes
Engineering
1,389
36,008,539
https://en.wikipedia.org/wiki/Hygrophorus%20camarophyllus
Hygrophorus camarophyllus is a species of edible fungus in the genus Hygrophorus. References External links camarophyllus Fungi of Europe Edible fungi Taxa named by Johannes Baptista von Albertini Taxa named by Lewis David de Schweinitz Fungus species
Hygrophorus camarophyllus
Biology
61
12,924,521
https://en.wikipedia.org/wiki/Vepris%20borenensis
Vepris borenensis is a species of plant in the family Rutaceae. It is found in Ethiopia and Kenya. References borenensis Flora of Ethiopia Flora of Kenya Taxonomy articles created by Polbot Unplaced names
Vepris borenensis
Biology
47
24,375,753
https://en.wikipedia.org/wiki/C10H10N2
{{DISPLAYTITLE:C10H10N2}} The molecular formula C10H10N2 (molar mass: 158.20 g/mol, exact mass: 158.0844 u) may refer to: 1,5-Diaminonaphthalene 1,8-Diaminonaphthalene Nicotyrine
C10H10N2
Chemistry
74
2,012,125
https://en.wikipedia.org/wiki/Simple%20precedence%20parser
In computer science, a simple precedence parser is a type of bottom-up parser for context-free grammars that can be used only by simple precedence grammars. The implementation of the parser is quite similar to the generic bottom-up parser. A stack is used to store a viable prefix of a sentential form from a rightmost derivation. The symbols ⋖, ≐ and ⋗ are used to identify the pivot, and to know when to Shift or when to Reduce. Implementation Compute the Wirth–Weber precedence relationship table for a grammar with initial symbol S. Initialize a stack with the starting marker $. Append an ending marker $ to the string being parsed (Input). Until Stack equals "$ S" and Input equals "$" Search the table for the relationship between Top(stack) and NextToken(Input) if the relationship is ⋖ or ≐ Shift: Push(Stack, relationship) Push(Stack, NextToken(Input)) RemoveNextToken(Input) if the relationship is ⋗ Reduce: SearchProductionToReduce(Stack) Remove the Pivot from the Stack Search the table for the relationship between the nonterminal from the production and first symbol in the stack (Starting from top) Push(Stack, relationship) Push(Stack, Non terminal) SearchProductionToReduce (Stack) Find the topmost ⋖ in the stack; this and all the symbols above it are the Pivot. Find the production of the grammar which has the Pivot as its right side. Example Given following language, which can parse arithmetic expressions with the multiplication and addition operations: E --> E + T' | T' T' --> T T --> T * F | F F --> ( E' ) | num E' --> E num is a terminal, and the lexer parse any integer as num; E represents an arithmetic expression, T is a term and F is a factor. and the Parsing table: STACK PRECEDENCE INPUT ACTION $ ⋖ 2 * ( 1 + 3 )$ SHIFT $ ⋖ 2 ⋗ * ( 1 + 3 )$ REDUCE (F -> num) $ ⋖ F ⋗ * ( 1 + 3 )$ REDUCE (T -> F) $ ⋖ T ≐ * ( 1 + 3 )$ SHIFT $ ⋖ T ≐ * ⋖ ( 1 + 3 )$ SHIFT $ ⋖ T ≐ * ⋖ ( ⋖ 1 + 3 )$ SHIFT $ ⋖ T ≐ * ⋖ ( ⋖ 1 ⋗ + 3 )$ REDUCE 4× (F -> num) (T -> F) (T' -> T) (E ->T ') $ ⋖ T ≐ * ⋖ ( ⋖ E ≐ + 3 )$ SHIFT $ ⋖ T ≐ * ⋖ ( ⋖ E ≐ + ⋖ 3 )$ SHIFT $ ⋖ T ≐ * ⋖ ( ⋖ E ≐ + < 3 ⋗ )$ REDUCE 3× (F -> num) (T -> F) (T' -> T) $ ⋖ T ≐ * ⋖ ( ⋖ E ≐ + ≐ T ⋗ )$ REDUCE 2× (E -> E + T) (E' -> E) $ ⋖ T ≐ * ⋖ ( ≐ E' ≐ )$ SHIFT $ ⋖ T ≐ * ⋖ ( ≐ E' ≐ ) ⋗ $ REDUCE (F -> ( E' )) $ ⋖ T ≐ * ≐ F ⋗ $ REDUCE (T -> T * F) $ ⋖ T ⋗ $ REDUCE 2× (T' -> T) (E -> T') $ ⋖ E $ ACCEPT References Alfred V. Aho, Jeffrey D. Ullman (1977). Principles of Compiler Design. 1st Edition. Addison–Wesley. William A. Barrett, John D. Couch (1979). Compiler construction: Theory and Practice. Science Research Associate. Jean-Paul Tremblay, P. G. Sorenson (1985). The Theory and Practice of Compiler Writing. McGraw–Hill. Parsing algorithms
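To make the shift-reduce loop above concrete, the following is a small runnable sketch in Python. It uses a deliberately tiny grammar (S -> ( S ) | a) with a hand-computed Wirth–Weber table rather than the arithmetic grammar of the example, and the ASCII characters <, =, > stand in for ⋖, ≐, ⋗; the grammar, table, and function names are illustrative choices of this sketch, not part of the original description.

LT, EQ, GT = "<", "=", ">"   # stand-ins for the symbols ⋖, ≐, ⋗

PRODUCTIONS = {("(", "S", ")"): "S", ("a",): "S"}

TABLE = {                       # hand-computed Wirth-Weber relations for the toy grammar
    "$": {"(": LT, "a": LT, "S": LT},
    "(": {"(": LT, "a": LT, "S": EQ},
    "S": {")": EQ},
    "a": {")": GT, "$": GT},
    ")": {")": GT, "$": GT},
}

def top_symbol(stack):
    """Topmost grammar symbol on the stack (skipping relation markers)."""
    for item in reversed(stack):
        if item not in (LT, EQ, GT):
            return item

def parse(tokens, start="S"):
    stack, tokens = ["$"], list(tokens) + ["$"]
    while not (stack == ["$", LT, start] and tokens == ["$"]):
        rel = TABLE.get(top_symbol(stack), {}).get(tokens[0])
        if rel in (LT, EQ):                          # shift
            stack += [rel, tokens.pop(0)]
        elif rel == GT:                              # reduce
            cut = len(stack) - 1 - stack[::-1].index(LT)   # topmost "<"
            pivot = tuple(s for s in stack[cut + 1:] if s not in (LT, EQ, GT))
            lhs = PRODUCTIONS[pivot]                 # KeyError means no matching production
            del stack[cut:]                          # remove the pivot and its "<"
            stack += [TABLE[top_symbol(stack)][lhs], lhs]
        else:
            raise SyntaxError(f"no relation between {top_symbol(stack)!r} and {tokens[0]!r}")
    return True

print(parse(list("(a)")), parse(list("((a))")))      # True True

Running it accepts strings such as (a) and ((a)), following the same shift/reduce pattern as the worked example above; invalid input raises a SyntaxError when no precedence relation (or no matching production) is found.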
Simple precedence parser
Technology
825
15,062,629
https://en.wikipedia.org/wiki/IFI35
Interferon-induced 35 kDa protein is a protein that in humans is encoded by the IFI35 gene. Interactions IFI35 has been shown to interact with NMI and BATF. References Further reading External links Transcription factors
IFI35
Chemistry,Biology
48
37,307,647
https://en.wikipedia.org/wiki/Ammonium%20carbamate
Ammonium carbamate is a chemical compound with the formula consisting of ammonium cation and carbamate anion . It is a white solid that is extremely soluble in water, less so in alcohol. Ammonium carbamate can be formed by the reaction of ammonia with carbon dioxide, and will slowly decompose to those gases at ordinary temperatures and pressures. It is an intermediate in the industrial synthesis of urea, an important fertilizer. Properties Solid-gas equilibrium In a closed container, solid ammonium carbamate is in equilibrium with carbon dioxide and ammonia. Lower temperatures shift the equilibrium towards the carbamate. At higher temperatures ammonium carbamate condenses into urea: This reaction was first discovered in 1870 by Bassarov, by heating ammonium carbamate in sealed glass tubes at temperatures ranging from 130 to 140 °C. Equilibrium in water At ordinary temperatures and pressures, ammonium carbamate exists in aqueous solutions in equilibrium with ammonia and carbon dioxide, and with the anions bicarbonate, , and carbonate, . Indeed, solutions of ammonium carbonate or bicarbonate will contain some carbamate anions too. Structure The structure of solid ammonium carbamate has been confirmed by X-ray crystallography. The oxygen centers form hydrogen bonds to the ammonium cation. There are two polymorphs, α and β, both in the orthorhombic crystal system but differing in their space group. The α polymorph is in space group Pbca (no. 61), whereas the β polymorph is in Ibam (no. 72). The α polymorph is more volatile. Natural occurrence Ammonium carbamate serves a key role in the formation of carbamoyl phosphate, which is necessary for both the urea cycle and the production of pyrimidines. In this enzyme-catalyzed reaction, ATP and ammonium carbamate are converted to ADP and carbamoyl phosphate: Preparation From liquid ammonia and dry ice Ammonium carbamate is prepared by the direct reaction between liquid ammonia and dry ice (solid carbon dioxide): From gaseous ammonia and carbon dioxide Ammonium carbamate can be prepared by reaction of the two gases at high temperature (175–225 °C) and high pressure (150–250 bar). It can also be obtained by bubbling gaseous ammonia and carbon dioxide in anhydrous ethanol, 1-propanol, or DMF at ambient pressure and 0 °C. The carbamate precipitates and can be separated by simple filtration, and the liquid containing the unreacted ammonia can be returned to the reactor. The absence of water prevents the formation of bicarbonate and carbonate, and no ammonia is lost. Uses Urea synthesis Ammonium carbamate is an intermediate in the industrial production of urea. A typical industrial plant that makes urea can produce up to 4000 tons a day. Ammonium carbamate forms in this reactor and can then be dehydrated to urea according to the following equation: Pesticide formulations Ammonium carbamate has also been approved by the Environmental Protection Agency as an inert ingredient present in aluminium phosphide pesticide formulations. This pesticide is commonly used for insect and rodent control in areas where agricultural products are stored. The reason for including ammonium carbamate as an ingredient is to make the phosphine less flammable, by freeing ammonia and carbon dioxide to dilute the phosphine formed by a hydrolysis reaction. Laboratory Ammonium carbamate can be used as a good ammoniating agent, though not nearly as strong as ammonia itself. For instance, it is an effective reagent for the preparation of different substituted β-amino-α,β-unsaturated esters. 
The reaction can be carried out in methanol at room temperature and can be isolated in the absence of water, in high purity and yield. Preparation of metal carbamates Ammonium carbamate can be a starting reagent for the production of salts of other cations. For instance, by reacting it with solid potassium chloride KCl in liquid ammonia one can obtain potassium carbamate . Carbamates of other metals, such as calcium, can be produced by reacting ammonium carbamate with a suitable salt of the desired cation, in an anhydrous solvent such as methanol, ethanol, or formamide, even at room temperature. References Carbamates Ammonium compounds
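As a rough, illustrative calculation of the urea-plant figures mentioned above: assuming the overall conversion 2 NH3 + CO2 -> ammonium carbamate -> urea + H2O proceeds to completion (a simplification; the daily-output figure is the one quoted above, everything else is standard molar-mass arithmetic), the feed requirements can be sketched as follows.

# Molar masses in g/mol (standard values)
M_NH3, M_CO2, M_UREA = 17.03, 44.01, 60.06

def feed_per_ton_urea():
    """Tonnes of NH3 and CO2 consumed per tonne of urea, assuming the overall
    reaction 2 NH3 + CO2 -> NH2COONH4 -> CO(NH2)2 + H2O at full conversion."""
    nh3 = 2 * M_NH3 / M_UREA
    co2 = M_CO2 / M_UREA
    return nh3, co2

nh3_t, co2_t = feed_per_ton_urea()
daily_output = 4000  # tons of urea per day, the plant capacity quoted above
print(f"per day: ~{nh3_t * daily_output:.0f} t NH3, ~{co2_t * daily_output:.0f} t CO2")
# roughly 2270 t of ammonia and 2930 t of carbon dioxide per day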
Ammonium carbamate
Chemistry
923
9,018,921
https://en.wikipedia.org/wiki/Pari%20%28unit%29
A pari was a customary unit of area equal to 50×60 sana lamjel in Manipur, India, approximately 1 hectare. A sana lamjel was defined by the ruler of the kingdom, Nongda Lairen Pakhangpa, in 33 CE as being equal to the distance from the floor to the tips of the fingers of his raised right hand while standing (a fathom), plus 4 fingerwidths. 1 pari was equal to 2 lourak, 4 sangam, 8 loukhai, 16 loushal, or 32 tong. See also List of customary units of measurement in South Asia References Customary units in India Obsolete units of measurement Units of area
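The subdivision chain quoted above can be captured in a few lines; this is only an illustrative sketch (unit names follow the article, and the hectare figure is the approximate equivalence stated there).

PARI_SUBDIVISIONS = {"pari": 1, "lourak": 2, "sangam": 4,
                     "loukhai": 8, "loushal": 16, "tong": 32}  # units per pari
PARI_IN_HECTARES = 1.0  # approximate equivalence given above

def convert(value, from_unit, to_unit):
    """Convert between the traditional Manipuri area units listed above."""
    in_pari = value / PARI_SUBDIVISIONS[from_unit]
    return in_pari * PARI_SUBDIVISIONS[to_unit]

def to_hectares(value, unit):
    """Approximate metric equivalent."""
    return value / PARI_SUBDIVISIONS[unit] * PARI_IN_HECTARES

# e.g. convert(1, "pari", "tong") -> 32.0 ; convert(4, "loukhai", "pari") -> 0.5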
Pari (unit)
Mathematics
142
14,430,199
https://en.wikipedia.org/wiki/Hilbert%E2%80%93Huang%20transform
The Hilbert–Huang transform (HHT) is a way to decompose a signal into so-called intrinsic mode functions (IMF) along with a trend, and obtain instantaneous frequency data. It is designed to work well for data that is nonstationary and nonlinear. In contrast to other common transforms like the Fourier transform, the HHT is an algorithm that can be applied to a data set, rather than a theoretical tool. The Hilbert–Huang transform (HHT), a NASA designated name, was proposed by Norden E. Huang et al. (1996, 1998, 1999, 2003, 2012). It is the result of the empirical mode decomposition (EMD) and the Hilbert spectral analysis (HSA). The HHT uses the EMD method to decompose a signal into so-called intrinsic mode functions (IMF) with a trend, and applies the HSA method to the IMFs to obtain instantaneous frequency data. Since the signal is decomposed in time domain and the length of the IMFs is the same as the original signal, HHT preserves the characteristics of the varying frequency. This is an important advantage of HHT since a real-world signal usually has multiple causes happening in different time intervals. The HHT provides a new method of analyzing nonstationary and nonlinear time series data. Definition Empirical mode decomposition The fundamental part of the HHT is the empirical mode decomposition (EMD) method. Breaking down signals into various components, EMD can be compared with other analysis methods such as Fourier transform and Wavelet transform. Using the EMD method, any complicated data set can be decomposed into a finite and often small number of components. These components form a complete and nearly orthogonal basis for the original signal. In addition, they can be described as intrinsic mode functions (IMF). Because the first IMF usually carries the most oscillating (high-frequency) components, it can be rejected to remove high-frequency components (e.g., random noise). EMD based smoothing algorithms have been widely used in seismic data processing, where high-quality seismic records are highly demanded. Without leaving the time domain, EMD is adaptive and highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it can be applied to nonlinear and nonstationary processes. Intrinsic mode functions An intrinsic mode function (IMF) is defined as a function that satisfies the following requirements: In the whole data set, the number of extrema and the number of zero-crossings must either be equal or differ at most by one. At any point, the mean value of the envelope defined by the local maxima and the envelope defined by the local minima is zero. It represents a generally simple oscillatory mode as a counterpart to the simple harmonic function. By definition, an IMF is any function with the same number of extrema and zero crossings, whose envelopes are symmetric with respect to zero. This definition guarantees a well-behaved Hilbert transform of the IMF. Hilbert spectral analysis Hilbert spectral analysis (HSA) is a method for examining each IMF's instantaneous frequency as functions of time. The final result is a frequency-time distribution of signal amplitude (or energy), designated as the Hilbert spectrum, which permits the identification of localized features. Techniques The Intrinsic Mode Function (IMF) amplitude and frequency can vary with time and it must satisfy the rule below: The number of extremes(local maximums & local minimums) and the number of zero-crossings must either equal or differ at most by one. 
At any point, the mean value of the envelope defined by the local maxima and the envelope defined by the local minima is near zero. Empirical mode decomposition The empirical mode decomposition (EMD) method is a necessary step to reduce any given data into a collection of intrinsic mode functions (IMF) to which the Hilbert spectral analysis can be applied. An IMF represents a simple oscillatory mode as a counterpart to the simple harmonic function, but it is much more general: instead of constant amplitude and frequency in a simple harmonic component, an IMF can have variable amplitude and frequency along the time axis. The procedure of extracting an IMF is called sifting. The sifting process is as follows: Identify all the local extrema in the test data. Connect all the local maxima by a cubic spline line as the upper envelope. Repeat the procedure for the local minima to produce the lower envelope. The upper and lower envelopes should cover all the data between them. Their mean is m1. The difference between the data and m1 is the first component h1: Ideally, h1 should satisfy the definition of an IMF, since the construction of h1 described above should have made it symmetric, with all maxima positive and all minima negative. After the first round of sifting, a crest may become a local maximum. New extrema generated in this way actually reveal the proper modes lost in the initial examination. In the subsequent sifting process, h1 can only be treated as a proto-IMF. In the next step, h1 is treated as data: After repeated sifting up to k times, h1 becomes an IMF, that is Then, h1k is designated as the first IMF component of the data: Stoppage criteria of the sifting process The stoppage criterion determines the number of sifting steps needed to produce an IMF. The following are four existing stoppage criteria: Standard deviation This criterion was proposed by Huang et al. (1998). It is similar to the Cauchy convergence test, and we define a sum of the difference, SD, as Then the sifting process stops when SD is smaller than a pre-given value. S Number criterion This criterion is based on the so-called S-number, which is defined as the number of consecutive siftings for which the number of zero-crossings and extrema are equal or at most differ by one. Specifically, an S-number is pre-selected. The sifting process will stop only if, for S consecutive siftings, the numbers of zero-crossings and extrema stay the same, and are equal or at most differ by one. Threshold method Proposed by Rilling, Flandrin and Gonçalvés, the threshold method sets two threshold values to guarantee globally small fluctuations while taking into account locally large excursions. Energy difference tracking Proposed by Cheng, Yu and Yang, the energy difference tracking method uses the assumption that the original signal is a composition of orthogonal signals, and calculates the energy based on that assumption. If the result of the EMD is not an orthogonal basis of the original signal, the amount of energy will be different from the original energy. Once a stoppage criterion is selected, the first IMF, c1, can be obtained. Overall, c1 should contain the finest scale or the shortest period component of the signal. We can then separate c1 from the rest of the data by Since the residue, r1, still contains longer period variations in the data, it is treated as the new data and subjected to the same sifting process as described above. 
This procedure can be repeated for all the subsequent rj's, and the result is The sifting process finally stops when the residue, rn, becomes a monotonic function from which no more IMF can be extracted. From the above equations, we can induce that Thus, a decomposition of the data into n-empirical modes is achieved. The components of the EMD are usually physically meaningful, for the characteristic scales are defined by the physical data. Flandrin et al. (2003) and Wu and Huang (2004) have shown that the EMD is equivalent to a dyadic filter bank. Hilbert spectral analysis Having obtained the intrinsic mode function components, the instantaneous frequency can be computed using the Hilbert transform. After performing the Hilbert transform on each IMF component, the original data can be expressed as the real part, Real, in the following form: Current applications Two-Dimensional EMD In the above examples, all signals are one-dimensional signals, and in the case of two-dimensional signals, the Hilbert-Huang Transform can be applied for image and video processing in the following ways: Pseudo-Two-Dimensional EMD (Pseudo-two-dimensional Empirical Mode Decomposition): Directly splitting the two-dimensional signal into two sets of one-dimensional signals and applying the Hilbert-Huang Transform separately. After that, rearrange the two signals back into a two-dimensional signal. The result can produce excellent patterns, and display local rapid oscillations in long-wavelength waves. However, this method has many drawbacks. The most significant one is the discontinuities, occurring when the two sets of processed Intrinsic Mode Functions (IMFs) are recombined into the original two-dimensional signal. The following methods can be used to address this issue. Pseudo-Two-Dimensional EEMD (Pseudo-two-dimensional Ensemble Empirical Mode Decomposition): Compared to Pseudo-Two-Dimensional EMD, using EEMD instead of EMD can effectively improve the issue of discontinuity. However, this method has limitations and it's only effective when the time scale is very clear, such as in the case of temperature detection in the North Atlantic. It is not suitable for situations where the time scale of the signal is unclear. Genuine Two-Dimensional EMD (Genuine two-dimensional Empirical Mode Decomposition): As Genuine Two-Dimensional EMD directly processes two-dimensional signals, it poses some definitional challenges. How to determine the maximum value—should the edges of the image be considered, or should another method be used to define the maximum value? How to choose the progressive manner after identifying the maximum value. While Bézier curves may be effective in one-dimensional signals, they may not be directly applicable to two-dimensional signals. Therefore, Nunes et al. used radial basis functions and the Riesz transform to handle Genuine Two-Dimensional EMD. The following is the form of the Riesz transform. For a complex function f on . for j = 1,2,...,d. The constant is a dimension-normalized constant. Linderhed used Genuine Two-Dimensional EMD for image compression. Compared to other compression methods, this approach provides a lower distortion rate. Song and Zhang [2001], Damerval et al. [2005], and Yuan et al. [2008] used Delaunay triangulation to find the upper and lower bounds of the image. Depending on the requirements for defining maxima and selecting different progressive methods, different effects can be obtained. 
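A compact, illustrative implementation of the sifting loop and the Hilbert-based instantaneous frequency described above is sketched below using NumPy and SciPy. The endpoint handling of the spline envelopes, the SD threshold of 0.3, and the synthetic test signal are simplifying choices of this sketch, not prescriptions of the method itself.

import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import hilbert

def local_extrema(x):
    """Indices of interior local maxima and minima."""
    maxima = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] >= x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] <= x[i + 1]]
    return maxima, minima

def sift_imf(x, sd_threshold=0.3, max_iter=100):
    """Extract one IMF by repeated sifting (SD stoppage criterion)."""
    h = np.asarray(x, dtype=float)
    t = np.arange(len(h))
    for _ in range(max_iter):
        maxima, minima = local_extrema(h)
        if len(maxima) < 2 or len(minima) < 2:
            break                              # too few extrema to build envelopes
        # Cubic-spline envelopes through the extrema; the signal endpoints are
        # simply reused as spline knots (a crude treatment of the end effect).
        upper = CubicSpline([0] + maxima + [len(h) - 1],
                            np.r_[h[0], h[maxima], h[-1]])(t)
        lower = CubicSpline([0] + minima + [len(h) - 1],
                            np.r_[h[0], h[minima], h[-1]])(t)
        m = 0.5 * (upper + lower)              # local mean of the two envelopes
        h_new = h - m
        sd = np.sum((h - h_new) ** 2 / (h ** 2 + 1e-12))
        h = h_new
        if sd < sd_threshold:
            break
    return h

def emd(x, max_imfs=8):
    """Repeatedly sift and subtract until the residue is (nearly) monotonic."""
    residue = np.asarray(x, dtype=float)
    imfs = []
    for _ in range(max_imfs):
        maxima, minima = local_extrema(residue)
        if len(maxima) < 2 or len(minima) < 2:
            break                              # residue has become monotonic
        imf = sift_imf(residue)
        imfs.append(imf)
        residue = residue - imf
    return imfs, residue

def instantaneous(imf, fs):
    """Instantaneous amplitude and frequency (Hz) via the analytic signal."""
    z = hilbert(imf)                           # imf + i * H[imf]
    amplitude = np.abs(z)
    phase = np.unwrap(np.angle(z))
    frequency = np.gradient(phase) * fs / (2.0 * np.pi)
    return amplitude, frequency

# Illustrative use on a made-up two-tone signal sampled at 1 kHz:
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.4 * np.sin(2 * np.pi * 50 * t)
imfs, residue = emd(x)
amp, freq = instantaneous(imfs[0], fs)         # first IMF carries the faster tone

Because each IMF keeps the original time axis, the amplitude-frequency pairs from every IMF can be binned over time and frequency to assemble the Hilbert spectrum discussed above.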
Other application Improved EMD on ECG signals: Ahmadi et al.[2019] presented an Improved EMD and compared with other types of EMD. Results show the proposed algorithm provides no spurious IMF for these functions and is not placed in an infinite loop. EMD types comparison on ECG(Electrocardiography) signals reveal the improved EMD was an appropriate algorithm to be used for analyzing biological signals. Biomedical applications: Huang et al. [1999b] analyzed the pulmonary arterial pressure on conscious and unrestrained rats. Neuroscience: Pigorini et al. [2011] analyzed Human EEG response to Transcranial Magnetic Stimulation; Liang et al. [2005] analyzed the visual evoked potentials of macaque performing visual spatial attention task. Epidemiology: Cummings et al. [2004] applied the EMD method to extract a 3-year-periodic mode embedded in Dengue Fever outbreak time series recorded in Thailand and assessed the travelling speed of Dengue Fever outbreaks. Yang et al. [2010] applied the EMD method to delineate sub-components of a variety of neuropsychiatric epidemiological time series, including the association between seasonal effect of Google search for depression [2010], association between suicide and air pollution in Taipei City [2011], and association between cold front and incidence of migraine in Taipei city [2011]. Chemistry and chemical engineering: Phillips et al. [2003] investigated a conformational change in Brownian dynamics and molecular dynamics simulations using a comparative analysis of HHT and wavelet methods. Wiley et al. [2004] used HHT to investigate the effect of reversible digitally filtered molecular dynamics which can enhance or suppress specific frequencies of motion. Montesinos et al. [2002] applied HHT to signals obtained from BWR neuron stability. Financial applications: Huang et al. [2003b] applied HHT to nonstationary financial time series and used a weekly mortgage rate data. Image processing: Hariharan et al. [2006] applied EMD to image fusion and enhancement. Chang et al. [2009] applied an improved EMD to iris recognition, which reported a 100% faster in computational speed without losing accuracy than the original EMD. Atmospheric turbulence: Hong et al. [2010] applied HHT to turbulence data observed in the stable boundary layer to separate turbulent and non-turbulent motions. Scaling processes with intermittency correction: Huang et al. [2008] has generalized the HHT into arbitrary order to take the intermittency correction of scaling processes into account, and applied this HHT-based method to hydrodynamic turbulence data collected in laboratory experiment,; daily river discharge,; Lagrangian single particle statistics from direct numerical simulation,; Tan et al., [2014], vorticity field of two dimensional turbulence,; Qiu et al.[2016], two dimensional bacterial turbulence,; Li & Huang [2014], China stock market,; Calif et al. [2013], solar radiation. A source code to realize the arbitrary order Hilbert spectral analysis can be found at . Meteorological and atmospheric applications: Salisbury and Wimbush [2002], using Southern Oscillation Index data, applied the HHT technique to determine whether the Sphere of influence data are sufficiently noise free that useful predictions can be made and whether future El Nino southern oscillation events can be predicted from SOI data. Pan et al. [2002] used HHT to analyze satellite scatterometer wind data over the northwestern Pacific and compared the results to vector empirical orthogonal function results. 
Ocean engineering: Schlurmann [2002] introduced the application of HHT to characterize nonlinear water waves from two different perspectives, using laboratory experiments. Veltcheva [2002] applied HHT to wave data from nearshore sea. Larsen et al. [2004] used HHT to characterize the underwater electromagnetic environment and identify transient manmade electromagnetic disturbances. Seismic studies: Huang et al. [2001] used HHT to develop a spectral representation of earthquake data. Chen et al. [2002a] used HHT to determine the dispersion curves of seismic surface waves and compared their results to Fourier-based time-frequency analysis. Shen et al. [2003] applied HHT to ground motion and compared the HHT result with the Fourier spectrum. Solar physics: Nakariakov et al. [2010] used EMD to demonstrate the triangular shape of quasi-periodic pulsations detected in the hard X-ray and microwave emission generated in solar flares. Barnhart and Eichinger [2010] used HHT to extract the periodic components within sunspot data, including the 11-year Schwabe, 22-year Hale, and ~100-year Gleissberg cycles. They compared their results with traditional Fourier analysis. Structural applications: Quek et al. [2003] illustrate the feasibility of the HHT as a signal processing tool for locating an anomaly in the form of a crack, delamination, or stiffness loss in beams and plates based on physically acquired propagating wave signals. Using HHT, Li et al. [2003] analyzed the results of a pseudodynamic test of two rectangular reinforced concrete bridge columns. Structural health monitoring: Pines and Salvino [2002] applied HHT in structural health monitoring. Yang et al. [2004] used HHT for damage detection, applying EMD to extract damage spikes due to sudden changes in structural stiffness. Yu et al. [2003] used HHT for fault diagnosis of roller bearings. System identification: Chen and Xu [2002] explored the possibility of using HHT to identify the modal damping ratios of a structure with closely spaced modal frequencies and compared their results to FFT. Xu et al. [2003] compared the modal frequencies and damping ratios in various time increments and different winds for one of the tallest composite buildings in the world. Speech recognition: Huang and Pan [2006] have used the HHT for speech pitch determination. Astroparticle physics : Bellini et al. [2014] (Borexino collaboration), Measurement of the seasonal modulation of the solar neutrino fluxes with Borexino experiment, Phys. Rev. D 89, 112007 2014 Limitations Chen and Feng [2003] proposed a technique to improve the HHT procedure. The authors noted that the EMD is limited in distinguishing different components in narrow-band signals. The narrow band may contain either (a) components that have adjacent frequencies or (b) components that are not adjacent in frequency but for which one of the components has a much higher energy intensity than the other components. The improved technique is based on beating-phenomenon waves. Datig and Schlurmann [2004] conducted a comprehensive study on the performance and limitations of HHT with particular applications to irregular water waves. The authors did extensive investigation into the spline interpolation. The authors discussed using additional points, both forward and backward, to determine better envelopes. They also performed a parametric study on the proposed improvement and showed significant improvement in the overall EMD computations. 
The authors noted that HHT is capable of differentiating between time-variant components from any given data. Their study also showed that HHT was able to distinguish between riding and carrier waves. Huang and Wu [2008] reviewed applications of the Hilbert–Huang transformation emphasizing that the HHT theoretical basis is purely empirical, and noting that "one of the main drawbacks of EMD is mode mixing". They also outline outstanding open problems with HHT, which include: End effects of the EMD, Spline problems, Best IMF selection and uniqueness. Although the ensemble EMD (EEMD) may help mitigate the latter. End effect End effect occurs at the beginning and end of the signal because there is no point before the first data point and after the last data point to be considered together. However, in most cases, these endpoints are not the extreme value of the signal. Therefore, when doing the EMD process of the HHT, the extreme envelope will diverge at the endpoints and cause significant errors. This error distorts the IMF waveform at its endpoints. Furthermore, the error in the decomposition result accumulates through each repetition of the sifting process. When computing the instantaneous frequency and amplitude of IMFs, Fast Fourier Transform (FFT) result may cause Gibbs phenomenon and frequency leakage, leading to information loss. Here are several methods are proposed to solve the end effect in HHT: 1. Characteristic wave extending method This method leverages the inherent variation trend of the signal to extend itself, resulting in extensions that closely resemble the characteristics of the original data. Waveform matching extension : This extension is based on the assumption that similar waveforms repeat themselves within the signal. Therefore, a triangular waveform best matching the signal's boundary is identified within the signal's waveform. Local values within the signal's boundary can then be predicted based on the corresponding local values of the triangular waveform. Mirror extending method: Many signals exhibit internal repetition patterns. Leveraging this characteristic, the mirror extension method appends mirrored copies of the original signal to its ends. This simple and efficient approach significantly improves the accuracy of Intrinsic Mode Functions (IMFs) for periodic signals. However, it is not suitable for non-periodic signals and can introduce side effects. Several alternative strategies have been proposed to address these limitations 2. Data extending method design and compute some needed parameters from the original signal for building a particular mathematical model. After that, the model predicts the trend of the two endpoints. Support vector regression machine (SVRM) prediction : This method utilizes machine learning techniques to tackle the end effect in HHT. Its advantages are adaptive, flexible, highly accurate, and effective for both periodic and non-periodic signals. Although computational complexity can be a concern, disregarding this factor reveals SVRM as a robust and effective solution for mitigating the end effect in HHT. Autoregressive (AR) model : By formulating the input-output relationship as linear equations with time-varying coefficients, AR modeling enables statistical prediction of the missing values at the signal's endpoints. This method requires minimal computational resources and proves particularly effective for analyzing stationary signals. 
However, its accuracy diminishes for non-stationary signals, and the selection of an appropriate model order can significantly impact its effectiveness. Neural network prediction: Leveraging the power of neural network learning, these methods offer a versatile and robust approach to mitigating the end effect in HHT. Various network architectures, including RBF-NN and GRNN , have emerged, demonstrating their ability to capture complex relationships within the signal and learn from large datasets. Mode mixing problem Mode mixing problem happens during the EMD process. A straightforward implementation of the sifting procedure produces mode mixing due to IMF mode rectification. Specific signals may not be separated into the same IMFs every time. This problem makes it hard to implement feature extraction, model training, and pattern recognition since the feature is no longer fixed in one labeling index. Mode mixing problem can be avoided by including an intermittence test during the HHT process. Masking Method Source: The masking method improves EMD by allowing for the separation of similar frequency components through the following steps: Construction of masking signal: Construct masking signal from the frequency information of the original data, . This masking signal is designed to prevent lower-frequency components from IMFs obtained through EMD. Perform EMD with masking signal: EMD is again performed on the modified signal x+(n) = x(n) + s(n) to obtain the IMF z+(n), and similarly, on x-(n) = x(n) - s(n) to obtain the IMF z-(n). The IMF is then defined as z(n) = (z+(n) + z-(n))/2 . Separation of Components: By appropriately choosing the masking signal frequency, components with similar frequencies can be separated. The masking signal prevents mode mixing, allowing EMD to distinguish between closely spaced frequency components. Error Minimization: The choice of parameters for the masking signal, such as amplitude, will affect the performance of the algorithm. The optimal choice of amplitude depends on the frequencies Overall, the masking method enhances EMD by providing a means to prevent mode mixing, improving the accuracy and applicability of EMD in signal analysis Ensemble empirical mode decomposition (EEMD) Source: EEMD adds finite amplitude white noise to the original signal. After that, decompose the signal into IMFs using EMD. The processing steps of EEMD are developed as follows: Add finite amplitude white noise to the original signal. Decompose the noisy signal into IMFs using EMD. Repeat steps 1 and 2 multiple times to create an ensemble of IMFs. Calculate the mean of each IMF across the ensemble to obtain the final IMF components. The effects of the decomposition using the EEMD are that the added white noise series cancel each other(or fill all the scale space uniformly). The noise also enables the EMD method to be a truly dyadic filter bank for any data, which means that a signal of a similar scale in a noisy data set could be contained in one IMF component, significantly reducing the chance of mode mixing. This approach preserves the physical uniqueness of decomposition and represents a major improvement over the EMD method. Comparison with other transforms See also Hilbert transform Hilbert spectral analysis Hilbert spectrum Instantaneous frequency Multidimensional empirical mode decomposition Nonlinear Wavelet transform Fourier transform Signal envelope References Signal processing Telecommunication theory
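Referring back to the EEMD steps listed above, a minimal sketch of the ensemble averaging is given below; the noise amplitude, ensemble size, and the reliance on an external emd() routine (such as the sketch given earlier in this article) are illustrative assumptions of this sketch.

import numpy as np

def eemd(x, emd, noise_std=0.2, ensemble_size=100, max_imfs=8, seed=0):
    """Ensemble EMD: average the IMFs obtained from noise-added copies of x.

    `emd` is any routine returning (list_of_imfs, residue)."""
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    acc = np.zeros((max_imfs, len(x)))
    counts = np.zeros(max_imfs)
    for _ in range(ensemble_size):
        noisy = x + noise_std * np.std(x) * rng.standard_normal(len(x))
        imfs, _ = emd(noisy)
        for k, imf in enumerate(imfs[:max_imfs]):
            acc[k] += imf                      # accumulate IMF k across the ensemble
            counts[k] += 1
    return [acc[k] / counts[k] for k in range(max_imfs) if counts[k] > 0]

Averaging over the ensemble cancels the added white noise while keeping components of similar scale together in one IMF, which is the mode-mixing improvement described above.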
Hilbert–Huang transform
Technology,Engineering
5,017
1,815,224
https://en.wikipedia.org/wiki/Coherent%20duality
In mathematics, coherent duality is any of a number of generalisations of Serre duality, applying to coherent sheaves, in algebraic geometry and complex manifold theory, as well as some aspects of commutative algebra that are part of the 'local' theory. The historical roots of the theory lie in the idea of the adjoint linear system of a linear system of divisors in classical algebraic geometry. This was re-expressed, with the advent of sheaf theory, in a way that made an analogy with Poincaré duality more apparent. Then according to a general principle, Grothendieck's relative point of view, the theory of Jean-Pierre Serre was extended to a proper morphism; Serre duality was recovered as the case of the morphism of a non-singular projective variety (or complete variety) to a point. The resulting theory is now sometimes called Serre–Grothendieck–Verdier duality, and is a basic tool in algebraic geometry. A treatment of this theory, Residues and Duality (1966) by Robin Hartshorne, became a reference. One concrete spin-off was the Grothendieck residue. To go beyond proper morphisms, as for the versions of Poincaré duality that are not for closed manifolds, requires some version of the compact support concept. This was addressed in SGA2 in terms of local cohomology, and Grothendieck local duality; and subsequently. The Greenlees–May duality, first formulated in 1976 by Ralf Strebel and in 1978 by Eben Matlis, is part of the continuing consideration of this area. Adjoint functor point of view While Serre duality uses a line bundle or invertible sheaf as a dualizing sheaf, the general theory (it turns out) cannot be quite so simple. (More precisely, it can, but at the cost of imposing the Gorenstein ring condition.) In a characteristic turn, Grothendieck reformulated general coherent duality as the existence of a right adjoint functor , called twisted or exceptional inverse image functor, to a higher direct image with compact support functor . Higher direct images are a sheafified form of sheaf cohomology in this case with proper (compact) support; they are bundled up into a single functor by means of the derived category formulation of homological algebra (introduced with this case in mind). If is proper, then is a right adjoint to the inverse image functor . The existence theorem for the twisted inverse image is the name given to the proof of the existence for what would be the counit for the comonad of the sought-for adjunction, namely a natural transformation , which is denoted by (Hartshorne) or (Verdier). It is the aspect of the theory closest to the classical meaning, as the notation suggests, that duality is defined by integration. To be more precise, exists as an exact functor from a derived category of quasi-coherent sheaves on , to the analogous category on , whenever is a proper or quasi projective morphism of noetherian schemes, of finite Krull dimension. From this the rest of the theory can be derived: dualizing complexes pull back via , the Grothendieck residue symbol, the dualizing sheaf in the Cohen–Macaulay case. In order to get a statement in more classical language, but still wider than Serre duality, Hartshorne (Algebraic Geometry) uses the Ext functor of sheaves; this is a kind of stepping stone to the derived category. 
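Stated in the standard formulation (supplied here as a supplement, not a quotation from any particular reference), for a proper morphism f : X -> Y of noetherian schemes the adjunction described above and its counit can be written as:

\[
\operatorname{Hom}_{D(\mathrm{QCoh}\,Y)}\bigl(Rf_{*}\mathcal{F},\,\mathcal{G}\bigr)\;\cong\;\operatorname{Hom}_{D(\mathrm{QCoh}\,X)}\bigl(\mathcal{F},\,f^{!}\mathcal{G}\bigr),
\qquad
\mathrm{Tr}_{f}\colon\; Rf_{*}\,f^{!}\mathcal{G}\;\longrightarrow\;\mathcal{G},
\]

where the trace map Tr_f is the counit playing the role of "integration" mentioned above.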
The classical statement of Grothendieck duality for a projective or proper morphism of noetherian schemes of finite dimension, found in Hartshorne (Residues and duality) is the following quasi-isomorphism for a bounded above complex of -modules with quasi-coherent cohomology and a bounded below complex of -modules with coherent cohomology. Here the 's are sheaves of homomorphisms. Construction of the f! pseudofunctor using rigid dualizing complexes Over the years, several approaches for constructing the pseudofunctor emerged. One quite recent successful approach is based on the notion of a rigid dualizing complex. This notion was first defined by Van den Bergh in a noncommutative context. The construction is based on a variant of derived Hochschild cohomology (Shukla cohomology): Let be a commutative ring, and let be a commutative algebra. There is a functor which takes a cochain complex to an object in the derived category over . Assuming is noetherian, a rigid dualizing complex over relative to is by definition a pair where is a dualizing complex over which has finite flat dimension over , and where is an isomorphism in the derived category . If such a rigid dualizing complex exists, then it is unique in a strong sense. Assuming is a localization of a finite type -algebra, existence of a rigid dualizing complex over relative to was first proved by Yekutieli and Zhang assuming is a regular noetherian ring of finite Krull dimension, and by Avramov, Iyengar and Lipman assuming is a Gorenstein ring of finite Krull dimension and is of finite flat dimension over . If is a scheme of finite type over , one can glue the rigid dualizing complexes that its affine pieces have, and obtain a rigid dualizing complex . Once one establishes a global existence of a rigid dualizing complex, given a map of schemes over , one can define , where for a scheme , we set . Dualizing Complex Examples Dualizing Complex for a Projective Variety The dualizing complex for a projective variety is given by the complex Plane Intersecting a Line Consider the projective variety We can compute using a resolution by locally free sheaves. This is given by the complex Since we have that This is the complex See also Verdier duality Notes References Topological methods of algebraic geometry Sheaf theory Duality theories
Coherent duality
Mathematics
1,254
11,759,970
https://en.wikipedia.org/wiki/Trail%20ethics
Trail ethics define appropriate ranges of behavior for hikers on a public trail. It is similar to both environmental ethics and human rights in that it deals with the shared interaction of humans and nature. There are multiple agencies and groups that support and encourage ethical behavior on trails. Trail ethics applies to the use of trails, by pedestrians, dog walkers, hikers, backpackers, mountain bikers, equestrians, hunters, and off-road vehicles. Etiquette Sometimes conflicts can develop between different types of users of a trail or pathway. Etiquette has developed to minimize such interference. Examples include: When two groups meet on a steep trail, a custom has developed in some areas whereby the group moving uphill has the right-of-way. Trail users generally avoid making loud sounds, such as shouting or loud conversation, playing music, or the use of mobile phones. Trail users tend to avoid impacting on the land through which they travel. Users can avoid impact by staying on established trails, and durable surfaces, not picking plants, or disturbing wildlife, and carrying garbage out. The Leave No Trace movement offers a set of guidelines for low-impact hiking: "Leave nothing but footprints. Take nothing but photos. Kill nothing but time. Keep nothing but memories". The feeding of wild animals is dangerous and can cause harm to both the animals and to other people. Mountain bikers must yield to both hikers and riders on horses (equestrians), unless the trail is clearly designated and marked for bike-only travel. Hikers yield to equestrians. Trails in urban areas Some cities have worked to add pathways for pedestrians and cyclists. This can reduce the amount of vehicle traffic in busy urban areas, and make visiting downtown areas more pleasant, There can be difficulties when a path is used by people travelling at different speeds, such as pedestrians, joggers, and cyclists, and the appropriate etiquette is not observed. Off road vehicles In the US off-road vehicle use on public land has been criticized by some members of the government and environmental organizations including the Sierra Club and The Wilderness Society. They have noted several consequences of illegal ORV use such as pollution, trail damage, erosion, land degradation, possible species extinction, and habitat destruction which can leave hiking trails impassable. ORV proponents argue that legal use taking place under planned access along with the multiple environment and trail conservation efforts by ORV groups will mitigate these issues. Groups such as the Blue-ribbon Coalition advocate Treadlightly, which is the responsible use of public lands used for off-road activities. See also Tread Lightly! Leave No Trace "Rules of the Trail" (as applied in Mountain biking) Clean Trails Conservation ethic Environmental ethics References External links Clean Trails Trail Ethics - Ontario-based Codes Trail ethics are provided by: Leave No Trace, Inc. Trail Etiquette in the Age of Me Environmental ethics Ethics
Trail ethics
Environmental_science
586
2,129,702
https://en.wikipedia.org/wiki/Phototube
A phototube or photoelectric cell is a type of gas-filled or vacuum tube that is sensitive to light. Such a tube is more correctly called a 'photoemissive cell' to distinguish it from photovoltaic or photoconductive cells. Phototubes were previously more widely used but are now replaced in many applications by solid state photodetectors. The photomultiplier tube is one of the most sensitive light detectors, and is still widely used in physics research. Operating principles Phototubes operate according to the photoelectric effect: Incoming photons strike a photocathode, knocking electrons out of its surface, which are attracted to an anode. Thus current is dependent on the frequency and intensity of incoming photons. Unlike photomultiplier tubes, no amplification takes place, so the current through the device is typically of the order of a few microamperes. The light wavelength range over which the device is sensitive depends on the material used for the photoemissive cathode. A caesium-antimony cathode gives a device that is very sensitive in the violet to ultra-violet region with sensitivity falling off to blindness to red light. Caesium on oxidised silver gives a cathode that is most sensitive to infra-red to red light, falling off towards blue, where the sensitivity is low but not zero. Vacuum devices have a near constant anode current for a given level of illumination relative to anode voltage. Gas-filled devices are more sensitive, but the frequency response to modulated illumination falls off at lower frequencies compared to the vacuum devices. The frequency response of vacuum devices is generally limited by the transit time of the electrons from cathode to anode. Applications One major application of the phototube was the reading of optical sound tracks for projected films. Phototubes were used in a variety of light-sensing applications until some were superseded by photoresistors and photodiodes. References Optical devices Sensors Vacuum tubes
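For a rough sense of scale of the currents mentioned above, the photoemission relation (current = photon arrival rate × quantum efficiency × electron charge) can be evaluated directly; the quantum efficiency and incident optical power in this sketch are made-up example values, not figures from the article.

H = 6.626e-34         # Planck constant, J*s
C = 2.998e8           # speed of light, m/s
E_CHARGE = 1.602e-19  # elementary charge, C

def photocurrent(power_w, wavelength_m, quantum_efficiency):
    """Anode current = (photons per second) * QE * electron charge."""
    photon_energy = H * C / wavelength_m       # E = h*c/lambda
    photons_per_s = power_w / photon_energy
    return photons_per_s * quantum_efficiency * E_CHARGE

# 1 microwatt of 400 nm light at an assumed 5% quantum efficiency:
# photocurrent(1e-6, 400e-9, 0.05) -> roughly 1.6e-8 A, i.e. tens of nanoamperes,
# consistent with the microampere-or-less currents mentioned above.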
Phototube
Physics,Materials_science,Technology,Engineering
415
39,468,215
https://en.wikipedia.org/wiki/Laryngeal%20tube
The laryngeal tube (also known as the King LT) is an airway management device designed as an alternative to other airway management techniques such as mask ventilation, laryngeal mask airway, and tracheal intubation. This device can be inserted blindly through the oropharynx into the hypopharynx to create an airway during anaesthesia and cardiopulmonary resuscitation so as to enable mechanical ventilation of the lungs. Medical use Various studies have shown that insertion and use of the standard laryngeal tube are easy, providing a clear airway in the majority of cases. Comparative studies indicate that the standard laryngeal tube is generally as effective as the laryngeal mask airway, while some studies indicate that the Pro-seal laryngeal mask may be more effective than the standard laryngeal tube under controlled ventilation conditions in general anaesthesia. The indications and contraindications for use of the laryngeal tube are similar to those of the laryngeal mask airway and include use in general anaesthesia for minor surgical operations. Several studies describe the usefulness of the device in securing a difficult airway, even in cases where insertion of the laryngeal mask had failed. The double-lumen laryngeal tube-Suction II, with the possibility of placing a gastric tube, has been found to have distinct advantages over the standard laryngeal tube and has been recommended as a first-line device to secure the airway in emergency situations when direct laryngoscopy fails in neonates and infants. The laryngeal tube is also recommended for medical personnel not experienced in tracheal intubation, and as a rescue device when intubation has failed in adults. According to the manufacturer, the use of laryngeal tubes is contraindicated in people with an intact gag reflex, known oesophageal disease, and people who have ingested caustic substances. Description In its basic (standard) version, the laryngeal tube is made up of a tube with a larger balloon cuff in the middle (oropharyngeal cuff) and a smaller balloon cuff at the end (oesophageal cuff). The tube is kinked at an angle of 30–45° in the middle; the kink is located in the larger cuff. There are two apertures, located between the two cuffs, through which ventilation takes place. Both cuffs are inflated through a single small lumen line and pilot balloon. The cuffs are high-volume, low-pressure cuffs with inflating volumes ranging from 10 ml (size 0) to 90 ml (size 5). A large-bore syringe, which is marked with the required volume for each size, is used to inflate the cuffs. A cuff inflator can also be used, in which case the cuffs should be inflated to a pressure of 60 cm H2O. Three black lines on the tube indicate the depth of insertion when aligned with the teeth. History The laryngeal tube was developed in Germany and introduced to the European market by VBM Medizintechnik in the autumn of 1999. Since then the design has been modified several times. Currently four different models are used: the standard tube as single-use or reusable models and the modified tube (laryngeal tube-Suction II) as single-use or reusable models. The reusable models can be autoclaved up to 50 times, while the modified laryngeal tube (Suction) incorporates an extra lumen for inserting a gastric tube or suction system. There are six sizes of the laryngeal tube, ranging from newborn (size 0) to large adult (size 5). The connector of the tube is color-coded for each size. The different sizes are calibrated according to weight or height. 
The laryngeal tube was licensed for use during cardiopulmonary resuscitation in Japan in 2002, and approved for use in the United States by the Food and Drug Administration in 2003. The European Resuscitation Council, in its 2005 guidelines for advanced life support (ALS), accepts its use as an alternate airway device for medical personnel who are not experienced in tracheal intubation. See also Combitube Endotracheal tube Airtraq Double-lumen endo-bronchial tube References Airway management Medical equipment Emergency medical equipment Emergency medicine 1999 introductions
Laryngeal tube
Biology
943
15,301,222
https://en.wikipedia.org/wiki/Monster%20%28physics%29
A monster, in quantum physics, is an arrangement of matter that has maximum disorder. The high-entropy state of monsters has been theorized as being responsible for the high entropy of black holes; while the likelihood of any given star entering a "monster" state while collapsing is small, quantum mechanics takes into account all possible outcomes so the monster's entropy has to be taken into account when calculating black hole entropy. References Entropy
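For context (this formula is not given in the article above, but it is the standard expression for the black-hole entropy it refers to), the entropy of a black hole is conventionally quantified by the Bekenstein–Hawking relation

$S_{\mathrm{BH}} = \dfrac{k_{B}\, c^{3} A}{4 G \hbar}$

where A is the area of the event horizon, k_B is the Boltzmann constant, G is the gravitational constant, ħ is the reduced Planck constant, and c is the speed of light. The claim above is that accounting for an entropy this large requires summing over all configurations of the collapsing matter, including exceedingly improbable "monster" ones.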
Monster (physics)
Physics,Chemistry,Mathematics
87
67,093,341
https://en.wikipedia.org/wiki/CD-73%C2%B0375
CD-73°375 is a binary star located in the constellation Volans about away. The two components, HR 2979 and HR 2980, are separated by two arc-seconds. The pair has a combined apparent magnitude of 6.34. It has a radial velocity of about , which means it is drifting away from the Solar System. Properties The two stars making up CD-73°375 are both B9 subgiants with almost identical properties. HR 2979 is generally designated as the primary because of its higher mass, although HR 2980 is marginally brighter at magnitude 7.02. They are apart and have an assumed orbital period of 3,760 years. Each star has a mass about three and a half times the Sun's and a temperature of about . References Volans B-type subgiants Binary stars 2979 80 62153 4 036914 Durchmusterung objects
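As a rough consistency check (my own arithmetic, not taken from the article): applying Kepler's third law in solar units to the figures quoted above — a total mass of about 7 solar masses and an assumed period of 3,760 years — implies a semi-major axis of roughly

$a = \left( M_{\mathrm{total}}\, P^{2} \right)^{1/3} = \left( 7 \times 3760^{2} \right)^{1/3} \approx 460 \ \text{AU},$

with a in astronomical units, P in years, and the total mass in solar masses.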
CD-73°375
Astronomy
187
5,549,155
https://en.wikipedia.org/wiki/Pictive
PICTIVE (Plastic Interface for Collaborative Technology Initiative through Video Exploration) is a participatory design method used to develop graphical user interfaces. It was developed at Bellcore around 1990. Usability Human–computer interaction
Pictive
Engineering
45
49,105,793
https://en.wikipedia.org/wiki/Healthcare%20engineering
In its succinct definition, healthcare engineering is "engineering involved in all aspects of healthcare". The term engineering in this definition covers all engineering disciplines such as biomedical, chemical, civil, computer, electrical, environmental, hospital architecture, industrial, information, materials, mechanical, software, and systems engineering. Based on the definition of healthcare, a more elaborated definition is: "Healthcare engineering is engineering involved in all aspects of the prevention, diagnosis, treatment, and management of illness, as well as the preservation and improvement of physical and mental health and well-being, through the services offered to humans by the medical and allied health professions". Overview Almost all engineering disciplines (e.g., biomedical, chemical, civil, computer, electrical, environmental, industrial, information, materials, mechanical, software, and systems engineering) have made significant contributions and brought about advances in healthcare. Contributions have also been made by healthcare professionals (e.g., physicians, dentists, nurses, pharmacists, allied health professionals, and health scientists) who are engaged in supporting, improving, and/or advancing healthcare through engineering approaches. Healthcare engineering is expected to play a role of growing importance as healthcare continues to be one of the world's largest and fastest-growing industries where engineering is a major factor of advancement through creating, developing, and implementing cutting-edge devices, systems, and procedures attributed to breakthroughs in electronics, information technology, miniaturization, material science, optics, and other fields, to address challenges associated with issues such as the continued rise in healthcare costs, the quality and safety of healthcare, care of the aging population, management of common diseases, the impact of high technology, increasing demands for regulatory compliance, risk management, and reducing litigation risk. As the demand for engineers continues to increase in healthcare, healthcare engineering will be recognized as the most important profession where engineers make major contributions directly benefiting human health. History The American Society of Healthcare Engineering (ASHE), established in 1962, was one of the first to publicize the term healthcare engineering. ASHE, as well as its many local affiliate societies, is devoted to the health care physical environment, including design, building, maintenance, and operation of hospitals and other health care facilities, which represents only one sector of engineers' activities in healthcare. The term healthcare engineers first appeared in the scientific literature in 1989, where the critical role of engineers in the healthcare delivery system was discussed. A number of academic programs have adopted the name healthcare engineering (e.g., Indiana University, Northwestern University, Purdue University, Texas Tech University, University of Illinois, University of Michigan, University of North Carolina, University of Southern California, University of Toronto), although the description or definition of the term by these programs varies, as each institution has designed its program based on its own distinctive interest, strength, and focus. The first scholarly journal dedicated to healthcare engineering, Journal of Healthcare Engineering, was launched in 2010 by Dr. 
Ming-Chien Chyu, focusing on engineering involved in all aspects of healthcare delivery processes and systems. In the meantime, a number of companies with various foci have adopted healthcare engineering in their names. Healthcare engineering was first defined in a white paper published in 2015 by Chyu and 40 co-authors who are active members of and contributors to the healthcare engineering community around the world. The white paper was reviewed by more than 280 reviewers, including members of US National Academy of Engineering, engineering deans of the world's top universities, administrators and faculty members of healthcare engineering academic programs, leaders of healthcare/medical and engineering professional societies and associations, leaders of healthcare industry and government, and healthcare engineering professionals from around the world. This white paper documents a clear, rigorous definition of healthcare engineering as an academic discipline, an area of research, a field of specialty, and a profession, and is expected to raise the status and visibility of the field, help students choose healthcare engineering-related fields as majors, help engineers and healthcare professionals choose healthcare engineering as a profession, define healthcare engineering as a specialty area for the research community, funding agencies, and conference or event organizers, help job-search databases properly categorize healthcare engineering jobs, help healthcare employers recruit from the right pool of expertise, bring academic administrators' attention to healthcare engineering in considering new program initiations, help governments and institutions of different levels put healthcare engineering into perspective for policy making, budgeting, and other purposes, and help publishers and librarians categorize literature related to healthcare engineering. Based on this white paper, a global, non-profit professional organization, Healthcare Engineering Alliance Society (HEALS), was founded by Chyu in 2015, which focuses on improving and advancing all aspects of healthcare through engineering approaches. The white paper has been cited in numerous scientific papers. Purpose The purpose of healthcare engineering is to improve human health and well-being through engineering approaches. Scope Healthcare engineering covers the following two major fields: Engineering for healthcare intervention: engineering involved in the development or provision of any treatment, preventive care, or test that a person could take or undergo to improve health or to help with a particular health problem. Engineering for healthcare systems: engineering involved in the complete network of organizations, agencies, facilities, information systems, management systems, financing mechanisms, logistics, and all trained personnel engaged in delivering healthcare within a geographical area. Healthcare engineering subjects Updated ramifications and lists of topics within individual subjects are available from authoritative sources such as the leading societies/associations of individual subjects and government organizations. 
(I) Engineering for healthcare intervention Fundamentals Biomechanics Biomaterials Biomedical instruments Medical devices Engineering for surgery Medical imaging Organ transplantation Artificial organs Drug delivery Genetic engineering Engineering for diagnosis/detection Health informatics, information engineering and decision support Disinfection engineering Engineering for disease prevention, diagnosis, treatment, and management Cardiovascular disease Cancer Alzheimer's disease Diabetes Respiratory disease Obesity Degenerative diseases Others Engineering for patient care Patient safety Critical care Neonatal care Home healthcare Elderly care Patient monitoring Health disparities Disaster management Engineering for medical specialties Allergy and immunology Anesthesiology Cardiology Critical care medicine Emergency medicine Endocrinology Gastroenterology General surgery Geriatrics Infectious disease Neurology Neurosurgery Nuclear medicine Occupational medicine Oncology Ophthalmology Orthopedics Pathology Pediatrics Physical medicine and rehabilitation Plastic, reconstructive and aesthetic surgery Public health Pulmonology Radiology Radiotherapy Rheumatology Sports medicine Urology Vascular medicine Others Engineering for dental specialties Endodontics Oral and maxillofacial pathology, radiology, and surgery Orthodontics and dentofacial orthopedics Periodontics Prosthodontics Others Engineering for allied health specialties Audiology Clinical laboratory science Environmental health Occupational therapy Orthotics and prosthetics Physical therapy Rehabilitation Respiratory therapy Speech therapy Others Engineering for nursing – including nursing in all related areas Engineering for pharmacy Pharmaceutical design and development Bio-/pharmaceutical manufacturing Pharmaceutical devices Pharmaceutical testing Pharmaceutical information systems Clinical science Regulatory compliance (II) Engineering for healthcare systems Healthcare system management, improvement and reform Quality, cost, efficiency, effectiveness Operations research and systems engineering Lean, Six Sigma, total quality management Human factors High reliability organization Resilience engineering Rural health Healthcare information systems Electronic health record eHealth mHealth Telemedicine Wireless technology Data mining and big data Information security Healthcare facilities Healthcare infrastructure Healthcare energy systems Healthcare sustainability and green design Environmental health and safety Healthcare policy (III) Others Healthcare engineering education and training Collegiate education Continued education Future of healthcare Synergy Healthcare engineering features a synergy among the healthcare and medical sectors of all engineering disciplines and the engineering and technology sectors of the health sciences, as depicted in Figure 1. Professional Healthcare engineering professionals are mainly (a) engineers from all engineering disciplines such as biomedical, chemical, civil, computer, electrical, environmental, industrial, information, materials, mechanical, software, and systems engineering, and (b) healthcare professionals such as physicians, dentists, nurses, pharmacists, allied health professionals, and health scientists, who are engaged in supporting, improving, and/or advancing any aspect of healthcare through engineering approaches, in accordance with the above definition of healthcare engineering. 
Since some healthcare professionals engaged in healthcare engineering may not be considered to be "engineers", "healthcare engineering professional" is a more appropriate term than "Healthcare Engineer". Venue Healthcare engineering professionals generally perform their jobs in, with, or for the healthcare industry. Major sectors and subsectors of healthcare industry along with healthcare engineering professionals' contributions are summarized in Table 2. Education and training Engineers from almost all engineering disciplines (such as biomedical, chemical, civil, computer, electrical, environmental, industrial, information, materials, mechanical, software, and systems engineering) are always in demand in healthcare. It is a common misconception that only engineers with a background in biomedical engineering, clinical engineering, or related areas may work in healthcare. However, there is a need for courses and certificate type of programs that prepare non-biomedical engineering students and practicing engineers for service in healthcare. On the other hand, healthcare professionals (physicians, dentists, nurses, pharmacists, allied health professionals, etc.) may benefit from training to apply engineering to their practice, problem solving, and advancing healthcare. Due to the rapid advance of technology, continuing education plays a crucial role in ensuring healthcare engineering professionals' continued competence. See also Health systems engineering Health systems science References Health care Engineering disciplines Health care occupations
Healthcare engineering
Engineering
1,921
265,112
https://en.wikipedia.org/wiki/Improvised%20explosive%20device
An improvised explosive device (IED) is a bomb constructed and deployed in ways other than in conventional military action. It may be constructed of conventional military explosives, such as an artillery shell, attached to a detonating mechanism. IEDs are commonly used as roadside bombs, or homemade bombs. The term "IED" was coined by the British Army during the Northern Ireland conflict to refer to booby traps made by the IRA, and entered common use in the U.S. during the Iraq War. IEDs are generally utilized in terrorist operations or in asymmetric unconventional warfare or urban warfare by insurgent guerrillas or commando forces in a theatre of operations. In the Iraq War (2003–2011), insurgents used IEDs extensively against U.S.-led forces, and by the end of 2007, IEDs were responsible for approximately 63% of coalition deaths in Iraq. They were also used in Afghanistan by insurgent groups, and caused over 66% of coalition casualties in the 2001–2021 Afghanistan War. IEDs were also used frequently by the Liberation Tigers of Tamil Eelam (LTTE) in Sri Lanka during the Sri Lankan Civil War, by the Chechen insurgency following the Second Chechen War, and by Ambazonian separatists in the ongoing Anglophone Crisis. Background An IED is a bomb fabricated in an improvised manner incorporating destructive, lethal, noxious, pyrotechnic, or incendiary chemicals and designed to destroy or incapacitate personnel or vehicles. In some cases, IEDs are used to distract, disrupt, or delay an opposing force, facilitating another type of attack. IEDs may incorporate military or commercially sourced explosives, and often combine both types, or they may otherwise be made with homemade explosives (HME). An HME lab refers to a Homemade Explosive Lab, or the physical location where the devices are crafted. An IED has five components: a switch (activator), an initiator (fuse), container (body), charge (explosive), and a power source (battery). An IED designed for use against armoured targets such as personnel carriers or tanks will be designed for armour penetration, by using a shaped charge that creates an explosively formed penetrator. IEDs are extremely diverse in design and may contain many types of initiators, detonators, penetrators, and explosive loads. Antipersonnel IEDs typically also contain fragmentation-generating objects such as nails, ball bearings or even small rocks to cause wounds at greater distances than blast pressure alone could. In the conflicts of the 21st century, anti-personnel improvised explosive devices (IED) have partially replaced conventional or military landmines as the source of injury to dismounted (pedestrian) soldiers and civilians. These injuries were reported in BMJ Open to be far worse with IEDs than with landmines resulting in multiple limb amputations and lower body mutilation. This combination of injuries has been given the name "Dismounted Complex Blast Injury" and is thought to be the worst survivable injury ever seen in war. IEDs are triggered by various methods, including remote control, infrared or magnetic triggers, pressure-sensitive bars or trip wires (victim-operated). In some cases, multiple IEDs are wired together in a daisy chain to attack a convoy of vehicles spread out along a roadway. IEDs made by inexperienced designers or with substandard materials may fail to detonate, and in some cases, they detonate on either the maker or the placer of the device. 
Some groups, however, have been known to produce sophisticated devices constructed with components scavenged from conventional munitions and standard consumer electronics components, such as mobile phones, consumer-grade two-way radios, washing machine timers, pagers, or garage door openers. The sophistication of an IED depends on the training of the designer and the tools and materials available. IEDs may use artillery shells or conventional high-explosive charges as their explosive load as well as homemade explosives. However, the threat exists that toxic chemical, biological, or radioactive (dirty bomb) material may be added to a device, thereby creating other life-threatening effects beyond the shrapnel, concussive blasts and fire normally associated with bombs. Chlorine liquid has been added to IEDs in Iraq, producing clouds of chlorine gas. A vehicle-borne IED, or VBIED, is a military term for a car bomb or truck bomb, but the platform can be any type of transportation such as a bicycle, motorcycle, donkey, etc. They are typically employed by insurgents, in particular ISIS, and can carry a relatively large payload. They can also be detonated from a remote location. VBIEDs can create additional shrapnel through the destruction of the vehicle itself and use vehicle fuel as an incendiary weapon. A vehicle bomb driven to its target and detonated by an occupant is known as a suicide VBIED (SVBIED). Of increasing popularity among insurgent forces in Iraq is the house-borne IED, or HBIED, which exploits the common military practice of clearing houses: insurgents rig an entire house to detonate and collapse shortly after a clearing squad has entered. By warhead The Dictionary of Military and Associated Terms (JCS Pub 1-02) includes two definitions for improvised devices: improvised explosive device (IED) and improvised nuclear device (IND). These definitions address the nuclear and explosive elements of CBRNe. That leaves chemical, biological and radiological undefined. Four definitions have been created to build on the structure of the JCS definition. Terms have been created to standardize the language of first responders and members of the military and to correlate the operational picture. Explosive A device placed or fabricated in an improvised manner incorporating destructive, lethal, noxious, pyrotechnic, or incendiary chemicals and designed to destroy, incapacitate, harass, or distract. It may incorporate military stores, but is normally devised from non-military components. Explosively formed penetrator/projectiles (EFPs) IEDs have been deployed in the form of explosively formed projectiles (EFP), a special type of shaped charge that is effective at long standoffs from the target (50 meters or more); however, they are not accurate at long distances. This is because of how they are produced. The large "slug" projected from the explosion has no stabilization, because it has no tail fins and does not spin like a bullet from a rifle. Without this stabilization, the trajectory cannot be accurately determined beyond 50 meters. An EFP is essentially a cylindrical shaped charge with a machined concave metal disc (often copper) in front, pointed inward. The force of the shaped charge turns the disc into a high-velocity slug, capable of penetrating the armor of most vehicles in Iraq. 
Directionally focused charges Directionally focused charges (also known as directionally focused fragmentary charges depending on the construction) are very similar to EFPs, with the main difference being that the top plate is usually flat and not concave. It also is not made with machined copper but much cheaper cast or cut metal. When made for fragmentation, the contents of the charge are usually nuts, bolts, ball bearings and other similar shrapnel products and explosive. If it only consists of the flat metal plate, it is known as a platter charge, serving a similar role as an EFP with reduced effect but easier construction. Chemical A device incorporating the toxic attributes of chemical materials designed to result in the dispersal of toxic chemical materials for the purpose of creating a primary patho-physiological toxic effect (morbidity and mortality), or secondary psychological effect (causing fear and behavior modification) on a larger population. Such devices may be fabricated in a completely improvised manner or may be an improvised modification to an existing weapon. Biological A device incorporating biological materials designed to result in the dispersal of vector borne biological material for the purpose of creating a primary patho-physiological toxic effect (morbidity and mortality), or secondary psychological effect (causing fear and behavior modification) on a larger population. Incendiary A device making use of exothermic chemical reactions designed to result in the rapid spread of fire for the purpose of creating a primary patho-physiological effect (morbidity and mortality), or secondary psychological effect (causing fear and behavior modification) on a larger population or it may be used with the intent of gaining a tactical advantage. Such devices may be fabricated in a completely improvised manner or may be an improvised modification to an existing weapon. A common type of this is the Molotov cocktail. Radiological A speculative device incorporating radioactive materials designed to result in the dispersal of radioactive material for the purpose of area denial and economic damage, and/or for the purpose of creating a primary patho-physiological toxic effect (morbidity and mortality), or secondary psychological effect (causing fear and behavior modification) on a larger population. Such devices may be fabricated in a completely improvised manner or may be an improvised modification to an existing nuclear weapon. Also called a Radiological Dispersion Device (RDD) or "dirty bomb". Nuclear Improvised nuclear device of most likely gun-type or implosion-type. By delivery mechanism Car A vehicle may be laden with explosives, set to explode by remote control or by a passenger/driver, commonly known as a car bomb or vehicle-borne IED (VBIED, pronounced vee-bid). On occasion the driver of the car bomb may have been coerced into delivery of the vehicle under duress, a situation known as a proxy bomb. Distinguishing features are low-riding vehicles with excessive weight, vehicles with only one passenger, and ones where the interior of the vehicles look as if they have been stripped down and built back up. Car bombs can carry thousands of pounds of explosives and may be augmented with shrapnel to increase fragmentation. ISIS has used truck bombs with devastating effects. Boat (WBIED) Water-borne Improvised Explosive Devices (WBIED), i.e. boats carrying explosives, can be used against ships and areas connected to water. 
An early example of this type was the Japanese Shinyo suicide boats during World War II. The boats were filled with explosives and attempted to ram Allied ships, sometimes successfully, having sunk or severely damaged several American ships by war's end. Suicide bombers used a boat-borne IED to attack the USS Cole; US and UK troops have also been killed by boat-borne IEDs in Iraq. The Tamil Tigers Sea Tigers have also been known to use SWBIEDs during the Sri Lankan Civil War. WBIEDs have been used in the Red Sea. Animal Monkeys and war pigs were used as incendiaries around 1000 AD. More famously the "anti-tank dog" and "bat bomb" were developed during World War II. In recent times, a two-year-old child and seven other people were killed by explosives strapped to a horse in the town of Chita in Colombia. The carcasses of certain animals were also used to conceal explosive devices by the Iraqi insurgency. Collar IEDs strapped to the necks of farmers have been used on at least three occasions by guerrillas in Colombia, as a way of extortion. American pizza delivery man Brian Douglas Wells was killed in 2003 by an explosive fastened to his neck, purportedly under duress from the maker of the bomb. In 2011 a schoolgirl in Sydney, Australia had a suspected collar bomb attached to her by an attacker in her home. The device was removed by police after a ten-hour operation and proved to be a hoax. Suicide Suicide bombing usually refers to an individual wearing explosives and detonating them to kill others including themselves, the bomber will conceal explosives on and around their person, commonly using a vest, and will use a timer or some other trigger to detonate the explosives. The logic behind such attacks is the belief that an IED delivered by a human has a greater chance of achieving success than any other method of attack. In addition, there is the psychological impact of child soldiers prepared to deliberately sacrifice themselves for their cause. Surgically implanted In May 2012 American counter-terrorism officials leaked their acquisition of documents describing the preparation and use of surgically implanted improvised explosive devices. The devices were designed to evade detection. The devices were described as containing no metal, so they could not be detected by X-rays. Security officials referred to bombs being surgically implanted into suicide bombers' "love handles". According to the Daily Mirror UK security officials at MI-6 asserted that female bombers could travel undetected carrying the explosive chemicals in otherwise standard breast implants. The bomber would blow up the implanted explosives by injecting a chemical trigger. Robot Robots could also be used to carry explosives. First such documented case was during the aftermath of 2016 shooting of Dallas police officers when a bomb disposal robot was used to deliver explosives to kill Micah Xavier Johnson, who was hiding in a place inaccessible to police snipers. As well, drones carrying explosives were used in a suspected assassination attempt against Venezuelan president Nicolás Maduro in 2018. Tunnel ISIS and Al-Nusra have used bombs detonated in tunnels dug under targets. Improvised rocket In 2008, rocket-propelled IEDs, dubbed Improvised Rocket Assisted Munitions, Improvised Rocket Assisted Mortars and (IRAM) by the military, came to be employed in numbers against U.S. forces in Iraq. They have been described as propane tanks packed with explosives and powered by 107 mm rockets. 
They are similar to some Provisional IRA barrack buster mortars. New types of IRAMs including Volcano IRAM and Elephant Rockets, are used during the Syrian Civil War. Improvised mortar Improvised mortars have been used by many insurgent groups including during the civil war in Syria and Boko Haram insurgency. IRA used improvised mortars called barrack busters. Improvised artillery including hell cannons are used by rebel forces during Syrian Civil War. By trigger mechanism Wire Command-wire improvised, explosive devices (CWIED) use an electrical firing cable that affords the user complete control over the device right up until the moment of initiation. Radio The trigger for a radio-controlled improvised explosive device (RCIED) is controlled by radio link. The device is constructed so that the receiver is connected to an electrical firing circuit and the transmitter operated by the perpetrator at a distance. A signal from the transmitter causes the receiver to trigger a firing pulse that operates the switch. Usually the switch fires an initiator; however, the output may also be used to remotely arm an explosive circuit. Often the transmitter and receiver operate on a matched coding system that prevents the RCIED from being initiated by spurious radio frequency signals or jamming. An RCIED can be triggered from any number of different radio-frequency based mechanisms including handheld remote control transmitters, car alarms, wireless door bells, cell phones, pagers and portable two-way radios, including those designed for the UHF PMR446, FRS, and GMRS services. Mobile phone A radio-controlled IED (RCIED) incorporating a mobile phone that is modified and connected to an electrical firing circuit. Mobile phones operate in the UHF band in line of sight with base transceiver station (BTS) antennae sites. In the common scenario, receipt of a paging signal by phone is sufficient to initiate the IED firing circuit. Victim-operated Victim-operated improvised explosive devices (VOIED), also known as booby traps, are designed to function upon contact with a victim. VOIED switches are often well hidden from the victim or disguised as innocuous everyday objects. They are operated by means of movement. Switching methods include tripwire, pressure mats, spring-loaded release, push, pull or tilt. Common forms of VOIED include the under-vehicle IED (UVIED), improvised landmines, and mail bombs. Infrared The British accused Iran and Hezbollah of teaching Iraqi fighters to use infrared light beams to trigger IEDs. As the occupation forces became more sophisticated in interrupting radio signals around their convoys, the insurgents adapted their triggering methods. In some cases, when a more advanced method was disrupted, the insurgents regressed to using uninterruptible means, such as hard wires from the IED to detonator; however, this method is much harder to effectively conceal. It later emerged however, that these "advanced" IEDs were actually old IRA technology. The infrared beam method was perfected by the IRA in the early 1990s after it acquired the technology from a botched undercover British Army operation. Many of the IEDs being used against the invading coalition forces in Iraq were originally developed by the British Army who unintentionally passed the information on to the IRA. The IRA taught their techniques to the Palestine Liberation Organisation and the knowledge spread to Iraq. 
Counterefforts Counter-IED efforts are done primarily by military, law enforcement, diplomatic, financial, and intelligence communities and involve a comprehensive approach to countering the threat networks that employ IEDs, not just efforts to defeat the devices themselves. Detection and disarmament Because the components of these devices are being used in a manner not intended by their manufacturer, and because the method of producing the explosion is limited only by the science and imagination of the perpetrator, it is not possible to follow a step-by-step guide to detect and disarm a device that an individual has only recently developed. As such, explosive ordnance disposal (IEDD) operators must be able to fall back on their extensive knowledge of the first principles of explosives and ammunition, to try and deduce what the perpetrator has done, and only then to render it safe and dispose of or exploit the device. Beyond this, as the stakes increase and IEDs are emplaced not only to achieve the direct effect, but to deliberately target IEDD operators and cordon personnel, the IEDD operator needs to have a deep understanding of tactics to ensure they are neither setting up any of their team or the cordon troops for an attack, nor walking into one themselves. The presence of chemical, biological, radiological, or nuclear (CBRN) material in an IED requires additional precautions. As with other missions, the EOD operator provides the area commander with an assessment of the situation and of support needed to complete the mission. Military and law enforcement personnel from around the world have developed a number of render-safe procedures (RSPs) to deal with IEDs. RSPs may be developed as a result of direct experience with devices or by applied research designed to counter the threat. The supposed effectiveness of IED jamming systems, including vehicle- and personally-mounted systems, has caused IED technology to essentially regress to command-wire detonation methods. These are physical connections between the detonator and explosive device and cannot be jammed. However, these types of IEDs are more difficult to emplace quickly, and are more readily detected. Military forces and law enforcement from India, Canada, United Kingdom, Israel, Spain, and the United States are at the forefront of counter-IED efforts, as all have direct experience in dealing with IEDs used against them in conflict or terrorist attacks. From the research and development side, programs such as the new Canadian Unmanned Systems Challenge will bring student groups together to invent an unmanned device to both locate IEDs and pinpoint the insurgents. Historical use The fougasse was improvised for centuries, eventually inspiring factory-made land mines. Ernst Jünger mentions in his war memoir the systematic use of IEDs and booby traps to cover the retreat of German troops at the Somme region during World War I. Another early example of coordinated large-scale use of IEDs was the Belarusian Rail War launched by Belarusian guerrillas against the Germans during World War II. Both command-detonated and delayed-fuse IEDs were used to derail thousands of German trains during 1943–1944. Afghanistan Starting six months before the invasion of Afghanistan by the USSR on 27 December 1979, the Afghan Mujahideen were supplied by the CIA, among others, with large quantities of military supplies. Among those supplies were many types of anti-tank mines. 
The insurgents often removed the explosives from several foreign anti-tank mines, and combined the explosives in tin cooking-oil cans for a more powerful blast. By combining the explosives from several mines and placing them in tin cans, the insurgents made them more powerful, but sometimes also easier to detect by Soviet sappers using mine detectors. After an IED was detonated, the insurgents often used direct-fire weapons such as machine guns and rocket-propelled grenades to continue the attack. Afghan insurgents operating far from the border with Pakistan did not have a ready supply of foreign anti-tank mines. They preferred to make IEDs from Soviet unexploded ordnance. The devices were rarely triggered by pressure fuses. They were almost always remotely detonated. Since the 2001 invasion of Afghanistan, the Taliban and its supporters have used IEDs against NATO and Afghan military and civilian vehicles. This has become the most common method of attack against NATO forces, with IED attacks increasing consistently year on year. A brigade commander said that sniffer dogs are the most reliable way of detecting IEDs. However, statistical evidence gathered by the US Army Maneuver Support Center at Fort Leonard Wood, MO, shows that the dogs are not the most effective means of detecting IEDs. The U.S. Army's 10th Mountain Division was the first unit to introduce explosive detection dogs in southern Afghanistan. In less than two years the dogs discovered 15 tons of illegal munitions, IEDs, and weapons. In July 2012 it was reported that "sticky bombs", magnetically adhesive IEDs that were prevalent in the Iraq War, had shown up in Afghanistan. By 2021 there was at least one sticky bomb attack a day in Kabul. They are used both in traditional assassinations and targeted killings and as terror weapons against the population at large. In November 2013 one of the largest IEDs constructed was intercepted near Gardez City in Eastern Afghanistan. The 61,000 pounds of explosives was hidden under what appeared to be piles of wood. By comparison, the truck bomb that all but razed the Alfred P. Murrah Federal Building in Oklahoma City and killed 168 people in 1995 weighed less than 5,000 pounds. A United States Army Corps of Engineers officer assigned to the nearby FOB Lightning analyzed the potential blast damage, which resulted in closing FOB Goode due to its proximity to the highway. ISAF troops stationed in Afghanistan and other IED-prone areas of operation would commonly "BIP" (blow in place) IEDs and other explosives that were considered too dangerous to defuse. Egypt IEDs are being used by insurgents against government forces during the insurgency in Egypt (2013–present) and the Sinai insurgency. India IEDs are increasingly being used by Maoists in India. On 13 July 2011, three IEDs were used by the Insurgency in Jammu and Kashmir to carry out a coordinated attack on the city of Mumbai, killing 19 people and injuring 130 more. On 21 February 2013, two IEDs were used to carry out bombings in the Indian city of Hyderabad. The bombs exploded in Dilsukhnagar, a crowded shopping area of the city, within 150 metres of each other. On 17 April 2013, two kilograms of explosives were used in a bomb blast in the Malleshwaram area of Bangalore, leaving 16 people injured and causing no fatalities. Intelligence sources said the bomb was an improvised explosive device, or IED. On 21 May 2014, Indinthakarai village supporters of the Kudankulam Nuclear Power Plant were targeted by opponents using over half a dozen crude "country-made bombs". 
It was further reported that there had been at least four similar bombings in Tamil Nadu during the preceding year. On 28 December 2014, a minor explosion took place near the Coconut Grove restaurant at Church Street in Bangalore on a Sunday around 8:30 pm. One woman was killed and another injured in the blast. During the 2016 Pathankot attack, several casualties came from IEDs. On 14 February 2019, in the Pulwama attack, several casualties were reported due to an IED blast. On 29 October 2023, a series of IED explosions killed two attendees at a Jehovah's Witnesses convention in Kalamassery, India. Iraq In the 2003–2011 Iraq War, IEDs were used extensively against Coalition forces, and by the end of 2007 they had been responsible for at least 64% of Coalition deaths in Iraq. Since the detonation of the first IED in Iraq in 2003, more than 81,000 IED attacks have occurred in the country, killing and wounding 21,200 Americans. Beginning in July 2003, the Iraqi insurgency used IEDs to target invading coalition vehicles. According to The Washington Post, 64% of U.S. deaths in Iraq occurred due to IEDs. A French study found that, in Iraq between March 2003 and November 2006, IEDs accounted for 41% of deaths among US-led coalition soldiers, more than were killed in conventional fighting (1,027 dead, 34%). Insurgents now use the bombs to target not only invading coalition vehicles but Iraqi police as well. Common locations for placing these bombs on the ground include animal carcasses, soft drink cans, and boxes. Typically, they explode underneath or to the side of the vehicle to cause the maximum amount of damage. However, as vehicle armour was improved on military vehicles, insurgents began placing IEDs in elevated positions such as on road signs, utility poles, or trees, to hit less protected areas. IEDs in Iraq may be made with artillery or mortar shells or with varying amounts of bulk or homemade explosives. Early during the Iraq war, the bulk explosives were often obtained from stored munitions bunkers, including by stripping landmines of their explosives. Despite the increased armor, IEDs were killing military personnel and civilians with greater frequency. May 2007 was the deadliest month for IED attacks thus far, with a reported 89 of the 129 invading coalition casualties coming from an IED attack. According to the Pentagon, 250,000 tons (out of 650,000 tons total) of Iraqi heavy ordnance were looted, providing a large supply of ammunition for the insurgents. In October 2005, the UK government charged that Iran was supplying insurgents with the technological know-how to make shaped-charge IEDs. Both Iranian and Iraqi government officials denied the allegations. During the Iraqi Civil War (2014–2017), ISIL made extensive use of suicide VBIEDs, often driven by children, the elderly, and the disabled. On August 27, 2023, Israeli security forces foiled an attempt to smuggle Iranian-made explosives into Israel from Jordan. The thwarted smuggling operation in the Jordan Valley aimed to supply terror groups in the West Bank with explosives. Counter-smuggling efforts along the border have led to increased seizures of weapons and explosive devices. Ireland and the United Kingdom From 1912 to 1913, the Suffragettes utilised IEDs in the Suffragette bombing and arson campaign. Throughout the Troubles, the Provisional Irish Republican Army made extensive use of IEDs in their 1969–97 campaign, many of which were made in the Republic of Ireland. 
They used barrack buster mortars and remote-controlled IEDs. Members of the IRA developed and counter-developed devices and tactics. IRA bombs became highly sophisticated, featuring anti-handling devices such as a mercury tilt switch or microswitches. These devices would detonate the bomb if it was moved in any way. Typically, the safety-arming device used was a clockwork Memopark timer, which armed the bomb up to 60 minutes after it was placed by completing an electrical circuit supplying power to the anti-handling device. Depending on the particular design (e.g., a boobytrapped briefcase or car bomb), an independent electrical circuit supplied power to a conventional timer set for the intended time delay, e.g. 40 minutes. However, some electronic delays developed by IRA technicians could be set to accurately detonate a bomb weeks after it was hidden, which is what happened in the Brighton hotel bomb attack of 1984. Initially, bombs were detonated either by timer or by simple command wire. Later, bombs could be detonated by radio control. Initially, simple servos from radio-controlled aircraft were used to close the electrical circuit and supply power to the detonator. After the British developed jammers, IRA technicians introduced devices that required a sequence of pulsed radio codes to arm and detonate them. These were harder to jam. The IRA as well as Ulster loyalist paramilitaries have also utilized less sophisticated devices, such as homemade grenades crudely thrown at the target. These are sometimes called "blast bombs". Roadside bombs were extensively used by the IRA. Typically, a roadside bomb was placed in a drain or culvert along a rural road and detonated by remote control when British security forces vehicles were passing, as in the case of the 1979 Warrenpoint ambush. As a result of the use of these bombs, the British military stopped transport by road in areas such as South Armagh, and used helicopter transport instead to avoid the danger. Most IEDs used commercial or homemade explosives made in the Republic of Ireland, with ingredients such as gelignite and ANFO either stolen from construction sites or provided by supporters in the South, although the use of Semtex-H smuggled in from Libya in the 1980s was also common from the mid-1980s onward. Bomb Disposal teams from 321 EOD manned by Ammunition Technicians were deployed in those areas to deal with the IED threat. The IRA also used secondary devices to catch British reinforcements sent in after an initial blast, as occurred in the Warrenpoint ambush. Between 1970 and 2005, the IRA detonated 19,000 IEDs in Northern Ireland and Britain, an average of one every 17 hours for three and a half decades, arguably making it "the biggest terrorist bombing campaign in history". In the early 1970s, at the height of the IRA campaign, the British Army unit tasked with rendering safe IEDs, 321 EOD, sustained significant casualties while engaged in bomb disposal operations. This mortality rate was far higher than in other high-risk occupations such as deep-sea diving, and a careful review was made of how men were selected for EOD operations. The review recommended bringing in psychometric testing of soldiers to ensure those chosen had the correct mental preparation for high-risk bomb disposal duties. The IRA came up with ever more sophisticated designs and deployments of IEDs. Booby-trap or victim-operated IEDs (VOIEDs) became commonplace. 
The IRA engaged in an ongoing battle to gain the upper hand in electronic warfare with remote-controlled devices. The rapid changes in development led 321 EOD to employ specialists from DERA (now Dstl, an agency of the MOD), the Royal Signals, and Military Intelligence. This approach by the British army to fighting the IRA in Northern Ireland led to the development and use of most of the modern weapons, equipment and techniques now used by EOD Operators throughout the rest of the world today. The bomb disposal operations were led by Ammunition Technicians and Ammunition Technical Officers from 321 EOD, who were trained at the Felix Centre at the Army School of Ammunition. Israel IEDs have been used in many attacks by Palestinian militants and continue to be used in recent attacks. Lebanon The Lebanese National Resistance Front, the Popular Front for the Liberation of Palestine, other resistance groups in Lebanon, and later Hezbollah, made extensive use of IEDs to resist Israeli forces after Israel's invasion of Lebanon in 1982. Israel withdrew from Beirut, Northern Lebanon, and Mount Lebanon in 1985, whilst maintaining its occupation of Southern Lebanon. Hezbollah frequently used IEDs to attack Israeli military forces in this area up until the Israeli withdrawal, and the end of the invasion of Lebanon in May 2000. One such bomb killed Israeli Brigadier General Erez Gerstein on 28 February 1999, the highest-ranking Israeli to die in Lebanon since Yekutiel Adam's death in 1982. Also in the 2006 War in Lebanon, a Merkava Mark II tank was hit by a pre-positioned Hezbollah IED, killing all 4 IDF servicemen on board, the first of two IEDs to damage a Merkava tank. Libya Homemade IEDs have been used extensively during the post-civil war violence in Libya, mostly in the city of Benghazi against police stations, cars or foreign embassies. Nepal IEDs were also widely used in the 10-year-long Maoist civil war in Nepal, ranging from devices bought from illicit groups in India and China to self-made devices. Typically used devices were pressure cooker bombs, socket bombs, pipe bombs, bucket bombs, etc. The devices were used more to terrorize the urban population than to cause fatalities, being placed in front of government offices, at street corners, or along roadsides. Home-made IEDs were responsible for the destruction of the majority of structures targeted by the Maoists and contributed greatly to spreading terror among the public. Nigeria Boko Haram are using IEDs during their insurgency. Pakistan Taliban and other insurgent groups use IEDs against police, military, security forces, and civilian targets. Russia IEDs have also been popular in Chechnya, where Russian forces were engaged in fighting with rebel elements. While no concrete statistics are available on this matter, bombs have accounted for many Russian deaths in both the First Chechen War (1994–1996) and the Second (1999–2009). Somalia Al Shabaab is using IEDs during the Somali Civil War. Syria During the Syrian Civil War, militant insurgents were using IEDs to attack buses, cars, trucks, tanks and military convoys. Additionally, the Syrian Air Force has used barrel bombs to attack targets in cities and other areas. Such barrel bombs consist of barrels filled with high explosives, oil, and shrapnel, and are dropped from helicopters. Along with mines and IEDs, ISIL also used VBIEDs in Syria, including during the 2017 Aleppo suicide car bombing. See also: Improvised artillery in the Syrian civil war. 
Uganda On 16 November 2021, suicide bombers set off two powerful explosions in the center of Uganda's capital Kampala during rush hour in an attack later claimed by Islamic State. There had been a number of bomb explosions in 2021. In October, a 20-year-old waitress was killed after a device, left in a shopping bag, detonated in a bar in the city. Days later several people were injured when a suicide bomber blew himself up in a bus near Kampala. United States In the 1995 Oklahoma City bombing, Timothy McVeigh and Terry Nichols built an IED with ammonium nitrate fertilizer, nitromethane, and stolen commercial explosives in a rental truck, with sandbags used to concentrate the explosive force in the desired direction. McVeigh detonated it next to the Alfred P. Murrah Federal Building, killing 168 people, 19 of whom were children. High school students Eric Harris and Dylan Klebold used multiple IEDs during the Columbine High School massacre on 20 April 1999, including two large propane bombs placed in the cafeteria, powerful enough to kill or injure everyone inside the room, along with pipe bombs, Molotov cocktails, and two car bombs designed to attack first responders and news reporters responding to the initial bombing. Both propane bombs and both car bombs failed to detonate correctly. They then went on to shoot and kill 13 people before committing suicide. Had all the bombs detonated, hundreds could have been killed, but nobody was injured by any of the explosives during the massacre. The pair had planned to exceed the death toll of the Oklahoma City bombing four years earlier. In January 2011, a shaped pipe bomb was discovered and defused at a Martin Luther King Jr. memorial march in Spokane, Washington. The FBI said that the bomb was specifically designed to cause maximum harm, as the explosive device was, according to the Los Angeles Times, packed with fishing weights covered in rat poison, and may have been racially motivated. No one was injured during the event. On 15 April 2013, as the annual Boston Marathon race was concluding, two bombs were detonated seconds apart close to the finish line. Initial FBI response indicated suspicion of IED pressure cooker bombs. On 17–19 September 2016, several explosions occurred in Manhattan and New Jersey. The sources of the explosions were all found to be IEDs of various types, such as pressure cooker bombs and pipe bombs. Many IED-related arrests are made each year in circumstances where the plot was foiled before the device was deployed, or the device exploded but no one was injured. A number of deaths, as well as property damage, at gender reveal parties have been caused by the detonation of improvised explosive devices. These include the 2017 Sawmill Fire, which was started by the detonation of a mass of tannerite intended to disperse coloured powder, and an incident in 2019 where an IED similarly designed to release powder exploded in a manner similar to a pipe bomb, killing a 56-year-old woman after shrapnel struck her in the head. Ukraine IEDs are in use in the 2022 Russian invasion of Ukraine and have also been used there for assassinations. Vietnam IEDs were used during the Vietnam War by the Viet Cong against land- and river-borne vehicles as well as personnel. They were commonly constructed using materials from unexploded American ordnance. Thirty-three percent of U.S. 
casualties in Vietnam and twenty-eight percent of deaths were officially attributed to mines; these figures include losses caused by both IEDs and commercially manufactured mines. Yemen Houthis are using IEDs against Saudi-led coalition and Hadi's forces during Yemeni Civil War (2015–present), Saudi Arabian-led intervention in Yemen and Saudi–Yemeni border conflict. Al-Qaeda in the Arabian Peninsula and ISIL in Yemen are also known to use IEDs. In popular culture The film The Hurt Locker follows an Iraq War Explosive Ordnance Disposal team who are targeted by insurgents and shows their psychological reactions to the stress of combat. See also Acetone peroxide Blast bomb Blast fishing Dragon Runner Fertilizer bomb Improvised firearm JIEDDO List of notable 3D printed weapons and parts Nail bomb Satchel charge Sidolówka grenade Time bomb (explosive) TM 31-210 Improvised Munitions Handbook The Anarchist Cookbook References External links Area denial weapons Bombs Explosives Explosive weapons Illegal drug trade in the Americas Tactics of the Iraqi insurgency (2003–2011) Improvised weapons
Improvised explosive device
Chemistry,Engineering
7,874
25,711,341
https://en.wikipedia.org/wiki/Kepler-7
Kepler-7 is a star located in the constellation Lyra in the field of view of the Kepler Mission, a NASA operation in search of Earth-like planets. It is home to the fourth of the first five planets that Kepler discovered; this planet, a Jupiter-size gas giant named Kepler-7b, is about as dense as styrofoam. The star itself is more massive than the Sun, and is nearly twice the Sun's radius. It is also slightly metal-rich, a major factor in the formation of planetary systems. Kepler-7's planet was presented on January 4, 2010 at a meeting of the American Astronomical Society. Nomenclature and discovery Kepler-7 received its name because it was home to the seventh planetary system discovered by the NASA-led Kepler Mission, a project aimed at detecting terrestrial planets that transit, or pass in front of, their host stars as seen from Earth. The planet orbiting Kepler-7 was the fourth planet to be discovered by the Kepler spacecraft; the first three planets found in Kepler's data had already been discovered, and were used to verify the accuracy of Kepler's measurements. Kepler-7b was announced to the public on January 4, 2010 at the 215th meeting of the American Astronomical Society in Washington, D.C., along with Kepler-4b, Kepler-5b, Kepler-6b, and Kepler-8b. Kepler-7b was noted for its unusually low density. The planet's initial discovery by Kepler was verified by additional observations made at observatories in Hawaii, Texas, Arizona, California, and the Canary Islands. Characteristics Kepler-7 is a sunlike star of 1.347 Msun and 1.843 Rsun. This means that the star is about 35% more massive and 84% wider than the Sun. The star is estimated to be 3.5 (± 1) billion years old. It is also estimated to have a metallicity of [Fe/H] = 0.11 (± 0.03), meaning that Kepler-7 is approximately 30% more metal-rich than the Sun; metallicity plays a significant role in the formation of planetary systems, as metal-rich stars tend to be more likely to have planets in orbit. The star's effective temperature is 5933 (± 44) K. In comparison, the 4.6 billion-year-old Sun releases less heat, with an effective temperature of 5778 K. The star has an apparent magnitude of 13, meaning that it is extremely dim as seen from Earth. It cannot be seen with the naked eye. It is estimated to lie approximately 3160 light years from the Solar System. There is a star 4 magnitudes dimmer located 1.90 arcseconds away; whether this is a gravitationally bound companion star or a chance optical alignment is unknown. Planetary system Kepler-7b is the only planet that has been discovered orbiting Kepler-7. It has a mass of 0.433 MJ and a radius of 1.478 RJ, meaning it is about 43% as massive as Jupiter but nearly one and a half times its size. With a density of 0.166 grams/cc, the planet is approximately 17% the density of water. This is comparable to styrofoam. At a distance of 0.06224 AU from its host star, Kepler-7b completes an orbit around Kepler-7 every 4.8855 days. For comparison, Mercury orbits the Sun at 0.3871 AU and takes approximately 87.97 days to complete one orbit. Kepler-7b's eccentricity is assumed to be 0, which would give Kepler-7b a circular orbit by definition. See also List of extrasolar planets References External links Planetary systems with one confirmed planet Lyra 97 Planetary transit variables G-type stars
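As an illustrative cross-check of the figures quoted above (my own arithmetic, not part of the original article), the planet's bulk density and orbital period can be recomputed from the stated mass, radius, semi-major axis, and stellar mass. The Jupiter, solar, and AU reference values below are assumed nominal constants, not taken from the text:

```python
import math

# Assumed reference constants (not from the article)
M_JUP_KG = 1.898e27   # Jupiter mass, kg
R_JUP_M  = 7.1492e7   # Jupiter equatorial radius, m
M_SUN_KG = 1.989e30   # Solar mass, kg
AU_M     = 1.496e11   # Astronomical unit, m
G        = 6.674e-11  # Gravitational constant, m^3 kg^-1 s^-2

# Figures quoted in the article
m_planet = 0.433 * M_JUP_KG   # Kepler-7b mass
r_planet = 1.478 * R_JUP_M    # Kepler-7b radius
a_orbit  = 0.06224 * AU_M     # semi-major axis
m_star   = 1.347 * M_SUN_KG   # Kepler-7 mass

# Bulk density: mass / (4/3 * pi * r^3), converted from kg/m^3 to g/cm^3
density = m_planet / (4 / 3 * math.pi * r_planet**3) / 1000.0

# Orbital period from Kepler's third law, converted from seconds to days
period_days = 2 * math.pi * math.sqrt(a_orbit**3 / (G * m_star)) / 86400.0

print(f"density ~ {density:.3f} g/cm^3")    # ~0.166, matching the quoted value
print(f"period  ~ {period_days:.2f} days")  # ~4.89, matching the quoted 4.8855 days
```

Using Jupiter's equatorial radius reproduces the quoted 0.166 g/cc almost exactly; a slightly different choice of reference radius shifts the result by a few percent.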
Kepler-7
Astronomy
791
143,696
https://en.wikipedia.org/wiki/Orbital%20period
The orbital period (also revolution period) is the amount of time a given astronomical object takes to complete one orbit around another object. In astronomy, it usually applies to planets or asteroids orbiting the Sun, moons orbiting planets, exoplanets orbiting other stars, or binary stars. It may also refer to the time it takes a satellite orbiting a planet or moon to complete one orbit. For celestial objects in general, the orbital period is determined by a 360° revolution of one body around its primary, e.g. Earth around the Sun. Periods in astronomy are expressed in units of time, usually hours, days, or years. Small body orbiting a central body According to Kepler's Third Law, the orbital period T of two point masses orbiting each other in a circular or elliptic orbit is: T = 2π√(a³ / (GM)) where: a is the orbit's semi-major axis, G is the gravitational constant, M is the mass of the more massive body. For all ellipses with a given semi-major axis the orbital period is the same, regardless of eccentricity. Inversely, for calculating the distance where a body has to orbit in order to have a given orbital period T: a = ∛(GMT² / (4π²)) For instance, for completing an orbit every 24 hours around a mass of 100 kg, a small body has to orbit at a distance of 1.08 meters from the central body's center of mass. In the special case of perfectly circular orbits, the semimajor axis a is equal to the radius of the orbit, and the orbital velocity is constant and equal to v = √(GM/r), where r is the circular orbit's radius in meters. This corresponds to 1/√2 times (≈ 0.707 times) the escape velocity. Effect of central body's density For a perfect sphere of uniform density, it is possible to rewrite the first equation without measuring the mass as: T = √(3πa³ / (Gρr³)) where: r is the sphere's radius, a is the orbit's semi-major axis in metres, G is the gravitational constant, ρ is the density of the sphere in kilograms per cubic metre. For instance, a small body in circular orbit 10.5 cm above the surface of a sphere of tungsten half a metre in radius would travel at slightly more than 1 mm/s, completing an orbit every hour. If the same sphere were made of lead the small body would need to orbit just 6.7 mm above the surface for sustaining the same orbital period. When a very small body is in a circular orbit barely above the surface of a sphere of any radius and mean density ρ (in kg/m3), the above equation simplifies to T = √(3π / (Gρ)) (since a ≈ r). Thus the orbital period in low orbit depends only on the density of the central body, regardless of its size. So, for the Earth as the central body (or any other spherically symmetric body with the same mean density, about 5,515 kg/m3, e.g. Mercury with 5,427 kg/m3 and Venus with 5,243 kg/m3) we get: T = 1.41 hours and for a body made of water (ρ ≈ 1,000 kg/m3), or bodies with a similar density, e.g. Saturn's moons Iapetus with 1,088 kg/m3 and Tethys with 984 kg/m3 we get: T = 3.30 hours Thus, as an alternative for using a very small number like G, the strength of universal gravity can be described using some reference material, such as water: the orbital period for an orbit just above the surface of a spherical body of water is 3 hours and 18 minutes. Conversely, this can be used as a kind of "universal" unit of time if we have a unit of density.
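The relations above lend themselves to a short numerical check. The following Python sketch reproduces the article's worked figures (the 1.08 m orbit around a 100 kg mass and the 1.41-hour and 3.30-hour surface-skimming periods); the value of G and the densities used are assumed standard constants.

```python
# Minimal sketch of Kepler's third law and its density form (assumed values for G and densities).
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def orbital_period(a, M):
    """Kepler's third law: period of a small body with semi-major axis a (m) around mass M (kg)."""
    return 2 * math.pi * math.sqrt(a**3 / (G * M))

def semi_major_axis(T, M):
    """Inverse form: distance needed for a given period T (s) around mass M (kg)."""
    return (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)

def surface_skimming_period(rho):
    """Period of an orbit just above the surface of a uniform sphere of density rho (kg/m^3)."""
    return math.sqrt(3 * math.pi / (G * rho))

print(semi_major_axis(24 * 3600, 100))        # ≈ 1.08 m, the 100 kg / 24 h example
print(surface_skimming_period(5515) / 3600)   # ≈ 1.41 h, Earth's mean density
print(surface_skimming_period(1000) / 3600)   # ≈ 3.30 h, water
```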
Two bodies orbiting each other In celestial mechanics, when both orbiting bodies' masses have to be taken into account, the orbital period T can be calculated as follows: T = 2π√(a³ / (G(M1 + M2))) where: a is the sum of the semi-major axes of the ellipses in which the centers of the bodies move, or equivalently, the semi-major axis of the ellipse in which one body moves, in the frame of reference with the other body at the origin (which is equal to their constant separation for circular orbits), M1 + M2 is the sum of the masses of the two bodies, G is the gravitational constant. In a parabolic or hyperbolic trajectory, the motion is not periodic, and the duration of the full trajectory is infinite. Related periods For celestial objects in general, the orbital period typically refers to the sidereal period, determined by a 360° revolution of one body around its primary relative to the fixed stars projected in the sky. For the case of the Earth orbiting around the Sun, this period is referred to as the sidereal year. This is the orbital period in an inertial (non-rotating) frame of reference. Orbital periods can be defined in several ways. The tropical period relates more particularly to the position of the parent star. It is the basis for the solar year and, correspondingly, the calendar year. The synodic period refers not to the orbital relation to the parent star, but to other celestial objects, making it not a merely different approach to the orbit of an object around its parent, but a period of orbital relations with other objects, normally Earth, and their orbits around the Sun. It applies to the elapsed time where planets return to the same kind of phenomenon or location, such as when any planet returns between its consecutive observed conjunctions with or oppositions to the Sun. For example, Jupiter has a synodic period of 398.8 days from Earth; thus, Jupiter's opposition occurs once roughly every 13 months. There are many periods related to the orbits of objects, each of which is often used in the various fields of astronomy and astrophysics; in particular, they must not be confused with other revolving periods such as rotational periods. Examples of some of the common orbital ones include the following: The synodic period is the amount of time that it takes for an object to reappear at the same point in relation to two or more other objects. In common usage, these two objects are typically Earth and the Sun. The time between two successive oppositions or two successive conjunctions is also equal to the synodic period. For celestial bodies in the solar system, the synodic period (with respect to Earth and the Sun) differs from the tropical period owing to Earth's motion around the Sun. For example, the synodic period of the Moon's orbit as seen from Earth, relative to the Sun, is 29.5 mean solar days, since the Moon's phase and position relative to the Sun and Earth repeats after this period. This is longer than the sidereal period of its orbit around Earth, which is 27.3 mean solar days, owing to the motion of Earth around the Sun. The draconitic period (also draconic period or nodal period) is the time that elapses between two passages of the object through its ascending node, the point of its orbit where it crosses the ecliptic from the southern to the northern hemisphere.
This period differs from the sidereal period because both the orbital plane of the object and the plane of the ecliptic precess with respect to the fixed stars, so their intersection, the line of nodes, also precesses with respect to the fixed stars. Although the plane of the ecliptic is often held fixed at the position it occupied at a specific epoch, the orbital plane of the object still precesses, causing the draconitic period to differ from the sidereal period. The anomalistic period is the time that elapses between two passages of an object at its periapsis (in the case of the planets in the Solar System, called the perihelion), the point of its closest approach to the attracting body. It differs from the sidereal period because the object's semi-major axis typically advances slowly. Also, the tropical period of Earth (a tropical year) is the interval between two alignments of its rotational axis with the Sun, also viewed as two passages of the object at a right ascension of 0 hr. One Earth year is slightly shorter than the period for the Sun to complete one circuit along the ecliptic (a sidereal year) because the inclined axis and equatorial plane slowly precess (rotate with respect to reference stars), realigning with the Sun before the orbit completes. This cycle of axial precession for Earth, known as precession of the equinoxes, recurs roughly every 25,772 years. Periods can also be defined under different specific astronomical definitions that are mostly caused by the small complex external gravitational influences of other celestial objects. Such variations also include the true placement of the centre of gravity between two astronomical bodies (barycenter), perturbations by other planets or bodies, orbital resonance, general relativity, etc. Most are investigated by detailed complex astronomical theories using celestial mechanics together with precise positional observations of celestial objects via astrometry. Synodic period One of the observable characteristics of two bodies which orbit a third body in different orbits, and thus have different orbital periods, is their synodic period, which is the time between conjunctions. An example of this related period description is the repeated cycles for celestial bodies as observed from the Earth's surface, the synodic period, applying to the elapsed time where planets return to the same kind of phenomenon or location, for example when any planet returns between its consecutive observed conjunctions with or oppositions to the Sun. For example, Jupiter has a synodic period of 398.8 days from Earth; thus, Jupiter's opposition occurs once roughly every 13 months. If the orbital periods of the two bodies around the third are called T1 and T2, so that T1 < T2, their synodic period is given by: 1/Tsyn = 1/T1 − 1/T2 Examples of sidereal and synodic periods Table of synodic periods in the Solar System, relative to Earth: In the case of a planet's moon, the synodic period usually means the Sun-synodic period, namely, the time it takes the moon to complete its illumination phases, completing the solar phases for an astronomer on the planet's surface. The Earth's motion does not determine this value for other planets because an Earth observer is not orbited by the moons in question. For example, Deimos's synodic period is 1.2648 days, 0.18% longer than Deimos's sidereal period of 1.2624 d. Relative synodic periods The concept of synodic period applies not just to the Earth, but to other planets as well; the computation of synodic periods applies the same formula as above.
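The synodic relation just given is simple enough to verify numerically. A minimal Python sketch follows; the orbital periods used for Earth and Jupiter are assumed values, chosen only to reproduce the 398.8-day figure quoted above.

```python
# Minimal sketch of the synodic-period relation (orbital periods in days are assumed values).
def synodic_period(t1, t2):
    """Synodic period of two bodies with orbital periods t1 < t2 around a common primary."""
    return 1.0 / (1.0 / t1 - 1.0 / t2)

earth = 365.256    # sidereal year, days
jupiter = 4332.59  # Jupiter's orbital period, days

print(synodic_period(earth, jupiter))  # ≈ 398.9 days, matching the Jupiter example
```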
The following table lists the synodic periods of some planets relative to the Sun and to each other: Example of orbital periods: binary stars See also Geosynchronous orbit derivation Rotation period – time that it takes to complete one revolution around its axis of rotation Satellite revisit period Sidereal time Sidereal year Opposition (astronomy) List of periodic comets Leap year Notes Bibliography External links Time in astronomy Period Durations
Orbital period
Physics,Astronomy
2,278
1,412,544
https://en.wikipedia.org/wiki/Myxococcaceae
Myxococcaceae is a family of gram-negative, rod-shaped bacteria. The family Myxococcaceae is encompassed within the myxobacteria ("slime bacteria"). The family is ubiquitously found in soils, marine, and freshwater environments. Production of compounds with medical uses by Myxococcaceae makes them useful in human health fields. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI) Morphology and Behavior Cells can be motile with gliding and swarming behavior. The vegetative cell shape in the Myxococcaceae family is long rods, which vary in size between members. The most common fruiting body morphs are soft hump and knob shaped with possible colors of yellow, peach, white, or orange depending on species. Myxococcaceae are spore producing bacteria and are delineated by their spore shape. The myxospores are oval to round and are optically refractive. Quorum sensing (QS) behavior is limited in this family. However, there is evidence that some members of the family produce molecules that interrupt the QS of other microbes, behavior potentially useful in predation. Relevance Bacteria in the order of Myxococcales have led to scientific discoveries including the first genome to be sequenced, the primary observation of plasmid replication, and the first discovery of bacteriophage. Members of the Myxococcaceae produce a wide range of secondary metabolites having useful functions and applications. Compounds with anti-microbial, anti-parasitic, and in rare cases, anti-HIV activities have been isolated from the Myxococcaceae. See also List of bacterial orders List of bacteria genera References Myxococcota
Myxococcaceae
Biology
386
64,494,209
https://en.wikipedia.org/wiki/Henk%20Tennekes%20%28toxicologist%29
Henk Tennekes (21 November 1950 – 7 July 2020) was a Dutch toxicologist. Tennekes worked as a doctor and researcher at the Philipps-Universiteit Marburg; the German cancer research centre (Krebsforschungszentrum) in Heidelberg; Sandoz in Muttenz, Switzerland; and the Research and Consulting Company in Itingen. From 1992 he was an independent researcher. Tennekes was born in Zutphen. He studied between 1968 and 1974 and earned his PhD in 1979 at Wageningen University and Research with the thesis The Relationship between Microsomal Enzyme Induction and Liver Tumour Formation. In 2010 Tennekes wrote the book "A disaster in the making", about the dangers of neonicotinoids, a new generation of pesticides, for insects and bees in particular. He discovered that Bayer had researched the effects of neonicotinoids on flies back in 1991, and that the effect was irreversible; Bayer now claims the contrary. His findings initially met strong opposition and were criticized, but follow-up research partially confirmed his warning. Since the end of 2018, the use of three neonicotinoids (clothianidin, thiamethoxam and imidacloprid) has been banned in the European Union. Tennekes, who had worked as a freelance researcher for chemical companies, found himself blacklisted and lost all his clients, but that did not deter him, because he considered the work his moral duty. Tennekes suffered from a rare pulmonary disease and opted for euthanasia. He died on 7 July 2020, aged 69. References 1950 births 2020 deaths 2020 suicides Dutch biologists Deaths by euthanasia Toxicologists People from Zutphen Drug-related suicides in the Netherlands
Henk Tennekes (toxicologist)
Environmental_science
368
4,106,274
https://en.wikipedia.org/wiki/Standard%20components%20%28food%20processing%29
Standard components is a food technology term: when manufacturers buy in a standard component, they use a pre-made product in the production of their food. Standard components help products remain consistent, and they are quick and easy to use in the batch production of food products. Some examples are pre-made stock cubes, marzipan, icing and ready-made pastry. Usage Manufacturers use standard components because they save time, often cost less, and help with consistency in products. If a manufacturer is to use a standard component from another supplier, it is essential that a precise and accurate specification is produced by the manufacturer so that the component meets the standards set by the manufacturer. Advantages Saves preparation time. Fewer steps in the production process Less effort and skill required by staff Less machinery and equipment needed Good quality Saves money from all aspects Can be bought in bulk High-quality consistency Food preparation is hygienic Disadvantages Have to rely on other manufacturers to supply products Fresh ingredients may taste better May require special storage conditions Less reliable than doing it yourself Cost more to make Can't control the nutritional value of the product There is a larger risk of cross-contamination. GCSE food technology References Food Technology Nelson Thornes, 2001 pg. 144 Components Food industry Food ingredients
Standard components (food processing)
Technology
256
67,136,789
https://en.wikipedia.org/wiki/The%20Code%20Breaker
The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race is a non-fiction book authored by American historian and journalist Walter Isaacson. Published in March 2021 by Simon & Schuster, it is a biography of Jennifer Doudna, the winner of the 2020 Nobel Prize in Chemistry for her work on the CRISPR system of gene editing. Promotion On March 22, 2021, Isaacson appeared on The Late Show with Stephen Colbert to discuss the book. Reception The book debuted at number one on The New York Times nonfiction best-seller list for the week ending March 13, 2021. In its starred review, Kirkus Reviews called it a "vital book about the next big thing in science—and yet another top-notch biography from Isaacson." Publishers Weekly called it a "gripping account of a great scientific advancement and of the dedicated scientists who realized it." References External links The Code Breaker at the Simon & Schuster website CRISPR Scientist's Biography Explores Ethics Of Rewriting The Code Of Life. Author interview, audio and transcript. Fresh Air, NPR, March 8, 2021. 2021 non-fiction books English-language non-fiction books Books about scientists Jennifer Doudna Simon & Schuster books American biographies Genetics books Genome editing Books by Walter Isaacson
The Code Breaker
Engineering,Biology
261
62,968,403
https://en.wikipedia.org/wiki/Cognitive%20ecology%20of%20individual%20recognition%20in%20colonial%20birds
The cognitive ecology of individual recognition has been studied in many species, especially in primates or other mammalian species that exhibit complex social behaviours, but comparatively little research has been done on colonial birds. Colonial birds live in dense colonies in which many individuals interact with each other daily. For colonial birds, being able to identify and recognize individuals can be a crucial skill. Sociality and brain size Individual recognition is one of the most basic forms of social cognition. Individual recognition implies that a given individual has the capacity to discriminate a familiar individual from another at any given time. It is believed that in many species, group size is often a representation of social complexity, with higher social complexity demanding higher cognitive capabilities. This hypothesis is also known as the "social brain hypothesis" and has been supported by many researchers. The logic behind this hypothesis is based on the principle that a larger group size requires a higher degree of complexity in interactions. Many studies have looked at the effect of sociality on brain development, mostly focussing on non-human primate species. In primates, it has been shown that relative brain size, when controlling for the size of the species and the phylogeny, seemed to correlate with the size of the social group. These results allowed for a direct correlation between sociality and cognition. However, when reproducing such experiments in non-primate species, such as reptiles, birds and even other mammalian species, the correlation between brain size and social group size does not seem to exist. A study done on mountain chickadees looking at the impact of sociality on hippocampus size as well as on neurogenesis found no evidence of change related to group size, therefore rejecting the "social brain hypothesis" in birds. Further research looking at bird cognitive ecology demonstrated that social complexity is a more reliable proxy for brain size, as it relies not only on the number of individuals but also on the degree of social interaction between them. Role of recognition In the wild, recognition can have many advantages. In monogamous bird species, being able to recognize a mate can be crucial. As colonial birds tend to cluster in high-density groups, finding one's mate can be a challenge. Being able to identify a mate is not all: recognition can also help in the context of mate selection, as individual recognition allows birds to avoid inbreeding with closely related conspecifics. Inbreeding avoidance has been shown in a species of storm petrel, a colonial seabird that nests in burrows. In the case of storm petrels, individual relatedness is assessed based on olfactory signatures that allow them to distinguish closely related individuals from non-related ones. The capacity of an individual to identify conspecifics is not only used to avoid inbreeding, but can also be used in order to help closely related individuals. Such instances can be seen in scrub jays, whose offspring stay after fledging in order to help raise the next brood. Moreover, recognition can be useful for chick identification. Being able to recognize one's own chick is essential in many colonial bird species as chicks can wander around and mix up with others' chicks. Feeding the wrong chick would result in a high cost for the parent with little to no benefit for its own reproductive success.
In herring gulls, chicks can be found wandering around the colony only a few days after hatching from the egg, creating a need for the parent to recognize its own chick. However, in order to have evolved, recognition needs to be beneficial not only for one side, but for both sides, meaning that the chick has to be able to recognize its parents as well. Herring gull chicks will also often hide when the parents are not present in order to avoid being predated on by other adult herring gulls or any other predator. Therefore, being able to recognize a parent is crucial so that the chick reveals its position only to the right adult. In the case of bird species that raise many offspring at once, chicks that are able to recognize their parents may also increase their begging rate and therefore obtain more food in return. Chicks that have better recognition capacities would therefore have an advantage over their siblings. Mechanism of recognition in colonial birds Olfactory recognition It was long believed that birds have a very poor sense of smell, but recent studies have demonstrated that some species of birds, such as the procellariiformes, have a well-developed sense of smell. Olfaction seems to be used in an array of different tasks, such as finding food, migrating and kin recognition. In burrowing species such as puffins, auks and petrels, smell seems to be the basis of mate and nest recognition. The procellariiformes, also known as tubed-noses, are one of the best-studied groups when it comes to olfaction, as they seem to have a particularly developed sense of smell. A study done on storm petrels showed that not only do petrels use olfaction in order to find their burrow and their mate, but that they are also aware of their own smell. Petrels nest in dense colonies and use the smell of their mate or their own smell in order to find their burrow and avoid entering the wrong burrow. Such a mechanism of recognition has also been shown in auks as they mostly fly at night, keeping them from using spatial memory in order to find their burrow. When looking at the available literature, olfactory cues seem to be used mostly by colonial birds that nest in burrows. Concerning chick recognition in burrowing birds, a researcher called Eduardo Minguez (1997) showed that there was no chick recognition in storm petrels. One of the advantages of burrow nesting is that the chick is confined in the burrow until it is ready to fledge, eliminating the need for chick recognition. It is likely that chicks will acquire their "signature smell" only later, upon fledging from the parental nest. There are few instances of burrowing birds that have the mechanism of chick recognition, but as recognition is a costly mechanism, it tends to be lost in many bird species for which it is not necessary. Acoustic recognition In many bird colonies, the environment tends to be quite loud and filled with countless acoustic stimuli. Many researchers have looked into how individuals can identify each other in such a heavily charged acoustic environment. Recognition based on acoustic signatures has been demonstrated in many bird species such as penguins, swallows, gulls and razorbills. A study done on king penguins by Jouventin et al. (1999) was one of the first studies to look at the technicalities behind acoustic recognition. They found that chicks could identify their parents based on an acoustic signature specific to the pattern of the call as well as the frequency of the parents' call.
The amplitude of the call did not seem to affect the call signature. A similar study done on black-headed gulls in 2001 obtained similar results, supporting the idea that the acoustic signature of parents' calls is most likely based on a redundant pattern and the frequency of the call, with no effect of amplitude. This study also suggested that the mechanism of acoustic recognition is most likely the same in most species within the gull family, Laridae. Nevertheless, not all members of Laridae exhibit parent-offspring recognition. The black-legged kittiwake, a small cliff-nesting gull, does not seem to recognize its chick. This lack of recognition is most likely the result of cliff nesting, as chicks cannot wander far from the nest and become mixed with other chicks. Recognition would then have been lost in kittiwakes. Other exceptions can be found, for example in razorbills. Razorbills exhibit parent-offspring recognition, but research has shown that only males and chicks exhibit such behaviour, meaning that females do not recognize their chick and vice versa. Such a difference between the parents can be explained by the natural history of razorbills. Like kittiwakes, razorbills are cliff nesters, which greatly limits the chicks' movement. However, when the chick fledges, only the male brings it out to sea and keeps caring for it for a while after fledging, creating the need for him to recognize his own chick. As females do not follow their offspring to sea, there is no need for them to recognize their own chicks. References Animal cognition
Cognitive ecology of individual recognition in colonial birds
Biology
1,720
51,952,938
https://en.wikipedia.org/wiki/BP%20Crucis
BP Crucis (X-ray source GX 301-2) is an X-ray binary system containing a blue hypergiant and a pulsar. System BP Crucis is considered the optical counterpart of the X-ray source GX 301-2. The system consists of a massive hypergiant star and a neutron star in an eccentric 41.5-day orbit. The distance is likely to be between three and four thousand parsecs. It is heavily reddened and has a K-band infrared magnitude of 5.72. Mass transfer from the hypergiant to the pulsar occurs via a dense accretion disc. This produces a cyclotron effect with electron energies of 37 and 48 keV. Variability The system shows both optical and X-ray variability. Although no eclipses are observed, the X-ray luminosity varies during the orbit, with large X-ray flares being observed during periastron passages. The system is an optical variable showing brightness changes of up to 0.08 magnitudes at visible wavelengths. These have been attributed to ellipsoidal variations as the hypergiant rotates and to α Cygni variability. There is an intrinsic pseudo-period of 11.9 days as well as small variations corresponding to the orbital period. The X-ray emission comes not from the neutron star itself, but rather represents radiation re-emitted by an optically thick accretion shell. Properties BP Crucis is around 43 times as massive as the Sun. It is also one of the most luminous stars known in the Galaxy, with an estimated bolometric luminosity of around 470,000 times that of the Sun and a radius 70 times the Sun's. The neutron star appears to belong to the "high mass" variety, being at least . It is very likely to have a mass less than the theoretical maximum mass based on the equation of state for a neutron star. The pulsar has a spin period of 685 seconds, but shows relatively large spindown rates thought to be due to its strong magnetic field, and also occasional spinups due to interaction with the accretion disk. It is calculated that a slowly spinning neutron star could be spun up to the current rotation rate by accretion in only ten years. References External links Swift/BAT transient monitor results Crux B-type hypergiants Emission-line stars Crucis, BP J12263756-6246132 Rotating ellipsoidal variables X-ray binaries Pulsars
BP Crucis
Astronomy
525
24,087,683
https://en.wikipedia.org/wiki/C14H12O4
The molecular formula C14H12O4 (molar mass: 244.25 g/mol, exact mass: 244.073559 u) may refer to: Dioxybenzone (benzophenone-8), an organic compound used in sunscreen Oxyresveratrol, a stilbenoid Piceatannol, a stilbenoid Molecular formulas
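As a quick check of the quoted molar mass, the figure can be recomputed from standard atomic weights. The short Python sketch below does this; the atomic weights are assumed standard values rather than data from this page.

```python
# Minimal sketch: recompute the molar mass of C14H12O4 from standard atomic weights (assumed values).
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol
FORMULA = {"C": 14, "H": 12, "O": 4}

molar_mass = sum(ATOMIC_WEIGHTS[el] * n for el, n in FORMULA.items())
print(f"{molar_mass:.2f} g/mol")  # ≈ 244.25 g/mol, matching the value quoted above
```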
C14H12O4
Physics,Chemistry
96
22,703,860
https://en.wikipedia.org/wiki/2003%20ricin%20letters
The 2003 ricin letters were two ricin-laden letters found on two occasions between October and November 2003. One letter was mailed to the White House and intercepted at a processing facility; another was discovered with no address in South Carolina. A February 2004 ricin incident at the Dirksen Senate Office Building was initially connected to the 2003 letters as well. The letters were sent by someone referring to themselves as "Fallen Angel". The sender, who claimed to own a trucking company, expressed anger over changes in federal trucking regulations. As of 2008, no connection between the Fallen Angel letters and the Dirksen building incident has been established. A $100,000 reward was offered in 2004 by the federal law enforcement agencies investigating the case, but to date the reward remains unclaimed. Background Ricin Ricin is a white powder that can be produced as a liquid or a crystal. Ricin is an extremely toxic plant protein that can cause severe allergic reactions, and exposure to small quantities can be fatal. The toxin inhibits the formation of proteins within cells of exposed people. The U.S. Centers for Disease Control and Prevention (CDC) states that 500 micrograms is the minimum lethal dose of ricin in humans provided that exposure is from injection or inhalation. Ricin is easily purified from castor-oil manufacturing waste. It has been utilized by various states and organizations as a weapon, being most effective as an assassination weapon, notably in the case of the 1978 assassination of Bulgarian dissident Georgi Markov. Trucking regulations On January 4, 2004 new federal transportation rules took effect which directly affected the over-the-road trucking industry in the United States. The rules took effect with a 60-day grace period and were aimed at reducing fatigue related accidents and fatalities. Called the most far-reaching rule changes in 65 years, the regulations reduced daily allowed driving time from 11 hours to 10. The most controversial measures involved the way that workdays were calculated. The calculations were not allowed to factor in such delays as food and fuel stops and other time spent waiting at, for instance, a factory for a load. The new provisions allowed drivers to stay on duty for only 14 hours, thus the time spent waiting could eat into the time a driver spent on duty. These rule changes were what the self-proclaimed "Fallen Angel" took aim at in the ricin-laden letters. Letters October 2003 letter On October 15, 2003 a package was discovered at a mail-sorting center in Greenville, South Carolina, near the Greenville-Spartanburg International Airport. The package contained a letter and a small metal vial containing ricin powder. A label on the outside of the envelope containing the vial displayed the typed message: "Caution ricin poison enclosed in sealed container. Do not open without proper protection". The presence of ricin was confirmed by the Centers for Disease Control and Prevention on October 21. The letter inside the envelope was typewritten to the U.S. Department of Transportation, and stated: To the department of transportation: I'm a fleet owner of a tanker company. I have easy access to castor pulp. If my demand is dismissed I'm capable of making Ricin. My demand is simple, January 4, 2004 starts the new hours of service for trucks which include a ridiculous ten hours in the sleeper berth. Keep at eight or I will start dumping. You have been warned this is the only letter that will be sent by me. 
[sic] Fallen Angel Despite the potentially deadly nature of the poison, no one was exposed to, injured by, or killed by the ricin. The Greenville facility where the letter was found was also declared ricin-free in the ensuing weeks. In addition, the letter had no delivery address and no postmark. November 2003 letter On November 6, 2003, another letter, described as "nearly identical" to the October letter, was discovered. This time, the letter was addressed to The White House and it was discovered at a White House mail-processing facility in Washington, D.C. The letter contained a small vial of a white powdery substance that was initially tested negative for ricin. After subsequent testing at the mail facility resulted in positives for ricin contamination on mail equipment, the U.S. Secret Service ordered a retest that showed by November 10 the letter was "probable for ricin". The letter was postmarked on October 17 in Chattanooga, Tennessee. Though addressed to the White House, the threatening language contained in the letter was again directed at the U.S. Department of Transportation and written by someone calling themselves "Fallen Angel", as with the previous letter. The text of the letter stated: Department of transportationIf you change the hours of service onJanuary 4, 2004 I will turn D.C into a ghost townThe powder on the letter is RICINhave a nice dayFallen Angel The Secret Service did not alert the White House, the Federal Bureau of Investigation (FBI), and other key agencies, including the CDC, of the discovery and positive tests until November 12. In the November 21, 2003 issue of Morbidity and Mortality Weekly Report the CDC recommended that until Fallen Angel was captured, "healthcare providers and public health officials must consider ricin to be a potential public health threat and be vigilant about recognizing illness consistent with ricin exposure". The CDC's November warning mentioned only the first Fallen Angel letter. The discovery of the ricin letter at the White House facility was not disclosed to the public until early February 2004. The public disclosure of the second ricin letter from Fallen Angel coincided with the discovery of ricin in the mail room of a senate office building. February 2004 mail room contamination On February 2, 2004, in a mail room serving Senator Bill Frist in the Dirksen Senate Office Building, a white powdery substance was found on a sorting machine. Tests on February 3 confirmed that the substance was ricin. The positive test results were indicated by six of eight preliminary tests on the substance. The discovery resulted in more than a dozen staffers undergoing decontamination as well as the closure of the Dirksen, Hart, and Russell Senate Office Buildings. The incident was treated as a criminal probe with investigators looking carefully for any connection between the ricin found at Dirksen and the "Fallen Angel" cases. Investigations Fallen Angel The focus of the probe by the FBI, U.S. Postal Inspection Service (USPIS) and the Department of Transportation's Office of Inspector General fell instantly upon the "Fallen Angel" in the two letters. The FBI was the lead agency in the Fallen Angel investigation. Agents questioned various people during their probe, such as one vocal former trucker in Florida. Federal officials, most notably at the U.S. Department of Homeland Security (DHS), remarked that the letters did not have the hallmarks of international terrorism and were more likely produced by a homegrown criminal. 
On January 4, 2004, the FBI, along with the USPIS and the DOT, offered a $100,000 reward in connection with the October 2003 case from Greenville, South Carolina. In late 2004 the amount of the reward was increased to $120,000. The criminal has not, thus far, been captured. In February 2004, the United States Secret Service revealed a six-day delay between the discovery of the initial letters and informing the FBI and other agencies of their existence. White House spokesman Scott McClellan told reporters that the letter was not determined a threat to the public due to it already having been intercepted. This withholding of information was criticized by some lawmakers and public officials. Dirksen Building contamination Immediately following the incident in Frist's office, both the FBI and the United States Capitol Police were tasked to the investigation; like in the Fallen Angel investigation, the FBI was the lead agency. Detectives and agents focused on the possibility that the person responsible for the 2003 letters was also responsible for the contamination at the Dirksen building. Within two weeks of the incident, investigators were questioning the validity of the positive ricin tests at the Senate building. The results raised suspicion because no source (e.g. a letter) was ever found for the ricin. It was possible that the "contamination" was from paper by-products and not ricin. However, later tests confirmed that the initial tests did not indicate a false positive and the substance was indeed ricin. By the end of March 2005, there were no suspects and no confirmed source for the ricin found in Senator Frist's office. Investigators also found no connection to the Fallen Angel case as of the same date. Despite those developments, investigators were not yet ready to declare a dead end to the investigation. As of 2008, no direct connection has yet been found between the Frist case and the Fallen Angel case and no explanation found for the origin of the ricin in Frist's office. See also 1984 Rajneeshee bioterror attack 2001 anthrax attacks April 2013 ricin letters Wood Green ricin plot Shannon Richardson, former actress who sent ricin letters to politicians in May 2013 References Further reading Anderson, Curt. "Ricin investigation expands to Tennessee, trucker radio", the Associated Press, via the Oakland Tribune via findarticles.com, February 7, 2004, accessed May 6, 2009. Crowley, Michael. "Paul Kevin Curtis and the Weird History of Domestic Ricin Terrorism", Time, April 17, 2013, accessed May 26, 2014. Eggen, Dan. "Letter With Ricin Vial Sent to White House", The Washington Post, February 4, 2004, accessed May 5, 2009. "Investigation of a Ricin-Containing Envelope at a Postal Facility — South Carolina, 2003", Morbidity and Mortality Weekly Report, November 21, 2003, Vol. 52, No. 56, pp. 1129–31. Kucinich, Jackie. "Ricin case 'still being looked at'", The Hill, September 15, 2005, accessed May 5, 2009. Schier, Joshua G. et al. "Public Health Investigation After the Discovery of Ricin in a South Carolina Postal Facility", American Journal of Public Health, Supplement 1, 2007, Vol. 97, No. S1, pp. 152–57, (ISSN 1541-0048) accessed May 6, 2009. External links Fallen Angel reward poster (FBI) (updated), Federal Bureau of Investigation, accessed May 5, 2009. Fallen Angel reward poster (USPIS), (original version), U.S. Postal Inspection Service, accessed May 5, 2009. Fallen Angel reward poster (USPIS) , (updated) U.S. Postal Inspection Service, accessed May 5, 2009. 
"Poisonous Powder ", Newshour with Jim Lehrer transcript and video, PBS, February 4, 2004, accessed May 6, 2009. 2003 in American politics 2003 in Washington, D.C. Bioterrorism Crimes in Washington, D.C. Failed terrorist attempts in the United States February 2004 crimes in the United States Letters (message) November 2003 crimes in the United States October 2003 crimes in the United States Ricin Terrorist incidents by unknown perpetrators Terrorist incidents in South Carolina Terrorist incidents in the United States in 2003 Terrorist incidents in Washington, D.C. Terrorist incidents involving postal systems
2003 ricin letters
Biology
2,311
66,148,455
https://en.wikipedia.org/wiki/WY%20Sagittae
WY Sagittae, also known as Nova Sagittae 1783, is a star in the constellation Sagitta which had a nova eruption visible in 1783. It was discovered on 26 July 1783 by the French astronomer Joseph Lepaute D'Agelet. It is usually difficult to precisely identify novae that were discovered hundreds of years ago, because the positions were often vaguely reported (for example the discoverer may have only reported the constellation where the nova occurred) and historically there was not a clear distinction drawn between different sorts of transient astronomical events such as novae and comet apparitions. However D'Agelet observed this nova with a mural quadrant, which produced coordinates accurate enough to allow modern astronomers to identify the star. D'Agelet reported the apparent magnitude of the star as 6, but Benjamin Apthorp Gould, who analysed D'Agelet's records, determined that what D'Agelet called magnitude 6 corresponds to magnitude 5.4 ± 0.4 on the modern magnitude scale, so the nova was visible to the naked eye. Very little is known about WY Sagittae's post-eruption light curve. D'Agelet reported the star's magnitude as 6, 6 and 6.7 on the 26th, 27th and 29 July 1783, respectively. At least a half dozen observers attempted to find D'Agelet's nova in the late 19th and early 20th centuries, without success. In 1942 a photographic search for the nova was performed using the 60-inch telescope on Mt. Wilson, and in 1950 Harold Weaver tentatively identified a faint blue star with a photographic magnitude of 18.9 as the quiescent nova. The star was only a few arc seconds away from D'Agelet's reported position, and fluctuations in its brightness added to the confidence that it was indeed the nova. In 1971 Brian Warner observed Weaver's candidate for WY Sagittae with the Otto Struve Telescope, and saw rapid brightness variations that are ubiquitous in quiescent novae, which confirmed Weaver's identification of D'Agelet's nova. All novae are binary stars, with a "donor" star orbiting a white dwarf. The two stars are so close together that matter is transferred from the donor star to the white dwarf. Because the distance between the stars is comparable to the radius of the donor star, novae are often eclipsing binaries, and WY Sagittae does show eclipses. The eclipses, which are quite deep (two magnitudes), show that the binary's orbital period is 3 hours and 41 minutes. Christian Knigge classified the donor star's spectral type as M4±1. Somers et al. estimate the donor star's spectral type to be between M3.5 and M4.5, and the mass of the white dwarf to lie between and . Özdönmez et al. estimate WY Sagittae's distance to be 4200±400 parsecs, based on reddening. WY Sagittae is sometimes listed as the second oldest "recovered" nova (meaning a historical nova for which modern observations have unambiguously identified the post-nova star), with only CK Vulpeculae being older. But Naylor et al. argue that CK Vulpeculae is not a nova, and WY Sagittae is the oldest recovered nova. References Novae Sagitta Sagittae, WY Eclipsing binaries
WY Sagittae
Astronomy
726
15,892,980
https://en.wikipedia.org/wiki/Finno-Ugrian%20suicide%20hypothesis
The Finno-Ugrian suicide hypothesis proposes to link genetic ties originating among Finno-Ugric peoples to a high rate of suicide, claiming that an allele common among them is responsible. Mari and Udmurts have been found to have a suicide rate three times higher than that of Finns and Hungarians. It has thus been theorized that such an allele may have arisen in those populations. However, contrary to the hypothesis, available contemporary (1990–1994) suicide rates in the United States were uniformly negatively associated with the proportion of the population comprising people of self-reported Hungarian, Lithuanian, Polish, Russian, Slovakian, or Ukrainian descent. The findings of this first test outside Europe are therefore conflicting. A proposal based on the geographical study approach has been offered to further the progress of investigations into the genetics of suicide. See also Human genetic variation Finnish heritage disease Gloomy Sunday List of countries by suicide rate References Behavioural genetics Behavioural sciences Finno-Ugric peoples Suicide
Finno-Ugrian suicide hypothesis
Biology
200
1,575,447
https://en.wikipedia.org/wiki/Shear%20modulus
In materials science, shear modulus or modulus of rigidity, denoted by G, or sometimes S or μ, is a measure of the elastic shear stiffness of a material and is defined as the ratio of shear stress to the shear strain: G = τ/γ = (F/A) / (Δx/l) = Fl/(AΔx), where τ = F/A is the shear stress, F is the force which acts, A is the area on which the force acts, and γ is the shear strain. In engineering the shear strain is defined as γ = Δx/l = tan θ (elsewhere simply as θ), where Δx is the transverse displacement and l is the initial length of the area. The derived SI unit of shear modulus is the pascal (Pa), although it is usually expressed in gigapascals (GPa) or in thousand pounds per square inch (ksi). Its dimensional form is M1L−1T−2, replacing force by mass times acceleration. Explanation The shear modulus is one of several quantities for measuring the stiffness of materials. All of them arise in the generalized Hooke's law: Young's modulus E describes the material's strain response to uniaxial stress in the direction of this stress (like pulling on the ends of a wire or putting a weight on top of a column, with the wire getting longer and the column losing height), the Poisson's ratio ν describes the response in the directions orthogonal to this uniaxial stress (the wire getting thinner and the column thicker), the bulk modulus K describes the material's response to (uniform) hydrostatic pressure (like the pressure at the bottom of the ocean or a deep swimming pool), the shear modulus G describes the material's response to shear stress (like cutting it with dull scissors). These moduli are not independent, and for isotropic materials they are connected via the equations E = 2G(1 + ν) and E = 3K(1 − 2ν). The shear modulus is concerned with the deformation of a solid when it experiences a force parallel to one of its surfaces while its opposite face experiences an opposing force (such as friction). In the case of an object shaped like a rectangular prism, it will deform into a parallelepiped. Anisotropic materials such as wood, paper and also essentially all single crystals exhibit differing material response to stress or strain when tested in different directions. In this case, one may need to use the full tensor-expression of the elastic constants, rather than a single scalar value. One possible definition of a fluid would be a material with zero shear modulus. Shear waves In homogeneous and isotropic solids, there are two kinds of waves, pressure waves and shear waves. The velocity of a shear wave is controlled by the shear modulus: vs = √(G/ρ), where G is the shear modulus and ρ is the solid's density. Shear modulus of metals The shear modulus of metals is usually observed to decrease with increasing temperature. At high pressures, the shear modulus also appears to increase with the applied pressure. Correlations between the melting temperature, vacancy formation energy, and the shear modulus have been observed in many metals. Several models exist that attempt to predict the shear modulus of metals (and possibly that of alloys). Shear modulus models that have been used in plastic flow computations include: the Varshni-Chen-Gray model developed by and used in conjunction with the Mechanical Threshold Stress (MTS) plastic flow stress model. the Steinberg-Cochran-Guinan (SCG) shear modulus model developed by and used in conjunction with the Steinberg-Cochran-Guinan-Lund (SCGL) flow stress model. the Nadal and LePoac (NP) shear modulus model that uses Lindemann theory to determine the temperature dependence and the SCG model for pressure dependence of the shear modulus.
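The isotropic relations and the shear-wave formula above can be illustrated with a short numerical sketch. The Python example below derives G from Young's modulus and Poisson's ratio and then the shear-wave speed; the steel values used (E, ν, ρ) are assumed, typical textbook numbers rather than figures from this article.

```python
# Minimal sketch of the isotropic relation G = E / (2(1 + nu)) and the shear-wave speed.
# The steel values (E, nu, rho) are assumed, typical textbook numbers.
import math

def shear_modulus_from_E_nu(E, nu):
    """G = E / (2(1 + nu)) for an isotropic material."""
    return E / (2.0 * (1.0 + nu))

def shear_wave_speed(G, rho):
    """v_s = sqrt(G / rho)."""
    return math.sqrt(G / rho)

E = 200e9      # Young's modulus of steel, Pa
nu = 0.30      # Poisson's ratio
rho = 7850.0   # density, kg/m^3

G = shear_modulus_from_E_nu(E, nu)
print(G / 1e9, shear_wave_speed(G, rho))  # ≈ 77 GPa, ≈ 3130 m/s
```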
Varshni-Chen-Gray model The Varshni-Chen-Gray model (sometimes referred to as the Varshni equation) has the form: μ(T) = μ0 − D/(exp(T0/T) − 1), where μ0 is the shear modulus at 0 K, and D and T0 are material constants. SCG model The Steinberg-Cochran-Guinan (SCG) shear modulus model is pressure dependent and has the form μ(p,T) = μ0 + (∂μ/∂p)·p/η^(1/3) + (∂μ/∂T)·(T − 300), where μ0 is the shear modulus at the reference state (T = 300 K, p = 0, η = 1), p is the pressure, and T is the temperature. NP model The Nadal-Le Poac (NP) shear modulus model is a modified version of the SCG model. The empirical temperature dependence of the shear modulus in the SCG model is replaced with an equation based on Lindemann melting theory. The parameters of the NP shear modulus model are μ0, the shear modulus at absolute zero and ambient pressure; ζ, an area; m, the atomic mass; and f, the Lindemann constant. Shear relaxation modulus The shear relaxation modulus G(t) is the time-dependent generalization of the shear modulus G. See also Elasticity tensor Dynamic modulus Impulse excitation technique Shear strength Seismic moment References Materials science Shear strength Elasticity (physics) Mechanical quantities
Shear modulus
Physics,Materials_science,Mathematics,Engineering
999
13,885
https://en.wikipedia.org/wiki/High-density%20lipoprotein
High-density lipoprotein (HDL) is one of the five major groups of lipoproteins. Lipoproteins are complex particles composed of multiple proteins which transport all fat molecules (lipids) around the body within the water outside cells. They are typically composed of 80–100 proteins per particle (organized by one, two or three ApoA). HDL particles enlarge while circulating in the blood, aggregating more fat molecules and transporting up to hundreds of fat molecules per particle. Overview Lipoproteins are divided into five subgroups, by density/size (an inverse relationship), which also correlates with function and incidence of cardiovascular events. Unlike the larger lipoprotein particles, which deliver fat molecules to cells, HDL particles remove fat molecules from cells. The lipids carried include cholesterol, phospholipids, and triglycerides, amounts of each are variable. Increasing concentrations of HDL particles are associated with decreasing accumulation of atherosclerosis within the walls of arteries, reducing the risk of sudden plaque ruptures, cardiovascular disease, stroke and other vascular diseases. HDL particles are commonly referred to as "good cholesterol", because they transport fat molecules out of artery walls, reduce macrophage accumulation, and thus help prevent or even regress atherosclerosis. Higher HDL-C may not necessarily be protective against cardiovascular disease and may even be harmful in extremely high quantities, with an increased cardiovascular risk, especially in hypertensive patients. Testing Because of the high cost of directly measuring HDL and LDL (low-density lipoprotein) protein particles, blood tests are commonly performed for the surrogate value, HDL-C, i.e. the cholesterol associated with ApoA-1/HDL particles. In healthy individuals, about 30% of blood cholesterol, along with other fats, is carried by HDL. This is often contrasted with the amount of cholesterol estimated to be carried within low-density lipoprotein particles, LDL, and called LDL-C. HDL particles remove fats and cholesterol from cells, including within artery wall atheroma, and transport it back to the liver for excretion or re-utilization; thus the cholesterol carried within HDL particles (HDL-C) is sometimes called "good cholesterol" (despite being the same as cholesterol in LDL particles). Those with higher levels of HDL-C tend to have fewer problems with cardiovascular diseases, while those with low HDL-C cholesterol levels (especially less than 40 mg/dL or about 1 mmol/L) have increased rates for heart disease. Higher native HDL levels are correlated with lowered risk of cardiovascular disease in healthy people. The remainder of the serum cholesterol after subtracting the HDL is the non-HDL cholesterol. The concentration of these other components, which may cause atheroma, is known as the non-HDL-C. This is now preferred to LDL-C as a secondary marker as it has been shown to be a better predictor and it is more easily calculated. Structure and function With a size ranging from 5 to 17 nm, HDL is the smallest of the lipoprotein particles. It is the densest because it contains the highest proportion of protein to lipids. Its most abundant apolipoproteins are apo A-I and apo A-II. A rare genetic variant, ApoA-1 Milano, has been documented to be far more effective in both protecting against and regressing arterial disease, atherosclerosis. 
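The non-HDL-C arithmetic described in the testing section, together with the HDL-C cut-offs quoted elsewhere in the article (40 mg/dL for men, 50 mg/dL for women, 60 mg/dL as broadly protective), can be expressed as a small helper. The Python sketch below is illustrative only: the function names and the encoding of the thresholds are assumptions, and the output is not clinical guidance.

```python
# Minimal sketch of the non-HDL-C calculation and the HDL-C cut-offs quoted in the article.
def non_hdl_cholesterol(total_c, hdl_c):
    """non-HDL-C = total cholesterol minus HDL-C (both in mg/dL)."""
    return total_c - hdl_c

def hdl_category(hdl_c, sex):
    """Rough classification using the thresholds quoted in the article (illustrative only)."""
    low_cutoff = 40 if sex == "male" else 50
    if hdl_c < low_cutoff:
        return "low (increased cardiovascular risk)"
    if hdl_c >= 60:
        return "high (generally considered protective)"
    return "intermediate"

print(non_hdl_cholesterol(200, 55))   # 145 mg/dL
print(hdl_category(55, "female"))     # intermediate
```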
The liver synthesizes these lipoproteins as complexes of apolipoproteins and phospholipid, which resemble cholesterol-free flattened spherical lipoprotein particles, whose NMR structure was published; the complexes are capable of picking up cholesterol, carried internally, from cells by interaction with the ATP-binding cassette transporter A1 (ABCA1). A plasma enzyme called lecithin-cholesterol acyltransferase (LCAT) converts the free cholesterol into cholesteryl ester (a more hydrophobic form of cholesterol), which is then sequestered into the core of the lipoprotein particle, eventually causing the newly synthesized HDL to assume a spherical shape. HDL particles increase in size as they circulate through the blood and incorporate more cholesterol and phospholipid molecules from cells and other lipoproteins, such as by interaction with the ABCG1 transporter and the phospholipid transport protein (PLTP). HDL transports cholesterol mostly to the liver or steroidogenic organs such as adrenals, ovary, and testes by both direct and indirect pathways. HDL is removed by HDL receptors such as scavenger receptor BI (SR-BI), which mediate the selective uptake of cholesterol from HDL. In humans, probably the most relevant pathway is the indirect one, which is mediated by cholesteryl ester transfer protein (CETP). This protein exchanges triglycerides of VLDL against cholesteryl esters of HDL. As the result, VLDLs are processed to LDL, which are removed from the circulation by the LDL receptor pathway. The triglycerides are not stable in HDL, but are degraded by hepatic lipase so that, finally, small HDL particles are left, which restart the uptake of cholesterol from cells. The cholesterol delivered to the liver is excreted into the bile and, hence, intestine either directly or indirectly after conversion into bile acids. Delivery of HDL cholesterol to adrenals, ovaries, and testes is important for the synthesis of steroid hormones. Several steps in the metabolism of HDL can participate in the transport of cholesterol from lipid-laden macrophages of atherosclerotic arteries, termed foam cells, to the liver for secretion into the bile. This pathway has been termed reverse cholesterol transport and is considered as the classical protective function of HDL toward atherosclerosis. HDL carries many lipid and protein species, several of which have very low concentrations but are biologically very active. For example, HDL and its protein and lipid constituents help to inhibit oxidation, inflammation, activation of the endothelium, coagulation, and platelet aggregation. All these properties may contribute to the ability of HDL to protect from atherosclerosis, and it is not yet known which are the most important. In addition, a small subfraction of HDL lends protection against the protozoan parasite Trypanosoma brucei brucei. This HDL subfraction, termed trypanosome lytic factor (TLF), contains specialized proteins that, while very active, are unique to the TLF molecule. In the stress response, serum amyloid A, which is one of the acute-phase proteins and an apolipoprotein, is under the stimulation of cytokines (interleukin 1, interleukin 6), and cortisol produced in the adrenal cortex and carried to the damaged tissue incorporated into HDL particles. At the inflammation site, it attracts and activates leukocytes. In chronic inflammations, its deposition in the tissues manifests itself as amyloidosis. 
It has been postulated that the concentration of large HDL particles more accurately reflects protective action, as opposed to the concentration of total HDL particles. This ratio of large HDL to total HDL particles varies widely and is measured only by more sophisticated lipoprotein assays using either electrophoresis (the original method developed in the 1970s) or newer NMR spectroscopy methods (See also nuclear magnetic resonance and spectroscopy), developed in the 1990s. Subfractions Five subfractions of HDL have been identified. From largest (and most effective in cholesterol removal) to smallest (and least effective), the types are 2a, 2b, 3a, 3b, and 3c. Epidemiology Men tend to have noticeably lower HDL concentrations, with smaller size and lower cholesterol content, than women. Men also have a greater incidence of atherosclerotic heart disease. Studies confirm the fact that HDL has a buffering role in balancing the effects of the hypercoagulable state in type 2 diabetics and decreases the high risk of cardiovascular complications in these patients. Also, the results obtained in this study revealed that there was a significant negative correlation between HDL and activated partial thromboplastin time (APTT). Epidemiological studies have shown that high concentrations of HDL (over 60 mg/dL) have protective value against cardiovascular diseases such as ischemic stroke and myocardial infarction. Low concentrations of HDL (below 40 mg/dL for men, below 50 mg/dL for women) increase the risk for atherosclerotic diseases. Data from the landmark Framingham Heart Study showed that, for a given level of LDL, the risk of heart disease increases 10-fold as the HDL varies from high to low. On the converse, however, for a fixed level of HDL, the risk increases 3-fold as LDL varies from low to high. Even people with very low LDL levels achieved by statin treatment are exposed to increased risk if their HDL levels are not high enough. Estimating HDL via associated cholesterol Clinical laboratories formerly measured HDL cholesterol by separating other lipoprotein fractions using either ultracentrifugation or chemical precipitation with divalent ions such as Mg2+, then coupling the products of a cholesterol oxidase reaction to an indicator reaction. The reference method still uses a combination of these techniques. Most laboratories now use automated homogeneous analytical methods in which lipoproteins containing apo B are blocked using antibodies to apo B, then a colorimetric enzyme reaction measures cholesterol in the non-blocked HDL particles. HPLC can also be used. Subfractions (HDL-2C, HDL-3C) can be measured, but clinical significance of these subfractions has not been determined. The measurement of apo-A reactive capacity can be used to measure HDL cholesterol but is thought to be less accurate. Recommended ranges The American Heart Association, NIH and NCEP provide a set of guidelines for fasting HDL levels and risk for heart disease. High LDL with low HDL level is an additional risk factor for cardiovascular disease. Measuring HDL concentration and sizes As technology has reduced costs and clinical trials have continued to demonstrate the importance of HDL, methods for directly measuring HDL concentrations and size (which indicates function) at lower costs have become more widely available and increasingly regarded as important for assessing individual risk for progressive arterial disease and treatment methods. 
Electrophoresis measurements Since the HDL particles have a net negative charge and vary by density and size, ultracentrifugation combined with electrophoresis has been used since before 1950 to enumerate the concentration of HDL particles within a specific volume of blood plasma and to sort them by size. Larger HDL particles carry more cholesterol. NMR measurements Concentration and sizes of lipoprotein particles can be estimated using nuclear magnetic resonance fingerprinting. Optimal total and large HDL concentrations The HDL particle concentrations are typically categorized by event rate percentiles based on the people participating and being tracked in the MESA trial, a medical research study sponsored by the United States National Heart, Lung, and Blood Institute. The lowest incidence of atherosclerotic events over time occurs within those with both the highest concentrations of total HDL particles (the top quarter, >75%) and the highest concentrations of large HDL particles. Multiple additional measures, including LDL particle concentrations, small LDL particle concentrations, VLDL concentrations, estimations of insulin resistance and standard cholesterol lipid measurements (for comparison of the plasma data with the estimation methods discussed above) are routinely provided in clinical testing. Increasing HDL levels While higher HDL levels are correlated with lower risk of cardiovascular diseases, no medication used to increase HDL has been proven to improve health. As of 2017, numerous lifestyle changes and drugs to increase HDL levels were under study. HDL lipoprotein particles that bear apolipoprotein C3 are associated with increased, rather than decreased, risk for coronary heart disease. Diet and exercise Certain changes in diet and exercise may have a positive impact on raising HDL levels: Decreased intake of simple carbohydrates. Aerobic exercise Weight loss Avocado consumption Magnesium supplements raise HDL-C. Addition of soluble fiber to diet Consumption of omega-3 fatty acids such as fish oil or flax oil Increased intake of unsaturated fats Removal of trans fatty acids from the diet Most saturated fats increase HDL cholesterol to varying degrees but also raise total and LDL cholesterol. Recreational drugs HDL levels can be increased by smoking cessation or mild to moderate alcohol intake. For cannabis, in unadjusted analyses past and current use was not associated with higher HDL-C levels. A study performed in 4635 patients demonstrated no effect on the HDL-C levels (P=0.78) [the mean (standard error) HDL-C values in control subjects (never used), past users and current users were 53.4 (0.4), 53.9 (0.6) and 53.9 (0.7) mg/dL, respectively]. Exogenous anabolic androgenic steroids, particularly 17α-alkylated anabolic steroids and others administered orally, can reduce HDL-C by 50 percent or more. Other androgen receptor agonists such as selective androgen receptor modulators can also lower HDL. As there is some evidence that the HDL reduction is caused by increased reverse cholesterol transport, it is unknown if AR agonists' HDL-lowering effect is pro- or anti-atherogenic. Pharmaceutical drugs and niacin Pharmacological therapy to increase the level of HDL cholesterol includes use of fibrates and niacin. Fibrates have not been proven to have an effect on overall deaths from all causes, despite their effects on lipids. 
Niacin (nicotinic acid, a form of vitamin B3) increases HDL by selectively inhibiting hepatic diacylglycerol acyltransferase 2, reducing triglyceride synthesis and VLDL secretion, acting through the receptors HM74 (niacin receptor 2) and HM74A/GPR109A (niacin receptor 1). Pharmacologic (1- to 3-gram/day) niacin doses increase HDL levels by 10–30%, making it the most powerful agent to increase HDL cholesterol. A randomized clinical trial demonstrated that treatment with niacin can significantly reduce atherosclerosis progression and cardiovascular events. Niacin products sold as "no-flush", i.e. not having side-effects such as "niacin flush", do not, however, contain free nicotinic acid and are therefore ineffective at raising HDL, while products sold as "sustained-release" may contain free nicotinic acid, but "some brands are hepatotoxic"; therefore the recommended form of niacin for raising HDL is the cheapest, immediate-release preparation. Both fibrates and niacin increase levels of artery-toxic homocysteine, an effect that can be counteracted by also consuming a multivitamin with relatively high amounts of the B vitamins; however, multiple European trials of the most popular B-vitamin cocktails, showing on average a 30% reduction in homocysteine and no safety problems, have also not shown any benefit in reducing cardiovascular event rates. A 2011 extended-release niacin (Niaspan) study was halted early because patients adding niacin to their statin treatment showed no increase in heart health, but did experience an increase in the risk of stroke. In contrast, while the use of statins is effective against high levels of LDL cholesterol, most have little or no effect in raising HDL cholesterol. Rosuvastatin and pitavastatin, however, have been demonstrated to significantly raise HDL levels. Lovaza has been shown to increase HDL-C. However, the best evidence to date suggests it has no benefit for primary or secondary prevention of cardiovascular disease. The PPAR modulator GW501516 has shown a positive effect on HDL-C and an antiatherogenic effect where LDL is an issue. However, research on the drug has been discontinued after it was discovered to cause rapid cancer development in several organs in rats. See also Asymmetric dimethylarginine Cardiovascular disease Cholesteryl ester storage disease Endothelium Lipid profile Lysosomal acid lipase deficiency References Lipid disorders Cardiology Lipoproteins
High-density lipoprotein
Chemistry
3,619
390,319
https://en.wikipedia.org/wiki/Fuel%20pump
A fuel pump is a component used in many liquid-fuelled engines (such as petrol/gasoline or diesel engines) to transfer the fuel from the fuel tank to the device where it is mixed with the intake air (such as the carburetor or fuel injector). Carbureted engines often use low-pressure mechanical pumps that are mounted on the engine. Fuel injected engines use either electric fuel pumps mounted inside the fuel tank (for lower pressure manifold injection systems) or high-pressure mechanical pumps mounted on the engine (for high-pressure direct injection systems). Some engines do not use any fuel pump at all. A low-pressure fuel supply used by a carbureted engine can be achieved through a gravity feed system, i.e. by simply mounting the tank higher than the carburetor. This method is commonly used in carbureted motorcycles, where the tank is usually directly above the engine. Low-pressure mechanical pumps On engines that use a carburetor (e.g. in older cars, lawnmowers and power tools), a mechanical fuel pump is typically used in order to transfer fuel from the fuel tank into the carburetor. These fuel pumps operate at a relatively low fuel pressure. The two most widely used types of mechanical pumps are diaphragm pumps and plunger pumps. High-pressure mechanical pumps Pumps for modern direct-injection engines operate at a much higher pressure and have configurations such as common rail radial piston, common rail two piston radial, inline, port and helix, and metering unit. Injection pumps are fuel lubricated, which prevents oil from contaminating the fuel. Port and Helix pumps Port and Helix pumps are most commonly used in marine diesel engines because of their simplicity, reliability, and ability to be scaled up in proportion to the engine size. The pump is similar to a radial piston-type pump, but instead of a piston it has a machined plunger that has no seals. When the plunger is at top dead center, the injection to the cylinder is finished and it is returned on its downward stroke by a compression spring. Due to the fixed height of a cam lobe, the amount of fuel being pumped to the injector is controlled by a rack and pinion device that rotates the plunger, thus allowing variable amounts of fuel into the area above the plunger. The fuel is then forced through a check valve and into the fuel injector nozzle. Plunger-type pumps Plunger-type pumps are a type of positive-displacement pump used by diesel engines. These pumps contain a chamber whose volume is increased and/or decreased by a moving plunger, along with check valves at the inlet and discharge ports. The design is similar to a piston pump, but the high-pressure seal is stationary while the smooth cylindrical plunger slides through the seal. Plunger-type pumps are often mounted on the side of the injection pump and driven by the camshaft, and usually run at a relatively low fuel pressure. Electric pumps In fuel-injected petrol engines, an electric fuel pump is typically located inside the fuel tank. For older port injection and throttle-body injection systems, this "in-tank" fuel pump transports the fuel from the fuel tank to the engine as well as pressurising it. For direct-injection systems, by contrast, the in-tank fuel pump transports the fuel to the engine, where a separate fuel pump pressurises the fuel (to a much higher pressure). Since the electric pump does not require mechanical power from the engine, it is feasible to locate the pump anywhere between the engine and the fuel tank. 
The reasons that the fuel pump is typically located in the fuel tank are: By submerging the pump in fuel at the bottom of the tank, the pump is cooled by the surrounding fuel Liquid fuel by itself (i.e. without oxygen present) isn't flammable, therefore surrounding the fuel pump by fuel reduces the risk of fire In-tank fuel pumps are often part of an assembly consisting of the fuel pump, fuel strainer and fuel level sensor (the latter used for the fuel gauge). Turbopumps Rocket engines use a turbopump to supply the fuel and oxidizer into the combustion chamber. See also List of auto parts References Pumps Engine fuel system technology Engine components
Fuel pump
Physics,Chemistry,Technology
897
5,651,414
https://en.wikipedia.org/wiki/Warren%20%28burrow%29
A warren is a network of interconnected burrows, dug by rabbits. Domestic warrens are artificial, enclosed establishments of animal husbandry dedicated to the raising of rabbits for meat and fur. The term evolved from the medieval Anglo-Norman concept of free warren, which had been, essentially, the equivalent of a hunting license for a given woodland. Architecture of the domestic warren The cunicularia of the monasteries may have more closely resembled hutches or pens, than the open enclosures with specialized structures which the domestic warren eventually became. Such an enclosure or close was called a cony-garth, or sometimes conegar, coneygree or "bury" (from "burrow"). Moat and pale To keep the rabbits from escaping, domestic warrens were usually provided with a fairly substantive moat, or ditch filled with water. Rabbits generally do not swim and avoid water. A pale, or fence, was provided to exclude predators. Pillow mounds The most characteristic structure of the "cony-garth" ("rabbit-yard") is the pillow mound. These were "pillow-like", oblong mounds with flat tops, frequently described as being "cigar-shaped", and sometimes arranged like the letter ⟨E⟩ or into more extensive, interconnected rows. Often these were provided with pre-built, stone-lined tunnels. The preferred orientation was on a gentle slope, with the arms extending downhill, to facilitate drainage. The soil needed to be soft, to accommodate further burrowing. This type of architecture and animal husbandry has become obsolete, but numerous pillow mounds are still to be found in Britain, some of them maintained by English Heritage, with the greatest density being found on Dartmoor. Further evolution of the term Ultimately, the term "warren" was generalized to include wild burrows. According to the 1911 Encyclopædia Britannica: The word thus became used of a piece of ground preserved for these beasts of warren. It is now applied loosely to any piece of ground, whether preserved or not, where rabbits breed. The use is further extended to any system of burrows, e.g., "prairie dog warren". By 1649, the term was applied to inferior, crowded human accommodations and meant "cluster of densely populated living spaces" (OED). Contemporarily, the leading use seems to be in the stock phrase "warren of cubicles" in the workplace. References Further reading Livestock Agricultural buildings Buildings and structures used to confine animals Human–animal interaction Shelters built or used by animals Leporidae
Warren (burrow)
Biology
520
233,956
https://en.wikipedia.org/wiki/Anti-pattern
An anti-pattern in software engineering, project management, and business processes is a common response to a recurring problem that is usually ineffective and risks being highly counterproductive. The term, coined in 1995 by computer programmer Andrew Koenig, was inspired by the book Design Patterns (which highlights a number of design patterns in software development that its authors considered to be highly reliable and effective) and first published in his article in the Journal of Object-Oriented Programming. A further paper in 1996 presented by Michael Ackroyd at the Object World West Conference also documented anti-patterns. It was, however, the 1998 book AntiPatterns that both popularized the idea and extended its scope beyond the field of software design to include software architecture and project management. Other authors have extended it further since to encompass environmental, organizational, and cultural anti-patterns. Definition According to the authors of Design Patterns, there are two key elements to an anti-pattern that distinguish it from a bad habit, bad practice, or bad idea: The anti-pattern is a commonly-used process, structure or pattern of action that, despite initially appearing to be an appropriate and effective response to a problem, has more bad consequences than good ones. Another solution exists to the problem the anti-pattern is attempting to address. This solution is documented, repeatable, and proven to be effective where the anti-pattern is not. A guide to what is commonly used is a "rule-of-three" similar to that for patterns: to be an anti-pattern it must have been witnessed occurring at least three times. Uses Documenting anti-patterns can be an effective way to analyze a problem space and to capture expert knowledge. While some anti-pattern descriptions merely document the adverse consequences of the pattern, good anti-pattern documentation also provides an alternative, or a means to ameliorate the anti-pattern. Software engineering anti-patterns In software engineering, anti-patterns include the big ball of mud (lack of) design, the god object (where a single class handles all control in a program rather than control being distributed across multiple classes), magic numbers (unique values with an unexplained meaning or multiple occurrences which could be replaced with a named constant), and poltergeists (ephemeral controller classes that only exist to invoke other methods on classes). Big ball of mud This indicates a software system that lacks a perceivable architecture. Although undesirable from a software engineering point of view, such systems are common in practice due to business pressures, developer turnover and code entropy. The term was popularized in Brian Foote and Joseph Yoder's 1997 paper of the same name, which defines the term: Foote and Yoder have credited Brian Marick as the originator of the "big ball of mud" term for this sort of architecture. 
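As a small hedged illustration of the "magic number" anti-pattern mentioned above (Python; the function and constant names are invented for this sketch and are not from any particular codebase), the usual remedy is to replace the unexplained literal with a named constant:

```python
# Anti-pattern: the unexplained literal 86_400 is a "magic number".
def is_stale(age_seconds: float) -> bool:
    return age_seconds > 86_400

# Remedy: give the value a name so its meaning is explicit and reusable.
SECONDS_PER_DAY = 86_400  # 24 h * 60 min * 60 s

def is_stale_named(age_seconds: float) -> bool:
    return age_seconds > SECONDS_PER_DAY
```

Both functions behave identically; the difference is purely in how readable and maintainable the intent is, which is the point the anti-pattern literature makes.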
Project management anti-patterns Project management anti-patterns included in the Antipatterns book include: Blowhard Jamboree (an excess of industry pundits) Analysis paralysis Viewgraph Engineering (too much time spent making presentations and not enough on the actual software) Death by Planning (similarly, too much planning) Fear of Success (irrational fears near to project completion) The Corncob (difficulties with people) Intellectual Violence (intimidation through use of jargon or arcane technology) Irrational Management (bad management habits) Smoke and Mirrors (excessive use of demos and prototypes by salespeople) Throw It Over the Wall (forcing fad software engineering practices onto developers without buy-in) Fire Drill (long periods of monotony punctuated by short crises) The Feud (conflicts between managers) e-mail Is Dangerous (situations resulting from ill-advised e-mail messages). See also : Software Life Cycle Profiles and Guidelines for Very Small Entities (VSEs) References What supports what Sources Further reading Later re-printed in: External links Anti-pattern at WikiWikiWeb Software architecture Design Industrial and organizational psychology Organizational behavior Anti-social behaviour Workplace 1995 neologisms
Anti-pattern
Technology,Engineering,Biology
820
5,482,472
https://en.wikipedia.org/wiki/XMMXCS%202215-1738
XMMXCS 2215-1738 is a galaxy cluster that lies 10 billion light-years away and has a redshift value of z=1.45. It was discovered by the XMM Cluster Survey in 2006 and is one of the most distant galaxy clusters known. It is embedded in intergalactic gas that has a temperature of 10 million degrees. The estimated mass of the cluster is 500 trillion solar masses, most coming from dark matter. The cluster was discovered and studied using the XMM-Newton and Keck Telescopes. The cluster is surprisingly large and evolved for a cluster that existed when the universe was only 3 billion years old. A team led by University of Sussex researchers, working as part of the XMM Cluster Survey (XCS), used the X-ray Multi-Mirror (XMM-Newton) satellite to find it, the Keck Telescope to determine its distance, and the Hubble Space Telescope to image it further. It contains hundreds of reddish galaxies surrounded by x-ray-emitting gas. The cluster is called XMMXCS 2215-1734 in many references, with some news sources listing both names. The source of the naming contradiction between XMMXCS 2215-1734 and XMMXCS 2215-1738 is not known. However, XMMXCS 2215-1738 seems to be the more accurate name. See also 2XMM J083026+524133 galaxy cluster XMM-Newton XMM Cluster Survey List of the most distant astronomical objects References Most Distant Galaxy Cluster Found 10 Billion light-years Away XMM Cluster Survey (XCS) The XMM Cluster Survey: A Massive Galaxy Cluster at z=1.45, S. A. Stanford (arXiv preprint, 4 Jun 2006) External links Astronomers Find Most Distant Galaxy Cluster Yet (SpaceDaily) Jun 7, 2006 Maturity of Farthest Galaxy Cluster Surprises Astronomers, Christine L. Kulyk (SPACE.com) 8 June 2006 Galaxy clusters Aquarius (constellation)
XMMXCS 2215-1738
Astronomy
426
3,743,846
https://en.wikipedia.org/wiki/Fengyun
Fēngyún (FY) are China's meteorological satellites. Launched since 1988 into polar Sun-synchronous and geosynchronous orbit, each Fengyun satellite is built by the Shanghai Academy of Spaceflight Technology (SAST) and operated by the China Meteorological Administration (CMA). To date, China has launched twenty-one Fengyun satellites in four classes (FY-1 through FY-4). Fengyun 1 and Fengyun 3 satellites are in polar, Sun-synchronous low Earth orbit, while Fengyun 2 and 4 are in geosynchronous orbit. On 11 January 2007, China destroyed one of these satellites (FY-1C, COSPAR 1999-025A) in a test of an anti-satellite missile. According to NASA, the intentional destruction of FY-1C created more than 3,000 high-velocity debris items, a larger amount of dangerous space debris than any other space mission in history. Classes Fengyun 1 The four satellites of the Fengyun 1 (or FY-1) class were China's first meteorological satellites placed in polar, Sun-synchronous orbit. In this orbit, FY-1 satellites orbited the Earth at both a low altitude (approximately 900 km above the Earth's surface) and at a high inclination between 98.8° and 99.2°, passing over the polar regions on each of roughly 14 orbits per day, giving FY-1-class satellites global meteorological coverage with a rapid revisit time and closer proximity to the clouds they image. FY-1A, launched in September 1988, lasted 39 days until it suffered attitude control problems. FY-1B, launched in September 1990 along with the first two QQW (Qi Qui Weixing) balloon satellites, lasted until late 1992 when its attitude control system also failed. FY-1C, launched in May 1999 along with Shijian-5, completed its two-year design life, operating until January 2004. The last satellite of the class, FY-1D, was launched in May 2002 and operated continuously for nine years until operations were temporarily lost in May 2011. Despite resuscitation, FY-1D failed on 1 April 2012. All Fengyun 1 satellites were launched from Taiyuan Satellite Launch Center (TSLC) in Shanxi Province on Long March 4A and 4B rockets and weighed 750 kg, 880 kg, 954 kg, and 954 kg respectively. Aboard each satellite were two multichannel visible and infrared scanning radiometers (MVISR) built by the Shanghai Institute of Technical Physics (SITP), bearing an optical scanner, image processor, radiant cooler, and controller for the radiant cooler. FY-1C and FY-1D satellites also carried on board a high-energy particle detector (HEPD) for study of the space environment, contributing to their increased mass. FY-1 satellites are powered by two deployable solar arrays and internal batteries. Destruction of FY-1C On 11 January 2007, China conducted its first anti-satellite (ASAT) missile test, destroying FY-1C with a kinetic kill vehicle, identified by the United States Defense Intelligence Agency (DIA) as the SC-19, a modified DF-21 ballistic missile with a mounted kill vehicle. The shootdown, and the subsequent creation of a record-setting amount of in-orbit debris, drew serious international criticism. Fengyun 2 Satellites of the Fengyun 2 class are based on the spin-stabilized Dong Fang Hong 2 platform and are China's first class of meteorological satellites in geostationary orbit. Unlike meteorological satellites in polar orbit (like the FY-1 and FY-3 classes), FY-2 satellites in geostationary orbit remain in a fixed position relative to the Earth 35,000 km above its surface and maintain a constant watch over an assigned area. 
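As a back-of-the-envelope check on these orbit figures (a Python sketch using standard physical constants, not values from Fengyun documentation), Kepler's third law gives the period of a circular orbit at roughly 900 km altitude and, for comparison, at geostationary altitude; the result is consistent with the roughly 14 orbits per day mentioned above:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m

def circular_period_s(altitude_m: float) -> float:
    """Period of a circular orbit via Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_m
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH)

leo = circular_period_s(900e3)       # FY-1-style Sun-synchronous orbit
geo = circular_period_s(35_786e3)    # geostationary altitude
print(f"900 km orbit: {leo/60:.0f} min per orbit, ~{86_400/leo:.0f} orbits per day")
print(f"Geostationary: {geo/3600:.1f} h per orbit")
```

Running this gives about 103 minutes per orbit (roughly 14 orbits per day) for the 900 km case and about 24 hours for the geostationary case, matching the contrast drawn in the text.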
Unlike polar orbiting satellites which view the same area about twice a day, geostationary satellites can image a location as fast as once a minute and show long term meteorological trends - at the cost of resolution. Built by the Shanghai Institute of Satellite Engineering and operated by the Chinese Meteorological Administration, FY-2 satellites are 4.5 m tall and are spin-stabilized rotating at 100 rotations per minute. FY-2-class satellites have been marketed for their openly available data whereby any user with a receiver could view FY-2 derived sensory data. Satellites of the Fengyun 2 class have a mass of 1,380 kilograms, use solar cells and batteries for power, and a FG-36 apogee motor jettisoned after attaining orbit. On 2 April 1994, China attempted to launch the Fengyun 2 from Xichang Satellite Launch Center (XSLC) when, prior to its mating with the Long March 3, a fire caused an explosion destroying the satellite, killing a technician, and injuring 20 others. Officials of the Chinese space agency described the $75 million USD loss of the satellite as a "major setback" to the Chinese space program. Despite this, China launched eight successive Fengyun 2 satellites without incident. Fengyun 3 Chinese participation in the monitoring of auroras for scientific and space weather investigation was initiated with the launch of the Fengyun-3D satellite, which carries a wide-field auroral imager. Fengyun 4 As of 2021, China has launched two Fengyun 4 class satellites. List of satellites See also China Meteorological Administration 2007 Chinese anti-satellite missile test Yunhai-2 Gaofen References External links Fēngyún-3 satellite programme Weather satellites of China Spacecraft launched by Long March rockets Intentionally destroyed artificial satellites Satellite collisions Spacecraft that broke apart in space
Fengyun
Technology
1,186
46,517,872
https://en.wikipedia.org/wiki/Intermittent%20inductive%20automatic%20train%20stop
The intermittent inductive automatic train stop (also referred to as IIATS or just automatic train stop or ATS) is a train protection system used in North American mainline railroad and rapid transit systems. It makes use of magnetic reluctance to trigger a passing train to take some sort of action. The system was developed in the 1920s by the General Railway Signal Company as an improvement on existing mechanical train stop systems and saw limited adoption before being overtaken by more advanced cab signaling and automatic train control systems. The system remains in use after having been introduced in the 1920s. Overview The technology works by having the state of a track mounted shoe read by a receiver mounted to a truck on the leading locomotive or car. In the standard implementation the shoe is mounted to the ties a few inches outside the right hand running rail, although in theory the shoe could be mounted anywhere on the ties. The system is binary with the shoe presenting either an on or off state to the receiver. In order to be failsafe when the shoe is energized it presents an off state to the receiver, while the non-energized state presents an on state which triggers an action. This allows things like permanent speed restrictions or other hazards to be protected by non-active devices. The receiver consists of a two coil electromagnet carefully aligned to pass about 1.5 inches above the surface of the inductor shoe. The inductor shoe consists of two metal plates set into a streamlined housing designed to deflect impacts of debris or misaligned receivers. The metal plates are connected through a choke circuit in the body of the shoe. When the choke circuit is open magnetic flux in the receiver's primary coil is able to induce a voltage in the receiver's secondary coil which in turn triggers an action in the locomotive. When the circuit is closed the choke eliminates the magnetic field and the voltage induced by it allowing the locomotive to pass without activation. Where unconditional activation was desired specially shaped metal plates could be used in place of a fully functional shoe, however the design of the system can result in accidental activations when the train passes over switches or other metal objects in the track area. The most common use case for the ATS system was to alert the railroad engineer of an impending hazard and if the alert was not acknowledged, stop the train by means of a full service application of the brakes. When attached to signals the shoe would be energized when the signal was displaying a clear indication. Any other signal indication would de-energize the shoe and trigger an alarm in the cab. If the engineer did not cancel the alarm within 5–8 seconds a penalty brake application would be initiated and could not be reset until the train came to a complete stop. Unlike mechanical train stops or other train stop systems, IIATS was not generally used to automatically stop a train if it passed a stop signal and in practice could not be used for this purpose as the shoes were placed only a few feet from the signal they protected and would not present sufficient braking distance for the train to stop. On bi-directionally signaled lines two shoes would be needed, one for each direction of travel as locomotives would only have a sensor to detect the shoes on one side of the train. 
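A minimal state-machine sketch of the acknowledgement logic described above (Python; the class and method names are invented for illustration and are not any signalling supplier's API). It models the failsafe polarity of the shoe, the alarm on passing a de-energized shoe, the 5–8 second acknowledgement window, and the penalty brake application that can only be reset after a complete stop:

```python
class AtsOnboardUnit:
    """Toy model of the IIATS acknowledgement behaviour described in the text."""

    ACK_WINDOW_S = 7.0  # the text gives 5-8 seconds; one value chosen for the sketch

    def __init__(self):
        self.alarm_since = None      # time the alarm started, or None if no alarm
        self.penalty_brake = False   # full-service brake application in effect

    def pass_shoe(self, energized: bool, now: float) -> None:
        # An energized shoe (clear signal) presents an "off" state: nothing happens.
        # A de-energized or inert shoe presents an "on" state and raises the alarm.
        if not energized and self.alarm_since is None and not self.penalty_brake:
            self.alarm_since = now

    def acknowledge(self, now: float) -> None:
        # Cancelling the alarm within the window prevents the penalty application.
        if self.alarm_since is not None and now - self.alarm_since <= self.ACK_WINDOW_S:
            self.alarm_since = None

    def tick(self, now: float) -> None:
        # Called periodically; applies the penalty brake once the window has expired.
        if self.alarm_since is not None and now - self.alarm_since > self.ACK_WINDOW_S:
            self.penalty_brake = True

    def stopped(self) -> None:
        # The penalty application cannot be reset until the train comes to a complete stop.
        self.penalty_brake = False
        self.alarm_since = None
```

This is only a schematic of the behaviour the article describes; real installations involve relay and pneumatic hardware rather than software of this kind.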
The receivers can also be designed for easy removal to prevent damage when operating in non-equipped territory or to cut costs when only a small portion of the railroad requires ATS equipped locomotives. Inert inductors are sometimes placed in advance of certain speed restrictions as an alert or at engine terminals to test the functionality of the ATS system. On a few light rail lines IIATS has been employed in a manner similar to mechanical train stops, stopping the train if it passes an absolute stop signal. It is useful where light rail shares tracks with mainline railroad trains, as mechanical trips may be damaged by or interfere with freight operations and because light rail vehicles can be brought to a stop much more quickly than a mainline railroad train without requiring complex signal overlaps. Use Starting in the 1930s, the US Interstate Commerce Commission, in its role as a federal railroad regulator, encouraged railroads to adopt new safety technologies to decrease the rate of railroad accidents. IIATS was offered by the General Railway Signal Company of Rochester, NY as one such technology and it was adopted by the New York Central railroad for use on its high speed Water Level Route between New York and Chicago and on a number of other lines. The Southern Railway also chose to adopt ATS on most of its main lines, eventually covering 2700 route miles. In addition, the Chicago and North Western Railway installed the system on some of its Chicago area commuter lines. After the Naperville train disaster caused by a missed signal, the ICC required additional technical safety systems for any train traveling at or above 80 mph, with the rule taking effect in 1951. For those railroads still interested in high speed operations, IIATS met the minimum ICC requirements at a lower cost than other cab signaling or automatic train control systems; however, with rail travel facing increased competition from cars and airplanes, most railroads simply chose to accept the new speed limit. Only the Atchison, Topeka and Santa Fe chose to fully equip its Chicago to Los Angeles and Los Angeles to San Diego main lines in support of the Super Chief and other premier high speed trains. IIATS installations reached their peak in 1954 with a total of 8650 road miles, 14400 track miles, and 3850 locomotives equipped with the system. However, with the collapse of long distance passenger rail travel and the general North American railroad industry malaise, in 1971 the bankrupt Penn Central was permitted to remove IIATS from its Water Level Route, along with the Southern and other railroads with test or pilot IIATS systems. Even the ATSF and successor BNSF were gradually allowed by regulators to remove IIATS from parts of previously equipped lines due to the reduced passenger traffic. At the dawn of the 21st century the only IIATS equipped lines were the MetroLink and Coaster line between San Diego and Fullerton, parts of the former ATSF Super Chief route in California, Arizona, New Mexico, Colorado, Kansas and Missouri, and the former Chicago and North Western Railway North and Northwest Lines out of Chicago operated by Union Pacific on behalf of Metra. When the NJ Transit River Line opened in 2004, it featured a new IIATS system. This is a light rail system running on shared track with main line freight traffic, and IIATS is used to enforce a full stop at equipped signals rather than serving only as a warning system. 
See also Indusi Automatic warning system Le Crocodile Cab Signal System References Train protection systems Warning systems
Intermittent inductive automatic train stop
Technology,Engineering
1,326
956,434
https://en.wikipedia.org/wiki/Pharmaceutical%20marketing
Pharmaceutical marketing is a branch of marketing science and practice focused on the communication, differential positioning and commercialization of pharmaceutical products, like specialist drugs, biotech drugs and over-the-counter drugs. By extension, this definition is sometimes also used for marketing practices applied to nutraceuticals and medical devices. Whilst rule of law regulating pharmaceutical industry marketing activities is widely variable across the world, pharmaceutical marketing is usually strongly regulated by international and national agencies, like the Food and Drug Administration and the European Medicines Agency. Local regulations from government or local pharmaceutical industry associations like Pharmaceutical Research and Manufacturers of America or European Federation of Pharmaceutical Industries and Associations (EFPIA) can further limit or specify allowed commercial practices. To health care providers Marketing to health-care providers takes three main forms: activity by pharmaceutical sales representatives, provision of drug samples, and sponsoring continuing medical education (CME). The use of gifts, including pens and coffee mugs embossed with pharmaceutical product names, has been prohibited by PHRMA ethics guidelines since 2008. Of the 237,000 medical sites representing 680,000 physicians surveyed in SK&A's 2010 Physician Access survey, half said they prefer or require an appointment to see a rep (up from 38.5% preferring or requiring an appointment in 2008), while 23% won't see reps at all, according to the survey data. Practices owned by hospitals or health systems are tougher to get into than private practices, since appointments have to go through headquarters, the survey found. 13.3% of offices with just one or two doctors won't see representatives, compared with a no-see rate of 42% at offices with 10 or more doctors. The most accessible physicians for promotional purposes are allergists/immunologists – only 4.2% won't see reps at all – followed by orthopedic specialists (5.1%) and diabetes specialists (7.6%). Diagnostic radiologists are the most rigid about allowing details – 92.1% won't see reps – followed by pathologists and neuroradiologists, at 92.1% and 91.8%, respectively. E-detailing is widely used to reach "no see physicians"; approximately 23% of primary care physicians and 28% of specialists prefer computer-based e-detailing, according to survey findings reported in the 25 April 2011 edition of American Medical News (AMNews), published by the American Medical Association (AMA). PhRMA Code The Pharmaceutical Research and Manufacturers of America (PhRMA) released updates to its voluntary Code on Interactions with Healthcare Professionals on 10 July 2008. The new guidelines took effect in January 2009. In addition to prohibiting small gifts and reminder items such as pens, notepads, staplers, clipboards, paperweights, pill boxes, etc., the revised Code: Prohibits company sales representatives providing restaurant meals to healthcare professionals, but allows them to provide occasional modest meals in healthcare professionals' offices in conjunction with informational presentations" Includes new provisions requiring companies to ensure their representatives are sufficiently trained about applicable laws, regulations, and industry codes of practice and ethics. Provides that each company will state its intentions to abide by the Code and that company CEOs and compliance officers will certify each year that they have processes in place to comply. 
Includes more detailed standards regarding the independence of continuing medical education. Provides additional guidance and restrictions for speaking and consulting arrangements with healthcare professionals. Free samples Free samples have been shown to affect physician prescribing behavior. Physicians with access to free samples are more likely to prescribe brand name medication over equivalent generic medications. Other studies found that free samples decreased the likelihood that physicians would follow the standard of care practices. Receiving pharmaceutical samples does not reduce prescription costs. Even after receiving samples, sample recipients remain disproportionately burdened by prescription costs. It is argued that a benefit to free samples is the "try it before you buy it" approach. Free samples give immediate access to the medication and the patient can begin treatment right away. It also saves time from going to a pharmacy to get it filled before treatment begins. Since not all medications work for everyone, and many do not work the same way for each person, free samples allow patients to find which dose and brand of medication works best before having to spend money on a filled prescription at a pharmacy. Continuing medical education Hours spent by physicians in industry-supported continuing medical education (CME) is greater than that from either medical schools or professional societies. Pharmaceutical representatives Currently, there are approximately 81,000 pharmaceutical sales representatives in the United States pursuing some 830,000 pharmaceutical prescribers. A pharmaceutical representative will often try to see a given physician every few weeks. Representatives often have a call list of about 200–300 physicians with 120–180 targets that should be visited in 1–2 or 3 week cycle. Because of the large size of the pharmaceutical sales force, the organization, management, and measurement of effectiveness of the sales force are significant business challenges. Management tasks are usually broken down into the areas of physician targeting, sales force size and structure, sales force optimization, call planning, and sales forces effectiveness. A few pharmaceutical companies have realized that training sales representatives on high science alone is not enough, especially when most products are similar in quality. Thus, training sales representatives on relationship selling techniques in addition to medical science and product knowledge, can make a difference in sales force effectiveness. Specialist physicians are relying more and more on specialty sales reps for product information, because they are more knowledgeable than primary care reps. The United States has 81,000 pharmaceutical representatives or 1 for every 7.9 physicians. The number and persistence of pharmaceutical representatives has placed a burden on the time of physicians. "As the number of reps went up, the amount of time an average rep spent with doctors went down—so far down, that tactical scaling has spawned a strategic crisis. Physicians no longer spend much time with sales reps, nor do they see this as a serious problem." Marketers must decide on the appropriate size of a sales force needed to sell a particular portfolio of drugs to the target market. 
Factors influencing this decision are the optimal reach (how many physicians to see) and frequency (how often to see them) for each individual physician, how many patients have that disease state, how many sales representatives to devote to office and group practice, and how many to devote to hospital accounts if needed. To aid this decision, customers are broken down into different classes according to their prescription behavior, patient population, their business potential, and even their personality traits. Marketers attempt to identify the set of physicians most likely to prescribe a given drug. Historically, this was done by drug reps 'on the ground' using zip code sales data and on-the-ground reconnaissance to figure out who the high prescribers were in a particular sales territory. However, in the mid-1990s the industry, through third-party prescribing data (e.g., Quintiles/IMS), switched to "script-tracking" technologies, measuring the number of total prescriptions (TRx) and new prescriptions (NRx) per week that each physician writes. This information is collected by commercial vendors. The physicians are then "deciled" into ten groups based on their writing patterns. Higher deciles are more aggressively targeted. Some pharmaceutical companies use additional information such as: Profitability of a prescription (script) Accessibility of the physician Tendency of the physician to use the pharmaceutical company's drugs Effect of managed care formularies on the ability of the physician to prescribe a drug The adoption sequence of the physician (that is, how readily the physician adopts new drugs in place of older treatments) The tendency of the physician to use a wide palette of drugs Influence that physicians have on their colleagues. Physicians are perhaps the most important component in sales. They write the prescriptions that determine which drugs will be used by people. Influencing the physician is the key to pharmaceutical sales. Historically, this was done by a large pharmaceutical sales force. A medium-sized pharmaceutical company might have a sales force of 1000 representatives. The largest companies have tens of thousands of representatives around the world. Sales representatives called upon physicians regularly, providing clinical information, approved journal articles, and free drug samples. This is still the approach today; however, economic pressures on the industry are causing pharmaceutical companies to rethink the traditional sales process to physicians. The industry has seen a large scale adoption of Pharma CRM systems that work on laptops and, more recently, tablets. The new age pharmaceutical representative is armed with key data at his fingertips and tools to maximize the time spent with physicians. Pharmaceutical Company Payments Pharmaceutical and medical device companies have also paid physicians to use their drugs, which could affect how often a drug is prescribed. For example, one study that looked at physician payments and pimavanserin found that "extensive physician payments have been associated with increased pimavanserin prescription volume and Medicare expenditures." More specifically, drug reps help to create a culture of gifting, or the "pharmaceutical gift exchange," where actual monetary transactions are rare. In reality, gifts, both large and small, ranging from cups of coffee to travel to medical conferences, are exchanged on a routine basis with high prescribers in an effort to shift their obligations from patients to prescriptions, an effort that has proven effective. 
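As an illustrative sketch of the "deciling" idea described above (Python; the prescriber names and TRx figures are invented, and this is only a schematic of the concept, not any vendor's actual methodology), physicians could be ranked into ten groups by their weekly total prescriptions:

```python
# Hypothetical example: assign prescribers to deciles by weekly TRx volume.
# Decile 10 = heaviest prescribers (most aggressively targeted), decile 1 = lightest.
weekly_trx = {"dr_a": 4, "dr_b": 31, "dr_c": 12, "dr_d": 55, "dr_e": 9,
              "dr_f": 22, "dr_g": 2, "dr_h": 40, "dr_i": 17, "dr_j": 27}

ranked = sorted(weekly_trx, key=weekly_trx.get)            # lowest to highest TRx
deciles = {doc: (rank * 10) // len(ranked) + 1             # decile 1..10
           for rank, doc in enumerate(ranked)}

for doc in sorted(deciles, key=deciles.get, reverse=True):
    print(doc, weekly_trx[doc], "decile", deciles[doc])
```

Real targeting systems would combine this ranking with the additional factors listed above (accessibility, formulary effects, adoption sequence, and so on) rather than prescription volume alone.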
Peer influence Key opinion leaders Key opinion leaders (KOL), or "thought leaders", are respected individuals, such as prominent medical school faculty, who influence physicians through their professional status. Pharmaceutical companies generally engage key opinion leaders early in the drug development process to provide advocacy and key marketing feedback. Some pharmaceutical companies identify key opinion leaders through direct inquiry of physicians (primary research). Recently, pharmaceutical companies have begun to use social network analysis to uncover thought leaders, because it does not introduce the respondent bias commonly found in primary research, it can identify and map out the entire scientific community for a disease state, and it offers greater compliance with state and federal regulations, since physician prescribing patterns are not used to create the social network. Colleagues Physicians acquire information through informal contacts with their colleagues, including social events, professional affiliations, common hospital affiliations, and common medical school affiliations. Some pharmaceutical companies identify influential colleagues through commercially available prescription writing and patient level data. Doctor dinner meetings are an effective way for physicians to acquire educational information from respected peers and to influence the so-called "no-see" physicians - those that are reluctant to engage directly with pharmaceutical reps through detailing but may come to a dinner program where a local or national expert is talking. These meetings are sponsored by some pharmaceutical companies. Journal articles and technical documentation Legal cases and US congressional hearings have provided access to pharmaceutical industry documents revealing new marketing strategies for drugs. Activities once considered independent of promotional intent, including continuing medical education and medical research, have been used for promotion, including paying to publish articles about promoted drugs in the medical literature and allegedly suppressing unfavorable study results. Private and public insurers Public and private insurers affect the writing of prescriptions by physicians through formularies that restrict the number and types of drugs that the insurer will cover. Not only can the insurer affect drug sales by including or excluding a particular drug from a formulary, they can affect sales by tiering, or placing bureaucratic hurdles to prescribing certain drugs. In January 2006, the United States instituted a new public prescription drug plan through its Medicare program. Known as Medicare Part D, this program engages private insurers to negotiate with pharmaceutical companies for the placement of drugs on tiered formularies. To consumers Only two countries as of 2008 allow direct to consumer advertising (DTCA): the United States and New Zealand. Since the late 1970s, DTCA of prescription drugs has become important in the United States. It takes two main forms: the promotion or creation of a disease out of a non-pathologic physical condition or the promotion of a medication. The rhetorical objective of direct-to-consumer advertising is to directly influence the patient-physician dialogue. Many patients will inquire about, or even demand a medication they have seen advertised on television. In the United States, recent years have seen an increase in mass media advertisements for pharmaceuticals. 
Expenditures on direct-to-consumer advertising almost quadrupled in the seven years between 1997 and 2005 after the FDA changed the guidelines, from $1.1 billion in 1997 to more than $4.2 billion in 2005, a 19.6% annual increase, according to the United States Government Accountability Office (2006). Mass marketing of pharmaceuticals to consumers is banned in over 30 industrialized nations, but not in the US and New Zealand, which is considering a ban. Some feel it is better to leave the decision wholly in the hands of medical professionals; others feel that consumer education and participation in health is useful, but that consumers need independent, comparative information about drugs (not promotional information). For these reasons, most countries impose limits on pharmaceutical mass marketing that are not placed on the marketing of other products. In some areas it is required that ads for drugs include a list of possible side effects, so that consumers are informed of both facets of a medicine. Canada's limitations on pharmaceutical advertising ensure that commercials that mention the name of a product cannot in any way describe what it does. Commercials that mention a medical problem cannot also mention the name of the product for sale; at most, they can direct the viewer to a website or telephone number operated by the pharmaceutical company. Reynold Spector has provided examples of how positive and negative hype can affect perceptions of pharmaceuticals, using examples of certain cancer drugs, such as Avastin and Opdivo, in the former case and statins in the latter. Drug coupons In the United States, pharmaceutical companies often provide drug coupons to consumers to help offset the copayments charged by health insurers for prescription medication. These coupons are generally used to promote medications that compete with non-preferred products and cheaper, generic alternatives by reducing or eliminating the extra out-of-pocket costs that an insurer typically charges a patient for a non-preferred drug product. But coupons for brand-name drugs could potentially distort the market and lead to higher overall healthcare costs, since they encourage the overuse of more expensive drugs over generic alternatives. Consumers often realize too late that the continued use of these drugs without coupons necessitates either switching to a cheaper generic or facing steep out-of-pocket expenses. Economics Pharmaceutical company spending on marketing exceeds that spent on research. In Canada, $1.7 billion a year was spent marketing drugs to physicians as of 2004; in the United States, $21 billion was spent in 2002. In 2005 money spent on pharmaceutical marketing in the United States was estimated at $29.9 billion, with one estimate as high as $57 billion. When the US numbers are broken down, 56% was free samples, 25% was detailing of physicians, 12.5% was direct-to-consumer advertising, 4% was hospital detailing, and 2% was journal ads. In the United States approximately $20 billion could be saved if generics were used instead of equivalent brand name products. Although pharmaceutical companies have made large investments in marketing their products, overall promotional spending has been decreasing over the last few years, and declined by 10 percent from 2009 to 2010. Pharmaceutical companies are cutting back mostly in detailing and sampling, while spending on mailings and print advertising grew over the same period. 
Regulation and fraud European Union In the European Union, marketing of pharmaceuticals is regulated by EU (formerly EEC) Directive 92/28/EEC. Among other things, it requires member states to prohibit off-label marketing, and direct-to-consumer marketing of prescription-only medications. United States In the United States, marketing and distribution of pharmaceuticals is regulated by the Federal Food, Drug, and Cosmetic Act and the Prescription Drug Marketing Act, respectively. Food and Drug Administration (FDA) regulations require all prescription drug promotion to be truthful and not misleading, based on "substantial evidence or substantial clinical experience", to provide a "fair balance" between the risks and benefits of the promoted drug, and to maintain consistency with labeling approved by the FDA. The FDA Office of Prescription Drug Promotion enforces these requirements. In the 1990s, antipsychotics were "still seen as treatments for the most serious mental illnesses, like hallucinatory schizophrenia, and recast them for much broader uses". Drugs such as Abilify and Geodon were given to a broad range of patients, from preschoolers to octogenarians. In 2010, more than a half-million youths took antipsychotic drugs, and one-quarter of nursing-home residents have used them. Yet the government warns that the drugs may be fatal to some older patients and have unknown effects on children. Every major company selling the drugs—Bristol-Myers Squibb, Eli Lilly, Pfizer, AstraZeneca, and Johnson & Johnson—has either settled recent government cases, under the False Claims Act, for hundreds of millions of dollars or is currently under investigation for possible health care fraud. Following charges of illegal marketing, two of the settlements in 2009 set records for the largest criminal fines ever imposed on corporations. One involved Eli Lilly's antipsychotic Zyprexa, and the other involved Bextra. In the Bextra case, the government also charged Pfizer with illegally marketing another antipsychotic, Geodon; Pfizer settled that part of the claim for $301 million, without admitting any wrongdoing. The following is a list of the four largest settlements reached with pharmaceutical companies from 1991 to 2012, rank ordered by the size of the total settlement. Legal claims against the pharmaceutical industry have varied widely over the past two decades, including Medicare and Medicaid fraud, off-label promotion, and inadequate manufacturing practices. Evolution of marketing The emergence of new media and technologies in recent years is quickly changing the pharmaceutical marketing landscape in the United States. Both physicians and users are increasing their reliance on the Internet as a source of health and medical information, prompting pharmaceutical marketers to look at digital channels for opportunities to reach their target audiences. In 2008, 84% of U.S. physicians used the Internet and other technologies to access pharmaceutical, biotech or medical device information—a 20% increase from 2004. At the same time, sales reps are finding it more difficult to get time with doctors for in-person details. Pharmaceutical companies are exploring online marketing as an alternative way to reach physicians. Emerging e-promotional activities include live video detailing, online events, electronic sampling, and physician customer service portals such as PV Updates, MDLinx, Aptus Health (former Physicians Interactive), and Epocrates. 
Direct-to-consumer marketers are also recognizing the need to shift to digital channels as audiences become more fragmented and the number of access points for news, entertainment and information multiplies. Standard television, radio and print direct-to-consumer (DTC) advertisements are less relevant than in the past, and companies are beginning to focus more on digital marketing efforts like product websites, online display advertising, search engine marketing, social media campaigns, place-based media and mobile advertising to reach the over 145 million U.S. adults online for health information. In 2010, the FDA's Division of Drug Marketing, Advertising and Communications issued a warning letter concerning two unbranded consumer-targeted Web sites sponsored by Novartis Pharmaceuticals Corporation because the websites promoted a drug for an unapproved use, failed to disclose the risks associated with the use of the drug, and made unsubstantiated dosing claims. See also Big Pharma conspiracy theory Big Pharma: How the World's Biggest Drug Companies Control Illness (2006) by Jacky Law Side Effects (2008) by Alison Bass Bad Pharma (2012) by Ben Goldacre Disease mongering Ethics in pharmaceutical sales Inverse benefit law National pharmaceuticals policy Pharmaceutical lobby Prescription Drug Marketing Act Prescription drug prices in the United States References Further reading drug marketing and sales Marketing Drug advertising
Pharmaceutical marketing
Chemistry
4,107
46,697,502
https://en.wikipedia.org/wiki/Sigfox
Sigfox is a French global network operator founded in 2010 that built wireless networks to connect low-power objects such as electricity meters and smartwatches, which need to be continuously on and emitting small amounts of data. Sigfox is based in Labège near Toulouse, France, and had over 375 employees. The firm also has offices in Madrid, San Francisco, Sydney and Paris. Sigfox had raised more than $300 million from investors that included Salesforce, Intel, Samsung, NTT, SK Telecom, energy groups Total and Air Liquide. In November 2016 Sigfox was valued at around €600 million. In January 2022 it filed for bankruptcy. In April 2022 Singapore-based IoT network firm Unabiz subsequently acquired Sigfox and its French network operations for a reported €25 million ($27m). Technology Sigfox employs differential binary phase-shift keying (DBPSK) and Gaussian frequency shift keying (GFSK) over the Short-range device band of 868 MHz in Europe, and the Industrial, Scientific and Medical radio band of 902 MHz in the US. It utilizes a wide-reaching signal that passes freely through solid objects, called "Ultra Narrowband" and requires little energy, being termed a "low-power wide-area network" (LPWAN). The network is based on one-hop star topology and requires a mobile operator to carry the generated traffic. The signal can also be used to easily cover large areas and to reach underground objects. As of November 2020, the Sigfox IoT network has covered a total of 5.8 million square kilometers in a total of 72 countries with 1.3 billion of the world population reached. Sigfox has partnered with a number of firms in the LPWAN industry such as Texas Instruments, Silicon Labs and ON Semiconductor. The ISM radio bands support limited bidirectional communication. The existing standard for Sigfox communications supports up to 140 uplink messages a day, each of which can carry a payload of 12 octets at a data rate of up to 100 bits per second. Coverage Map of coverage and countries under roll-out References Internet technology companies of France Internet of things companies Wireless networking French companies established in 2009 Companies that filed for Chapter 7 bankruptcy in 2022 Companies that have filed for Chapter 7 bankruptcy
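Referring back to the Sigfox uplink limits quoted above (140 messages per day, 12-octet payload, 100 bit/s), a back-of-the-envelope sketch (Python; frame overhead is ignored, so the airtime figure is a lower bound rather than an exact Sigfox frame duration) shows how small the daily data budget is:

```python
# Sigfox uplink budget from the figures quoted above.
MESSAGES_PER_DAY = 140
PAYLOAD_BYTES = 12
UPLINK_BITRATE_BPS = 100

daily_payload_bytes = MESSAGES_PER_DAY * PAYLOAD_BYTES
airtime_per_message_s = PAYLOAD_BYTES * 8 / UPLINK_BITRATE_BPS

print(f"Maximum application payload: {daily_payload_bytes} bytes per day")   # 1680
print(f"Payload airtime per message: {airtime_per_message_s:.2f} s")         # 0.96
```

A budget of well under two kilobytes per day is why the network targets devices such as meters and sensors that emit only small, infrequent readings.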
Sigfox
Technology,Engineering
482
1,214,667
https://en.wikipedia.org/wiki/Centipede%20game
In game theory, the centipede game, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of an increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round, but after an additional switch the potential payoff will be higher. Therefore, although at each round a player has an incentive to take the pot, it would be better for them to wait. Although the traditional centipede game had a limit of 100 rounds (hence the name), any game with this structure but a different number of rounds is also called a centipede game. The unique subgame perfect equilibrium (and every Nash equilibrium) of these games results in the first player taking the pot on the first round of the game; however, in empirical tests, relatively few players do so, and as a result, achieve a higher payoff than in the subgame perfect and Nash equilibria. These results are taken to show that subgame perfect equilibria and Nash equilibria fail to predict human play in some circumstances. The Centipede game is commonly used in introductory game theory courses and texts to highlight the concept of backward induction and the iterated elimination of dominated strategies, which show a standard way of providing a solution to the game. Play One possible version of a centipede game could be played as follows: at each turn the pot grows, and the player to move chooses between taking the larger share of the current pot, which ends the game, or passing the pot to the other player. The addition of coins is taken to be an externality, as it is not contributed by either player. Formal definition The centipede game with at most n rounds may be written as a two-player extensive-form game G(n). Players 1 and 2 alternate, starting with player 1, and may on each turn play a move from {T, P} ("take" or "pass"), with a maximum of n rounds. The game terminates when T is played for the first time, otherwise upon n moves, if T is never played. Suppose the game ends on round t with player i making the final move, and write −i for the other player. Then the outcome of the game at round t is defined as follows: If i played T, then i gains the larger share of the round-t pot and −i gains the smaller share. If i played P (so that t = n and T was never played), the players receive the payoffs assigned to the game's terminal node. Equilibrium analysis and backward induction Standard game theoretic tools predict that the first player will defect on the first round, taking the pile of coins for himself. In the centipede game, a pure strategy consists of a set of actions (one for each choice point in the game, even though some of these choice points may never be reached) and a mixed strategy is a probability distribution over the possible pure strategies. There are several pure strategy Nash equilibria of the centipede game and infinitely many mixed strategy Nash equilibria. However, there is only one subgame perfect equilibrium (a popular refinement to the Nash equilibrium concept). In the unique subgame perfect equilibrium, each player chooses to defect at every opportunity. This, of course, means defection at the first stage. In the Nash equilibria, however, the actions that would be taken after the initial choice opportunities (even though they are never reached since the first player defects immediately) may be cooperative. Defection by the first player is the unique subgame perfect equilibrium and is required by any Nash equilibrium; it can be established by backward induction. Suppose two players reach the final round of the game; the second player will do better by defecting and taking a slightly larger share of the pot. 
Since we suppose the second player will defect, the first player does better by defecting in the second to last round, taking a slightly higher payoff than she would have received by allowing the second player to defect in the last round. But knowing this, the second player ought to defect in the third to last round, taking a slightly higher payoff than he would have received by allowing the first player to defect in the second to last round. This reasoning proceeds backwards through the game tree until one concludes that the best action is for the first player to defect in the first round. The same reasoning can apply to any node in the game tree. For a game that ends after four rounds, this reasoning proceeds as follows. If we were to reach the last round of the game, Player 2 would do better by choosing d instead of r, receiving 4 coins instead of 3. However, given that 2 will choose d, 1 should choose D in the second to last round, receiving 3 instead of 2. Given that 1 would choose D in the second to last round, 2 should choose d in the third to last round, receiving 2 instead of 1. But given this, Player 1 should choose D in the first round, receiving 1 instead of 0. There are a large number of Nash equilibria in a centipede game, but in each, the first player defects on the first round and the second player defects in the next round frequently enough to dissuade the first player from passing. Being in a Nash equilibrium does not require that strategies be rational at every point in the game as in the subgame perfect equilibrium. This means that strategies that are cooperative in the never-reached later rounds of the game could still be in a Nash equilibrium. In the example above, one Nash equilibrium is for both players to defect on each round (even in the later rounds that are never reached). Another Nash equilibrium is for player 1 to defect on the first round, but pass on the third round and for player 2 to defect at any opportunity. Empirical results Several studies have demonstrated that the Nash equilibrium (and likewise, subgame perfect equilibrium) play is rarely observed. Instead, subjects regularly show partial cooperation, playing "R" (or "r") for several moves before eventually choosing "D" (or "d"). It is also rare for subjects to cooperate through the whole game. For examples see McKelvey and Palfrey (1992), Nagel and Tang (1998) or Krockow et al. (2016) for a survey. Scholars have investigated the effect of increasing the stakes. As with other games, for instance the ultimatum game, as the stakes increase the play approaches (but does not reach) Nash equilibrium play. Since the empirical studies have produced results that are inconsistent with the traditional equilibrium analysis, several explanations of this behavior have been offered. To explain the experimental data, we either need some altruistic agents or some bounded rational agents. Preference-based explanation One reason people may deviate from equilibrium behavior is if some are altruistic. The basic idea is that you have a certain probability at each game to play against an altruistic agent and if this probability is high enough, you should defect on the last round rather than the first. If enough people are altruists, sacrificing the payoff of first-round defection is worth the price in order to determine whether or not your opponent is an altruist. McKelvey and Palfrey (1992) create a model with some altruistic agents and some rational agents who will end up playing a mixed strategy (i.e. 
they play at multiple nodes with some probability). To match the experimental data well, around 5% of the players need to be altruistic in the model. Elmshauser (2022) shows that a model including altruistic agents and uncertainty-averse agents (instead of rational agents) explains the experimental data even better. Some experiments examined whether players who pass frequently are also the most altruistic agents in other games or other life situations (see for instance Pulford et al. or Gamba and Regner (2019), who assessed Social Value Orientation). Players who passed frequently were indeed more altruistic, but the difference was modest. Bounded rationality explanation Rosenthal (1981) suggested that if one has reason to believe his opponent will deviate from Nash behavior, then it may be advantageous to not defect on the first round. Another possibility involves error. If there is a significant possibility of error in action, perhaps because your opponent has not reasoned completely through the backward induction, it may be advantageous (and rational) to cooperate in the initial rounds. McKelvey and Palfrey (1995) introduced the quantal response equilibrium, a model in which agents play a Nash equilibrium with errors, and applied it to the centipede game. Another model able to explain behavior in the centipede game is the level-k model, a cognitive hierarchy theory: an L0 player plays randomly, an L1 player best responds to the L0 player, an L2 player best responds to the L1 player, and so on. In many games, scholars have observed that most players are L2 or L3 players, which is consistent with the centipede game experimental data. Garcia-Pola et al. (2020) concluded from an experiment that most players follow either a level-k logic or a quantal response logic. However, Parco, Rapoport and Stein (2002) illustrated that the level of financial incentives can have a profound effect on the outcome in a three-player game: the larger the incentives are for deviation, the greater the propensity for learning behavior in a repeated single-play experimental design to move toward the Nash equilibrium. Palacios-Huerta and Volij (2009) find that expert chess players play differently from college students. With a rising Elo, the probability of continuing the game declines; all Grandmasters in the experiment stopped at their first chance. They conclude that chess players are familiar with using backward induction reasoning and hence need less learning to reach the equilibrium. However, in an attempt to replicate these findings, Levitt, List, and Sadoff (2010) find strongly contradictory results, with zero of sixteen Grandmasters stopping the game at the first node. Qualitative research by Krockow et al., which employed think-aloud protocols that required players in a Centipede game to vocalise their reasoning during the game, indicated a range of decision biases such as action bias or completion bias, which may drive irrational choices in the game. Significance Like the prisoner's dilemma, this game presents a conflict between self-interest and mutual benefit. If it could be enforced, both players would prefer that they both cooperate throughout the entire game. However, a player's self-interest or players' distrust can interfere and create a situation where both do worse than if they had blindly cooperated. Although the Prisoner's Dilemma has received substantial attention for this fact, the Centipede Game has received relatively little.
Additionally, Binmore (2005) has argued that some real-world situations can be described by the Centipede game. One example he presents is the exchange of goods between parties that distrust each other. Another example Binmore (2005) likens to the Centipede game is the mating behavior of a hermaphroditic sea bass which takes turns exchanging eggs to fertilize. In these cases, we find cooperation to be abundant. Since the payoffs for some amount of cooperation in the Centipede game are so much larger than immediate defection, the "rational" solutions given by backward induction can seem paradoxical. This, coupled with the fact that experimental subjects regularly cooperate in the Centipede game, has prompted debate over the usefulness of the idealizations involved in the backward induction solutions, see Aumann (1995, 1996) and Binmore (1996). See also Backward induction Experimental economics Traveler's dilemma Unexpected hanging paradox References External links EconPort article on the Centipede Game Rationality and Game Theory - AMS column about the centipede game Online experiment in VeconLab Play the Centipede game in your browser on gametheorygame.nl Non-cooperative games
Centipede game
Mathematics
2,417
970,932
https://en.wikipedia.org/wiki/Anal%20masturbation
Anal masturbation is an autoerotic practice in which a person masturbates by sexually stimulating their own anus and rectum. Common methods of anal masturbation include manual stimulation of the anal opening and the insertion of an object or objects. Items inserted may be sex toys such as anal beads, butt plugs, dildos, vibrators, or specially designed prostate massagers or enemas. Method Pleasure can be derived from anal masturbation due to the nerve endings in the anal and rectal areas. Men In men, orgasmic function through genitalia depends in part on the healthy functioning of the smooth muscles surrounding the prostate, and of the pelvic floor muscles. Anal masturbation can be especially pleasurable for those with a functioning prostate because it often stimulates the area, which also contains sensitive nerve endings. Some men find the quality of their orgasm to be significantly enhanced by the use of a butt plug or other anally inserted item during sexual activity. It is typical for a man to not reach orgasm as a receptive partner solely from anal sex. Women Some women also engage in anal masturbation. Alfred Kinsey in "Sexual Behavior in the Human Female" documented that "There still [are] other masturbatory techniques which were regularly or occasionally employed by some 11 percent of the females in the sample ... enemas, and other anal insertions, ... were employed." Other methods Enemas can be used as a form of anal masturbation, as noted above by Kinsey, sexual arousal by enemas being known as klismaphilia, but also, enemas or anal douches can, for hygienic reasons, be taken prior to anal masturbation if desired. Autosodomy Autosodomy is the penetration of one's own anus with their own penis. This is possible if the penis is long enough and the genitals are properly maneuvered. Safety Insertion of foreign objects into the anus is not without dangers. Unsafe anal masturbation methods cause harm and a potential trip to the hospital emergency room. However, anal masturbation can be carried out in greater safety by ensuring that the bowel is emptied before beginning, the anus and rectum are sufficiently lubricated and relaxed throughout, and the inserted object is not of too great a size. Objects Some anal stimulators are purposely ribbed or have a wave pattern in order to enhance pleasure and simulate intercourse. Stimulating the rectum with a rough-edged object or a finger (for the purposes of medically stimulating a bowel movement or other reasons) may lead to rectum wall tearing, especially if the fingernail is left untrimmed. Vegetables have rough edges and may have microorganisms on the surface, and thus could lead to infection if not sanitized before use. Risks associated with bleeding Minor injuries that cause some bleeding to the rectum pose measurable risk and often need treatment. Injury can be contained by cessation of anal stimulation at any sign of injury, bleeding, or pain. While minor bleeding may stop of its own accord, individuals with serious injury, clotting problems, or other medical factors could face serious risk and require medical attention. Prolonged or heavy bleeding can indicate a life-threatening situation, as the intestinal wall can be damaged, leading to internal injury of the peritoneal cavity and peritonitis, which can be fatal. Carefully using implements without sharp edges or rough surfaces carries a lower risk of damage to the intestinal wall. 
The treatment for persistent or heavy bleeding will require a visit to an emergency room for a sigmoidoscopy and cauterization in order to prevent further loss of blood. Apart from the volume of blood that is lost into the rectum, other easily observable indications that medical intervention is urgently needed as a result of blood loss are an elevated heart rate, a general feeling of faintness or weakness, and a loss of pleasure from the act. Rectal foreign bodies Butt plugs normally have a flared base to prevent complete insertion and should be carefully sanitized before and after use. Sex toys, including objects for rectal insertion, should not be shared in order to minimize the risk of disease. Objects such as lightbulbs or anything breakable such as glass or wax candles cannot safely be used in anal masturbation, as they may break or shatter, causing highly dangerous medical situations. Some objects can become lodged above the lower colon and could be seriously difficult to remove. Such foreign bodies should not be allowed to remain in place. Medical help should be sought if the object does not emerge on its own. Immediate assistance is recommended if the object is not a proper rectal toy (like a plug or something soft, for example), if it is either too hard, too large, has projections, or slightly sharp edges, or if any trace of injury happens (bleeding, pain, cramps). Small objects with dimensions similar to small stools are less likely to become lodged than medium-sized or large objects as they can usually be expelled by forcing a bowel movement. It is always safest if a graspable part of the object remains outside the body. Hygiene The biological function of the anus is to expel intestinal gas and feces from the body; therefore, when engaging in anal masturbation, hygiene is important. One may wish to cover butt plugs or other objects with a condom before insertion and then dispose of the condom afterward. To minimize the potential transfer of germs between sexual partners, there are practices of safe sex recommended by healthcare professionals. Oral or vaginal infection may occur similarly to penile anus-to-mouth or anilingus practices. See also Anal eroticism Anal sex Prostate massage References Anal eroticism Masturbation Sexual acts
Anal masturbation
Biology
1,202
2,008,927
https://en.wikipedia.org/wiki/Lanmeter
A LANMeter was a tool for testing Token Ring and Ethernet networks introduced by Fluke Corporation in 1993. It incorporated hardware testing (cable and network interface card) and active network testing in a handheld, battery operated package. It was discontinued in 2003. Variants References Computer network analysis
Lanmeter
Technology
57
241,784
https://en.wikipedia.org/wiki/Bromomethane
Bromomethane, commonly known as methyl bromide, is an organobromine compound with formula CH3Br. This colorless, odorless, nonflammable gas is produced both industrially and biologically. It is a recognized ozone-depleting chemical. According to the IPCC Fifth Assessment Report, it has a global warming potential of 2. It was used extensively as a pesticide until being phased out by most countries in the early 2000s. From a chemistry perspective, it is one of the halomethanes. Occurrence and manufacture Marine organisms are estimated to produce 56,000 tonnes annually. It is also produced in small quantities by certain terrestrial plants, such as members of the family Brassicaceae. In 2009, an estimated 24,000 tonnes of methyl bromide were produced. Its production was curtailed by the Montreal Protocol, such that in 1983, production was nearly twice that of 2009 levels. It is manufactured by treating methanol with bromine in the presence of sulfur or hydrogen sulfide: 6 CH3OH + 3 Br2 + S → 6 CH3Br + 2 H2O + H2SO4 Uses Most methyl bromide is used for fumigation purposes, while some is used to manufacture other products. It is widely applied as a soil sterilant, mainly for production of seed but also for some crops such as strawberries and almonds. Bromomethane is safer and more effective than some other soil sterilants. Its loss to the seed industry has resulted in changes to cultural practices, with increased reliance on soil steam sterilization, mechanical roguing, and fallow seasons. Bromomethane was also used as a general-purpose fumigant to kill a variety of pests including rats and insects. Bromomethane has poor fungicidal properties. Bromomethane is the only fumigant allowed (heat treatment is the only other option) under ISPM 15 regulations when exporting solid wood packaging (fork lift pallets, crates, bracing) to ISPM 15 compliant countries. Bromomethane is used to prepare golf courses, particularly to control Bermuda grass. The Montreal Protocol stipulates that bromomethane use be phased out. Bromomethane is also a precursor in the manufacture of pharmaceuticals such as neostigmine bromide, pancuronium bromide, propantheline bromide], pyridostigmine bromide, atropine derivatives, clidinium bromide, clobazam, demecarium bromide, glycopyrrolate, and vecuronium bromide. It is a precursor to many ordinary chemicals often as a methylating agent. Bromomethane was once used in specialty fire extinguishers, prior to the advent of less toxic halons, as it is electrically non-conductive and leaves no residue. It was used primarily for electrical substations, military aircraft, and other industrial hazards. It was never as popular as other agents due to its high cost and toxicity. Bromomethane was used from the 1920s to the 1960s, and continued to be used in aircraft engine fire suppression systems into the late 1960s. Regulation Bromomethane is readily photolyzed in the atmosphere to release bromine radicals, which are far more destructive to stratospheric ozone than chlorine. As such, it is subject to phase-out requirements of the 1987 Montreal Protocol on Ozone Depleting Substances. The London Amendment in 1990 added bromomethane to the list of ODS to be phased out. Phase-out began in the United States in 1993, manufactured amounts being capped at the 1991 level. All developed countries in the Montreal Protocol reduced both manufactured and imported amounts by 25% in 1999, 50% 2001, 75% 2003, 100% 2005. 
In 2003 the Global Environment Facility approved funds for a UNEP-UNDP joint project for methyl bromide total sector phase out in seven countries in Central Europe and Central Asia, which was due for completion in 2007. Australia In Australia, bromomethane is the preferred fumigant of the Department of Agriculture and Water Resources for most organic goods imported into Australia. The department conducts methyl bromide fumigation certification for both domestic and foreign fumigators who can then fumigate containers destined for Australia. A list of alternative fumigants is available for goods imported from Europe (in what's known as the BICON database), where methyl bromide fumigation has been banned. Alternatively, the department allows containers from Europe to be fumigated with methyl bromide on arrival to Australia. New Zealand In New Zealand, bromomethane is used as a fumigant for whole logs destined for export. Environmental groups and the Green Party oppose its use. In May 2011 the Environmental Risk Management Authority (ERMA) introduced new rules for its use which restrict the level of public exposure to the fumigant, set minimum buffer zones around fumigation sites, provide for notification to nearby residents and require users to monitor air quality during fumigations and report back to ERMA each year. All methyl bromide fumigations must use recapture technology by 2025. United States In the United States bromomethane is regulated as a pesticide under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA; 7 U.S.C. 136 et seq.) and as a hazardous substance under the Resource Conservation and Recovery Act (RCRA; 42 U.S.C. 6901 et seq.), and is subject to reporting requirements under the Emergency Planning and Community Right-to-Know Act (EPCRA; 42 U.S.C. 11001 et seq.). The U.S. Clean Air Act (CAA; 42 U.S.C. 7401 et seq.). A 1998 amendment (P.L. 105-178, Title VI) conformed the Clean Air Act phase out date with that of the Montreal Protocol. While the Montreal Protocol severely restricted the use of bromomethane internationally, the United States has successfully lobbied for critical-use exemptions. Critical use exemptions allow the United States to continue using MeBr until it is scheduled to be completely phased out sometime in 2017. Chile Chile has phased out the use of bromomethane in traditional agriculture as of 2015, with exemption of the 100% pure formulation that is largely used for quarantine pest control and at pre-shipments of the fruit export industry. Alternatives Alternatives to bromomethane in the agricultural field are currently in use and further alternatives are in development, including propylene oxide and furfural. For Australia, a list of alternative fumigants is available for goods imported from Europe (in what's known as the BICON database), where methyl bromide fumigation has been banned. Chloropicrin has been used in combination with bromomethane, and standalone is a common alternative fumigant. It has been widely used since its initial success against Verticillium dahliae in strawberry. It is a suitable alternative for fungicidal action but does not quite have BM's nematicide or herbicide efficacy, and so is commonly combined with yet another fumigant. 1,3-Dichloropropene replaces some of both the fungicide and nematicide effects of BM, but is not a full-efficacy replacement. Methyl isothiocyanate is the breakdown product/a.i. of two applied products, metam sodium and granular dazomet. MITC does not redistribute through the soil as well as BM. 
Requires significantly more irrigation for activation. More strongly herbicidal than BM and so often used for that purpose alone. Much smaller doses stimulate weed germination. An alternative to bromomethane for structural termite fumigation is sulfuryl fluoride, which is a powerful greenhouse gas. This is also used in exported agricultural commodities in order to prevent the spread of invasive species. Potential future alternatives Iodomethane Propargyl bromide Ozone Health effects Brief exposure to high concentrations and prolonged inhalation of lower concentrations are problematic. Exposure levels leading to death vary from 1,600 to 60,000 ppm, depending on the duration of exposure (as a comparison exposure levels of 70 to 400 ppm of carbon monoxide cover the same spectrum of illness/death). Concentrations in the range of 60,000 ppm can be immediately fatal, while toxic effects can present following prolonged exposure to concentrations well under 1,000 ppm. "A TLV–TWA of 1 ppm (3.89 mg/m3) is recommended for occupational exposure to methyl bromide"-ACGIH 8 hour time weighted average. Immediately Dangerous To Life or Health Concentration by NIOSH: "The revised IDLH for methyl bromide is 250 ppm based on acute inhalation toxicity data in humans [Clarke et al. 1945]. This may be a conservative value due to the lack of relevant acute toxicity data for workers exposed to concentrations above 220 ppm. [Note: NIOSH recommends as part of its carcinogen policy that the "most protective" respirators be worn for methyl bromide at any detectable concentration.]" Detectable concentration by Drager Tube is 0.5 ppm. Respiratory, kidney, and neurological effects are of the greatest concern. Treatment of wood packaging requires a concentration of up to 16,000 ppm. NIOSH considers methyl bromide to be a potential occupational carcinogen as defined by the OSHA carcinogen policy [29 CFR 1990]. "Methyl bromide showed a significant dose-response relationship with prostate cancer risk." Excessive exposure Expression of toxicity following exposure may involve a latent period of several hours, followed by signs such as nausea, abdominal pain, weakness, confusion, pulmonary edema, and seizures. Individuals who survive the acute phase often require a prolonged convalescence. Persistent neurological deficits such as asthenia, cognitive impairment, optical atrophy, and paresthesia are frequently present after moderate to severe poisoning. Blood or urine concentrations of inorganic bromide, a bromomethane metabolite, are useful to confirm a diagnosis of poisoning in hospitalized patients or to assist in the forensic investigation of a case of fatal overdosage. Gallery See also Angelita C. et al. v. California Department of Pesticide Regulation Chloromethane Dibromomethane Bromoform Carbon tetrabromide References External links Chemical Alternatives to the agricultural use of Methyl Bromide Biological, Chemical & Practice based alternatives to the agricultural use of Methyl Bromide. Methyl Bromide Technical Fact Sheet - National Pesticide Information Center Methyl Bromide General Fact Sheet - National Pesticide Information Center Methyl Bromide Pesticide Information Profile - Extension Toxicology Network IARC Summaries & Evaluations Vol. 71 (1999) The banned pesticide in our soil MSDS at Oxford University Toxicological profile Environmental Health Criteria 166 OECD SIDS document ChemSub Online (Methyl bromide, Bromomethane). Del. 
family poisoned with methyl bromide in Caribbean in grave condition, governor says - July 2, 2015 Terminix Companies Sentenced for Applying Restricted-Use Pesticide to Residences in the U.S. Virgin Islands - November 20, 2017 Terminix Virgin Islands Branch Manager Pleads Guilty to Four Counts of Illegally Applying Restricted-Use Pesticide to Multiple Residences in the U.S. Virgin Islands - September 17, 2018 Bromoalkanes Halomethanes Halogenated solvents Fumigants IARC Group 3 carcinogens Methylating agents Ozone-depleting chemical substances
Bromomethane
Chemistry
2,426
24,884,720
https://en.wikipedia.org/wiki/Trickle-bed%20reactor
A trickle-bed reactor (TBR) is a chemical reactor that uses the downward movement of a liquid and the downward (co-current) or upward (counter-current) movement of gas over a packed bed of (catalyst) particles. It is considered to be the simplest reactor type for performing catalytic reactions in which a gas and a liquid (normally both reagents) are present, and it is accordingly used extensively in processing plants. Typical examples are liquid-phase hydrogenation, hydrodesulfurization, and hydrodenitrogenation in refineries (three-phase hydrotreaters) and the oxidation of harmful chemical compounds in wastewater streams or of cumene in the cumene process. Trickle-bed reactors are also used in wastewater treatment, where the required biomass resides on the surface of the packed bed. Kinetics Although the physical reactor is relatively simple, the hydrodynamics in the reactor are extremely complex. For this reason TBRs have been studied extensively over the past five decades, and the number of open-literature publications on TBRs continues to increase, indicating that the understanding of the hydrodynamics is still limited. A good introduction to the hydrodynamics of TBRs can be found in the classic article by Satterfield. Rate-of-reaction and mass-transfer equations are derived by Fogler. See also Trickling filter References Chemical reactors
Trickle-bed reactor
Chemistry,Engineering
278
5,250,644
https://en.wikipedia.org/wiki/Shell%20higher%20olefin%20process
The Shell higher olefin process (SHOP) is a chemical process for the production of linear alpha olefins via ethylene oligomerization and olefin metathesis, invented and exploited by Shell plc. The olefin products are converted to fatty aldehydes and then to fatty alcohols, which are precursors to plasticizers and detergents. The annual global production of olefins through this method is over one million tonnes. History The process was discovered by chemists at Shell Development Emeryville in 1968. At the time, ecological considerations demanded the replacement of branched fatty alcohols used widely in detergents by linear fatty alcohols, because the biodegradation of the branched compounds was slow, causing foaming of surface water. At the same time new gas oil crackers were being commissioned and ethylene supply was outpacing demand. The process was commercialized in 1977 by Shell plc, and following an expansion of the Geismar, Louisiana (USA) plant in 2002 global annual production capacity was 1.2 million tons. Process Ethylene is oligomerized over the catalyst to give longer chains. Unlike in the Ziegler–Natta process, which aims to produce very long polymers, the oligomer stops growing after the addition of 1–10 repeating units of ethylene. The fraction containing C12 to C18 olefins (40–50%) has direct commercial value in detergent production and is removed. For the remaining fraction to be of commercial interest, two additional steps are required. The first step is liquid-phase isomerization using an alkaline alumina catalyst, leading to internal double bonds. For example, 1-octene is converted to 4-octene and 1-eicosene (a C20 hydrocarbon) is converted to 10-eicosene. In the second step, olefin metathesis converts mixtures like these to 2-tetradecene, which is a C14 component and again within the commercial range. The internal olefins can also be reacted with an excess of ethylene, with rhenium(VII) oxide supported on alumina as catalyst, in an ethenolysis reaction, which causes the internal double bond to break up to form a mixture of α-olefins with odd and even carbon chain-length of the desired molecular weight. The C12 to C18 olefins are subsequently subjected to hydroformylation (oxo process) to give aldehydes. The aldehydes are hydrogenated to give fatty alcohols, which are suitable for manufacturing detergents. Catalytic cycle The first step in this process is the ethylene oligomerization to a mixture of even-numbered α-olefins at 80 to 120 °C and 70 to 140 bar (7 to 14 MPa), catalyzed by a nickel-phosphine complex. Such catalysts are typically prepared from diarylphosphino carboxylic acids, such as (C6H5)2PCH2CO2H. The process and its mechanism were elucidated by the group of Wilhelm Keim, first at Shell and later at the RWTH Aachen. Alternative routes In another olefin application of Shell, cyclododecatriene is partially hydrogenated to cyclododecene and then subjected to ethenolysis to the terminal linear open-chain diene. The process was still in use at the Essar Stanlow refinery until a serious explosion and subsequent fire led to the closure of the plant, and of the alcohols units it fed, in 2018. References Chemical processes Catalysis American inventions
Shell higher olefin process
Chemistry
733
63,434,051
https://en.wikipedia.org/wiki/ISO/IEC%2027017
ISO/IEC 27017 is a security standard developed for cloud service providers and users to make a safer cloud-based environment and reduce the risk of security problems. It was published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) under the joint ISO and IEC subcommittee, ISO/IEC JTC 1/SC 27. It is part of the ISO/IEC 27000 family of standards, which provides best practice recommendations on information security management. This standard was built from ISO/IEC 27002, suggesting additional security controls for the cloud which were not completely defined in ISO/IEC 27002. This International Standard provides guidelines supporting the implementation of information security controls for cloud service customers, who implement the controls, and for cloud service providers, who support those implementations. The selection of appropriate information security controls and the application of the implementation guidance provided will depend on a risk assessment and any legal, contractual, regulatory or other cloud-sector specific information security requirements. What does the standard provide? ISO/IEC 27017 provides guidelines for information security controls applicable to the use of cloud services by providing additional implementation guidance for 37 controls specified in ISO/IEC 27002 and seven additional controls related to cloud services, which address the following: Who is responsible for what between the cloud service provider and the cloud customer. The removal or return of assets at the end of a contract. Protection and separation of the customer's virtual environment. Virtual machine configuration. Administrative operations and procedures associated with the cloud environment. Cloud customer monitoring of activity. Virtual and cloud network environment alignment. Structure of the standard The official title of the standard is "Information technology — Security techniques — Code of practice for information security controls based on ISO/IEC 27002 for cloud services". ISO/IEC 27017:2015 has eighteen sections, plus a long annex, which cover: 1. Scope 2. Normative References 3. Definitions and abbreviations 4. Cloud sector-specific concepts 5. Information security policies 6. Organization of information security 7. Human resource security 8. Asset management 9. Access control 10. Cryptography 11. Physical and environmental security 12. Operations security 13. Communications security 14. System acquisition, development and maintenance 15. Supplier relationships 16. Information security incident management 17. Information security aspects of business continuity management 18. Compliance References External links ISO Website Computer security standards Information assurance standards 27017
ISO/IEC 27017
Technology,Engineering
487
7,273,911
https://en.wikipedia.org/wiki/Interpersonal%20ties
In social network analysis and mathematical sociology, interpersonal ties are defined as information-carrying connections between people. Interpersonal ties, generally, come in three varieties: strong, weak or absent. Weak social ties, it is argued, are responsible for the majority of the embeddedness and structure of social networks in society as well as the transmission of information through these networks. Specifically, more novel information flows to individuals through weak rather than strong ties. Because our close friends tend to move in the same circles that we do, the information they receive overlaps considerably with what we already know. Acquaintances, by contrast, know people that we do not, and thus receive more novel information. Included in the definition of absent ties, according to the American sociologist Mark Granovetter, are those relationships (or ties) without substantial significance, such as "nodding" relationships between people living on the same street, or the "tie", for example, to a frequent vendor one would buy from. Such relations with familiar strangers have also been called invisible ties since they are hardly observable, and are often overlooked as a relevant type of ties. They nevertheless support people's sense of familiarity and belonging. Furthermore, the fact that two people may know each other by name does not necessarily qualify the existence of a weak tie. If their interaction is negligible the tie may be absent or invisible. The "strength" of an interpersonal tie is a linear combination of the amount of time, the emotional intensity, the intimacy (or mutual confiding), and the reciprocal services which characterize each tie. History One of the earliest writers to describe the nature of the ties between people was German scientist and philosopher, Johann Wolfgang von Goethe. In his classic 1809 novella, Elective Affinities, Goethe discussed the "marriage tie". The analogy shows how strong marriage unions are similar in character to particles of quicksilver, which find unity through the process of chemical affinity. In 1954, the Russian mathematical psychologist Anatol Rapoport commented on the "well-known fact that the likely contacts of two individuals who are closely acquainted tend to be more overlapping than those of two arbitrarily selected individuals". This argument became one of the cornerstones of social network theory. In 1973, stimulated by the work of Rapoport and Harvard theorist Harrison White, Mark Granovetter published The Strength of Weak Ties. This paper is now recognized as one of the most influential sociology papers ever written. To obtain data for his doctoral thesis, Granovetter interviewed dozens of people to find out how social networks are used to land new jobs. Granovetter found that most jobs were found through "weak" acquaintances. This pattern reminded Granovetter of his freshman chemistry lesson that demonstrated how "weak" hydrogen bonds hold together many water molecules, which are themselves composed of atoms held together by "strong" covalent bonds. In Granovetter's view, a similar combination of strong and weak bonds holds the members of society together. This model became the basis of his first manuscript on the importance of weak social ties in human life, published in May 1973. According to Current Contents, by 1986, the Weak Ties paper had become a citation classic, being one of the most cited papers in sociology. 
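Granovetter's characterization of tie strength as a linear combination of four components lends itself to a simple weighted score. The sketch below is a minimal illustration of that idea only; the equal weights and the example measurements are hypothetical and are not drawn from Granovetter's work or any later study.

```python
# Illustrative only: tie strength as a linear combination of the four
# components named by Granovetter. The equal weights and the example
# measurements below are hypothetical, chosen purely to show the form
# of the calculation, not taken from any study.

WEIGHTS = {
    "time": 0.25,                 # amount of time spent together (scaled to 0-1)
    "emotional_intensity": 0.25,
    "intimacy": 0.25,             # mutual confiding
    "reciprocal_services": 0.25,
}

def tie_strength(components):
    """Weighted sum of the four components, each expected in [0, 1]."""
    return sum(WEIGHTS[name] * value for name, value in components.items())

close_friend = {"time": 0.9, "emotional_intensity": 0.8,
                "intimacy": 0.9, "reciprocal_services": 0.7}
acquaintance = {"time": 0.2, "emotional_intensity": 0.1,
                "intimacy": 0.1, "reciprocal_services": 0.2}

print(tie_strength(close_friend))   # ~0.82 -> a relatively strong tie
print(tie_strength(acquaintance))   # ~0.15 -> a relatively weak tie
```

Any real operationalization would have to justify both the weights and the way each component is measured.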
In a related line of research in 1969, anthropologist Bruce Kapferer, published "Norms and the Manipulation of Relationships in a Work Context" after doing field work in Africa. In the document, he postulated the existence of multiplex ties, characterized by multiple contexts in a relationship. In telecommunications, a multiplexer is a device that allows a transmission medium to carry a number of separate signals. In social relations, by extrapolation, "multiplexity" is the overlap of roles, exchanges, or affiliations in a social relationship. Research data In 1970, Granovetter submitted his doctoral dissertation to Harvard University, entitled "Changing Jobs: Channels of Mobility Information in a Suburban Community". The thesis of his dissertation illustrated the conception of weak ties. For his research, Dr. Granovetter crossed the Charles River to Newton, Massachusetts where he surveyed 282 professional, technical, and managerial workers in total. 100 were personally interviewed, in regards to the type of ties between the job changer and the contact person who provided the necessary information. Tie strength was measured in terms of how often they saw the contact person during the period of the job transition, using the following assignment: often = at least once a week occasionally = more than once a year but less than twice a week rarely = once a year or less Of those who found jobs through personal contacts (N=54), 16.7% reported seeing their contact often, 55.6% reported seeing their contact occasionally, and 27.8% rarely. When asked whether a friend had told them about their current job, the most frequent answer was "not a friend, an acquaintance". The conclusion from this study is that weak ties are an important resource in occupational mobility. When seen from a macro point of view, weak ties play a role in affecting social cohesion. Social networks In social network theory, social relationships are viewed in terms of nodes and ties. Nodes are the individual actors within the networks, and ties are the relationships between the actors. There can be many kinds of ties between the nodes. In its simplest form, a social network is a map of all of the relevant ties between the nodes being studied. Weak tie hypothesis The "weak tie hypothesis" argues, using a combination of probability and mathematics, as originally stated by Anatol Rapoport in 1957, that if A is linked to both B and C, then there is a greater-than-chance probability that B and C are linked to each other: That is, if we consider any two randomly selected individuals, such as A and B, from the set S = A, B, C, D, E, ..., of all persons with ties to either or both of them, then, for example, if A is strongly tied to both B and C, then according to probability arguments, the B–C tie is always present. The absence of the B–C tie, in this situation, would create, according to Granovetter, what is called the forbidden triad. In other words, the B–C tie, according to this logic, is always present, whether weak or strong, given the other two strong ties. In this direction, the "weak tie hypothesis" postulates that clumps or cliques of social structure will form, being bound predominately by "strong ties", and that "weak ties" will function as the crucial bridge between any two densely knit clumps of close friends. It may follow that individuals with few bridging weak ties will be deprived of information from distant parts of the social system and will be confined to the provincial news and views of their close friends. 
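The forbidden triad described above can also be detected mechanically. The following sketch is a minimal illustration on a made-up network, not data from Granovetter's study: it flags every case in which one person has strong ties to two others who share no tie at all.

```python
# Illustrative sketch: detect "forbidden triads" in a small, made-up network.
# Ties are labelled "strong" or "weak"; a forbidden triad is a person A with
# strong ties to both B and C while B and C have no tie of any kind.

from itertools import combinations

ties = {
    frozenset(("A", "B")): "strong",
    frozenset(("A", "C")): "strong",
    frozenset(("A", "D")): "weak",
    frozenset(("B", "D")): "weak",
    # note: no B-C tie at all
}
people = sorted({p for pair in ties for p in pair})

def forbidden_triads(ties, people):
    found = []
    for a in people:
        strong_neighbours = sorted(p for p in people
                                   if ties.get(frozenset((a, p))) == "strong")
        for b, c in combinations(strong_neighbours, 2):
            if frozenset((b, c)) not in ties:   # B and C are unconnected
                found.append((a, b, c))
    return found

print(forbidden_triads(ties, people))   # [('A', 'B', 'C')]
```

In Granovetter's argument such configurations should be rare in real networks, which is precisely what makes strong ties cluster into densely knit clumps that are bridged by weak ties.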
However, having a large number of weak ties can mean that novel information is effectively "swamped" among a high volume of information, even crowding out strong ties. The arrangement of links in a network may matter as well as the number of links. Further research is needed to examine the ways in which types of information, numbers of ties, quality of ties, and trust levels interact to affect the spreading of information. Strong ties hypothesis According to David Krackhardt, there are some problems in the Granovetter definition. The first one refers to the fact that the Granovetter definition of the strength of a tie is a curvilinear prediction and his question is "how do we know where we are on this theoretical curve?". The second one refers to the effective character of strong ties. Krackhardt says that there are subjective criteria in the definition of the strength of a tie such as emotional intensity and the intimacy. He thought that strong ties are very important in severe changes and uncertainty: He called this particular type of strong tie philo and define philos relationship as one that meets the following three necessary and sufficient conditions: Interaction: For A and B to be philos, A and B must interact with each other. Affection: For A and B to be philos, A must feel affection for B. Time: A and B, to be philos, must have a history of interactions with each other that have lasted over an extended period of time. The combination of these qualities predicts trust and predicts that strong ties will be the critical ones in generating trust and discouraging malfeasance. When it comes to major change, change that may threaten the status quo in terms of power and the standard routines of how decisions are made, then trust is required. Thus, change is the product of philos. Positive ties and negative ties Starting in the late 1940s, Anatol Rapoport and others developed a probabilistic approach to the characterization of large social networks in which the nodes are persons and the links are acquaintanceship. During these years, formulas were derived that connected local parameters such as closure of contacts, and the supposed existence of the B–C tie to the global network property of connectivity. Moreover, acquaintanceship (in most cases) is a positive tie. However, there are also negative ties such as animosity among persons. In considering the relationships of three, Fritz Heider initiated a balance theory of relations. In a larger network represented by a graph, the totality of relations is represented by a signed graph. This effort led to an important and non-obvious Structure Theorem for signed graphs, which was published by Frank Harary in 1953. A signed graph is called balanced if the product of the signs of all relations in every cycle is positive. A signed graph is unbalanced if the product is ever negative. The theorem says that if a network of interrelated positive and negative ties is balanced, then it consists of two subnetworks such that each has positive ties among its nodes and negative ties between nodes in distinct subnetworks. In other words, "my friend's enemy is my enemy". The imagery here is of a social system that splits into two cliques. There is, however, a special case where one of the two subnetworks may be empty, which might occur in very small networks. In these two developments, we have mathematical models bearing upon the analysis of the structure. Other early influential developments in mathematical sociology pertained to process. 
For instance, in 1952 Herbert A. Simon produced a mathematical formalization of a published theory of social groups by constructing a model consisting of a deterministic system of differential equations. A formal study of the system led to theorems about the dynamics and the implied equilibrium states of any group. Absent or invisible ties In a footnote, Mark Granovetter defines what he considers as absent ties: The concept of invisible tie was proposed to overcome the contradiction between the adjective "absent" and this definition, which suggests that such ties exist and might "usefully be distinguished" from the absence of ties. From this perspective, the relationship between two familiar strangers, such as two people living on the same street, is not absent but invisible. Indeed, because such ties involve only limited interaction (as in the case of 'nodding relationships'), if any, they are hardly observable, and are often overlooked as a relevant type of ties. Absent or invisible ties nevertheless support people's sense of familiarity and belonging. Latent tie Adding any network-based means of communication such as a new IRC channel, a social support group, a Webboard lays the groundwork for connectivity between formerly unconnected others. Similarly, laying an infrastructure, such as the Internet, intranets, wireless connectivity, grid computing, telephone lines, cellular service, or neighborhood networks, when combined with the devices that access them (phones, cellphones, computers, etc.) makes it possible for social networks to form. Such infrastructures make a connection available technically, even if not yet activated socially. These technical connections support latent social network ties, used here to indicate ties that are technically possible but not yet activated socially. They are only activated, i.e. converted from latent to weak, by some sort of social interaction between members, e.g. by telephoning someone, attending a group-wide meeting, reading and contributing to a Webboard, emailing others, etc. Given that such connectivity involves unrelated persons, the latent tie structure must be established by an authority beyond the persons concerned. Internet-based social support sites contain this profile. These are started by individuals with a particular interest in a subject who may begin by posting information and providing the means for online discussion. The individualistic perspective Granovetter's 1973 work proved to be crucial in the individualistic approach of the social network theory as seen by the number of references in other papers. His argument asserts that weak ties or "acquaintances", are less likely to be involved within the social network than strong ties (close friends and family). By not going further in the strong ties, but focusing on the weak ties, Granovetter highlights the importance of acquaintances in social networks. He argues, that the only thing that can connect two social networks with strong ties is a weak tie: "… these clumps / [strong ties networks] would not, in fact, be connected to one another at all were it not for the existence of weak ties. It follows that in an all-covering social network individuals are at a disadvantage with only a few weak links, compared to individuals with multiple weak links, as they are disconnected with the other parts of the network. 
Another interesting observation that Granovetter makes in his work is the increasing specialization of individuals creates the necessity for weak ties, as all the other specialist information and knowledge is present in large social networks consisting predominately of weak ties. Cross et al., (2001) confirm this by presenting six features which differentiate effective and ineffective knowledge sharing relations: "1)knowing what other person knows and thus when to turn to them; 2) being able to gain timely access to that person; 3) willingness of the person sought out to engage in the problem solving rather than dump information; 4) a degree of safety in the relationship that promoted learning and creativity; 5) the factors put by Geert Hofstede; and 6) individual characteristics, such as openness" (pp 5). This fits in nicely with Granovetter's argument that "Weak ties provide people with access to information and resources beyond those available in their own social circle; but strong ties have greater motivation to be of assistance and are typically more easily available." This weak/strong ties paradox is elaborated by myriad authors. The extent in which individuals are connected to others is called centrality. Sparrowe & Linden (1997) argue how the position of a person in a social network confer advantages such organizational assimilation, and job performance (Sparrowe et al., 2001); Burt (1992) expects it to result in promotions, Brass (1984) affiliates centrality with power and Friedkin (1993) with influence in decision power. Other authors, such as Krackhardt and Porter (1986) contemplate the disadvantages of the position is social networks such as organizational exit (see also Sparrowe et al., 2001) and Wellman et al.,(1988) introduce the use of social networks for emotional and material support. Blau and Fingerman, drawing from these and other studies, refer to weak ties as consequential strangers, positing that they provide some of the same benefits as intimates as well as many distinct and complementary functions. Labour market In the early 1990s, US social economist James D. Montgomery contributed to economic theories of network structures in the labour market. In 1991, Montgomery incorporated network structures in an adverse selection model to analyze the effects of social networks on labour market outcomes. In 1992, Montgomery explored the role of "weak ties", which he defined as non-frequent and transitory social relations in the labour market. He demonstrated that weak ties are positively correlated with higher wages and higher aggregate employment rates. See also Dependent origination Human bonding Six degrees of separation Bridge (interpersonal) Simmelian tie Social connection References External links Caves, Clusters, and Weak Ties: The Six Degrees World of Inventors – Harvard Business School, 28 November 2004 The Weakening of Strong Ties – Ross Mayfield, 15 September 2003 The Power of Weak Ties (in Recruiting) Interpersonal relationships
Interpersonal ties
Biology
3,362
201,308
https://en.wikipedia.org/wiki/Styrofoam
Styrofoam is a genericized trademarked brand of closed-cell extruded polystyrene foam (XPS), manufactured to provide continuous building insulation board used in walls, roofs, and foundations as thermal insulation and as a water barrier. This material is light blue in color and is owned and manufactured by DuPont. DuPont also has produced a line of green and white foam shapes for use in crafts and floral arrangements. The term styrofoam is often used in the United States as a colloquial term to refer to expanded (not extruded) polystyrene foam (EPS). Outside the United States, EPS is most commonly referred to as simply "polystyrene" with the term "styrofoam" being used in its capacity to describe all forms of extruded polystyrene, not just the Dupont brand itself. Polystyrene (EPS) is often used in food containers, coffee cups, and as cushioning material in packaging. Styrofoam is, however, a far less dense material than EPS and is more commonly suited to tasks such as thermal insulation. Additionally, it is moderately soluble in many organic solvents, cyanoacrylate, and the propellants and solvents of spray paint. History In the 1940s, researchers, originally at Dow's Chemical Physics Lab, led by Ray McIntire, found a way to make foamed polystyrene. They rediscovered a method first used by Swedish inventor Carl Georg Munters, and obtained an exclusive license to Munters's patent in the United States. Dow found ways to adapt Munters's method to make large quantities of extruded polystyrene as a closed cell foam that resists moisture. The patent on this adaptation was filed in 1947. Uses Styrofoam has a variety of uses. Styrofoam is composed of 98% air, making it lightweight and buoyant. DuPont produces Styrofoam building materials, including varieties of building insulation sheathing and pipe insulation. The claimed R-value of Styrofoam insulation is approximately 5 °F⋅ft2⋅h/BTU for 1 inch thick sheet. Styrofoam can be used under roads and other structures to prevent soil disturbances due to freezing and thawing. DuPont also produces Styrofoam blocks and other shapes for use by florists and in craft products. DuPont insulation Styrofoam has a distinctive blue color; Styrofoam for craft applications is available in white and green. Environmental issues The EPA and International Agency for Research on Cancer reported limited evidence that styrene is carcinogenic for humans and experimental animals, meaning that there is a positive association between exposure and cancer and that causality is credible, but that other explanations cannot be confidently excluded. See also the expansive list of environmental issues of polystyrene, among which it being non-biodegradable. See also List of generic and genericized trademarks National Inventors Hall of Fame Resin identification code Structural insulated panel References Dow Chemical Company Foams Plastic brands Brand name materials Brands that became generic Building insulation materials Organic polymers Swedish inventions 1941 in technology Products introduced in 1941 American inventions
Styrofoam
Chemistry
669
7,269,589
https://en.wikipedia.org/wiki/Thailand%20Institute%20of%20Nuclear%20Technology
The Thailand Institute of Nuclear Technology (TINT) (สถาบันเทคโนโลยีนิวเคลียร์แห่งชาติ) is a public organization in Bangkok, Thailand. Overview The institute is an entity established in December 2006 for national nuclear research and development. It is intended to serve as the national nuclear research body, cooperating with the Office of Atoms for Peace (OAP), which serves as the country's nuclear regulatory body. TINT operates under the Ministry of Higher Education, Science, Research and Innovation (MHESI) and works closely with OAP and the International Atomic Energy Agency (IAEA). Research programs: Medical and Public Health Agricultural Material and Industrial Environmental Advanced Technology Nuclear operations: Safety Nuclear Engineering Reactor Operation External links สถาบันเทคโนโลยีนิวเคลียร์แห่งชาติ website (Thai) Thailand Institute of Nuclear Technology (TINT) website References Thailand Institute of Nuclear Technology Nuclear technology in Thailand Nuclear research institutes Public organizations of Thailand Scientific organizations based in Thailand Government agencies established in 2006 2006 establishments in Thailand Organizations based in Bangkok Research institutes established in 2006
Thailand Institute of Nuclear Technology
Engineering
188
15,561,472
https://en.wikipedia.org/wiki/William%20Thierry%20Preyer
William Thierry Preyer or Wilhelm Preyer (4 July 1841 – 15 July 1897) was an English-born biochemist, physiologist and psychologist who worked in Germany. He worked as a professor of physiology at the University of Jena and then at Berlin. Studying his own son among other children he examined developmental psychology, language acquisition and language pathology. Biography Preyer was born in Moss Side, Rusholme at Manchester, the son of an industrialist. He went to Clapham Grammar School near London and then studied at Gymnasiums in Duisberg and Bonn. In 1859 he went to study physiology and chemistry at Heidelberg, and received his doctorate in 1862. He studied under Du Bois-Reymond, Helmholz, Claude Bernard and Charles Adolphe Wurtz. In 1866 he earned his medical degree at the University of Bonn, and in 1869 succeeded Johann Nepomuk Czermak (1828-1873) as professor of physiology at the University of Jena. At Jena he was also director of the Physiology Institute. His students included Argentinian Roberto Wernicke. In 1888 he resigned from Jena due to poor health and then lectured for sometime at the University of Berlin. Preyer was a founder of scientific child psychology, and a pioneer in regards to research of human development based on empirical observation and experimentation. He was inspired by Charles Darwin's theory of evolution and Gustav Fechner’s work in psychophysics. He wrote a biography of Darwin and explained language acquisition in terms of evolutionary ideas. He proposed a myophysiological law to complement Fechner's law. Preyer examined telepathy and thought that it involved unconscious muscle reading. He authored Die Seele des Kindes (In English edition as The mind of the child) in 1882. This was a landmark book on developmental psychology written as a rigorous case study of his son Axel's development, including observational records. It was translated to English in 1888. He was also the author of another landmark book on developmental physiology titled Specielle Physiologie des Embryo (Special physiology of the embryo). Both works laid a foundation in their respective disciplines for future study of modern human development. At Jena, Preyer introduced experimental-scientific training methods into his lectures, and also created seminars in the field of physiology. Today, the "William Thierry Preyer Award" is issued by the European Society on Developmental Psychology for excellence in research of human development. Works De haemoglobino observationes et experimenta. dissertation, (University of Bonn) 1866. Die Blutkrystalle. Jena 1871 - The "blood crystal". Naturwissenschaftliche Thatsachen und Probleme. Paetel, Berlin, 1880 - Scientific facts and problems. Die Entdeckung des Hypnotismus. Dargestellt von W. Preyer … Nebst einer ungedruckten Original-Abhandlung von Braid in Deutscher Uebersetzung. Berlin: Paetel, 1881 - The discovery of hypnotism. represented by W. Preyer ... Also an unpublished original essay by James Braid in German translation. Die Seele des Kindes: Beobachtungen über die geistige Entwicklung des Menschen in den ersten Lebensjahren. Grieben, Leipzig, 1882 - The soul of the child: observations on the mental development of man in the first years of life. Der Hypnotismus. Ausgewählte Schriften von J. Braid. Deutsch herausgegeben von W. Preyer. Berlin: Paetel, 1882. - Hypnotism. Selected writings of James Braid. German edition by W. Preyer. Elemente der allgemeinen Physiologie: Kurz und leichtfasslich. Grieben, Leipzig, 1883 - Elements of general physiology. 
Der Hypnotismus: Vorlesungen gehalten an der K. Friedrich-Wilhelm’s-Universität zu Berlin, von W. Preyer. Nebst Anmerkungen und einer nachgelassenen Abhandlung von Braid aus dem Jahre 1845. Urban & Schwarzenberg, 1890 - Hypnotism: lectures held at the University of Berlin by W. Preyer; together with notes and a previously unpublished essay by James Braid from 1845. Zur Psychologie des Schreibens: Mit besonderer Rücksicht auf individuelle Verschiedenheiten der Handschriften. Hamburg: Voss, 1895 - On the psychology of writing: with special regard to individual differences in handwriting. References Bibliography VL People Biographical Information, William Thierry Preyer History of Psychology at Jena List of publications copied from an equivalent article at the German Wikipedia. External links German physiologists 1841 births 1897 deaths German psychologists University of Bonn alumni British emigrants to Germany Academic staff of the University of Jena People involved with the periodic table
William Thierry Preyer
Chemistry
1,016
61,607,156
https://en.wikipedia.org/wiki/Shemen%20%28bible%29
Shemen () is the most commonly used word for oil in the Hebrew scriptures, used around 170 times in a variety of contexts. Cooking oil In describing the ordination of Aaron and his sons, unleavened challah () made with oil, translated as 'cakes', and wafers () spread with oil are among the required offerings. The cakes, wafers and bread offering () made of the best quality of wheat are placed in a basket. After Aaron and his sons are anointed with oil and blood, the ram's tail fat, kidneys and other parts are burned as an offering, along with one oil cake, one wafer, and a piece of the unleavened bread. Then the remaining ram flesh is boiled for Aaron and his sons to eat along with the remainder of the bread and cakes. In the it is mentioned as in the description of the "good land": "A land of wheat and barley and the vine and figs and pomegranates, a land of olives for oil, and (date) honey". Based on this verse and additional descriptions given in , , and , olive oil appears to have been plentiful. Excavations at Tel Miqne-Ekron revealed over a hundred oil presses, and the region seems to have been central to a major olive oil industry. describes the wealth of the lands of Asher: "From Asher shall come fat bread [rich foods], and he will provide delicacies of a king". The relationship between fat (, ) and oil (, ) has been discussed by Ibn Ezra. The blessing of Asher's exceptionally fertile lands is given by Moses in : "May he dip his foot in oil". Describing the hardships of the wilderness, in the Israelites have only manna to eat, which they prepare into flat cakes called () that according to the passage tasted like (). Although translated as rich cream by the JPS, its exact meaning is not known. Aside from , this verse is the only known use of . It was translated into Greek as cake with oil (), having also been used for the Hebrew in place of wafers in (where the taste is described "like a cake made with honey"). Ritual uses In two unblemished rams are brought before Aaron and his sons for their ordination as priests. One is sacrificed as a burnt offering, while the second is slaughtered and some of the blood mixed with anointing oil and sprinkled on the priestly vestments. It was also used to anoint kings. It is used for anointing oil in conjunction with Bethel and other sites that were "anointed" in the narrative of Jacob's Ladder and subsequent second visit to Bethel (). It is one of the offerings God demands of the Israelites for the Tabernacle in in the context of spices to be used to make anointing oil and incense, as well as for use in lamps. It is also used in the context of offerings in : "Will the Lord be pleased with thousands of rams, With ten thousands of rivers of oil? Shall I give my first born for my transgression, The fruit of my body for the sin of my soul?" discusses Israel's obligations to provide the daily oil for the lamps at the Tabernacle, and the weekly bread for the priests. There are various additional rules on the use of oil for lighting in different contexts such as searching for chametz during Pesach. Sometimes the shamash candle is made of wax, while olive oil is used for the other candles. Trade According to oil is exchanged with Tyre: "Judah, and the land of Israel, they were thy merchants: they traded in thy market wheat of Minnith, and Pannag, and honey, and oil, and balm." 
discusses the context of relations between Ephraim and Egypt: "Ephraim feedeth on wind, and followeth after the east wind: he daily increaseth lies and desolation; and they do make a covenant with the Assyrians, and oil is carried into Egypt". Perfume or cosmetic oil There are several biblical references to non-ritual cosmetic use. References Language of the Hebrew Bible Oils
Shemen (bible)
Chemistry
870
26,445,447
https://en.wikipedia.org/wiki/Neutron-acceptance%20diagram%20shading
Neutron-acceptance diagram shading (NADS) is a beam simulation technique. Unlike Monte-Carlo simulation codes like McStas, NADS does not trace individual neutrons but traces linearly-related bunches in a reduced-dimensionality phase space. Bunches are subdivided where necessary to follow accurately a simplified surface reflectivity model. This makes NADS results equivalent to Monte-Carlo simulations but about 5 orders of magnitude faster for difficult modelling tasks. Speed The raw speed of NADS makes it a particularly attractive tool for beam modelling where evolutionary algorithms are used. In tests, the C++ prototype engine could calculate the on-sample flux of a SANS instrument in 55 milliseconds on a single 2 GHz Intel Core 2 core. The Java release (jnads) performs the same calculation in 0.8 seconds on the same hardware. A Monte-Carlo simulation of the same instrument would take 25 hours to complete with 1% statistical errors. Performing the same, unoptimised SANS simulation with full beam monitors in jnads (i.e. not just calculating the on-sample flux) takes about 45 seconds on the same hardware and gives an indication of the beam divergence and homogeneity at the same time. Reliability NADS results are generally in excellent agreement with Monte-Carlo calculations. In strictly controlled tests, NADS and Monte-Carlo both produced identical results when simulating a SANS instrument. To date, no discrepancy has been found. Limitations It is strictly monochromatic (though a wavelength spread of about 15%, typical of velocity selectors, is acceptable). The instrument must have independent horizontal and vertical planes, with no crosstalk. Polarisation and time-of-flight are further complications that users have to consider manually. It is not a black-box technique: NADS provides the neutron flux, and to calculate the neutron beam current the NADS result must be multiplied by the wavelength band width. History NADS was born out of necessity. If simulating an instrument takes more than one CPU-day, then performing a full optimisation of a neutron guide hall requires more than two CPU-decades. NADS was designed with the goal of reducing the CPU time to less than one minute for all instrument geometries, making an optimisation of a neutron guide hall feasible within a week on a single desktop computer. The name NADS arose partly due to referee comments on the original article (ADS is already used widely in astronomy, so the authors should use a different acronym), and partly due to tongue-in-cheek discussions over coffee. NADS was used with particle-swarm optimisation to design a guide system for the ILL. The new guide system will feed two neutron spin echo instruments, a SANS instrument, a new three-axis spectrometer, a new reflectometer and fundamental physics beamlines at the ILL. Neutron scattering
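The flux-to-current conversion mentioned under Limitations can be illustrated with a minimal sketch. This is not part of the NADS/jnads software; the function name, units, and numbers below are assumptions made for the example and only show the multiplication by the wavelength band width described above.

```python
# Hedged illustration (not from the NADS/jnads codebase): converting a
# monochromatic flux result into a beam current by multiplying by the
# wavelength band width. All names, units and numbers are assumed.

def beam_current(flux_per_wavelength, band_width):
    """Beam current = flux density per unit wavelength * wavelength band width."""
    return flux_per_wavelength * band_width

# Example: a velocity selector passing a ~15% band around 6 angstroms.
central_wavelength = 6.0                    # angstrom (assumed)
delta_lambda = 0.15 * central_wavelength    # ~0.9 angstrom band width
flux = 1.0e7                                # neutrons / cm^2 / s / angstrom (assumed)

print(beam_current(flux, delta_lambda))     # neutrons / cm^2 / s over the band
```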
Neutron-acceptance diagram shading
Chemistry
584
24,763,960
https://en.wikipedia.org/wiki/Groundwater-dependent%20ecosystems
Groundwater-Dependent Ecosystems (or GDEs) are ecosystems that rely upon groundwater for their continued existence. Groundwater is water that has seeped down beneath Earth's surface and has come to reside within the pore spaces in soil and fractures in rock; this process can create water tables and aquifers, which are large storehouses for groundwater. An ecosystem is a community of living organisms interacting with the nonliving aspects of their environment (such as air, soil, water, and even groundwater). With a few exceptions, the interaction between various ecosystems and their respective groundwater is a vital yet poorly understood relationship, and their management is not nearly as advanced as that of in-stream ecosystems. Methods of identification Isotopes Examining the composition of stable isotopes in the water found in soil, rivers, groundwater, and xylem (or vein systems) of vegetation, using mass spectrometry, which measures and sorts the masses in a sample, along with data on the changes in groundwater depth coupled with the time and vegetative rooting patterns, shows spatial changes over time in the use of groundwater by the vegetation in its respective ecosystem. Plants A groundwater-dependent ecosystem can also be inferred through plant water use and growth. In areas with high rainfall, groundwater reliance can be seen by monitoring the water use made by the plants of the ecosystem in relation to the water storage in the soil of the area. If the use of water in the vegetation exceeds that of the water being stored in the soil, it is a strong indication of groundwater utilization. In areas of prolonged drought, the continuation of water flow and plant growth is highly indicative of a groundwater-reliant area. Remote sensing/Geographical Information Systems (GIS) Remote sensing is the scanning of Earth by satellite or aircraft to obtain information. GIS is a system designed to capture, store, analyze and manage geographic data. Together, the data collected (such as elevation and bore holes measuring groundwater levels) can very accurately predict where groundwater-dependent ecosystems are and how extensive they are, and can guide field expeditions to the right areas for further confirmation and data collection on the GDEs. Classification Due to the high variety of ecosystems and their individual fluctuation in dependency on groundwater, there is some uncertainty when it comes to defining an ecosystem strictly as groundwater-dependent or merely groundwater-using. Each ecosystem expresses a varying degree of dependency. An ecosystem can be directly or indirectly dependent, as well as have a variation in groundwater use throughout the seasons. There are a variety of methods for classifying types of groundwater-dependent ecosystems, by their geomorphological setting and/or by their respective groundwater flow mechanism (deep or shallow). Terrestrial Arid to humid environments Arid to humid terrestrial environments with no standing water but deeply rooted vegetation rely upon groundwater to support the producers of their ecosystems. The deeply rooted vegetation requires the groundwater to maintain a consistent or semi-consistent level to allow for its continued health and survival. Aquatic Springs Springs, arguably, rely the most heavily on the continued contribution of groundwater because they are a natural discharge from relatively deep groundwater flows rising to the surface. Springs are often associated with uniquely adapted plants and animals. 
Wetlands Wetlands require a shallow discharge of groundwater, which flows as seepage into depressions in the land surface; in some instances wetlands feed off perched groundwater, which is groundwater separated from the regular water table by an impermeable layer. Marshes are a type of wetland and, though not directly reliant on groundwater, they use it as an area of recharge. Bogs are also a type of wetland that is not directly reliant on groundwater but uses the presence of groundwater to provide the area with recharge as well as buoyancy. Rivers Rivers collect groundwater discharge from aquifers. This can happen seasonally, intermittently or constantly, and can keep an area's water needs stable during a dry season. Coastal Lagoons/estuaries Lagoons and estuaries use groundwater flow to help dilute the salinity in the water and to help support their distinctly unique coastal ecosystems. Threats Extraction The extraction of groundwater in both large and small amounts lowers the area's water table, and in very large quantities can even collapse parts of the aquifer and permanently damage the quantity of water the aquifer can store. Urbanization Pollution Due to the increase in populated areas, estuaries and other aquatic ecosystems face a greater threat of pollution. In many cases groundwater can become polluted through toxins, or even just excessive amounts of certain nutrients seeping down to the water table. This polluting of the groundwater can have many different effects on the related ecosystems; in the case of an estuary in Cape Cod, it was noted that an influx of new nitrogen had come from septic tank fields in the groundwater's flow path. Increased levels of nitrogen in aquatic ecosystems can cause eutrophication, the excessive introduction of nutrients that causes an overabundance of plant growth and can result in the death of a variety of aquatic life. Recharge Urbanization of land has significant effects on groundwater recharge; deforestation and urbanization limit the amount of surface area through which water can actually infiltrate and contribute to the groundwater. References Hydrology Ecosystems Habitat
Groundwater-dependent ecosystems
Chemistry,Engineering,Biology,Environmental_science
1,048
51,487,006
https://en.wikipedia.org/wiki/Chemistry%20of%20wetland%20dredging
Wetland chemistry is largely affected by dredging, which can be done for a variety of purposes. Wetlands are areas within floodplains with both terrestrial and aquatic characteristics, including marshes, swamps, bogs, and others. It has been estimated that they occupy around 2.8×10^6 km^2, about 2.2% of the Earth’s surface, but other estimates are even higher. Wetlands have also been estimated to be worth $14.9 trillion, and they are responsible for 75% of the commercial and 90% of the recreational harvest of fish and shellfish in the United States. Wetlands also play an important role in water purification, storm protection, industry, travel, research, education, and tourism. Because wetlands are heavily used and traveled through, dredging is common, and it leads to continuing long-term damage to the ecosystem, land loss, and ultimately a loss of industry, homes, and protection. Wetlands undergo different chemical reactions depending on a variety of parameters, including salinity and pH. Redox reactions have a major effect on wetland ecosystems, as they depend heavily on salinity, pH, oxygen availability, and others. Common redox reactions in wetlands include carbon, nitrogen, and sulfur transformations. Fluctuations in water flow and flooding can change the abundance of the oxidized or reduced species depending on the environment. Increased flooding and water flow can also change the availability of nutrients to local species. The further the wetlands change from their original states, the more difficult rebuilding land becomes. The types of mitigation efforts also change depending on the chemistry, so an understanding of the change is required for effective mitigation. Wetlands Definition Wetlands are areas of land submerged in water near both terrestrial and aquatic systems. They are highly diverse and are classified by the United States Fish and Wildlife Service into five categories: "The term wetland includes a variety of areas that fall into one of five categories: (1) areas with hydrophytes and hydric soils, such as those commonly known as marshes, swamps, and bogs; (2) areas without hydrophytes but with hydric soils - for example, flats where drastic fluctuation in water level, wave action, turbidity, or high concentration of salts may prevent the growth of hydrophytes; (3) areas with hydrophytes but nonhydric soils, such as margins of impoundments or excavations where hydrophytes have become established but hydric soils have not yet developed; (4) areas without soils but with hydrophytes such as the seaweed-covered portion of rocky shores; and (5) wetlands without soil and without hydrophytes, such as gravel beaches or rocky shores without vegetation". Wetlands can also be classified based on salinity, a type of classification often referenced in research where salinity is a major factor. These classifications are often referred to in parts per thousand (ppt) and include freshwater (0–2 ppt), intermediate (2–10 ppt), brackish (10–20 ppt), and saltwater (20+ ppt). Importance of wetlands Wetlands are sources of extreme biodiversity and ecological benefit. They contain a multitude of species of plants and animals, including 79 species classified as rare, threatened, or endangered. An estimate by the U.S. Fish and Wildlife Service indicates that wetlands provide for, directly and indirectly, up to 43% of federally threatened or endangered species. 
Wetlands are the leading producer of oysters and provide 50% of the shrimp crop, 75% of the alligator harvest, 27% of the oil and gas, and the largest port complex in the United States. The world’s wetlands have an estimated worth of $14.9 trillion. Wetlands also provide for disaster protection, including surge protection from hurricanes, as they and barrier islands help to break down the power of a storm before it reaches the mainland. They also provide flood relief, as they are able to hold about three acre-feet (one million gallons) of water. This holding of water allows for rejuvenation of ecosystems, as new sediment is able to settle. Flooding also affects factors such as root penetration, soil temperature, conductivity, and bulk density. Wetlands are highly effective at removing pollutants and excess nutrients due to the slow water flow and absorption by the plant systems. This has been shown to be effective in the removal of nitrogen and phosphorus, the major nutrients involved in "dead zones". They are also major sinks for heavy metals and sulfur. Dredging Dredging is the removal of sediment, plant species, and debris from an aquatic area. Industry, travel, and recreation throughout wetlands often require the dredging of canals, especially by the oil industry to reach its offshore rigs through coastal wetlands. Canals widen after being dredged because of the increased water flow and loss of plant life, both contributing to increased erosion. It is estimated that there are of canals south of the Intracoastal Waterway, not including Lake Pontchartrain and Lake Maurepas, and that canals alone account for of land loss per year in the United States. The permits required to dredge these canals include stipulations of refilling, but these are not often enforced. John M. Barry, along with a group of private lawyers and coastal experts, filed a lawsuit in 2013 against 97 corporations that had violated their permits in Louisiana’s coastal wetlands in response to this. It is referred to as "the most ambitious environmental lawsuit ever" by the New York Times and has been met with political resistance. Wetland dredging chemistry Wetlands are dynamic systems that undergo a variety of chemical reactions depending greatly on the specific physicochemical properties of the area, such as temperature, pressure, dissolved organic matter, pH, salinity, and dissolved gases (CO2 and O2). The qualities that have the largest effect are salinity and pH. An increase in flooding (a result of dredging) increases the salinity of wetlands, as it allows saltwater to intrude, neutralizes the pH, and provides more anaerobic soil conditions. These conditions then affect nutrient availability and redox reactions. Redox reactions Redox reactions are highly influential in wetland soil chemistry through transformations including those of carbon, sulfur, and nitrogen. The abundance of oxygen changes the abundance of oxidized or reduced states of each compound. Areas of higher oxygen availability (aerobic) tend towards oxidized states and areas of low oxygen availability (anaerobic) tend towards reduced states. The abundance of each type results in a different ecosystem, as the plants and animals of the wetlands require specific conditions for their growth. Common wetland redox reactions include: 2NO3− +10e− +12H+ → N2 +6H2O SO42− +8e− +9H+ → HS− +4H2O CO2 +8e− +8H+ → CH4 + 2H2O MnO2 +2e− +4H+ → Mn2+ + 2H2O Fe(OH)3 +e− +3H+ → Fe2+ + 3H2O Dredging allows for an increased flow of water through wetlands, causing anaerobic soil conditions. 
This change in wetland type results in a change in redox state for each reaction undergone and thus changes the plant species available to grow in those areas. The redox potential (Eh) can help to show the relationship of the redox reactions through the Nernst equation: Eh = E0 - (RT/nF) ln([Reductants]^a / ([Oxidants]^b [H+]^b)) This equation allows for the calculation of the extent of reaction between two redox systems and can be used, for example, to decide whether a particular reaction will go to completion or not. An example of a change in these circumstances affecting the wetland system is in the transformation of pyrite (FeS2) through the reduction of SO42− (found in seawater). Fe(OH)3 + e− + H+ → Fe(OH)2 + H2O SO42− + 6e− + 8H+ → S + 4H2O S + 2e− + 2H+ → H2S Fe(OH)2 + H2S → FeS + 2H2O FeS + S → FeS2 (pyrite) The drainage of the resulting pyrite then results in oxidation to ferric hydroxide and sulfuric acid, causing extreme acidity (pH < 2). Nutrient availability Increased flooding also allows for saltwater intrusion, changing the salinity levels, killing off species of plants that normally grew there, and changing available nutrient, chemical, and oxygen levels as well. An increase in salinity leads to higher sulfate concentrations, higher sulfide emissions, and higher toxicity. It also results in a reduction of sulfur availability to plant species, as the sulfide precipitates with trace metals such as zinc and copper. An example of this is ferrous sulfide (FeS), which gives wetland soils their black color and is the source of sulfur commonly found in coal deposits. Flooding also results in pH neutralization of generally acidic (with exceptions) wetlands. Acidic wetlands inhibit denitrification; thus flooding allows denitrification to occur, resulting in a loss of gaseous nitrogen forms to the atmosphere. The reaction is shown below: 5C6H12O6 +24NO3− +24H+ → 30CO2 +12N2 +42H2O Anaerobic soil conditions brought on by flooding allow for precipitation of phosphates with ferric iron and aluminum (acidic soils) or calcium and magnesium (basic soils), resulting in phosphorus being unavailable for uptake by plant species. Importance of wetland chemistry As the environment is altered through physical means (dredging), the reactions occurring there change, resulting in a decrease of the availability of nutrients and chemical species to plant species and the ecosystem. This then further changes the physical environment as these species are no longer able to survive. The loss of species then results in further changes to the chemical environment, as they are no longer present to remove excess nutrients. This also changes the physical environment further as the lack of survival of plant species results in open land and increased erosion. The change of the chemical environment also affects the mitigation techniques to be applied for rebuilding wetlands, as the survival of plant species that could potentially be planted depends on the chemical environment, and changes must be monitored for effective mitigation to take place. References Geochemistry Wetlands Dredging
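As a worked illustration of the Nernst expression above, the following short sketch evaluates Eh for a single reduction couple. It is not taken from the article; the standard potential, activities, and pH below are hypothetical example values chosen only to show how the calculation is assembled.

```python
# Hedged illustration: evaluating Eh = E0 - (RT/nF) ln(Q) for a wetland
# redox couple of the form  oxidant + n e- + m H+ -> reductant.
# E0 and the activities below are assumed example values, not data
# from the article.
import math

R = 8.314      # J / (mol K), gas constant
F = 96485.0    # C / mol, Faraday constant
T = 298.15     # K, assumed temperature

def eh(e0, n, reductant, oxidant, h_activity, m):
    """Redox potential in volts from the Nernst equation."""
    q = reductant / (oxidant * h_activity ** m)
    return e0 - (R * T / (n * F)) * math.log(q)

# Example: Fe(OH)3 + e- + 3 H+ -> Fe2+ + 3 H2O at pH 7, with an assumed
# standard potential of 1.06 V, Fe2+ activity of 1e-5, and the solid
# Fe(OH)3 taken as activity 1.
print(round(eh(e0=1.06, n=1, reductant=1e-5, oxidant=1.0,
               h_activity=1e-7, m=3), 3))   # roughly 0.11 V
```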
Chemistry of wetland dredging
Chemistry,Environmental_science
2,171
76,758,860
https://en.wikipedia.org/wiki/Praveen%20Sethupathy
Praveen Sethupathy is an American geneticist, science author and journalist. He is a professor of physiological genomics and Chair of the Department of Biomedical Sciences at Cornell University. He currently serves as one of the board directors at The BioLogos Foundation, where he holds discussions on the relationship between science and religion. Education Sethupathy received his BA degree from Cornell University and a PhD in genomics from the University of Pennsylvania. He completed his post-doctoral fellowship at the National Human Genome Research Institute under the direction of the then Director of the National Institutes of Health, Dr. Francis Collins, after which he moved to the University of North Carolina at Chapel Hill as an assistant professor in the Department of Genetics in 2011. He was selected by Genome Technology as one of the nation's top 25 rising young investigators in genomics in that same year. He was recruited to Cornell University, where he became a frequent research collaborator with Nicolas Buchon. Career Science Sethupathy currently leads a research lab which is focused on genome-scale and molecular approaches to understanding physiology and human disease. He researches microRNA and the broader genetic factors related to diabetes, Crohn's disease, fibrolamellar carcinoma (a rare type of liver cancer), short-term memory, and the gut epithelium. Professor Sethupathy returned to Cornell University, where he had studied, as an associate professor. He teaches courses on various scientific topics such as stem cells, cancer and animal physiology. He also holds courses on the relationship between science and religion (particularly concerning Christianity), evolutionary theory, and how to reconcile it with faith. Journalism Sethupathy is a science journalist and the author of more than 140 peer-reviewed publications in scientific journals such as PNAS, Cell, and Science. He also serves as a reviewer and has reviewed for more than 50 journals. He has also received several awards. He is also a writer on science and religion. He has advocated for compatibility between science and religion in his various works and frequently writes for The BioLogos Foundation. He also argues against various pseudoscientific claims and has published pieces debunking them. Personal life Sethupathy is a Christian. He currently serves as a board director at The BioLogos Foundation, which is an organization that promotes harmony between science and religion. He believes in evolutionary creationism (also called theistic evolution) and considers there to be no conflicts between science and religion. He has also served on the advisory board of the Dialogue on Science, Ethics, and Religion in the American Association for the Advancement of Science (AAAS) and has also spoken at the Veritas Forum. See also The BioLogos Foundation Relationship between science and religion Theistic evolution References American geneticists Theistic evolutionists Writers about religion and science Members of The BioLogos Foundation Living people Cornell University alumni Cornell University faculty Year of birth missing (living people)
Praveen Sethupathy
Biology
592
33,289,559
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2072
In molecular biology, glycoside hydrolase family 72 is a family of glycoside hydrolases. Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes. This family includes yeast glycolipid proteins anchored to the membrane. It includes Candida albicans pH-regulated protein, which is required for apical growth and plays a role in morphogenesis, and Saccharomyces cerevisiae glycolipid anchored surface protein. References EC 3.2.1 Glycoside hydrolase families Protein families
Glycoside hydrolase family 72
Biology
218
3,425,287
https://en.wikipedia.org/wiki/Ore%20genesis
Various theories of ore genesis explain how the various types of mineral deposits form within Earth's crust. Ore-genesis theories vary depending on the mineral or commodity examined. Ore-genesis theories generally involve three components: source, transport or conduit, and trap. (This also applies to the petroleum industry: petroleum geologists originated this analysis.) Source is required because metal must come from somewhere, and be liberated by some process. Transport is required first to move the metal-bearing fluids or solid minerals into their current position, and refers to the act of physically moving the metal, as well as to chemical or physical phenomena which encourage movement. Trapping is required to concentrate the metal via some physical, chemical, or geological mechanism into a concentration which forms mineable ore. The biggest deposits form when the source is large, the transport mechanism is efficient, and the trap is active and ready at the right time. Ore genesis processes Endogenous Magmatic processes Fractional crystallization: separates ore and non-ore minerals according to their crystallization temperature. As early crystallizing minerals form from magma, they incorporate certain elements, some of which are metals. These crystals may settle onto the bottom of the intrusion, concentrating ore minerals there. Chromite and magnetite are ore minerals that form in this way. Liquid immiscibility: sulfide ores containing copper, nickel, or platinum may form from this process. As a magma changes, parts of it may separate from the main body of magma. Two liquids that will not mix are called immiscible; oil and water are an example. In magmas, sulfides may separate and sink below the silicate-rich part of the intrusion or be injected into the rock surrounding it. These deposits are found in mafic and ultramafic rocks. Hydrothermal processes These processes are the physicochemical phenomena and reactions caused by movement of hydrothermal water within the crust, often as a consequence of magmatic intrusion or tectonic upheavals. The foundations of hydrothermal processes are the source-transport-trap mechanism. Sources of hydrothermal solutions include seawater and meteoric water circulating through fractured rock, formational brines (water trapped within sediments at deposition), and metamorphic fluids created by dehydration of hydrous minerals during metamorphism. Metal sources may include a plethora of rocks. However most metals of economic importance are carried as trace elements within rock-forming minerals, and so may be liberated by hydrothermal processes. This happens because of: incompatibility of the metal with its host mineral, for example zinc in calcite, which favours aqueous fluids in contact with the host mineral during diagenesis. solubility of the host mineral within nascent hydrothermal solutions in the source rocks, for example mineral salts (halite), carbonates (cerussite), phosphates (monazite and thorianite), and sulfates (barite) elevated temperatures causing decomposition reactions of minerals Transport by hydrothermal solutions usually requires a salt or other soluble species which can form a metal-bearing complex. These metal-bearing complexes facilitate transport of metals within aqueous solutions, generally as hydroxides, but also by processes similar to chelation. 
This process is especially well understood in gold metallogeny, where various thiosulfate, chloride, and other gold-carrying chemical complexes (notably tellurium-chloride/sulfate or antimony-chloride/sulfate) are involved. The majority of metal deposits formed by hydrothermal processes include sulfide minerals, indicating sulfur is an important metal-carrying complex. Sulfide deposition Sulfide deposition within the trap zone occurs when metal-carrying sulfate, sulfide, or other complexes become chemically unstable due to one or more of the following processes: falling temperature, which renders the complex unstable or the metal insoluble; loss of pressure, which has the same effect; reaction with chemically reactive wall rocks, usually of reduced oxidation state, such as iron-bearing rocks, mafic or ultramafic rocks, or carbonate rocks; and degassing of the hydrothermal fluid into a gas and water system, or boiling, which alters the metal-carrying capacity of the solution and can even destroy metal-carrying chemical complexes. Metal can also precipitate when temperature and pressure or oxidation state favour different ionic complexes in the water, for instance the change from sulfide to sulfate, a change in oxygen fugacity, exchange of metals between sulfide and chloride complexes, et cetera. Metamorphic processes Lateral secretion Ore deposits formed by lateral secretion are formed by metamorphic reactions during shearing, which liberate mineral constituents such as quartz, sulfides, gold, carbonates, and oxides from deforming rocks, and focus these constituents into zones of reduced pressure or dilation such as faults. This may occur without much hydrothermal fluid flow, and this is typical of podiform chromite deposits. Metamorphic processes also control many physical processes which form the source of hydrothermal fluids, outlined above. Sedimentary or surficial processes (exogenous) Surficial processes are the physical and chemical phenomena which cause concentration of ore material within the regolith, generally by the action of the environment. This includes placer deposits, laterite deposits, and residual or eluvial deposits. Surficial processes of ore formation include: erosion of non-ore material; deposition by sedimentary processes, including winnowing and density separation (e.g. gold placers); weathering via oxidation or chemical attack of a rock, either liberating rock fragments or creating chemically deposited clays, laterites, or supergene enrichment; deposition in low-energy beach environments; and sedimentary exhalative deposits (SEDEX), formed on the sea floor from metal-bearing brines. Classification of ore deposits Classification of hydrothermal ore deposits is also achieved by classifying according to the temperature of formation, which roughly also correlates with particular mineralising fluids, mineral associations and structural styles. This scheme, proposed by Waldemar Lindgren (1933), classified hydrothermal deposits as follows: Hypothermal — mineral ore deposits formed at great depth under conditions of high temperature. Mesothermal — mineral ore deposits formed at moderate temperature and pressure, in and along fissures or other openings in rocks, by deposition at intermediate depths, from hydrothermal fluids. Epithermal — mineral ore deposits formed at low temperatures (50–200 °C) near the Earth's surface (<1500 m), that fill veins, breccias, and stockworks. 
Telethermal — mineral ore deposits formed at shallow depth and relatively low temperatures, with little or no wall-rock alteration, presumably far from the source of hydrothermal solutions. Ore deposits are usually classified by ore formation processes and geological setting. For example, sedimentary exhalative deposits (SEDEX), are a class of ore deposit formed on the sea floor (sedimentary) by exhalation of brines into seawater (exhalative), causing chemical precipitation of ore minerals when the brine cools, mixes with sea water, and loses its metal carrying capacity. Ore deposits rarely fit neatly into the categories in which geologists wish to place them. Many may be formed by one or more of the basic genesis processes above, creating ambiguous classifications and much argument and conjecture. Often ore deposits are classified after examples of their type, for instance Broken Hill type lead-zinc-silver deposits or Carlin–type gold deposits. Genesis of common ores As they require the conjunction of specific environmental conditions to form, particular mineral deposit types tend to occupy specific geodynamic niches, therefore, this page has been organised by metal commodity. It is also possible to organise theories the other way, namely according to geological criteria of formation. Often ores of the same metal can be formed by multiple processes, and this is described here under each metal or metal complex. Iron Iron ores are overwhelmingly derived from ancient sediments known as banded iron formations (BIFs). These sediments are composed of iron oxide minerals deposited on the sea floor. Particular environmental conditions are needed to transport enough iron in sea water to form these deposits, such as acidic and oxygen-poor atmospheres within the Proterozoic Era. Often, more recent weathering is required to convert the usual magnetite minerals into more easily processed hematite. Some iron deposits within the Pilbara of Western Australia are placer deposits, formed by accumulation of hematite gravels called pisolites which form channel-iron deposits. These are preferred because they are cheap to mine. Lead zinc silver Lead-zinc deposits are generally accompanied by silver, hosted within the lead sulfide mineral galena or within the zinc sulfide mineral sphalerite. Lead and zinc deposits are formed by discharge of deep sedimentary brine onto the sea floor (termed sedimentary exhalative or SEDEX), or by replacement of limestone, in skarn deposits, some associated with submarine volcanoes (called volcanogenic massive sulfide ore deposits or VMS), or in the aureole of subvolcanic intrusions of granite. The vast majority of SEDEX lead and zinc deposits are Proterozoic in age, although there are significant Jurassic examples in Canada and Alaska. The carbonate replacement type deposit is exemplified by the Mississippi valley type (MVT) ore deposits. MVT and similar styles occur by replacement and degradation of carbonate sequences by hydrocarbons, which are thought important for transporting lead. Gold Gold deposits are formed via a very wide variety of geological processes. Deposits are classified as primary, alluvial or placer deposits, or residual or laterite deposits. Often a deposit will contain a mixture of all three types of ore. Plate tectonics is the underlying mechanism for generating gold deposits. The majority of primary gold deposits fall into two main categories: lode gold deposits or intrusion-related deposits. 
Lode gold deposits, also referred to as orogenic gold are generally high-grade, thin, vein and fault hosted. They are primarily made up of quartz veins also known as lodes or reefs, which contain either native gold or gold sulfides and tellurides. Lode gold deposits are usually hosted in basalt or in sediments known as turbidite, although when in faults, they may occupy intrusive igneous rocks such as granite. Lode-gold deposits are intimately associated with orogeny and other plate collision events within geologic history. It is thought that most lode gold deposits are sourced from metamorphic rocks by the dehydration of basalt during metamorphism. The gold is transported up faults by hydrothermal waters and deposited when the water cools too much to retain gold in solution. Intrusive related gold (Lang & Baker, 2001) is generally hosted in granites, porphyry, or rarely dikes. Intrusive related gold usually also contains copper, and is often associated with tin and tungsten, and rarely molybdenum, antimony, and uranium. Intrusive-related gold deposits rely on gold existing in the fluids associated with the magma (White, 2001), and the inevitable discharge of these hydrothermal fluids into the wall-rocks (Lowenstern, 2001). Skarn deposits are another manifestation of intrusive-related deposits. Placer deposits are sourced from pre-existing gold deposits and are secondary deposits. Placer deposits are formed by alluvial processes within rivers and streams, and on beaches. Placer gold deposits form via gravity, with the density of gold causing it to sink into trap sites within the river bed, or where water velocity drops, such as bends in rivers and behind boulders. Often placer deposits are found within sedimentary rocks and can be billions of years old, for instance the Witwatersrand deposits in South Africa. Sedimentary placer deposits are known as 'leads' or 'deep leads'. Placer deposits are often worked by fossicking, and panning for gold is a popular pastime. Laterite gold deposits are formed from pre-existing gold deposits (including some placer deposits) during prolonged weathering of the bedrock. Gold is deposited within iron oxides in the weathered rock or regolith, and may be further enriched by reworking by erosion. Some laterite deposits are formed by wind erosion of the bedrock leaving a residuum of native gold metal at surface. A bacterium, Cupriavidus metallidurans, plays a vital role in the formation of gold nuggets by precipitating metallic gold from a solution of gold (III) tetrachloride, a compound highly toxic to most other microorganisms. Similarly, Delftia acidovorans can form gold nuggets. Platinum Platinum and palladium are precious metals generally found in ultramafic rocks. The source of platinum and palladium deposits is ultramafic rocks which have enough sulfur to form a sulfide mineral while the magma is still liquid. This sulfide mineral (usually pentlandite, pyrite, chalcopyrite, or pyrrhotite) gains platinum by mixing with the bulk of the magma because platinum is chalcophile and is concentrated in sulfides. Alternatively, platinum occurs in association with chromite either within the chromite mineral itself or within sulfides associated with it. Sulfide phases only form in ultramafic magmas when the magma reaches sulfur saturation. This is generally thought to be nearly impossible by pure fractional crystallisation, so other processes are usually required in ore genesis models to explain sulfur saturation. 
These include contamination of the magma with crustal material, especially sulfur-rich wall-rocks or sediments; magma mixing; volatile gain or loss. Often platinum is associated with nickel, copper, chromium, and cobalt deposits. Nickel Nickel deposits are generally found in two forms, either as sulfide or laterite. Sulfide type nickel deposits are formed in essentially the same manner as platinum deposits. Nickel is a chalcophile element which prefers sulfides, so an ultramafic or mafic rock which has a sulfide phase in the magma may form nickel sulfides. The best nickel deposits are formed where sulfide accumulates in the base of lava tubes or volcanic flows — especially komatiite lavas. Komatiitic nickel-copper sulfide deposits are considered to be formed by a mixture of sulfide segregation, immiscibility, and thermal erosion of sulfidic sediments. The sediments are considered to be necessary to promote sulfur saturation. Some subvolcanic sills in the Thompson Belt of Canada host nickel sulfide deposits formed by deposition of sulfides near the feeder vent. Sulfide was accumulated near the vent due to the loss of magma velocity at the vent interface. The massive Voisey's Bay nickel deposit is considered to have formed via a similar process. The process of forming nickel laterite deposits is essentially similar to the formation of gold laterite deposits, except that ultramafic or mafic rocks are required. Generally nickel laterites require very large olivine-bearing ultramafic intrusions. Minerals formed in laterite nickel deposits include gibbsite. Copper Copper is found in association with many other metals and deposit styles. Commonly, copper is either formed within sedimentary rocks, or associated with igneous rocks. The world's major copper deposits are formed within the granitic porphyry copper style. Copper is enriched by processes during crystallisation of the granite and forms as chalcopyrite — a sulfide mineral, which is carried up with the granite. Sometimes granites erupt to surface as volcanoes, and copper mineralisation forms during this phase when the granite and volcanic rocks cool via hydrothermal circulation. Sedimentary copper forms within ocean basins in sedimentary rocks. Generally this forms by brine from deeply buried sediments discharging into the deep sea, and precipitating copper and often lead and zinc sulfides directly onto the sea floor. This is then buried by further sediment. This is a process similar to SEDEX zinc and lead, although some carbonate-hosted examples exist. Often copper is associated with gold, lead, zinc, and nickel deposits. Uranium Uranium deposits are usually sourced from radioactive granites, where certain minerals such as monazite are leached during hydrothermal activity or during circulation of groundwater. The uranium is brought into solution by acidic conditions and is deposited when this acidity is neutralised. Generally this occurs in certain carbon-bearing sediments, within an unconformity in sedimentary strata. The majority of the world's nuclear power is sourced from uranium in such deposits. Uranium is also found in nearly all coal at several parts per million, and in all granites. Radon is a common problem during mining of uranium as it is a radioactive gas. Uranium is also found associated with certain igneous rocks, such as granite and porphyry. The Olympic Dam deposit in Australia is an example of this type of uranium deposit. 
It contains 70% of Australia's share of 40% of the known global low-cost recoverable uranium inventory. Titanium and zirconium Mineral sands are the predominant type of titanium, zirconium, and thorium deposit. They are formed by accumulation of such heavy minerals within beach systems, and are a type of placer deposits. The minerals which contain titanium are ilmenite, rutile, and leucoxene, zirconium is contained within zircon, and thorium is generally contained within monazite. These minerals are sourced from primarily granite bedrock by erosion and transported to the sea by rivers where they accumulate within beach sands. Rarely, but importantly, gold, tin, and platinum deposits can form in beach placer deposits. Tin, tungsten, and molybdenum These three metals generally form in a certain type of granite, via a similar mechanism to intrusive-related gold and copper. They are considered together because the process of forming these deposits is essentially the same. Skarn type mineralisation related to these granites is a very important type of tin, tungsten, and molybdenum deposit. Skarn deposits form by reaction of mineralised fluids from the granite reacting with wall rocks such as limestone. Skarn mineralisation is also important in lead, zinc, copper, gold, and occasionally uranium mineralisation. Greisen granite is another related tin-molybdenum and topaz mineralisation style. Rare-earths, niobium, tantalum, lithium The overwhelming majority of rare-earth elements, tantalum, and lithium are found within pegmatite. Ore genesis theories for these ores are wide and varied, but most involve metamorphism and igneous activity. Lithium is present as spodumene or lepidolite within pegmatite. Carbonatite intrusions are an important source of these elements. Ore minerals are essentially part of the unusual mineralogy of carbonatite. Phosphate Phosphate is used in fertilisers. Immense quantities of phosphate rock or phosphorite occur in sedimentary shelf deposits, ranging in age from the Proterozoic to currently forming environments. Phosphate deposits are thought to be sourced from the skeletons of dead sea creatures which accumulated on the seafloor. Similar to iron ore deposits and oil, particular conditions in the ocean and environment are thought to have contributed to these deposits within the geological past. Phosphate deposits are also formed from alkaline igneous rocks such as nepheline syenites, carbonatites, and associated rock types. The phosphate is, in this case, contained within magmatic apatite, monazite, or other rare-earth phosphates. Vanadium Due to the presence of vanabins, concentration of vanadium found in the blood cells of Ascidia gemmata belonging to the suborder Phlebobranchia is 10,000,000 times higher than that in the surrounding seawater. A similar biological process might have played a role in the formation of vanadium ores. Vanadium is also present in fossil fuel deposits such as crude oil, coal, oil shale, and oil sands. In crude oil, concentrations up to 1200 ppm have been reported. Cosmic origins of rare metals Precious metals such as gold and platinum, but also many other rare and noble metals, largely originated within neutron star collisions - collisions between exceedingly heavy massive and dense remnants of supernovas. In the final moments of the collision, the physical conditions are so extreme that these heavy rare elements can be formed, and are sprayed into space. 
Interstellar dust and gas clouds contain some of these elements, as did the dust cloud from which our solar system formed. Those heavy metals fell to the centre of Earth's molten core, and are no longer accessible. However, about 200 million years after Earth formed, a late heavy bombardment of meteors impacted Earth. As Earth had already begun to cool and solidify, the material (including heavy metals) in that bombardment became part of Earth's crust, rather than falling deep into the core. These metals were processed and exposed by geological processes over billions of years. It is believed that this represents the origin of many elements, and all heavy metals, that are found on Earth today. See also References Evans, A.M., 1993. Ore Geology and Industrial Minerals: An Introduction. Blackwell Science. Groves, D.I., 1993. The Crustal Continuum Model for late-Archaean lode-gold deposits of the Yilgarn Block, Western Australia. Mineralium Deposita 28, pp. 366–374. Lang, J.R. & Baker, T., 2001. Intrusion-related gold systems: the present level of understanding. Mineralium Deposita 36, pp. 477–489. Lindgren, Waldemar, 1933. Mineral Deposits, 4th ed., McGraw-Hill. Robb, L., 2005. Introduction to Ore-Forming Processes. Blackwell Science. External links Ore textures Victoria, Australia, mineral endowment, Victorian Government geoscience portal. The "chessboard" classification scheme of mineral deposits (abstract) Magmatic "Goldilocks Zone" – Phys.org Economic geology Geochemistry Geological processes Ore deposits
Ore genesis
Chemistry
4,579
56,804,641
https://en.wikipedia.org/wiki/Elizabeth%20Landau
Elizabeth Rosa Landau is an American science writer and communicator. She is a Senior Communications Specialist at NASA Headquarters. She was previously a Senior Storyteller at the NASA Jet Propulsion Laboratory. Education Landau grew up in Bryn Mawr, Pennsylvania. As a child, she watched Carl Sagan's TV series Cosmos, which helped inspire her love of space. She earned a bachelor's degree in anthropology at Princeton University (magna cum laude) in 2006. As a Princeton student, she completed study-abroad programs at the University of Seville and the Universidad de León. During her junior year at Princeton, she was the editor-in-chief of Innovation, the university's student science magazine. In the summer of 2004, she became a production intern at CNN en Español in New York. She earned a master's in journalism from Columbia University, where she focused on politics. Career Landau began to write and produce for CNN's website in 2007 as a Master's Fellow, and returned full-time in 2008. Here she co-founded the CNN science blog, Light Years. She covered a variety of topics including Pi Day. In 2012, Landau interviewed Scott Maxwell about the Curiosity rover at the NASA Jet Propulsion Laboratory. NASA career In 2014, she became a media relations specialist at the NASA Jet Propulsion Laboratory, where she led media strategy for the Dawn spacecraft, Voyager, Spitzer, NuSTAR, WISE, Planck and Herschel. She led NASA's effort to share the TRAPPIST-1 exoplanet system with the world on February 22, 2017. In January 2018, she was appointed a Senior Storyteller at the Jet Propulsion Laboratory. In February 2020, she became a Senior Communications Specialist at NASA Headquarters. Writing career Landau has written for CNN, Marie Claire, New Scientist, Nautilus, Scientific American, Vice and The Wall Street Journal. Landau interviewed astronomer Virginia Trimble for Quanta Magazine in November 2019. References External links Living people American science writers Women science writers Columbia University Graduate School of Journalism alumni Princeton University alumni 21st-century American women journalists 21st-century American journalists 21st-century American women writers Year of birth missing (living people)
Elizabeth Landau
Technology
442