Dataset schema (column, type, min–max value or length):

id             int64            39 – 79M
url            string (length)  31 – 227
text           string (length)  6 – 334k
source         string (length)  1 – 150
categories     list (length)    1 – 6
token_count    int64            3 – 71.8k
subcategories  list (length)    0 – 30
8,920,112
https://en.wikipedia.org/wiki/Plazm%20%28magazine%29
Plazm magazine has been published since 1991 by a collective of designers, writers, and others in Portland, Oregon, United States. The complete catalog of Plazm magazine is included in the permanent collections of the San Francisco Museum of Modern Art, Princeton University, and the Denver Art Museum. Contributors Notable designers who have been affiliated with Plazm include David Carson, Art Chantry, Milton Glaser, Rebeca Mendez, Reza Abedini, Modern Dog, Scott Clum, John C. Jay, Bruce Licher, Frank Kozik, Pablo Medina, The Attik, Why Not Associates, and Ed Fella. Contributing artists have included Raymond Pettibon, Todd Haynes, Storm Tharp, Guillermo Gómez-Peña, Yoko Ono, Michael Brophy, Seripop, Rankin Renwick, Susan Seubert, and Terry Toedtemeier. Writers contributing to Plazm magazine include Julia Bryan-Wilson, Portland Monthly editor Randy Gragg, curator Stephanie Snyder, and Pere Ubu founder Dave Thomas, along with editors Jonathan Raymond and Tiffany Lee Brown. The magazine has also run original pieces by interviewees, such as a handwritten fax-rant from Iggy Pop and faux McDonald's employment applications from Poison Ivy and Lux Interior of The Cramps. Brief history (magazine) Founders of the magazine were Patrick Bardel, Joshua Berger, Karynn Fish, Neva Knott, Andrew McFarlane, Rueben Niesenfeld. Plazm magazine editors have been Neva Knott (issues 1-9), Yariv Rabinovitch (issues 10-17), and Jonathan Raymond (issues 18-28). In 2005, Tiffany Lee Brown joined Jon Raymond as co-editor of the magazine, and Sarah Gottesdiener became the magazine's editorial coordinator and frequent contributor. In 2010, New Oregon Arts and Letters, a nonprofit organization, became the magazine's publisher. The magazine started as a large-format newsprint quarterly publication and is now a thick, perfect-bound, four-color, book-style magazine published annually. The magazine's blog was launched in 2008 on plazm.com; local Portland newspaper The Oregonian wrote, "We always take Plazm's recommendations seriously" and "These guys are among the most creative characters in the city, though, and we've already bookmarked them." However, the newspaper noted that the blog's "first few entries seemed a little heavy on 'great typefaces we've known and loved'." Urban Honking referred to Plazm's "octopus identity" that has "spread tentacles into Portland's creative world and far beyond." Plazm design firm Plazm Design was founded in 1995 by Joshua Berger, Pete McCracken and Niko Courtelis. The design firm has created brand identities, advertising, interactive and retail experiences, rich media content, video, broadcast commercials, editorial content, custom typography, books, and magazines. Some designers who have worked for the firm have included Enrique Mosqueda, Jon Kieselhorst, Jon Steinhorst, Gus Nicklos, Carole Ambauen, Lotus Child, Ian Lynam, and Yoko Tsukahara. Plazm authored the book 'XXX: The Power of Sex in Contemporary Design' which won the Gold Medal at the Portland Design Festival "DNA-PDX." Plazm was listed in 1997 in I.D. as one of the world's 40 most influential design firms and has been featured in numerous publications and award shows including the 100 show, AIGA, the professional association for design national show, the Art Director's Club, Eye, Communication Arts, Graphis, and IDEA (Japan). Plazm received the creative resistance award from Adbusters in 2001. 
Clients of Plazm design have included Nike, LucasFilm, ESPN, Burgerville, The Cooley Gallery, Portland Center Stage, Jantzen Swimwear, and MTV. Plazmfonts In 1993 Pete McCracken founded Plazmfonts in collaboration with the magazine. As Director of Plazmfonts division he led the creative efforts in designing the exclusive corporate typefaces for Nike, Adidas, and MTV. In 2006, McCracken left the magazine to create an independent branding, letter-founder, publisher, and typeface design studio called Plazmfonts. Nonprofit status In 2010, the nonprofit organization New Oregon Arts & Letters became the publisher of Plazm magazine, winning a Regional Arts & Culture Council Opportunity Grant for printing costs of Plazm Issue #30, and an Oregon Cultural Trust grant to aid in developing a new website at plazm.org. In 2017, PICA, the Portland Institute for Contemporary Art, became Plazm magazine's new nonprofit fiscal sponsor. Plazm and social responsibility Plazm publishes a statement of social responsibility and environmental sustainability. Co-founder and current principal Joshua Berger became known for his work in ecological concerns and recycling systems in the late 1980s and early 1990s, as noted by Oregon Business Journal and other magazines. The Feminist Review and Adbusters magazine have taken note of Plazm's work in social responsibility and gender equality; the former called the magazine "challenging and explicit." Plazm nonprofit clients and collaborators receiving pro bono or discounted work for social, artistic, community, and environmental causes include the PICA (Portland Institute for Contemporary Art), ORLO, Pacific Northwest College of Art, New Oregon Arts & Letters, Northwest Film and Video Festival, Red Bull Theater, and KMHD radio. Plazm's Joshua Berger has shown political art in Times Square in the Urban Forest Project, The Organ Review of Art, UMASS, Mark Woolley Gallery, the Public Works series at Someday, and in 2GQ, a publication of 2 Gyrlz Performative Arts. References External links Steven Heller interview with Plazm founder Joshua Berger The Back Room, January 2008 - Jon Raymond, Tiffany Lee Brown and Joshua Berger in discussion with Stephanie Snyder CreativePro.com - Design Doyenne: Plazm Media's Fluid Approach to Design 1991 establishments in Oregon Visual arts magazines published in the United States Design magazines Magazines established in 1991 Magazines published in Portland, Oregon
Plazm (magazine)
[ "Engineering" ]
1,296
[ "Design magazines", "Design" ]
8,920,453
https://en.wikipedia.org/wiki/Thomas%20William%20K%C3%B6rner
Thomas William Körner (born 17 February 1946) is a British pure mathematician and the author of three books on popular mathematics. He is titular Professor of Fourier Analysis in the University of Cambridge and a Fellow of Trinity Hall. He is the son of the philosopher Stephan Körner and of Edith Körner. He studied at Trinity Hall, Cambridge, and wrote his PhD thesis Some Results on Kronecker, Dirichlet and Helson Sets there in 1971, studying under Nicholas Varopoulos. In 1972 he won the Salem Prize. He has written academic mathematics books aimed at undergraduates: Fourier Analysis; Exercises for Fourier Analysis; A Companion to Analysis; Vectors, Pure and Applied; and Calculus for the Ambitious. He has also written three books aimed at secondary school students: the popular 1996 title The Pleasures of Counting, Naive Decision Making (published 2008) on probability, statistics and game theory, and Where Do Numbers Come From? (published October 2019). References External links Professor Körner's website 1946 births Living people Alumni of Trinity Hall, Cambridge Fellows of Trinity Hall, Cambridge 20th-century British mathematicians 21st-century British mathematicians Mathematical analysts Cambridge mathematicians
Thomas William Körner
[ "Mathematics" ]
226
[ "Mathematical analysis", "Mathematical analysts" ]
8,920,717
https://en.wikipedia.org/wiki/Roasting%20%28metallurgy%29
Roasting is a process of heating a sulfide ore to a high temperature in the presence of air. It is a step in the processing of certain ores. More specifically, roasting is often a metallurgical process involving gas–solid reactions at elevated temperatures with the goal of purifying the metal component(s). Often before roasting, the ore has already been partially purified, e.g. by froth flotation. The concentrate is mixed with other materials to facilitate the process. The technology is useful in making certain ores usable but it can also be a serious source of air pollution. Roasting consists of thermal gas–solid reactions, which can include oxidation, reduction, chlorination, sulfation, and pyrohydrolysis. In roasting, the ore or ore concentrate is treated with very hot air. This process is generally applied to sulfide minerals. During roasting, the sulfide is converted to an oxide, and sulfur is released as sulfur dioxide, a gas. For the ores Cu2S (chalcocite) and ZnS (sphalerite), balanced equations for the roasting are: 2 Cu2S + 3 O2 → 2 Cu2O + 2 SO2 2 ZnS + 3 O2 → 2 ZnO + 2 SO2 The gaseous product of sulfide roasting, sulfur dioxide (SO2), is often used to produce sulfuric acid. (A short stoichiometric sketch based on the sphalerite equation above appears at the end of this entry.) Many sulfide minerals contain other components such as arsenic that are released into the environment. Up until the early 20th century, roasting was started by burning wood on top of ore. This would raise the temperature of the ore to the point where its sulfur content would become its source of fuel, and the roasting process could continue without external fuel sources. Early sulfide roasting was practiced in this manner in "open hearth" roasters, which were manually stirred (a practice called "rabbling") using rake-like tools to expose unroasted ore to oxygen as the reaction proceeded. This process released large amounts of acidic, metallic, and other toxic compounds. As a result, some areas are still largely lifeless even after 60–80 years, often corresponding exactly to the area of the roast bed, some of which are hundreds of metres wide by kilometres long. Roasting is an exothermic process. Roasting operations The following describe different forms of roasting: Oxidizing roasting Oxidizing roasting, the most commonly practiced roasting process, involves heating the ore in an excess of air or oxygen, to burn out or replace the impurity element, generally sulfur, partly or completely with oxygen. For sulfide roasting, the general reaction can be given by: 2MS (s) + 3O2 (g) → 2MO (s) + 2SO2 (g) Roasting the sulfide ore until almost complete removal of the sulfur results in a dead roast. Volatilizing roasting Volatilizing roasting involves oxidation of the ores at elevated temperatures to eliminate impurity elements in the form of their volatile oxides. Examples of such volatile oxides include As2O3, Sb2O3, ZnO and sulfur oxides. Careful control of the oxygen content in the roaster is necessary, as excessive oxidation can form non-volatile oxides. Chloridizing roasting Chloridizing roasting transforms certain metal compounds to chlorides through oxidation or reduction. Some metals such as uranium, titanium, beryllium and some rare earths are processed in their chloride form. Certain forms of chloridizing roasting may be represented by the overall reactions: 2NaCl + MS + 2O2 → Na2SO4 + MCl2, 4NaCl + 2MO + S2 + 3O2 → 2Na2SO4 + 2MCl2 The first reaction represents the chlorination of a sulfide ore and is exothermic. 
The second reaction, involving an oxide ore, is facilitated by the addition of elemental sulfur. Carbonate ores react in a similar manner to the oxide ore, after decomposing to their oxide form at high temperatures. Sulfating roasting Sulfating roasting oxidizes certain sulfide ores to sulfates in a supply of air to enable leaching of the sulfate for further processing. Magnetic roasting Magnetic roasting involves controlled roasting of the ore to convert it into a magnetic form, thus enabling easy separation and processing in subsequent steps. An example is the controlled reduction of haematite (non-magnetic Fe2O3) to magnetite (magnetic Fe3O4). Reduction roasting Reduction roasting partially reduces an oxide ore before the actual smelting process. Sinter roasting Sinter roasting involves heating the fine ores at high temperatures, where simultaneous oxidation and agglomeration of the ores take place. For example, lead sulfide ores are subjected to sinter roasting in a continuous process after froth flotation to convert the fine ores to workable agglomerates for further smelting operations. References Metallurgy Metallurgical processes
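As a rough illustration of the sulfide-roasting stoichiometry above, the following Python sketch estimates the SO2 released and the O2 consumed when sphalerite concentrate is dead-roasted according to 2 ZnS + 3 O2 → 2 ZnO + 2 SO2. The molar masses are rounded and the function names are illustrative; this is a back-of-the-envelope sketch under idealized complete-conversion assumptions, not a process-engineering model.

```python
# Rough stoichiometry for sulfide roasting: 2 ZnS + 3 O2 -> 2 ZnO + 2 SO2.
# Molar masses are rounded; assumes complete conversion of pure ZnS.
M_ZNS, M_SO2, M_O2 = 97.4, 64.1, 32.0   # g/mol

def so2_from_zns(mass_zns_g: float) -> float:
    """Mass of SO2 (g) released by completely roasting mass_zns_g of ZnS."""
    mol_zns = mass_zns_g / M_ZNS
    return mol_zns * M_SO2              # 1 mol SO2 per mol ZnS (2:2 ratio)

def o2_required(mass_zns_g: float) -> float:
    """Mass of O2 (g) consumed (3 mol O2 per 2 mol ZnS)."""
    return (mass_zns_g / M_ZNS) * 1.5 * M_O2

tonne = 1_000_000.0  # grams
print(f"SO2 per tonne ZnS: {so2_from_zns(tonne) / 1e6:.2f} t")   # ~0.66 t
print(f"O2 per tonne ZnS:  {o2_required(tonne) / 1e6:.2f} t")    # ~0.49 t
```

In this idealized case roughly 0.66 t of SO2 is produced per tonne of ZnS, which is why the off-gas is commonly captured for sulfuric acid production rather than vented.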
Roasting (metallurgy)
[ "Chemistry", "Materials_science", "Engineering" ]
1,064
[ "Metallurgical processes", "Metallurgy", "nan", "Materials science" ]
8,920,916
https://en.wikipedia.org/wiki/John%20Marburger
John Harmen "Jack" Marburger III (February 8, 1941 – July 28, 2011) was an American physicist who directed the Office of Science and Technology Policy in the administration of President George W. Bush, serving as the Science Advisor to the President. His tenure was marred by controversy regarding his defense of the administration against allegations from over two dozen Nobel Laureates, amongst others, that scientific evidence was being suppressed or ignored in policy decisions, including those relating to stem cell research and global warming. However, he has also been credited with keeping the political effects of the September 11 attacks from harming science research—by ensuring that tighter visa controls did not hinder the movement of those engaged in scientific research—and with increasing awareness of the relationship between science and government. He also served as the President of Stony Brook University from 1980 until 1994, and director of Brookhaven National Laboratory from 1998 until 2001. Early life Marburger was born in Staten Island, New York, to Virginia Smith and John H. Marburger Jr., and grew up in Severna Park, Maryland. He attended Princeton University, graduating in 1962 with a B.A. in physics, followed by a Ph.D. in applied physics from Stanford University in 1967. After completing his education, he served as a professor of physics and electrical engineering at the University of Southern California beginning in 1966, specializing in the theoretical physics of nonlinear optics and quantum optics, and co-founded the Center for Laser Studies at that institution. He rose to become chairman of the physics department in 1972, and then dean of the College of Letters, Arts and Sciences in 1976. He was engaged as a public speaker on science, including hosting a series of educational television programs on CBS. He was also outspoken on campus issues, and was designated the university's spokesperson during a scandal over preferential treatment of athletes. Stony Brook University In 1980, Marburger left USC to become the third president of Stony Brook University in Long Island, New York. At the time, state budget cuts were afflicting the university, and he returned it to growth with increases in the university's science research funding from the federal government. From 1988 to 1994, Marburger chaired Universities Research Association, the organization that operated Fermilab and oversaw construction of the ill-fated Superconducting Super Collider, an experience that is credited with convincing him of the influence government had in how science is carried out. During this time he also served as a trustee of Princeton University. He stepped down as President of Stony Brook University in 1994, and began doing research again as a member of the faculty. Chair of Shoreham commission In 1983, he was picked by New York Governor Mario Cuomo to chair a scientific fact-finding commission on the Shoreham Nuclear Power Plant, a job that required him to find common ground between the many viewpoints represented on the commission. The commission eventually recommended the closure of the plant, a course he personally disagreed with. Cuomo had formed the commission in mid-May 1983 to provide him with recommendations regarding the plant's safety, the adequacy of emergency plans, and the economics of operating the plant. 
The commission's consensus recommendations included unanimous findings that no emergency evacuation of the plant could be conducted without the cooperation of Suffolk County, which was refusing to approve an evacuation plan; that the construction of the plant would have been prevented if it had been started after new Nuclear Regulatory Commission regulations were put into effect after the Three Mile Island accident in 1979; and that operating the plant would not reduce utility costs. Marburger himself at the time emphasized that the governor had not been seeking a consensus but rather encouraged multiple viewpoints to be reflected, and characterized the consensus conclusions as not the only important section of the report. Marburger characterized his participation as a learning experience, and the experience was credited with profoundly changing his view on the relationship between the scientific community and the public. He had never been to a public hearing prior to his participation in the Shoreham commission, and he said that he had initially expected that the issues could be resolved by examining scientific data and establishing failure probabilities. However, he quickly became aware of the importance of the public participation process itself, stating that it was "one of the rare opportunities for the public to feel they were being heard and taken seriously." Marburger's conduct on the committee was praised by activists on both sides of the debate, with his focus on listening to all viewpoints and his ability to not take disagreements personally being especially noted. Brookhaven National Laboratory In January 1998, Marburger became president of Brookhaven Science Associates, which subsequently won a bid to operate Brookhaven National Laboratory for the federal government, and he became the director of the lab. He took office after a highly publicized scandal in which tritium leaked from the lab's High Flux Beam Reactor, leading to calls by activists to shut down the lab. Rather than directly oppose the activists, Marburger created policies that improved the environmental management of the lab as well as community involvement and transparency. Marburger also presided over the commissioning of the Relativistic Heavy Ion Collider, expanded the lab's program in medical imaging and neuroscience, and placed more emphasis on its technology transfer program. The tritium leak, combined with other disclosures about improper handling and disposal of hazardous waste, had caused Secretary of Energy Federico Peña to fire the lab's previous manager, Associated Universities, Inc. Upon starting as the laboratory's director, Marburger noted the increased importance of health and environmental concerns since the beginning of the Cold War, stating that "getting the people at Brookhaven to understand that won't be simple, and there may be some disagreement on how we should do it, but that's my job." Marburger set up a permanent community advisory council and met with local environmental groups to increase communication between them and the laboratory's management. By 2001, when Marburger left to join the Bush administration, local environmental groups credited him with having largely dissipated the distrust that had existed between the groups when he started. In 2001 he was elected a Fellow of the American Physical Society for "his contributions to laser physics and for his scientific leadership as Director of Brookhaven National Laboratory". 
Bush administration In September 2001, Marburger became Director of the Office of Science and Technology Policy under George W. Bush. Marburger was a noted Democrat, a fact that Nature magazine stated was relevant to the decision by the administration to take the unusual step of withholding from Marburger the title of Assistant to the President that previous science advisors had been granted. His tenure was marked by controversy as he defended the Bush administration from accusations that political influence on science was distorting scientific research in federal agencies and that scientific evidence was being suppressed or ignored in policy decisions, especially on the topics of abstinence-only birth control education, climate change policy, and stem cell research. Marburger defended the Bush Administration from these accusations, saying they were inaccurate or motivated by partisanship, especially on the issue of science funding levels. Marburger continued to be personally respected by many of his academic colleagues. Marburger's tenure as Director was the longest in the history of that post. After the September 11 attacks, he helped to establish the DHS Directorate for Science and Technology within the new Department of Homeland Security. He has been called a central player opposing new restrictions of international scientific exchanges of people and ideas after the attacks. He later was responsible for reorienting the nation's space policy after the Space Shuttle Columbia disaster, and played an important part in the nation's re-entry into the International Thermonuclear Experimental Reactor program. Marburger was also known for his support of the emerging field of science of science policy, which seeks to analyze how science policy decisions affects a nation's ability to produce and benefit from innovation. In February 2004, the Union of Concerned Scientists published a report accusing the Bush administration of manipulating science for political purposes, listing more than 20 alleged incidents of censoring scientific results or applying a litmus test in the appointment of supposedly scientific advisory panel members. In April 2004, Marburger published a statement rebutting the report and exposing errors and incomplete explanations in it, and stating that "even when the science is clear—and often it is not—it is but one input into the policy process," but "in this Administration, science strongly informs policy." The Union of Concerned Scientists issued a revised version of their report after Marburger's statement was published. Marburger also called the report's conclusions illusory and the result of focusing on unrelated incidents within a vast government apparatus, and attributed the controversy as being related to the upcoming elections. It was noted that Marburger enjoyed close personal relationships with President Bush, White House Chief of Staff Andrew Card and Office of Management and Budget Director Joshua Bolten, attesting to his active involvement within the administration. Marburger responded to criticism of his support for Bush administration policies in 2004, stating "No one will know my personal positions on issues as long as I am in this job. I am here to make sure that the science input to policy making is sound and that the executive branch functions properly with respect to its science and technology missions." On the topic of stem cell research, he in 2004 said that stem cells "offer great promise for addressing incurable diseases and afflictions. 
But I can't tell you when a fertilized egg becomes sacred. That's not my job. That's not a science issue. And so whatever I think about reproductive technology or choice, or whatever, is irrelevant to my job as a science adviser." However, in February 2005, in a speech at the annual conference of the National Association of Science Writers, he stated, "Intelligent design is not a scientific theory.... I don't regard intelligent design as a scientific topic". Also In 2005, he told The New York Times that "global warming exists, and we have to do something about it." Sherwood Boehlert, the Republican chair of the House Committee on Science during most of Marburger's tenure, said that "the challenge he faced was serving a president who didn't really want much scientific advice, and who let politics dictate the direction of his science policy... and he was in the unenviable position of being someone who had earned the respect of his scientific colleagues while having to be identified with policies that were not science-based." On the other hand, Robert P. Crease, a colleague of Marburger at Stony Brook University, characterized him as someone who "[went] to the White House as a scientist, not an advocate. He refused to weigh in on high-profile, politically controversial issues, but instead set about fixing broken connections in the unwieldy machinery by which the government approves and funds scientific projects.... Some bitterly criticized him for collaborating with the Bush administration. But he left the office running better than when he entered." Later life Marburger returned to Stony Brook University as a faculty member in 2009, and co-edited the book The Science of Science Policy: A Handbook, which was published in 2011. He also served as Vice President for Research but stepped down on July 1, 2011. Marburger died Thursday, July 28, 2011, at his home in Port Jefferson, New York, after four years of treatment for non-Hodgkin's lymphoma. He was survived by his wife, two sons, and a grandson. His final publication, a book on quantum physics for laypeople called Constructing Reality: Quantum Theory and Particle Physics, was published shortly after his death. References External links |- 1941 births 2011 deaths 20th-century American writers 21st-century American non-fiction writers American nuclear physicists American science writers United States biotechnology law Brookhaven National Laboratory staff California Democrats Deaths from lymphoma in New York (state) Deaths from non-Hodgkin lymphoma Energy policy of the United States Fellows of the American Physical Society Fermilab George W. Bush administration personnel NASA oversight New York (state) Democrats Nonlinear optics Nuclear energy in the United States People from Severna Park, Maryland Scientists from Los Angeles People from Port Jefferson, New York People from Staten Island Princeton University alumni Quantum optics American quantum physicists Space policy Stanford University alumni Stem cell research Presidents of Stony Brook University American theoretical physicists United States Department of Homeland Security officials University of Southern California faculty Scientists from New York (state) Directors of the Office of Science and Technology Policy
John Marburger
[ "Physics", "Chemistry", "Biology" ]
2,582
[ "Biotechnology law", "Stem cell research", "Quantum optics", "Quantum mechanics", "Translational medicine", "Tissue engineering", "United States biotechnology law" ]
8,921,015
https://en.wikipedia.org/wiki/Types%20of%20volcanic%20eruptions
Several types of volcanic eruptions—during which material is expelled from a volcanic vent or fissure—have been distinguished by volcanologists. These are often named after famous volcanoes where that type of behavior has been observed. Some volcanoes may exhibit only one characteristic type of eruption during a period of activity, while others may display an entire sequence of types all in one eruptive series. There are three main types of volcanic eruption: Magmatic eruptions are the most well-observed type of eruption. They involve the decompression of gas within magma that propels it forward. Phreatic eruptions are driven by the superheating of steam due to the close proximity of magma. This type exhibits no magmatic release, instead causing the granulation of existing rock. Phreatomagmatic eruptions are driven by the direct interaction of magma and water, as opposed to phreatic eruptions, where no fresh magma reaches the surface. Within these broad eruptive types are several subtypes. The weakest are Hawaiian and submarine, then Strombolian, followed by Vulcanian and Surtseyan. The stronger eruptive types are Pelean eruptions, followed by Plinian eruptions; the strongest eruptions are called Ultra-Plinian. Subglacial and phreatic eruptions are defined by their eruptive mechanism, and vary in strength. An important measure of eruptive strength is the Volcanic Explosivity Index an order-of-magnitude scale, ranging from 0 to 8, that often correlates to eruptive types. Mechanisms Volcanic eruptions arise through three main mechanisms: Gas release under decompression, causing magmatic eruptions Ejection of entrained particles during steam eruptions, causing phreatic eruptions Thermal contraction from chilling on contact with water, causing phreatomagmatic eruptions In terms of activity, there are explosive eruptions and effusive eruptions. The former are characterized by gas-driven explosions that propel magma and tephra. The latter pour out lava without significant explosion. Impact Volcanic eruptions vary widely in strength. On the one extreme there are effusive Hawaiian eruptions, which are characterized by lava fountains and fluid lava flows, which are typically not very dangerous. On the other extreme, Plinian eruptions are large, violent, and highly dangerous explosive events. Volcanoes are not bound to one eruptive style, and frequently display many different types, both passive and explosive, even in the span of a single eruptive cycle. Volcanoes do not always erupt vertically from a single crater near their peak, either. Some volcanoes exhibit lateral and fissure eruptions. Notably, many Hawaiian eruptions start from rift zones. Scientists believed that pulses of magma mixed together in the magma chamber before climbing upward—a process estimated to take several thousands of years. Columbia University volcanologists found that the eruption of Costa Rica's Irazú Volcano in 1963 was likely triggered by magma that took a nonstop route from the mantle over just a few months. Volcanic explosivity index The volcanic explosivity index (commonly shortened to VEI) is a scale, from 0 to 8, for measuring the strength of eruptions but does not capture all of the properties that may be perceived to be important. It is used by the Smithsonian Institution's Global Volcanism Program in assessing the impact of historic and prehistoric lava flows. 
It operates in a way similar to the Richter scale for earthquakes, in that each interval in value represents a tenfold increase in magnitude (it is logarithmic). The vast majority of volcanic eruptions have VEIs between 0 and 2. Magmatic Magmatic eruptions produce juvenile clasts during explosive decompression from gas release. They range in intensity from the relatively small lava fountains on Hawaii to catastrophic Ultra-Plinian eruption columns more than high, bigger than the eruption of Mount Vesuvius in 79 AD that buried Pompeii. Hawaiian Hawaiian eruptions are a type of volcanic eruption named after the Hawaiian volcanoes, such as Mauna Loa, of which this eruptive type is the hallmark. Hawaiian eruptions are the calmest type of volcanic event, characterized by the effusive eruption of very fluid basalt-type lavas with low gaseous content. The volume of ejected material from Hawaiian eruptions is less than half of that found in other eruptive types. Steady production of small amounts of lava builds up the large, broad form of a shield volcano. Eruptions are not centralized at the main summit as with other volcanic types, and often occur at vents around the summit and from fissure vents radiating out of the center. Hawaiian eruptions often begin as a line of vent eruptions along a fissure vent, a so-called "curtain of fire." These die down as the lava begins to concentrate at a few of the vents. Central-vent eruptions, meanwhile, often take the form of large lava fountains (both continuous and sporadic), which can reach heights of hundreds of meters or more. The particles from lava fountains usually cool in the air before hitting the ground, resulting in the accumulation of cindery scoria fragments; when the air is especially thick with clasts, they cannot cool off fast enough due to the surrounding heat, and hit the ground still hot, the accumulation of which forms spatter cones. If eruptive rates are high enough, they may even form spatter-fed lava flows. Hawaiian eruptions are often extremely long lived; Puʻu ʻŌʻō, a volcanic cone on Kilauea, erupted continuously for over 35 years. Another Hawaiian volcanic feature is the formation of active lava lakes, self-maintaining pools of raw lava with a thin crust of semi-cooled rock. Flows from Hawaiian eruptions are basaltic, and can be divided into two types by their structural characteristics. Pahoehoe lava is a relatively smooth lava flow that can be billowy or ropey. It can move as one sheet, by the advancement of "toes", or as a snaking lava column. A'a lava flows are denser and more viscous than pahoehoe, and tend to move more slowly. Flows can measure thick. A'a flows are so thick that the outside layers cool into a rubble-like mass, insulating the still-hot interior and preventing it from cooling. A'a lava moves in a peculiar way: the front of the flow steepens due to pressure from behind until it breaks off, after which the general mass behind it moves forward. Pahoehoe lava can sometimes become A'a lava due to increasing viscosity or increasing rate of shear, but A'a lava never turns back into pahoehoe flow. Hawaiian eruptions are responsible for several unique volcanological objects. Small volcanic particles are carried and formed by the wind, chilling quickly into teardrop-shaped glassy fragments known as Pele's tears (after Pele, the Hawaiian volcano deity). During especially high winds these chunks may even take the form of long drawn-out strands, known as Pele's hair. 
Sometimes basalt aerates into reticulite, the lowest density rock type on earth. Although Hawaiian eruptions are named after the volcanoes of Hawaii, they are not necessarily restricted to them; the highest lava fountain recorded was during the 23 November 2013 eruption of Mount Etna in Italy, which reached a stable height of around for 18 minutes, briefly peaking at a height of . Volcanoes known to have Hawaiian activity include: Puʻu ʻŌʻō, a parasitic cinder cone located on Kilauea on the island of Hawaii which erupted continuously from 1983 to 2018. The eruptions began with a -long fissure-based "curtain of fire" on 3 January 1983. These gave way to centralized eruptions on the site of Kilauea's east rift, eventually building up the cone. For a list of all of the volcanoes of Hawaii, see List of volcanoes in the Hawaiian – Emperor seamount chain. Mount Etna, Italy. Mount Mihara in 1986 (see above paragraph) Strombolian Strombolian eruptions are a type of volcanic eruption named after the volcano Stromboli, which has been erupting nearly continuously for centuries. Strombolian eruptions are driven by the bursting of gas bubbles within the magma. These gas bubbles within the magma accumulate and coalesce into large bubbles, called gas slugs. These grow large enough to rise through the lava column. Upon reaching the surface, the difference in air pressure causes the bubble to burst with a loud pop, throwing magma in the air in a way similar to a soap bubble. Because of the high gas pressures associated with the lavas, continued activity is generally in the form of episodic explosive eruptions accompanied by the distinctive loud blasts. During eruptions, these blasts occur as often as every few minutes. The term "Strombolian" has been used indiscriminately to describe a wide variety of volcanic eruptions, varying from small volcanic blasts to large eruptive columns. In reality, true Strombolian eruptions are characterized by short-lived and explosive eruptions of lavas with intermediate viscosity, often ejected high into the air. Columns can measure hundreds of meters in height. The lavas formed by Strombolian eruptions are a form of relatively viscous basaltic lava, and its end product is mostly scoria. The relative passivity of Strombolian eruptions, and its non-damaging nature to its source vent allow Strombolian eruptions to continue unabated for thousands of years, and also makes it one of the least dangerous eruptive types. Strombolian eruptions eject volcanic bombs and lapilli fragments that travel in parabolic paths before landing around their source vent. The steady accumulation of small fragments builds cinder cones composed completely of basaltic pyroclasts. This form of accumulation tends to result in well-ordered rings of tephra. Strombolian eruptions are similar to Hawaiian eruptions, but there are differences. Strombolian eruptions are noisier, produce no sustained eruptive columns, do not produce some volcanic products associated with Hawaiian volcanism (specifically Pele's tears and Pele's hair), and produce fewer molten lava flows (although the eruptive material does tend to form small rivulets). Volcanoes known to have Strombolian activity include: Parícutin, Mexico, which erupted from a fissure in a cornfield in 1943. Two years into its life, pyroclastic activity began to wane, and the outpouring of lava from its base became its primary mode of activity. Eruptions ceased in 1952, and the final height was . 
This was the first time that scientists are able to observe the complete life cycle of a volcano. Mount Etna, Italy, which has displayed Strombolian activity in recent eruptions, for example in 1981, 1999, 2002–2003, and 2009. Mount Erebus in Antarctica, the southernmost active volcano in the world, having been observed erupting since 1972. Eruptive activity at Erebus consists of frequent Strombolian activity. Mount Batutara, Indonesia, exhibited continuous Strombolian eruption since 2014. Stromboli itself. The namesake of the mild explosive activity that it possesses has been active throughout historical time; essentially continuous Strombolian eruptions, occasionally accompanied by lava flows, have been recorded at Stromboli for more than a millennium. Vulcanian Vulcanian eruptions are a type of volcanic eruption named after the volcano Vulcano. It was named so following Giuseppe Mercalli's observations of its 1888–1890 eruptions. In Vulcanian eruptions, intermediate viscous magma within the volcano make it difficult for vesiculate gases to escape. Similar to Strombolian eruptions, this leads to the buildup of high gas pressure, eventually popping the cap holding the magma down and resulting in an explosive eruption. Unlike Strombolian eruptions, ejected lava fragments are not aerodynamic; this is due to the higher viscosity of Vulcanian magma and the greater incorporation of crystalline material broken off from the former cap. They are also more explosive than their Strombolian counterparts, with eruptive columns often reaching between high. Lastly, Vulcanian deposits are andesitic to dacitic rather than basaltic. Initial Vulcanian activity is characterized by a series of short-lived explosions, lasting a few minutes to a few hours and typified by the ejection of volcanic bombs and blocks. These eruptions wear down the lava dome holding the magma down, and it disintegrates, leading to much more quiet and continuous eruptions. Thus an early sign of future Vulcanian activity is lava dome growth, and its collapse generates an outpouring of pyroclastic material down the volcano's slope. Deposits near the source vent consist of large volcanic blocks and bombs, with so-called "bread-crust bombs" being especially common. These deeply cracked volcanic chunks form when the exterior of ejected lava cools quickly into a glassy or fine-grained shell, but the inside continues to cool and vesiculate. The center of the fragment expands, cracking the exterior. The bulk of Vulcanian deposits are fine grained ash. The ash is only moderately dispersed, and its abundance indicates a high degree of fragmentation, the result of high gas contents within the magma. In some cases these have been found to be the result of interaction with meteoric water, suggesting that Vulcanian eruptions are partially hydrovolcanic. Volcanoes that have exhibited Vulcanian activity include: Sakurajima, Japan has been the site of Vulcanian activity near-continuously since 1955. Tavurvur, Papua New Guinea, one of several volcanoes in the Rabaul Caldera. Irazú Volcano in Costa Rica exhibited Vulcanian activity in its 1963–1965 eruption. Anak Krakatoa, Indonesia, repeated vulcanian activities since its rise in 1930 until the present time. Vulcanian eruptions are estimated to make up at least half of all known Holocene eruptions. 
Peléan Peléan eruptions (or nuée ardente) are a type of volcanic eruption named after the volcano Mount Pelée in Martinique, the site of a Peléan eruption in 1902 that is one of the worst natural disasters in history. In Peléan eruptions, a large amount of gas, dust, ash, and lava fragments are blown out the volcano's central crater, driven by the collapse of rhyolite, dacite, and andesite lava domes that often creates large eruptive columns. An early sign of a coming eruption is the growth of a so-called Peléan or lava spine, a bulge in the volcano's summit preempting its total collapse. The material collapses upon itself, forming a fast-moving pyroclastic flow (known as a block-and-ash flow) that moves down the side of the mountain at tremendous speeds, often over per hour. These landslides make Peléan eruptions one of the most dangerous in the world, capable of tearing through populated areas and causing serious loss of life. The 1902 eruption of Mount Pelée caused tremendous destruction, killing more than 30,000 people and completely destroying St. Pierre, the worst volcanic event in the 20th century. Peléan eruptions are characterized most prominently by the incandescent pyroclastic flows that they drive. The mechanics of a Peléan eruption are very similar to that of a Vulcanian eruption, except that in Peléan eruptions the volcano's structure is able to withstand more pressure, hence the eruption occurs as one large explosion rather than several smaller ones. Volcanoes known to have Peléan activity include: Mount Pelée, Martinique. The 1902 eruption of Mount Pelée completely devastated the island, destroying St. Pierre and leaving only 3 survivors. The eruption was directly preceded by lava dome growth. Mayon Volcano, the Philippines most active volcano. It has been the site of many different types of eruptions, Peléan included. Approximately 40 ravines radiate from the summit and provide pathways for frequent pyroclastic flows and mudflows to the lowlands below. Mayon's most violent eruption occurred in 1814 and was responsible for over 1200 deaths. The 1951 eruption of Mount Lamington. Prior to this eruption the peak had not even been recognized as a volcano. Over 3,000 people were killed, and it has become a benchmark for studying large Peléan eruptions. Mount Sinabung, Indonesia. History of its eruptions since 2013 are showing the volcano emits pyroclastic flows with frequent collapses of its lava domes. Plinian Plinian eruptions (or Vesuvian eruptions) are a type of volcanic eruption named for the historical eruption of Mount Vesuvius in 79 AD that buried the Roman towns of Pompeii and Herculaneum and, specifically, for its chronicler Pliny the Younger. The process powering Plinian eruptions starts in the magma chamber, where dissolved volatile gases are stored in the magma. The gases vesiculate and accumulate as they rise through the magma conduit. These bubbles agglutinate and once they reach a certain size (about 75% of the total volume of the magma conduit) they explode. The narrow confines of the conduit force the gases and associated magma up, forming an eruptive column. Eruption velocity is controlled by the gas contents of the column, and low-strength surface rocks commonly crack under the pressure of the eruption, forming a flared outgoing structure that pushes the gases even faster. These massive eruptive columns are the distinctive feature of a Plinian eruption, and reach up into the atmosphere. 
The densest part of the plume, directly above the volcano, is driven internally by gas expansion. As it reaches higher into the air the plume expands and becomes less dense, convection and thermal expansion of volcanic ash drive it even further up into the stratosphere. At the top of the plume, powerful winds may drive the plume away from the volcano. These highly explosive eruptions are usually associated with volatile-rich dacitic to rhyolitic lavas, and occur most typically at stratovolcanoes. Eruptions can last anywhere from hours to days, with longer eruptions being associated with more felsic volcanoes. Although they are usually associated with felsic magma, Plinian eruptions can occur at basaltic volcanoes, if the magma chamber differentiates with upper portions rich in silicon dioxide, or if magma ascends rapidly. Plinian eruptions are similar to both Vulcanian and Strombolian eruptions, except that rather than creating discrete explosive events, Plinian eruptions form sustained eruptive columns. They are also similar to Hawaiian lava fountains in that both eruptive types produce sustained eruption columns maintained by the growth of bubbles that move up at about the same speed as the magma surrounding them. Regions affected by Plinian eruptions are subjected to heavy pumice airfall affecting an area in size. The material in the ash plume eventually finds its way back to the ground, covering the landscape in a thick layer of many cubic kilometers of ash. The most dangerous eruptive feature are the pyroclastic flows generated by material collapse, which move down the side of the mountain at extreme speeds of up to per hour and with the ability to extend the reach of the eruption hundreds of kilometers. The ejection of hot material from the volcano's summit melts snowbanks and ice deposits on the volcano, which mixes with tephra to form lahars, fast moving mudflows with the consistency of wet concrete that move at the speed of a river rapid. Major Plinian eruptive events include: The AD 79 eruption of Mount Vesuvius buried the Roman towns of Pompeii and Herculaneum under a layer of ash and tephra. It is the model Plinian eruption. Mount Vesuvius has erupted several times since then. Its last eruption was in 1944 and caused problems for the allied armies as they advanced through Italy. It was the contemporary report by Pliny the Younger that led scientists to refer to Vesuvian eruptions as "Plinian". The 1980 eruption of Mount St. Helens in Washington, which ripped apart the volcano's summit, was a Plinian eruption of Volcanic Explosivity Index (VEI) 5. The strongest types of eruptions, with a VEI of 8, are so-called "Ultra-Plinian" eruptions, such as the one at Lake Toba 74 thousand years ago, which put out 2800 times the material erupted by Mount St. Helens in 1980. Hekla in Iceland, an example of basaltic Plinian volcanism being its 1947–48 eruption. The past 800 years have been a pattern of violent initial eruptions of pumice followed by prolonged extrusion of basaltic lava from the lower part of the volcano. Pinatubo in the Philippines on 15 June 1991, which produced of dacitic magma, a high eruption column, and released 17 megatons of sulfur dioxide. Kelud, Indonesia erupted in 2014 and ejected around volcanic ashes which caused economic disruptions across Java. Phreatomagmatic Phreatomagmatic eruptions are eruptions that arise from interactions between water and magma. 
They are driven by thermal contraction of magma when it comes in contact with water (as distinguished from magmatic eruptions, which are driven by thermal expansion). This temperature difference between the two causes violent water-lava interactions that make up the eruption. The products of phreatomagmatic eruptions are believed to be more regular in shape and finer grained than the products of magmatic eruptions because of the differences in eruptive mechanisms. There is debate about the exact nature of phreatomagmatic eruptions, and some scientists believe that fuel-coolant reactions may be more critical to the explosive nature than thermal contraction. Fuel coolant reactions may fragment the volcanic material by propagating stress waves, widening cracks and increasing surface area that ultimately leads to rapid cooling and explosive contraction-driven eruptions. Surtseyan A Surtseyan (or hydrovolcanic) eruption is a type of volcanic eruption characterized by shallow-water interactions between water and lava, named after its most famous example, the eruption and formation of the island of Surtsey off the coast of Iceland in 1963. Surtseyan eruptions are the "wet" equivalent of ground-based Strombolian eruptions, but because they take place in water they are much more explosive. As water is heated by lava, it flashes into steam and expands violently, fragmenting the magma it contacts into fine-grained ash. Surtseyan eruptions are typical of shallow-water volcanic oceanic islands, but they are not confined to seamounts. They can happen on land as well, where rising magma that comes into contact with an aquifer (water-bearing rock formation) at shallow levels under the volcano can cause them. The products of Surtseyan eruptions are generally oxidized palagonite basalts (though andesitic eruptions do occur, albeit rarely), and like Strombolian eruptions Surtseyan eruptions are generally continuous or otherwise rhythmic. A defining feature of a Surtseyan eruption is the formation of a pyroclastic surge (or base surge), a ground hugging radial cloud that develops along with the eruption column. Base surges are caused by the gravitational collapse of a vaporous eruptive column, one that is denser overall than a regular volcanic column. The densest part of the cloud is nearest to the vent, resulting in a wedge shape. Associated with these laterally moving rings are dune-shaped depositions of rock left behind by the lateral movement. These are occasionally disrupted by bomb sags, rock that was flung out by the explosive eruption and followed a ballistic path to the ground. Accumulations of wet, spherical ash known as accretionary lapilli are another common surge indicator. Over time Surtseyan eruptions tend to form maars, broad low-relief volcanic craters dug into the ground, and tuff rings, circular structures built of rapidly quenched lava. These structures are associated with single vent eruptions. If eruptions arise along fracture zones, rift zones may be dug out. Such eruptions tend to be more violent than those which form tuff rings or maars, an example being the 1886 eruption of Mount Tarawera. Littoral cones are another hydrovolcanic feature, generated by the explosive deposition of basaltic tephra (although they are not truly volcanic vents). They form when lava accumulates within cracks in lava, superheats and explodes in a steam explosion, breaking the rock apart and depositing it on the volcano's flank. Consecutive explosions of this type eventually generate the cone. 
Volcanoes known to have Surtseyan activity include: Surtsey, Iceland. The volcano built itself up from depth and emerged above the Atlantic Ocean off the coast of Iceland in 1963. Initial hydrovolcanics were highly explosive, but as the volcano grew, rising lava interacted less with water and more with air, until finally Surtseyan activity waned and became more Strombolian. Ukinrek maars in Alaska, 1977, and Capelinhos in the Azores, 1957, both examples of above-water Surtseyan activity. Mount Tarawera in New Zealand erupted along a rift zone in 1886, killing 150 people. Ferdinandea, a seamount in the Mediterranean Sea, breached sea level in July 1831 and caused a sovereignty dispute between Italy, France, and Great Britain. The volcano did not build tuff cones strong enough to withstand erosion and soon disappeared back below the waves. The underwater volcano Hunga Tonga in Tonga breached sea level in 2009. Both of its vents exhibited Surtseyan activity for much of the time. It was also the site of an earlier eruption in May 1988. Submarine Submarine eruptions occur underwater. An estimated 75% of volcanic eruptive volume is generated by submarine eruptions near mid ocean ridges alone. Problems detecting deep sea volcanic eruptions meant their details were virtually unknown until advances in the 1990s made it possible to observe them. Submarine eruptions may produce seamounts, which may break the surface and form volcanic islands. Submarine volcanism is driven by various processes. Volcanoes near plate boundaries and mid-ocean ridges are built by the decompression melting of mantle rock that rises on an upwelling portion of a convection cell to the crustal surface. Eruptions associated with subducting zones, meanwhile, are driven by subducting plates that add volatiles to the rising plate, lowering its melting point. Each process generates different rock; mid-ocean ridge volcanics are primarily basaltic, whereas subduction flows are mostly calc-alkaline, and more explosive and viscous. Spreading rates along mid-ocean ridges vary widely, from per year at the Mid-Atlantic Ridge, to up to along the East Pacific Rise. Higher spreading rates are a probable cause for higher levels of volcanism. The technology for studying seamount eruptions did not exist until advancements in hydrophone technology made it possible to "listen" to acoustic waves, known as T-waves, released by submarine earthquakes associated with submarine volcanic eruptions. The reason for this is that land-based seismometers cannot detect sea-based earthquakes below a magnitude of 4, but acoustic waves travel well in water and over long periods of time. A system in the North Pacific, maintained by the United States Navy and originally intended for the detection of submarines, has detected an event on average every 2 to 3 years. The most common underwater flow is pillow lava, a rounded lava flow named for its unusual shape. Less common are glassy, marginal sheet flows, indicative of larger-scale flows. Volcaniclastic sedimentary rocks are common in shallow-water environments. As plate movement starts to carry the volcanoes away from their eruptive source, eruption rates start to die down, and water erosion grinds the volcano down. The final stages of eruption cap the seamount in alkalic flows. There are about 100,000 deepwater volcanoes in the world, although most are beyond the active stage of their life. Some exemplary seamounts are Kamaʻehuakanaloa (formerly Loihi), Bowie Seamount, Davidson Seamount, and Axial Seamount. 
Subglacial Subglacial eruptions are a type of volcanic eruption characterized by interactions between lava and ice, often under a glacier. The nature of glaciovolcanism dictates that it occurs at areas of high latitude and high altitude. It has been suggested that subglacial volcanoes that are not actively erupting often dump heat into the ice covering them, producing meltwater. This meltwater means that subglacial eruptions often generate dangerous jökulhlaups (floods) and lahars. The study of glaciovolcanism is still a relatively new field. Early accounts described the unusual flat-topped, steep-sided volcanoes (called tuyas) in Iceland that were suggested to have formed from eruptions below ice. The first English-language paper on the subject was published in 1947 by William Henry Mathews, describing the Tuya Butte field in northwest British Columbia, Canada. The eruptive process that builds these structures, originally inferred in that paper, begins with volcanic growth below the glacier. At first the eruptions resemble those that occur in the deep sea, forming piles of pillow lava at the base of the volcanic structure. Some of the lava shatters when it comes in contact with the cold ice, forming a glassy breccia called hyaloclastite. After a while the ice finally melts into a lake, and the more explosive eruptions of Surtseyan activity begin, building up flanks made up mostly of hyaloclastite. Eventually the lake boils off from continued volcanism, and the lava flows become more effusive and thicken as the lava cools much more slowly, often forming columnar jointing. Well-preserved tuyas show all of these stages, for example Hjorleifshofdi in Iceland. Products of volcano-ice interactions stand as various structures, whose shape is dependent on complex eruptive and environmental interactions. Glacial volcanism is a good indicator of past ice distribution, making it an important climatic marker. Since they are embedded in ice, as glacial ice retreats worldwide there are concerns that tuyas and other structures may destabilize, resulting in mass landslides. Evidence of volcano-glacier interactions can be seen in Iceland and parts of British Columbia, and it is even possible that they play a role in deglaciation. Glaciovolcanic products have been identified in Iceland, the Canadian province of British Columbia, the U.S. states of Hawaii and Alaska, the Cascade Range of western North America, South America and even on the planet Mars. Volcanoes known to have subglacial activity include: Mauna Kea in tropical Hawaii. There is evidence of past subglacial eruptive activity on the volcano in the form of a subglacial deposit on its summit. The eruptions originated about 10,000 years ago, during the last ice age, when the summit of Mauna Kea was covered in ice. In 2008, the British Antarctic Survey reported a volcanic eruption under the Antarctic ice sheet 2,200 years ago, believed to be the biggest eruption in Antarctica in the last 10,000 years. Volcanic ash deposits from the volcano were identified through an airborne radar survey, buried under later snowfalls in the Hudson Mountains, close to Pine Island Glacier. Iceland, well known for both glaciers and volcanoes, is often a site of subglacial eruptions. An example is an eruption under the Vatnajökull ice cap in 1996, which occurred under an estimated of ice. As part of the search for life on Mars, scientists have suggested that there may be subglacial volcanoes on the red planet. 
Several potential sites of such volcanism have been reviewed, and compared extensively with similar features in Iceland. Phreatic Phreatic eruptions (or steam-blast eruptions) are a type of eruption driven by the expansion of steam. When cold ground water or surface water comes into contact with hot rock or magma, it superheats and explodes, fracturing the surrounding rock and thrusting out a mixture of steam, water, ash, volcanic bombs, and volcanic blocks. The distinguishing feature of phreatic explosions is that they only blast out fragments of pre-existing solid rock from the volcanic conduit; no new magma is erupted. Because they are driven by the cracking of rock strata under pressure, phreatic activity does not always result in an eruption; if the rock face is strong enough to withstand the explosive force, outright eruptions may not occur, although cracks in the rock will probably develop and weaken it, furthering future eruptions. Often a precursor of future volcanic activity, phreatic eruptions are generally weak, although there have been exceptions. Some phreatic events may be triggered by earthquake activity, another volcanic precursor, and they may also travel along dike lines. Phreatic eruptions form base surges, lahars, avalanches, and volcanic block "rain." They may also release deadly toxic gas able to suffocate anyone in range of the eruption. Volcanoes known to exhibit phreatic activity include: Mount St. Helens, which exhibited phreatic activity just prior to its catastrophic 1980 eruption (which was itself Plinian). Taal Volcano, Philippines, 1965 and 2020. La Soufrière of Guadeloupe (Lesser Antilles), 1975–1976 activity. Soufrière Hills volcano on Montserrat, West Indies, 1995–2012. Poás Volcano, which has frequent geyser-like phreatic eruptions from its crater lake. Mount Bulusan, well known for its sudden phreatic eruptions. Mount Ontake, all of whose historical eruptions have been phreatic, including the deadly 2014 eruption. Mount Kerinci, Indonesia, produces almost annual phreatic eruptions. See also References Further reading This is the original landmark paper by William Henry Mathews that first described tuyas and subglacial eruptions. External links Category:Volcanic eruptions at Wikimedia Commons (Videos) USGS Hawaiian Volcano Observatory (HVO) homepage. USGS. Distinguishing eruptive types. How Volcanoes Work. San Diego State University. Live-Stream: Earth's Volcanoes and Eruptions Weather lore
Types of volcanic eruptions
[ "Physics" ]
7,048
[ "Weather", "Physical phenomena", "Weather lore" ]
8,921,202
https://en.wikipedia.org/wiki/DSSP%20%28algorithm%29
The DSSP algorithm is the standard method for assigning secondary structure to the amino acids of a protein, given the atomic-resolution coordinates of the protein. The abbreviation is only mentioned once in the 1983 paper describing this algorithm, where it is the name of the Pascal program that implements the algorithm: Define Secondary Structure of Proteins. Algorithm DSSP begins by identifying the intra-backbone hydrogen bonds of the protein using a purely electrostatic definition, assigning partial charges of −0.42 e and +0.20 e to the carbonyl oxygen and amide hydrogen respectively, with their opposites assigned to the carbonyl carbon and amide nitrogen. A hydrogen bond is identified if E in the following equation is less than −0.5 kcal/mol: E = q1·q2·(1/r(ON) + 1/r(CH) − 1/r(OH) − 1/r(CN))·f, where q1 = 0.42 e and q2 = 0.20 e are the partial charge magnitudes, f = 332 is a factor giving E in kcal/mol when distances are in ångströms, and each r(AB) term indicates the distance between atoms A and B, taken from the carbon (C) and oxygen (O) atoms of the C=O group and the nitrogen (N) and hydrogen (H) atoms of the N-H group. Based on this, nine types of secondary structure are assigned. The 310 helix, α helix and π helix have symbols G, H and I and are recognized by having a repetitive sequence of hydrogen bonds in which the residues are three, four, or five residues apart respectively. Two types of beta sheet structures exist; a beta bridge has symbol B while longer sets of hydrogen bonds and beta bulges have symbol E. T is used for turns, featuring hydrogen bonds typical of helices, and S is used for regions of high curvature (where the angle between the Cα(i−2)→Cα(i) and Cα(i)→Cα(i+2) directions is at least 70°). As of DSSP version 4, PPII helices are also detected based on a combination of backbone torsion angles and the absence of hydrogen bonds compatible with other types. PPII helices have symbol P. A blank (or space) is used if no other rule applies, referring to loops. These types are usually grouped into three larger classes: helix (G, H and I), strand (E and B) and loop (S, T, and C, where C sometimes is represented also as blank space). π helices In the original DSSP algorithm, residues were preferentially assigned to α helices, rather than π helices. In 2011, it was shown that DSSP failed to annotate many "cryptic" π helices, which are commonly flanked by α helices. In 2012, DSSP was rewritten so that the assignment of π helices was given preference over α helices, resulting in better detection of π helices. Versions of DSSP from 2.1.0 onwards therefore produce slightly different output from older versions. Variants In 2002, a continuous DSSP assignment was developed by introducing multiple hydrogen bond thresholds, where the new assignment was found to correlate with protein motion. See also STRIDE (algorithm) an alternative algorithm Chris Sander (scientist) References External links DSSP Analysis tool Continuous DSSP tool Protein structure
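In code, the hydrogen-bond test above amounts to one arithmetic expression. The following is a minimal sketch rather than the reference implementation: it assumes the standard DSSP constants (charge magnitudes of 0.42 e and 0.20 e and a dimensional factor of 332 so that E comes out in kcal/mol for distances in ångströms), and the distances in the example call are made-up values.

```python
# Minimal sketch of the DSSP electrostatic hydrogen-bond test (not the reference code).
Q1, Q2 = 0.42, 0.20      # partial charge magnitudes on the C=O and N-H groups, in units of e
F = 332.0                # factor giving E in kcal/mol when distances are in angstroms
HBOND_CUTOFF = -0.5      # kcal/mol; energies below this count as a hydrogen bond

def hbond_energy(r_on, r_ch, r_oh, r_cn):
    """Electrostatic interaction energy between a C=O group and an N-H group."""
    return Q1 * Q2 * (1.0 / r_on + 1.0 / r_ch - 1.0 / r_oh - 1.0 / r_cn) * F

def is_hbond(r_on, r_ch, r_oh, r_cn):
    return hbond_energy(r_on, r_ch, r_oh, r_cn) < HBOND_CUTOFF

# Example with made-up distances (in angstroms) for a typical backbone contact:
print(hbond_energy(2.9, 3.5, 2.0, 3.9))   # roughly -3 kcal/mol
print(is_hbond(2.9, 3.5, 2.0, 3.9))       # True
```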
DSSP (algorithm)
[ "Chemistry" ]
598
[ "Protein structure", "Structural biology" ]
8,921,317
https://en.wikipedia.org/wiki/Lifson%E2%80%93Roig%20model
In polymer science, the Lifson–Roig model is a helix-coil transition model applied to the alpha helix-random coil transition of polypeptides; it is a refinement of the Zimm–Bragg model that recognizes that a polypeptide alpha helix is stabilized by a hydrogen bond only once three consecutive residues have adopted the helical conformation. To consider three consecutive residues each with two states (helix and coil), the Lifson–Roig model uses a 4x4 transfer matrix instead of the 2x2 transfer matrix of the Zimm–Bragg model, which considers only two consecutive residues. However, the simple nature of the coil state allows this to be reduced to a 3x3 matrix for most applications. The Zimm–Bragg and Lifson–Roig models are but the first two in a series of analogous transfer-matrix methods in polymer science that have also been applied to nucleic acids and branched polymers. The transfer-matrix approach is especially elegant for homopolymers, since the statistical mechanics may be solved exactly using a simple eigenanalysis. Parameterization The Lifson–Roig model is characterized by three parameters: the statistical weight for nucleating a helix, the weight for propagating a helix and the weight for forming a hydrogen bond, which is granted only if three consecutive residues are in a helical state. Weights are assigned at each position in a polymer as a function of the conformation of the residue in that position and as a function of its two neighbors. A statistical weight of 1 is assigned to the "reference state" of a coil unit whose neighbors are both coils, and a "nucleation" unit is defined (somewhat arbitrarily) as two consecutive helical units neighbored by a coil. A major modification of the original Lifson–Roig model introduces "capping" parameters for the helical termini, in which the N- and C-terminal capping weights may vary independently. The correlation matrix for this modification can be represented as a matrix M, reflecting the statistical weights of the helix state h and coil state c. The Lifson–Roig model may be solved by the transfer-matrix method using such a transfer matrix M, where w is the statistical weight for helix propagation, v for initiation, n for N-terminal capping, and c for C-terminal capping. (In the traditional model n and c are equal to 1.) The partition function for the helix-coil transition equilibrium is obtained by multiplying the transfer matrices for successive residues and contracting the product with an end vector V, arranged to ensure the coil state of the first and last residues in the polymer. This strategy for parameterizing helix-coil transitions was originally developed for alpha helices, whose hydrogen bonds occur between residues i and i+4; however, it is straightforward to extend the model to 310 helices and pi helices, with i+3 and i+5 hydrogen bonding patterns respectively. The complete alpha/310/pi transfer matrix includes weights for transitions between helix types as well as between helix and coil states. However, because 310 helices are much more common in the tertiary structures of proteins than pi helices, extension of the Lifson–Roig model to accommodate 310 helices - resulting in a 9x9 transfer matrix when capping is included - has found a greater range of application. Analogous extensions of the Zimm–Bragg model have been put forth but have not accommodated mixed helical conformations. References Polymer physics Protein structure Statistical mechanics
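As an illustration of the parameterization just described, the following sketch evaluates the partition function of a short homopolymer by brute-force enumeration of helix/coil sequences: a helical residue flanked by helical neighbours on both sides carries weight w, any other helical residue carries weight v, a coil residue carries weight 1, and positions beyond the chain ends are treated as coil. The parameter values are illustrative only; for long chains the transfer-matrix method discussed above would be used instead of enumeration.

```python
from itertools import product

def lifson_roig_partition(n, w, v):
    """Brute-force Lifson-Roig partition function for an n-residue homopolymer.

    Each residue is helical ('h') or coil ('c').  A helical residue whose two
    neighbours are both helical gets weight w; any other helical residue gets
    weight v; a coil residue gets weight 1.  Residues beyond the chain ends
    are treated as coil.
    """
    z = 0.0
    for conf in product("hc", repeat=n):
        weight = 1.0
        for i, state in enumerate(conf):
            if state == "h":
                left = conf[i - 1] if i > 0 else "c"
                right = conf[i + 1] if i < n - 1 else "c"
                weight *= w if (left == "h" and right == "h") else v
        z += weight
    return z

print(lifson_roig_partition(n=10, w=1.6, v=0.05))  # illustrative parameter values
```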
Lifson–Roig model
[ "Physics", "Chemistry", "Materials_science" ]
729
[ "Polymer physics", "Structural biology", "Polymer chemistry", "Statistical mechanics", "Protein structure" ]
8,921,481
https://en.wikipedia.org/wiki/Multiplicity%20%28statistical%20mechanics%29
In statistical mechanics, multiplicity (also called statistical weight) refers to the number of microstates corresponding to a particular macrostate of a thermodynamic system. Commonly denoted Ω, it is related to the configuration entropy of an isolated system via Boltzmann's entropy formula S = k ln Ω, where S is the entropy and k is the Boltzmann constant. Example: the two-state paramagnet A simplified model of the two-state paramagnet provides an example of the process of calculating the multiplicity of a particular macrostate. This model consists of a system of N microscopic dipoles of magnetic moment μ which may either be aligned or anti-aligned with an externally applied magnetic field B. Let N↑ represent the number of dipoles that are aligned with the external field and N↓ represent the number of anti-aligned dipoles. The energy of a single aligned dipole is −μB, while the energy of an anti-aligned dipole is +μB; thus the overall energy of the system is U = μB(N↓ − N↑). The goal is to determine the multiplicity as a function of U; from there, the entropy and other thermodynamic properties of the system can be determined. However, it is useful as an intermediate step to calculate multiplicity as a function of N and N↑. This approach shows that the number of available macrostates is N + 1. For example, in a very small system with N = 2 dipoles, there are three macrostates, corresponding to N↑ = 0, 1, 2. Since the N↑ = 0 and N↑ = 2 macrostates require both dipoles to be either anti-aligned or aligned, respectively, the multiplicity of either of these states is 1. However, in the N↑ = 1 macrostate either dipole can be chosen for the aligned dipole, so the multiplicity is 2. In the general case, the multiplicity of a state, or the number of microstates, with N↑ aligned dipoles follows from combinatorics, resulting in Ω = N! / (N↑! N↓!) = N! / (N↑! (N − N↑)!), where the second step follows from the fact that N↓ = N − N↑. Since the energy U = μB(N↓ − N↑) = μB(N − 2N↑), it can be related to N↑ and N↓ as follows: N↑ = N/2 − U/(2μB) and N↓ = N/2 + U/(2μB). Thus the final expression for multiplicity as a function of internal energy is Ω(U) = N! / [(N/2 − U/(2μB))! (N/2 + U/(2μB))!]. This can be used to calculate entropy in accordance with Boltzmann's entropy formula; from there one can calculate other useful properties such as temperature and heat capacity. References Statistical mechanics
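The calculation above is short enough to sketch directly in code; nothing is assumed beyond binomial-coefficient counting and Boltzmann's entropy formula, and the system sizes used are illustrative.

```python
from math import comb, log

K_B = 1.380649e-23  # Boltzmann constant in J/K

def multiplicity(n_total, n_up):
    """Number of microstates with n_up aligned dipoles out of n_total."""
    return comb(n_total, n_up)

def entropy(n_total, n_up):
    """Boltzmann entropy S = k ln(multiplicity), in J/K."""
    return K_B * log(multiplicity(n_total, n_up))

# The three macrostates of the two-dipole system discussed above:
for n_up in range(3):
    print(n_up, multiplicity(2, n_up))   # multiplicities 1, 2, 1

# Entropy of the most probable macrostate of a 100-dipole system:
print(entropy(100, 50))
```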
Multiplicity (statistical mechanics)
[ "Physics", "Chemistry" ]
432
[ "Thermodynamics stubs", "Statistical mechanics", "Physical chemistry stubs", "Thermodynamics" ]
8,921,507
https://en.wikipedia.org/wiki/Subacute%20combined%20degeneration%20of%20spinal%20cord
Subacute combined degeneration of spinal cord, also known as myelosis funiculus, or funicular myelosis, also Lichtheim's disease, and Putnam-Dana syndrome, refers to degeneration of the posterior and lateral columns of the spinal cord as a result of vitamin B12 deficiency (most common). It may also occur similarly as result of vitamin E deficiency, and copper deficiency. It is usually associated with pernicious anemia. Signs and symptoms The onset is gradual and uniform. The pathological findings of subacute combined degeneration consist of patchy losses of myelin in the dorsal and lateral columns. Patients present with weakness of the legs, arms, and trunk, and tingling and numbness that progressively worsens. Vision changes and change of mental state may also be present. Bilateral spastic paresis may develop and pressure, vibration, and touch sense are diminished. A positive Babinski sign may be seen. Prolonged deficiency of vitamin B12 leads to irreversible nervous system damage. HIV-associated vacuolar myelopathy can present with a similar pattern of dorsal column and corticospinal tract demyelination. It has been thought that if someone is deficient in vitamin B12 and folic acid, the vitamin B12 deficiency must be treated first. However, the basis for this has been challenged, although due to ethical considerations it is no longer able to be tested if "neuropathy is made more severe as a result of giving folic acid to vitamin B12- deficient individuals". And that if this were the case, then the mechanism remains unclear. Administration of nitrous oxide anesthesia can precipitate subacute combined degeneration in people with subclinical vitamin B12 deficiency, while chronic nitrous oxide exposure can cause it even in persons with normal B12 levels. Posterior column dysfunction decreases vibratory sensation and proprioception (joint sense). Lateral corticospinal tract dysfunction produces spasticity and dorsal spinocerebellar tract dysfunction causes ataxia. Cause In general, the most common cause of this condition is a deficiency of vitamin B12. This may be due to a dietary deficiency, malabsorption in the terminal ileum, lack of intrinsic factor secreted from gastric parietal cells, or low gastric pH inhibiting attachment of intrinsic factor to ileal receptors. The disease can also be caused by inhalation of nitrous oxide, which inactivates vitamin B12. Vitamin E deficiency, which is associated with malabsorption disorders such as cystic fibrosis and Bassen-Kornzweig syndrome, can cause a similar presentation due to the degeneration of the dorsal columns. Diagnosis Serum vitamin B12, methylmalonic acid, Schilling test, and a complete blood count, looking for megaloblastic anemia if there is also folic acid deficiency or macrocytic anemia. The Schilling test is no longer available in most areas. MRI-T2 images may reveal increased signal within the white matter of the spinal cord, predominantly in the posterior columns and possibly in the spinothalamic tracts. Treatment Therapy with vitamin B12 results in partial to full recovery where SACD has been caused by vitamin B12 deficiency, depending on the duration and extent of neurodegeneration. References External links Histopathology Neurodegenerative disorders
Subacute combined degeneration of spinal cord
[ "Chemistry" ]
711
[ "Histopathology", "Microscopy" ]
8,921,722
https://en.wikipedia.org/wiki/Racah%20parameter
The Racah parameters are a set of parameters used in atomic and molecular spectroscopy to describe the amount of total electrostatic repulsion in an atom that has multiple electrons. When an atom has more than one electron, there will be some electrostatic repulsion between the electrons. The amount of repulsion varies from atom to atom, depending upon the number of electrons, their spin, and the orbitals that they occupy. The total repulsion can be expressed in terms of three parameters A, B and C which are known as the Racah parameters after Giulio Racah, who first described them. They are generally obtained empirically from gas-phase spectroscopic studies of atoms. They are often used in transition-metal chemistry to describe the repulsion energy associated with an electronic term. For example, the interelectronic repulsion of a 3P term is A + 7B, and of a 3F term is A - 8B, and the difference between them is therefore 15B. Definition For d electrons the Racah parameters are defined as A = F0 − 49F4, B = F2 − 5F4 and C = 35F4, where F0, F2 and F4 are the Slater-Condon parameters, related to the Slater integrals F^0, F^2 and F^4 by F0 = F^0, F2 = F^2/49 and F4 = F^4/441. The Slater integrals are radial electron–electron repulsion integrals of the form F^k = e^2 ∫∫ (r<^k / r>^(k+1)) R(r1)^2 R(r2)^2 r1^2 r2^2 dr1 dr2, where R is the normalized radial part of an electron orbital, r< = min(r1, r2) and r> = max(r1, r2). See also Tanabe–Sugano diagram Nephelauxetic effect References External links Multiplets in Transition Metal Ions in E. Pavarini, E. Koch, F. Anders, and M. Jarrell (eds.): Correlated Electrons: From Models to Materials, Jülich 2012, Coordination chemistry Spectroscopy
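The definitions above can be sketched in a few lines for a d-electron configuration. The Slater–Condon values below are illustrative numbers, not data for any particular ion, and the final line prints the 15B separation between the 3F and 3P terms quoted earlier.

```python
def racah_parameters(f0, f2, f4):
    """Racah A, B, C from the Slater-Condon parameters F0, F2, F4 (d electrons)."""
    a = f0 - 49.0 * f4
    b = f2 - 5.0 * f4
    c = 35.0 * f4
    return a, b, c

# Illustrative values in cm^-1 (not data for any specific ion):
a, b, c = racah_parameters(f0=100_000.0, f2=1_400.0, f4=100.0)
print(f"A = {a:.0f} cm^-1, B = {b:.0f} cm^-1, C = {c:.0f} cm^-1")

# Separation between the 3P and 3F terms of a d2 configuration, as noted above:
print(f"E(3P) - E(3F) = 15B = {15 * b:.0f} cm^-1")
```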
Racah parameter
[ "Physics", "Chemistry", "Astronomy" ]
305
[ "Spectroscopy stubs", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Coordination chemistry", "Astronomy stubs", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
8,921,732
https://en.wikipedia.org/wiki/Refining%20%28metallurgy%29
In metallurgy, refining consists of purifying an impure metal. It is to be distinguished from other processes such as smelting and calcining in that those two involve a chemical change to the raw material, whereas in refining the final material is chemically identical to the raw material. Refining thus increases the purity of the raw material via processing. There are many processes including pyrometallurgical and hydrometallurgical techniques. Lead Cupellation One ancient process for extracting the silver from lead was cupellation. This process involved melting impure lead samples in a cupel, a small porous container designed for purification that would aid in the oxidation process, while being able to withstand the heat needed to melt these metals in a furnace. This reaction would oxidize the lead to litharge, along with any other impurities present, whereas the silver would not get oxidized. In the 18th century, the process was carried on using a kind of reverberatory furnace, but differing from the usual kind in that air was blown over the surface of the molten lead from bellows or (in the 19th century) blowing cylinders. Pattinson Process The Pattinson process was patented by its inventor, Hugh Lee Pattinson, in 1833 who described it as, "An improved method for separating silver from lead". It exploited the fact that in molten lead (containing traces of silver), the first metal to solidify out of the liquid is lead, leaving the remaining liquid richer in silver. Pattinson's equipment consisted a row of up to 13 iron pots, each heated from below. Some lead, naturally containing a small percentage of silver, was loaded into the central pot and melted. This was then allowed to cool. As the lead solidified it is removed using large, perforated iron ladles and moved to the next pot in one direction, and the remaining metal which was now richer in silver was then transferred to the next pot in the opposite direction. The process was repeated from one pot to the next, the lead accumulating in the pot at one end and metal enriched in silver in the pot at the other. The level of enrichment possible is limited by the lead-silver eutectic and typically the process stopped around 600 to 700 ounces per ton (approx. 2%), so further separation is carried out by cupellation. The process was economic for lead containing at least 250 grams of silver per ton. Parkes Process The Parkes process, patented in 1850 by Alexander Parkes, uses molten zinc. Zinc is not miscible with lead and when the two molten metals are mixed, the zinc separates and floats to the top with ~2% lead. However, silver dissolves more easily in zinc, so the upper layer of zinc carries a significant portion of the silver. The melt is then cooled until the zinc solidifies and the dross is skimmed off. The silver is then recovered by volatilizing the zinc. The Parkes process largely replaced the Pattinson process, except where the lead contained insufficient silver. In such a case, the Pattinson process provided a method to enrich it in silver to about 40 to 60 ounces per ton, at which concentration it could be treated using the Parkes process. Copper Fire refining The initial product of copper smelting was impure "blister" copper, which contained sulfur and oxygen. To remove these impurities, the blister copper was repeatedly melted and solidified, undergoing a cycle of oxidation and reduction. In one of the previous melting stages, lead was added. 
Gold and silver preferentially dissolved in this, thus providing a means of recovering these precious metals. To produce purer copper suitable for making copper plates or hollow-ware, further melting processes were undertaken, using charcoal as fuel. The repeated application of such fire-refining processes was capable of producing copper that was 98.5-99.5% pure. Electrolytic refining The purest copper is obtained by an electrolytic process, undertaken using a slab of impure copper as the anode and a thin sheet of pure copper as the cathode. The electrolyte is an acidic solution of copper (II) sulfate. By passing electricity through the cell, copper is dissolved from the anode and deposited on the cathode. However, impurities either remain in solution or collect as an insoluble sludge. This process only became possible following the invention of the dynamo; it was first used in South Wales in 1869. Iron Wrought iron The product of the blast furnace is pig iron, which contains 4–5% carbon and usually some silicon. To produce a forgeable product, a further process was needed (usually described as fining, rather than refining). From the 16th century, this was undertaken in a finery forge. At the end of the 18th century, this began to be replaced by puddling (in a puddling furnace), which was in turn gradually superseded by the production of mild steel by the Bessemer process. Refined iron The term refining is used in a narrower context. Henry Cort's original puddling process only worked where the raw material was white cast iron, rather than the grey pig iron that was the usual raw material for finery forges. To use grey pig iron, a preliminary refining process was necessary to remove silicon. The pig iron was melted in a running out furnace and then run out into a trough. This process oxidized the silicon to form a slag, which floated on the iron and was removed by lowering a dam at the end of the trough. The product of this process was a white metal, known as finers metal or refined iron. Precious metals Precious metal refining is the separation of precious metals from noble-metalliferous materials. Examples of these materials include used catalysts, electronic assemblies, ores, or metal alloys. Process In order to isolate noble-metalliferous materials, pyrolysis and/or hydrolysis procedures are used. In pyrolysis, the noble-metalliferous products are released from the other materials by solidifying in a melt to become cinder and then poured off or oxidized. In hydrolysis, the noble-metalliferous products are dissolved either in aqua regia (consisting of hydrochloric acid and nitric acid) or in a hydrochloric acid and chlorine gas in solution. Subsequently, certain metals can be precipitated or reduced directly with a salt, gas, organic, and/or nitro hydrate connection. Afterwards, they go through cleaning stages or are recrystallized. The precious metals are separated from the metal salt by calcination. The noble-metalliferous materials are hydrolyzed first and thermally prepared (pyrolyzed) thereafter. The processes are better yielding when using catalysts that may sometimes contain precious metals themselves. When using catalysts, the recycling product is removed in each case and driven several times through the cycle. See also List of alumina refineries Bibliography J. Day and R. F. Tylecote, The Industrial Revolution in Metals (The Institute of Metals, London 1991). Söderberg, A. 2011. 
Eyvind Skáldaspillir's silver - refining and standards in pre-monetary economies in the light of finds from Sigtuna and Gotland. Situne Dei 2011. Edberg, R. Wikström, A. (eds). Sigtuna. R. F. Tylecote, A history of metallurgy (Institute of materials, London 1992). Newcastle University: Hugh Lee Pattinson References Metallurgical processes
Refining (metallurgy)
[ "Chemistry", "Materials_science" ]
1,577
[ "Metallurgical processes", "Metallurgy" ]
8,921,892
https://en.wikipedia.org/wiki/British%20Society%20of%20Master%20Glass%20Painters
The British Society of Master Glass Painters (BSMGP) is a British trade association for the art and craft of stained glass. Founded in 1921, it is a membership organisation which exists to represent the trade of glass painting and staining in Britain. The founding subscribers included John Hardman, Walter Tower (Kempe & Co), Arthur Powell (James Powell and Sons), Thomas Grylls (Burlison & Grylls) and Percy Bacon. BSMGP activities include lectures, conferences, exhibitions, forums, and guided walks. It also offers publications such as an annual journal and a quarterly newsletter. Additionally, it houses an extensive reference library, available to its members only. The current president is Prince Richard, Duke of Gloucester. See also Worshipful Company of Glaziers and Painters of Glass References External links The British Society of Master Glass Painters Glass makers Crafts organizations Design institutions Architecture organisations based in the United Kingdom Arts organizations established in 1921 Arts organisations based in the United Kingdom 1921 establishments in the United Kingdom
British Society of Master Glass Painters
[ "Engineering" ]
208
[ "Design", "Design institutions" ]
8,923,567
https://en.wikipedia.org/wiki/Brian%20Jones%20Presents%20the%20Pipes%20of%20Pan%20at%20Joujouka
Brian Jones Presents the Pipes of Pan at Joujouka is an album by the Moroccan group the Master Musicians of Joujouka, released on Rolling Stones Records and distributed by Atco Records in 1971. It was produced by Brian Jones of the Rolling Stones, who recorded a performance by the group on 29 July 1968 in the village of Jajouka in Morocco. Jones called the tracks "a specially chosen representation" of music played in the village during the annual week-long Rites of Pan Festival. It was significant for presenting the Moroccan group to a global audience, drawing other musicians to Jajouka, including American composer Ornette Coleman who collaborated with the group. The album was reissued in 1995. The executive producers were Philip Glass, Kurt Munkacsi, and Rory Johnston, with notes by Bachir Attar, Paul Bowles, William S. Burroughs, Stephen Davis, Jones, Brion Gysin, and David Silver. This deluxe album included additional graphics, more extensive notes by David Silver and Burroughs, and a second CD, produced by Cliff Mark, with two "full-length remixes." Background The music of Jajouka is regarded as becoming famous in the West following British writer Brion Gysin and American writer Paul Bowles' documentation of their experience hearing it at a festival in Sidi-Kacem in 1950. Entranced with the music's sound, they were led to the village to hear the music in person by Moroccan painter Mohamed Hamri. Gysin, along with Hamri, later brought Brian Jones to hear the village music in 1968. The album's music included songs meant for the village's "most important religious holiday festival, Aid el Kbir". The festival's ritual of dressing a young boy dressed as "Bou Jeloud, the Goat God" wearing the "skin of a freshly slaughtered goat", involved the child's running to "spread panic through the darkened village" as the musicians played with abandon. Gysin connected the ritual, performed to protect the village's health in the coming year, to the fertility festival of Lupercalia and the "ancient Roman rites of Pan"; he referred to the Bou Jeloud dancer as "Pan" and "the Father of Skins". This name stuck, leading to the reference to Pan in the album's title. Jones, recording engineer George Chkiantz, and Gysin travelled to the village in 1968, accompanied by Hamri and Jones' girlfriend Suki Potier to record the musicians using a portable Uher recorder. Jones worked on the two-track recordings in London, adding stereo phasing, echo, and other effects. Jones edited the full-band selection to 14 minutes by "cross-phasing fragments of a work that runs to some ninety minutes in uncut form". The album includes three types of music: repetitive vocal chants "similar to those employed throughout Islam", flute and drum music featuring "several distinct melodic motifs and improvisations over a drone" played by two flutists and several drummers, and the full village orchestra's drum and horn music played to accompany the "frenzied dance of Bou Jeloud, a Moroccan Pan". The New York Times reviewer Robert Palmer reported that the call-and-response horn motifs are "handed down from generation to generation", noting that the "drumming rhythms are definitely African", and paraphrased Gysin as connecting the musical origins to Spain, "from the Moorish courts of Cordova and Seville". The cover illustration on the 1971 album was originally a painting by Mohamed Hamri depicting the master musicians with Brian Jones in the center. 
Jones edited the album and prepared the artwork together with designer and illustrator Dave Field, who also designed the Joujouka logo and painted a depiction of a carpet design on the inside cover. Jones finished producing the LP several months before his death in 1969. The album's release date was initially set for September 3, 1971, but was pushed back to October 8. Legacy In 1995, a CD reissue of the album was released. It was licensed from Musidor by Point Music. A new 1990s photo of Bachir Attar, taken by his wife and manager, the American photographer Cherie Nutting, replaced Hamri's original painting of Brian Jones and the Master Musicians of Joujouka which Jones had chosen as his cover. It also included in a sidebar a photo of the late Jones by Michael Cooper, as well as further contemporary photos by Nutting, including one of a "Bou Jeloud" dancer. The CD's album title changed to "Brian Jones Presents The Pipes of Pan At Jajouka" to tie in with The Master Musicians of Jajouka led by Bachir Attar. The name Master Musicians of Jajouka was used on the Master Musicians of Joujouka's second album due to contract conflicts. While the original vinyl album consisted of "two untitled, unbroken LP sides", the reissue separated the songs into six tracks with titles. The reissue cut the Master Musicians of Joujouka out of their rights and resulted in international protests organized by Frank Rynne and Joe Ambrose at concerts by Bachir Attar in London, New York and San Francisco as well as Philip Glass concerts in London and elsewhere. Brion Gysin's original sleeve-notes were altered to remove all reference to the central role that Hamri played in introducing him to the music of the village. A Brion Gysin illustration decorated an essay by Paul Bowles in the liner notes. The CD's executive producers were Philip Glass, Kurt Munkacsi, and Rory Johnston. Brian Jones was credited as producer. The multi-page booklet also included reminiscences and edited essays about the original band written by Brion Gysin (who died in 1986 and therefore was not consulted), David Silver, Stephen Davis, William S. Burroughs, Brian Jones, and Bachir Attar. The Master Musicians of Joujouka, mentored by Hamri from the 1950s until his death in 2000, continued releasing records on Sub Rosa Records, with further releases including the acclaimed “Live in Paris”, recorded at Centre Pompidou Paris in 2016, using their original name, “Master Musicians of Joujouka”, as used on the 1971 release and Mohamed Hamri's Tales of Joujouka. The group The Master Musicians of Jajouka led by Bachir Attar continues to record music and now issues CDs on their own label Jajouka Records, in addition to performing on regular tours and recording music for film scores. In 1995, the Master Musicians of Joujouka and Mohamed Hamri launched an international campaign demanding that their interest in their recording with Brian Jones be recognised and that the re-release be withdrawn from sale until their concerns were addressed. The group led by the second youngest son of Hadj Abdesalam Attar still perform under the name Master Musicians of Jajouka led by Bachir Attar, recording the song "Continental Drift" in Tangier with the Rolling Stones for their Steel Wheels album (1989). Led by Attar's son and successor as band leader, Bachir Attar, the group also released soundtrack recordings under the Jajouka name and album recordings under the name Master Musicians of Jajouka Featuring Bachir Attar in the 1990s and 2000s. 
According to Bachir Attar the Master Musicians of that early group were led by tribal chief Hadj Abdesalam Attar. Rikki Stein, who was never manager of the Master Musicians of Jajouka, noted that in 1971 the leader was Hadj Abdesalam Attar. However, Berdous and Mfdal were musicians with Hadj Abdesalam Attar and Bachir Attar until their deaths in the late 1990s. This throws doubt on the claim that Hadj Abdesalam Attar was leader, tribal or otherwise, in the late 1960s or early 1970s. However, Rikki Stein has since pointed out that there were regular elections held amongst the musicians and their supporters, who were also permitted to vote. In the late sixties and until 1971 Hadj Abdesalam Attar was the 'Rais' (President) of the Masters, while Hamri was president of The Jahjouka Folklore Association of the Tribe Ahl Serif created collectively by the musicians of Jajouka. El Hadj was considered a great Jajouka musician, despite his propensity for black magic. Subsequently, though, in the early seventies elections were held and Maalim Fedal was elected Rais and continued to retain that title, certainly until the European tour organised by Rikki Stein in 1980. Critical legacy The Daily Telegraph reviewer Tom Horan identified the Master Musicians as the world's first world music band and described Brian Jones Presents the Pipes of Pan at Jajouka as a "field recording that Jones subsequently retouched back in Britain using modern studio technology". He said the album "tapped perfectly into the druggy mysticism that characterised the era". Richie Unterberger of AllMusic described it as a "document of Moroccan traditional music that achieves trance-like effects through its hypnotic, insistent percussion, eerie vocal chanting, and pipes." He noted that as the record was among the first recordings of this style of music to receive relatively wide exposure in Europe and North America, it "anticipated the wider popularity of trance-like music among both electronic rock and progressive African musicians later in the 20th century". In 1998, The Wire included Presents the Pipes of Pan at Jajouka in their list of "100 Records That Set the World on Fire (While No One Was Listening)". They noted that Jones "deployed the full arsenal of psychedelic signal processing" to enhance the music and his own experience of the musicians, resulting in an LP that "documents a millennia-old music, the sound of panic itself, as well as the fragmented mind of Jones in the months before his death." They also remarked on how prescient its musical style was. According to author Louise Grey, the album was influential enough that other figures besides Jones, such as Ornette Coleman, Bill Laswell and Richard Horowitz, were also drawn into working with the Joujouka musicians. She added: "With supporters like this, Joujouka could hardly fail to generate interest in those interested in psychotropic music – even if there was a series of acrimonious fallings-out between the musicians after the appearance of their famous friends." The Independent writer Phil Sweeney highlighted the album's "ghita flutes and assorted drums" and wrote that parts of the album resemble "nothing so much as a Scottish regimental pipe band running amok on a mixture of amphetamine sulphate, Special Brew and helium." 
In 1999, Rob Chapman of Mojo wrote that Jones entered the project "with all the anthropological fervour of a Samuel Charters or Alan Lomax", but that in his doctoring of the tapes, the resulting album is a "proto-dub masterpiece", as belatedly recognised by the Rolling Stones when they collaborated with the Master Musicians of Joujouka for Steel Wheels. Track listing Pipes of Pan at Joujouka "55 (Hamsa oua Hamsine)" – 0:58 "War Song/Standing" + "One Half (Kaim Oua Nos)" – 2:22 "Take Me with You Darling, Take Me with You (Dinimaak A Habibi Dinimaak)" – 8:06 "Your Eyes Are Like a Cup of Tea (Al Yunic Sharbouni Ate)" – 10:35 "I Am Calling Out (L'Afta)" – 5:55 "Your Eyes Are Like a Cup of Tea" (reprise with flute) – 18:04 Titles come from the Point Music reissue track listing, as the original vinyl release package had no titles. References Album designed and illustrated by Dave Field Further reading Davis, Stephen (2001). Old Gods Almost Dead. Broadway Books, pp. 135–137, 172, 195–201, 227, 248–253, 270, 354, 504–505. Jennings, Nicholas (October 12, 1995). "Liveeye Preview: The Master Musicians of Jajouka". Eye Weekly. (Retrieved February 6, 2007.) Palmer, Robert (October 14, 1971). "Jajouka: Up the Mountain". Rolling Stone, p. 43. Palmer, Robert (March 23, 1989). "Into the Mystic". Rolling Stone, p. 106. Palmer, Robert (December 19, 1971). "Music for a Moroccan Pan". The New York Times. Palmer, Robert (June 11, 1992). "Up the Mountain". Rolling Stone, p. 40. Wyman, Bill and Coleman, Ray. Stone Alone (London, 1990), p. 515. Rondeau, Daniel. "Tanger Et Autres Marocs". Ed. Nil, January 1997. External links The official site for The Master Musicians of Jajouka led by Bachir Attar The official site for The Master Musicians of Joujouka Allmusic.com listing Master Musicians of Joujouka albums Sufi music albums 1971 live albums Rolling Stones Records live albums 1971 debut albums Arabic-language albums Albums produced by Brian Jones Psychedelic music albums Field recording World music albums by Moroccan artists
Brian Jones Presents the Pipes of Pan at Joujouka
[ "Engineering" ]
2,754
[ "Audio engineering", "Field recording" ]
8,923,815
https://en.wikipedia.org/wiki/Ascendency
Ascendency or ascendancy is a quantitative attribute of an ecosystem, defined as a function of the ecosystem's trophic network. Ascendency is derived using mathematical tools from information theory. It is intended to capture in a single index the ability of an ecosystem to prevail against disturbance by virtue of its combined organization and size. One way of depicting ascendency is to regard it as "organized power", because the index represents the magnitude of the power that is flowing within the system towards particular ends, as distinct from power that is dissipated naturally. Almost half a century earlier, Alfred J. Lotka (1922) had suggested that a system's capacity to prevail in evolution was related to its ability to capture useful power. Ascendency can thus be regarded as a refinement of Lotka's supposition that also takes into account how power is actually being channeled within a system. In mathematical terms, ascendency is the product of the aggregate amount of material or energy being transferred in an ecosystem times the coherency with which the outputs from the members of the system relate to the set of inputs to the same components (Ulanowicz 1986). Coherence is gauged by the average mutual information shared between inputs and outputs (Rutledge et al. 1976). Originally, it was thought that ecosystems increase uniformly in ascendency as they developed, but subsequent empirical observation has suggested that all sustainable ecosystems are confined to a narrow "window of vitality" (Ulanowicz 2002). Systems with relative values of ascendency plotting below the window tend to fall apart due to lack of significant internal constraints, whereas systems above the window tend to be so "brittle" that they become vulnerable to external perturbations. Sensitivity analysis on the components of the ascendency reveals the controlling transfers within the system in the sense of Liebig (Ulanowicz and Baird 1999). That is, ascendency can be used to identify which resource is limiting the functioning of each component of the ecosystem. It is thought that autocatalytic feedback is the primary route by which systems increase and maintain their ascendencies (Ulanowicz 1997.) References Ulanowicz, R.E. 1986. Growth & Development: Ecosystems Phenomenology. Springer-Verlag, NY. 203 p. Ulanowicz, R.E. 1997. Ecology, the Ascendent Perspective. Columbia University Press, NY. 201p. Information theory Entropy and information Trophic ecology
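In code, the "size times coherence" description above corresponds to multiplying the total system throughput of a flow network by its average mutual information. The sketch below follows that commonly used formulation (base-2 logarithms, so the mutual information is in bits); the three-compartment flow matrix is a toy example, not data from any real ecosystem.

```python
import numpy as np

def ascendency(flows):
    """Ascendency = total system throughput x average mutual information.

    `flows[i][j]` is the flow of material or energy from compartment i to j.
    """
    t = np.asarray(flows, dtype=float)
    tst = t.sum()                             # total system throughput ("size")
    out_tot = t.sum(axis=1, keepdims=True)    # total output of each compartment
    in_tot = t.sum(axis=0, keepdims=True)     # total input to each compartment
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(t > 0.0,
                         (t / tst) * np.log2(t * tst / (out_tot * in_tot)),
                         0.0)
    ami = terms.sum()                         # average mutual information ("coherence")
    return tst * ami

# A toy three-compartment flow network (arbitrary units):
toy_flows = [[0.0, 10.0, 2.0],
             [0.0, 0.0, 8.0],
             [1.0, 0.0, 0.0]]
print(ascendency(toy_flows))
```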
Ascendency
[ "Physics", "Mathematics", "Technology", "Engineering" ]
517
[ "Telecommunications engineering", "Physical quantities", "Applied mathematics", "Entropy and information", "Computer science", "Entropy", "Information theory", "Dynamical systems" ]
8,924,366
https://en.wikipedia.org/wiki/Dewar%E2%80%93Chatt%E2%80%93Duncanson%20model
The Dewar–Chatt–Duncanson model is a model in organometallic chemistry that explains the chemical bonding in transition metal alkene complexes. The model is named after Michael J. S. Dewar, Joseph Chatt and L. A. Duncanson. The alkene donates electron density into a π-acid metal d-orbital from a π-symmetry bonding orbital between the carbon atoms. The metal donates electrons back from a (different) filled d-orbital into the empty π* antibonding orbital. Both of these effects tend to reduce the carbon-carbon bond order, leading to an elongated C−C distance and a lowering of its vibrational frequency. In Zeise's salt K[PtCl3(C2H4)].H2O the C−C bond length has increased to 134 picometres from 133 pm for ethylene. In the nickel compound Ni(C2H4)(PPh3)2 the value is 143 pm. The interaction also causes carbon atoms to "rehybridise" from sp2 towards sp3, which is indicated by the bending of the hydrogen atoms on the ethylene back away from the metal. In silico calculations show that 75% of the binding energy is derived from the forward donation and 25% from backdonation. This model is a specific manifestation of the more general π backbonding model. Main group elements can also form π-complexes with alkenes and alkynes. The β-diketiminato aluminum(I) complex Al{HC(CMeNAr)2} (Ar = 2,6-diisopropylphenyl), which bears an Al-based spx lone pair, reacts with alkenes and alkynes to give alumina(III)cyclopropanes and alumina(III)cyclopropenes in a process analogous to the formation of π-complexes by transition metals. However, in most cases, the backbonding interaction is absent in these complexes due to the lack of energetically accessible filled orbitals for backdonation, resulting in π-complexes that dissociate readily and are therefore more challenging to observe or isolate. References Organometallic chemistry Chemical bonding
Dewar–Chatt–Duncanson model
[ "Physics", "Chemistry", "Materials_science" ]
471
[ "Chemical bonding", "Organometallic chemistry", "Condensed matter physics", "nan" ]
8,924,477
https://en.wikipedia.org/wiki/Ralph%20Benjamin
Ralph Benjamin (17 November 1922 – 7 May 2019) was a British scientist and electrical engineer. Biography Benjamin was born in Darmstadt, Germany. He attended boarding school in Switzerland from 1937, and was sent to England in 1939 as a refugee. He studied at Ellesmere College and at Imperial College London where he graduated with a 1st class honours in Electronic Engineering. He joined the Royal Naval Scientific Service in 1944, beginning his career at the Admiralty Surface Weapons Establishment (ASWE). Benjamin invented the first trackball called roller ball in 1946, patented in 1947. Between 1947 and 1957 he developed the first force-wide integrated Command and Control System. This included patenting the use of an interlaced cursor controlled by a tracker ball to link displays to stored digital information, the first ever digital compression of video data, and the creation of the navy's first digital data link and network which is still in use NATO-wide as "Link 11". NATO During the fifties and sixties he was a leading member of national Advanced Computer Techniques Project and in 1961 he was acting international chairman NATO "Von Karman" studies on "Man and Machine" and "Command and Control". From 1961 to 1964 he was Head of Research and Deputy Director, Admiralty Surface Weapons Establishment then in 1964 he became Chief Scientist Admiralty Underwater Weapons Establishment (AUWE), combined with Director, AUWE, and MoD Director Underwater Weapons R&D – posts he held until 1971. Original publications during this time resulted in a DSc and he published a textbook on "Modulation, Resolution and Signal processing" that was later unofficially translated into Russian. He also trained as a navy diver to better understand some of the challenges faced by the Royal Navy. GCHQ In 1971 he became Chief Scientist, Chief Engineer and Superintending Director at GCHQ where he stayed until 1982. He was responsible for fast-track Research, Development, Procurement, and Deployment and use of equipment and techniques for Signals Intelligence. During most of this time he was also Chief Scientific Advisor to the Intelligence Services and national Co-ordinator Intelligence R&D. At GCHQ, Benjamin played an important role in the original development of "non-secret cryptography", later independently discovered by Rivest, Shamir, and Adleman and termed public-key cryptography. Teaching As a visiting professor at the University of Surrey between 1972 and 1978 he helped to start the Surrey University mini-satellite programme. Following retirement from the civil service he became Head of Communications Techniques & Networks at the Supreme Headquarters Allied Powers Europe (SHAPE) Technical Centre from 1982 to 1987. Graduate NATO Staff College, 1983. On his return to England he became a visiting Research Professor at University College, London, and since 1993, Bristol University. Until recently he was also a visiting professor at Imperial College, the Open University, and the Royal Military College of Science, and Member of Court at Brunel University. He also had substantial involvement in Defence Scientific Advisory Council, DSAC. He was given an honorary DEng by Bristol University in 2000. He has won the IET Heinrich Hertz premium twice, and also the Marconi premium and the Clarke Maxwell premium. In 2006 he was given the Achievement in Electronics Award and also in 2006 the Oliver Lodge Medal for IT. His autobiography, called Five Lives in One, was published in 1996. He died on 7 May 2019 at the age of 96. 
References 1922 births 2019 deaths Engineers from Darmstadt People from the People's State of Hesse People educated at Ellesmere College Alumni of Imperial College London 20th-century British engineers Jewish emigrants from Nazi Germany to the United Kingdom GCHQ people Fellows of the Institution of Engineering and Technology Fellows of the Royal Academy of Engineering Companions of the Order of the Bath Admiralty personnel of World War II
Ralph Benjamin
[ "Engineering" ]
767
[ "Institution of Engineering and Technology", "Fellows of the Institution of Engineering and Technology" ]
8,924,644
https://en.wikipedia.org/wiki/Data%20hub
A data hub is a center of data exchange that is supported by data science, data engineering, and data warehouse technologies to interact with endpoints such as applications and algorithms. Features A data hub differs from a data warehouse in that it is generally unintegrated and often at different grains. It differs from an operational data store because a data hub does not need to be limited to operational data. A data hub differs from a data lake by homogenizing data and possibly serving data in multiple desired formats, rather than simply storing it in one place, and by adding other value to the data such as de-duplication, quality, security, and a standardized set of query services. A data lake tends to store data in one place for availability, and allow/require the consumer to process or add value to the data. Data hubs are ideally the "go-to" place for data within an enterprise, so that many point-to-point connections between callers and data suppliers do not need to be made, and so that the data hub organization can negotiate deliverables and schedules with various data enclave teams, rather than being an organizational free-for-all as different teams try to get new services and features from many other teams. References Data management Database management systems
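As a purely illustrative sketch of the "go-to place" idea described above, the toy class below registers data suppliers, de-duplicates and lightly standardizes their records, and serves the result through a single query interface in more than one format. All names and fields are hypothetical; a real data hub would add security, data-quality checks, and much richer query services.

```python
import csv
import io
import json

class DataHub:
    """Toy data hub: one place to register suppliers and query homogenized records."""

    def __init__(self):
        self._records = {}  # keyed by record id, so duplicates collapse

    def register_supplier(self, fetch):
        """Pull records from a supplier callable and homogenize them."""
        for rec in fetch():
            self._records[rec["id"]] = {
                "id": rec["id"],
                "name": rec.get("name", "").strip().title(),  # simple standardization
            }

    def query(self, fmt="json"):
        """Serve the same data in more than one desired format."""
        rows = sorted(self._records.values(), key=lambda r: r["id"])
        if fmt == "json":
            return json.dumps(rows)
        if fmt == "csv":
            buf = io.StringIO()
            writer = csv.DictWriter(buf, fieldnames=["id", "name"])
            writer.writeheader()
            writer.writerows(rows)
            return buf.getvalue()
        raise ValueError(f"unsupported format: {fmt}")

hub = DataHub()
hub.register_supplier(lambda: [{"id": 1, "name": "  alice "}, {"id": 2, "name": "BOB"}])
hub.register_supplier(lambda: [{"id": 2, "name": "bob"}])  # duplicate collapses to one record
print(hub.query("csv"))
```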
Data hub
[ "Technology" ]
258
[ "Data management", "Data" ]
8,924,740
https://en.wikipedia.org/wiki/Rod%20Burstall
Rodney Martineau "Rod" Burstall (born 1934) is a British computer scientist and one of four founders of the Laboratory for Foundations of Computer Science at the University of Edinburgh. Biography Burstall studied physics at the University of Cambridge, then took an M.Sc. in operational research at the University of Birmingham. He worked for three years before returning to Birmingham University to earn a Ph.D. in 1966 with a thesis titled Heuristic and Decision Tree Methods on Computers: Some Operational Research Applications under the supervision of N. A. Dudley and K. B. Haley. Burstall was an early and influential proponent of functional programming, pattern matching, and list comprehension, and is known for his work with Robin Popplestone on COWSEL (renamed POP-1) and POP-2, innovative programming languages developed at the University of Edinburgh around 1970, and later work with John Darlington on NPL and program transformation and with David MacQueen and Don Sannella on Hope, a precursor to Standard ML, Miranda, and Haskell. In 1995, he was elected a Fellow of the Royal Society of Edinburgh. Burstall retired in 2000, becoming Professor Emeritus. In 2002 David Rydeheard and Don Sannella assembled a festschrift for Burstall that was published in Formal Aspects of Computing. In 2009, he was awarded the Association for Computing Machinery (ACM) SIGPLAN Programming Language Achievement Award. Books May 1971: Programming in POP-2, Edinburgh University Press. 1980: (with Alan Bundy) Artificial Intelligence: An Introductory Course, Edinburgh University Press. 1988: (with D. E. Rydeheard) Computational Category Theory, Prentice-Hall. References External links University of Edinburgh home page (Archived) Rod Burstall Home Page (Archived) 1934 births Living people Scientists from Liverpool English computer scientists Programming language researchers Programming language designers English computer programmers Formal methods people History of computing in the United Kingdom Academics of the University of Edinburgh Alumni of the University of Cambridge Alumni of the University of Birmingham
Rod Burstall
[ "Technology" ]
408
[ "History of computing", "History of computing in the United Kingdom" ]
8,924,792
https://en.wikipedia.org/wiki/Racah%20W-coefficient
Racah's W-coefficients were introduced by Giulio Racah in 1942. These coefficients have a purely mathematical definition. In physics they are used in calculations involving the quantum mechanical description of angular momentum, for example in atomic theory. The coefficients appear when there are three sources of angular momentum in the problem. For example, consider an atom with one electron in an s orbital and one electron in a p orbital. Each electron has electron spin angular momentum and in addition the p orbital has orbital angular momentum (an s orbital has zero orbital angular momentum). The atom may be described by LS coupling or by jj coupling as explained in the article on angular momentum coupling. The transformation between the wave functions that correspond to these two couplings involves a Racah W-coefficient. Apart from a phase factor, Racah's W-coefficients are equal to Wigner's 6-j symbols, so any equation involving Racah's W-coefficients may be rewritten using 6-j symbols. This is often advantageous because the symmetry properties of 6-j symbols are easier to remember. Racah coefficients are related to recoupling coefficients by ⟨(j1 j2) J12, j3; J | j1, (j2 j3) J23; J⟩ = √((2J12 + 1)(2J23 + 1)) W(j1 j2 J j3; J12 J23). Recoupling coefficients are elements of a unitary transformation and their definition is given in the next section. Racah coefficients have more convenient symmetry properties than the recoupling coefficients (but less convenient than the 6-j symbols). Recoupling coefficients Coupling of two angular momenta j1 and j2 is the construction of simultaneous eigenfunctions of J^2 and Jz, where J = j1 + j2, as explained in the article on Clebsch–Gordan coefficients. The result is |(j1 j2) J M⟩ = Σ over m1, m2 of |j1 m1⟩ |j2 m2⟩ ⟨j1 m1 j2 m2 | J M⟩, where |j1 − j2| ≤ J ≤ j1 + j2 and M = m1 + m2. Coupling of three angular momenta j1, j2, and j3 may be done by first coupling j1 and j2 to J12 and next coupling J12 and j3 to total angular momentum J, giving states |((j1 j2) J12, j3) J M⟩. Alternatively, one may first couple j2 and j3 to J23 and next couple j1 and J23 to J, giving states |(j1, (j2 j3) J23) J M⟩. Both coupling schemes result in complete orthonormal bases for the (2j1 + 1)(2j2 + 1)(2j3 + 1)-dimensional space spanned by the products |j1 m1⟩ |j2 m2⟩ |j3 m3⟩. Hence, the two total angular momentum bases are related by a unitary transformation. The matrix elements of this unitary transformation are given by a scalar product and are known as recoupling coefficients. The coefficients are independent of the magnetic quantum number M, and so the recoupling coefficient may be written as ⟨((j1 j2) J12, j3) J | (j1, (j2 j3) J23) J⟩ without reference to M. The independence of M follows readily by writing the equation for M = J and applying the lowering operator to both sides of the equation. The definition of Racah W-coefficients lets us write this final expression as √((2J12 + 1)(2J23 + 1)) W(j1 j2 J j3; J12 J23). Algebra Let Δ(a, b, c) = √((a + b − c)!(a − b + c)!(−a + b + c)!/(a + b + c + 1)!) be the usual triangular factor; the Racah coefficient is then a product of four of these multiplied by a sum over an integer z of signed factorial terms. The sum over z is finite, running over the values for which all of the factorial arguments are non-negative. Relation to Wigner's 6-j symbol Racah's W-coefficients are related to Wigner's 6-j symbols, which have even more convenient symmetry properties, by {j1 j2 J12; j3 J J23} = (−1)^(j1 + j2 + j3 + J) W(j1 j2 J j3; J12 J23). See also Clebsch–Gordan coefficients 3-j symbol 6-j symbol Pandya theorem Notes Further reading External links Rotational symmetry Representation theory of Lie groups
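For numerical work the coefficients need not be coded from scratch; SymPy, for example, provides both the Racah W coefficient and the Wigner 6-j symbol. The sketch below evaluates one W coefficient for three coupled angular momenta and checks the phase relation to the 6-j symbol quoted above, assuming SymPy's conventions; the particular quantum numbers are an arbitrary admissible example.

```python
from sympy import Integer, Rational, simplify
from sympy.physics.wigner import racah, wigner_6j

# Example: j1 = j2 = 1/2 coupled with j3 = 1 to total J = 1,
# via the intermediate values J12 = 1 and J23 = 3/2.
j1, j2, j3 = Rational(1, 2), Rational(1, 2), Integer(1)
J12, J23, J = Integer(1), Rational(3, 2), Integer(1)

w = racah(j1, j2, J, j3, J12, J23)           # W(j1 j2 J j3; J12 J23)
sixj = wigner_6j(j1, j2, J12, j3, J, J23)    # {j1 j2 J12; j3 J J23}

# Check of the phase relation {j1 j2 J12; j3 J J23} = (-1)^(j1+j2+j3+J) W(...)
phase = (-1) ** int(j1 + j2 + j3 + J)
print(w, sixj, simplify(sixj - phase * w) == 0)
```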
Racah W-coefficient
[ "Physics" ]
586
[ "Symmetry", "Rotational symmetry" ]
8,924,801
https://en.wikipedia.org/wiki/Trimethylsilyl%20azide
Trimethylsilyl azide is the organosilicon compound with the formula (CH3)3SiN3. A colorless liquid, it is a reagent in organic chemistry, serving as the equivalent of hydrazoic acid. Preparation Trimethylsilyl azide is commercially available. It may be prepared by the reaction of trimethylsilyl chloride and sodium azide: (CH3)3SiCl + NaN3 → (CH3)3SiN3 + NaCl Reactions The compound hydrolyzes to hydrazoic acid: (CH3)3SiN3 + H2O → (CH3)3SiOH + HN3 The compound adds to ketones and aldehydes to give the siloxy azides and subsequently tetrazoles. It ring-opens epoxides to give azido alcohols. It has been used in the Oseltamivir total synthesis. Safety Trimethylsilyl azide is incompatible with moisture, strong oxidizing agents, and strong acids. Azides are often explosive, as illustrated by their use in air bags. References Azido compounds Reagents for organic chemistry Trimethylsilyl compounds
Trimethylsilyl azide
[ "Chemistry" ]
195
[ "Functional groups", "Trimethylsilyl compounds", "Reagents for organic chemistry" ]
8,924,880
https://en.wikipedia.org/wiki/Secondary%20suite
A secondary suite (also known as an accessory dwelling unit (ADU), in-law apartment, granny flat, granny annex or garden suite) is a self-contained apartment, cottage, or small residential unit located on a property that has a separate main, single-family home, duplex, or other residential unit. In some cases, the ADU or in-law is attached to the principal dwelling or is an entirely separate unit, located above a garage, across a carport, or in the backyard on the same property. Reasons for wanting to add a secondary suite to a property may be to receive additional income, provide social and personal support to a family member, or obtain greater security. Description Background Naming conventions vary by time-period and location but secondary suites can also be referred to as an accessory dwelling unit (ADU), mother-in-law suite, granny flat, coach house, laneway house, Ohana dwelling unit, granny annexe, granny suite, in-law suite, and accessory apartment. The prevalence of secondary suites is also dependent on time and location with varying rates depending on the country, state, or city. Furthermore, regulations on secondary suites can vary widely in different jurisdictions with some allowing them with limited regulation while others ban them entirely through zoning, limit who may live in the units (for example, family members only), or regulate if units can be rented. Spatial relationship to main residence A secondary suite is considered "secondary" or "accessory" to the primary residence on the parcel. It normally has its own entrance, kitchen, bathroom and living area. There are three main types of accessory units: interior, interior with modification, and detached. Examples include: A suite above a rear detached garage (a "garage apartment, garage suite, coachhouse, or Fonzie flat"), A suite above the main floor of a single-detached dwelling, (an "up-and-down duplex") A suite below the main floor of a single-detached dwelling (a "basement suite"). A suite attached to a single-detached dwelling at grade (similar to a "duplex", but that word implies two distinct legal parcels of land with houses that simply share a wall) A suite detached from the principal dwelling (a "garden suite" or "guesthouse" (called a "laneway house" if it faces the back lane)). A granny flat, granny annex, mother-in-law cottage and the like are generic familial names for an ADU. Benefits and drawbacks Benefits Higher density residential areas have many advantages. They require fewer resources for transport, heating and cooling, infrastructure and maintenance. They allow for closer-knit communities by facilitating interaction between neighbors, especially children and teenagers. Creating affordable housing options as secondary suites are typically small, easy to construct, and require no land acquisition. Enabling seniors to "age-in-place" by creating small and affordable units where seniors can downsize in their own neighborhood. Some of the recent popularity of secondary suites in the United States can be attributed to the activities of the American Association of Retired Persons (AARP) and other organizations that support seniors. Supporting diverse and multi-generational households as seniors, young-adults, or other relatives can live on the same property as their families while maintaining independence and privacy. For seniors, this arrangement can improve social life, allow family members to provide care more easily, and make it possible to live in more walkable neighborhoods when they can no longer drive. 
Facilitating homeownership by providing a reliable extra income that can support mortgage payments and home maintenance. Creating sustainable and energy-efficient housing, as smaller and/or attached units require fewer resources. ADUs can be integrated into the scale and character of single-family neighborhoods while also promoting workforce housing in these neighborhoods. Municipal budgets may benefit from new taxable housing that does not require new infrastructure or significant utility upgrades. Drawbacks Linked properties cannot easily be sold separately. In the case of shared ownership, each party may require permission from the other party to make changes to the building. By country Australia In Australia, the term 'granny flat' is often used for a secondary dwelling on a property. The land is not subdivided, and construction requires approval from the council or relevant authority. The approval processes vary between States and Territories, and between councils. This is different from a dual occupancy, where two primary dwellings are developed on one allotment of land, being either attached, semi-detached or detached. In 2018, New South Wales led the construction of new granny flats, while Victoria had the fewest new granny flats constructed. In 2019, the federal government launched a study concerning prefabricated buildings and smaller homes, citing affordable housing, extra space for family members, and support for the construction industry as reasons for the study. The government set aside $2 million for the initial study and then plans to set up an innovation lab to help manufacturers design prefabricated buildings. Canada Secondary suites have existed in Canada since the 19th century, when they took the form of coach houses, servant houses, stables converted to permanent apartments, and small apartments for young people within large houses. Secondary suites became increasingly popular during the economic crisis of 1929 and the housing shortage following WWII. During this period the Canadian government actively supported the creation of secondary suites. However, suburbanization and zoning changes in the 1950s and 60s led to a decrease in secondary suites in Canada. More recently, secondary suites have been increasing in popularity, and many municipalities are reexamining their regulations to support secondary suites. CMHC (government program) The Canada Mortgage and Housing Corporation provides a financial assistance program to help Canadians create affordable housing for low-income seniors and adults with a disability within a secondary suite. The program is called the Residential Rehabilitation Assistance Program (RRAP) – Secondary/Garden Suite. The maximum fully forgivable loan depends on the location of the property: Southern areas of Canada: $24,000/unit Northern areas of Canada: $28,000/unit Far northern areas: $36,000/unit A 25% supplement in assistance is available in remote areas. British Columbia After adopting legislation in 2009 to support secondary suites, Vancouver, British Columbia, has become a leading North American city for their construction. In the city, approximately a third of single-family houses have legally permitted secondary suites, many of which are laneway houses. The Housing Policy Branch of British Columbia's Ministry of Community, Aboriginal and Women's Services published a guide for local governments to implement secondary suite programs called 'Secondary Suites: A Guide For Local Governments'. The current issue is dated September 2005. 
The intent of the guide is to "help local governments develop and implement secondary suite programs". It also highlights good secondary suite practices as well as providing practical information to "elected officials, planners, community groups, homeowners, developers, and others interested in secondary suites". Europe In German speaking countries an interior secondary suite is known as an Einliegerwohnung. In the United Kingdom, "granny flats" are increasing in popularity with one in twenty UK households (5%) having such a space. 7% of householders say they have plans to develop this type of space in the future. 27% of those surveyed were making plans for older relatives, 25% were planning for grown-up children, 24% were planning to rent as holiday lets, and 16% were planning to take in lodgers. In Norway, particularly in the bigger cities, it is quite common to build separate adjoined smaller flats that the owner of the main flat will rent out. In Sweden, a friggebod is a small house or room which can be built without any planning permission on a land lot with a single-family or a duplex house. United States In the United States, secondary suites are generally referred to as accessory dwelling units or "ADUs". Zoning permissions and laws concerning accessory dwelling units can vary widely by state and municipality. Accessory dwelling units were popular in the early 20th century in the United States, but became less common after WWII when a shift to suburban development occurred and many municipalities banned ADUs through zoning regulations. With increases in the price of housing in many cities and suburbs, increased awareness of the disadvantages of low-density car-oriented development patterns, and an increased need to care for aging Americans, many government entities and advocacy groups have supported ADUs. Some critics perceive ADUs to be a threat to the character of single-family residential neighborhoods. Several states have enacted legislation to promote accessory dwelling units. California In California, Government Code Sections 65852.150, 65852.2 & 65852.22 pertain to local regulation of ADUs. SB 1069 and AB 2299 are California bills approved in 2016 and effective 1 January 2017, that limit local government authority to prohibit ADUs in certain cases (and also reduce cost and bureaucracy hurdles to construction). On 1 January 2020, the state of California passed the most lenient ADU laws in the country allowing not one but two types of accessory units, the accessory dwelling unit (ADU) and the junior accessory dwelling unit (JADU). State-exempt ADUs can now be at least , while JADUs are limited to . Other states The states of Vermont and New Hampshire have also adopted a number of bills that promote accessory dwelling units and reduce regulatory barriers to ADU construction. The State of Illinois considered, but did not adopt, HB 4869 which would have required municipalities to permit (and reasonably regulate) accessory dwelling units (ADUs). Several local governments across the United States have enacted ordinances to both permit and promote accessory dwelling units. Some cities have included accessory dwelling units in larger missing middle housing and affordable housing strategies including Seattle, Portland, and Minneapolis. Many other communities have maintained wide-spread single-family zoning but still updated codes to permit accessory dwelling units. Notable examples include large cities such as Los Angeles, CA and Chicago, IL. 
Diverse smaller jurisdictions that permit accessory dwelling units include Lexington, KY, Santa Cruz, CA, and the County of Maui in Hawaii. Honolulu, Hawaii, has a unique form of accessory dwelling unit known as an "Ohana Dwelling Unit". Ohana Dwellings were created as a permitted use in the zoning code in 1981 as a way to encourage the private sector to create more housing units (without government subsidy), preserve green fields (open space) and improve housing affordability. In 2015, Honolulu amended its zoning code to allow ADUs as a sort of Ohana Dwelling, but with fewer restrictions. To prevent creating further complexities for existing Ohana Dwellings, some of which have been condominiumized and owned separately from the main house, Ohana Dwellings remain a permitted use (with different requirements and benefits than ADUs) in the zoning code. ADUs are an important component of Honolulu's Affordable Housing Strategy. See also Bedsit Garage apartment Laneway house Laneway House (Toronto, 1993) Secondary suites in Canada References House types Urban planning
Secondary suite
[ "Engineering" ]
2,245
[ "Urban planning", "Architecture" ]
8,925,452
https://en.wikipedia.org/wiki/6-j%20symbol
Wigner's 6-j symbols were introduced by Eugene Paul Wigner in 1940 and published in 1965. They are defined as a sum over products of four Wigner 3-j symbols; the summation is over all six mi allowed by the selection rules of the 3-j symbols. They are closely related to the Racah W-coefficients, which are used for recoupling three angular momenta, although Wigner 6-j symbols have higher symmetry and therefore provide a more efficient means of storing the recoupling coefficients. Their relationship is given by: Symmetry relations The 6-j symbol is invariant under any permutation of the columns: The 6-j symbol is also invariant if upper and lower arguments are interchanged in any two columns: These equations reflect the 24 symmetry operations of the automorphism group that leave the associated tetrahedral Yutsis graph with 6 edges invariant: mirror operations that exchange two vertices and swap an adjacent pair of edges. The 6-j symbol is zero unless j1, j2, and j3 satisfy triangle conditions, i.e., In combination with the symmetry relation for interchanging upper and lower arguments this shows that triangle conditions must also be satisfied for the triads (j1, j5, j6), (j4, j2, j6), and (j4, j5, j3). Furthermore, the sum of the elements of each triad must be an integer. Therefore, the members of each triad are either all integers or contain one integer and two half-integers. Special case When j6 = 0 the expression for the 6-j symbol is: The triangular delta is equal to 1 when the triad (j1, j2, j3) satisfies the triangle conditions, and zero otherwise. The symmetry relations can be used to find the expression when another j is equal to zero. Orthogonality relation The 6-j symbols satisfy this orthogonality relation: Asymptotics A remarkable formula for the asymptotic behavior of the 6-j symbol was first conjectured by Ponzano and Regge and later proven by Roberts. The asymptotic formula applies when all six quantum numbers j1, ..., j6 are taken to be large and associates to the 6-j symbol the geometry of a tetrahedron. If the 6-j symbol is determined by the quantum numbers j1, ..., j6, the associated tetrahedron has edge lengths Ji = ji + 1/2 (i = 1, ..., 6) and the asymptotic formula is given by: The notation is as follows: each θi is the external dihedral angle about the edge Ji of the associated tetrahedron, and the amplitude factor is expressed in terms of the volume, V, of this tetrahedron. Mathematical interpretation In representation theory, 6-j symbols are matrix coefficients of the associator isomorphism in a tensor category. For example, if we are given three representations Vi, Vj, Vk of a group (or quantum group), one has a natural isomorphism of tensor product representations, induced by coassociativity of the corresponding bialgebra. One of the axioms defining a monoidal category is that associators satisfy a pentagon identity, which is equivalent to the Biedenharn–Elliott identity for 6-j symbols. When a monoidal category is semisimple, we can restrict our attention to irreducible objects, and define multiplicity spaces so that tensor products are decomposed as: where the sum is over all isomorphism classes of irreducible objects. 
Then: The associativity isomorphism induces a vector space isomorphism and the 6-j symbols are defined as the component maps: When the multiplicity spaces have canonical basis elements and dimension at most one (as in the case of SU(2) in the traditional setting), these component maps can be interpreted as numbers, and the 6-j symbols become ordinary matrix coefficients. In abstract terms, the 6-j symbols are precisely the information that is lost when passing from a semisimple monoidal category to its Grothendieck ring, since one can reconstruct a monoidal structure using the associator. For the case of representations of a finite group, it is well known that the character table alone (which determines the underlying abelian category and the Grothendieck ring structure) does not determine a group up to isomorphism, while the symmetric monoidal category structure does, by Tannaka–Krein duality. In particular, the two nonabelian groups of order 8 have equivalent abelian categories of representations and isomorphic Grothendieck rings, but the 6-j symbols of their representation categories are distinct, meaning their representation categories are inequivalent as monoidal categories. Thus, the 6-j symbols give an intermediate level of information that in fact uniquely determines the groups in many cases, such as when the group is of odd order or simple. See also Clebsch–Gordan coefficients 3-j symbol Racah W-coefficient 9-j symbol Representations of classical Lie groups Notes References External links (Gives exact answer) (accurate; C, fortran, python) (fast lookup, accurate; C, fortran) Rotational symmetry Representation theory of Lie groups Quantum mechanics Monoidal categories
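The recoupling coefficients discussed above can be evaluated exactly with a computer algebra system. The following minimal sketch uses SymPy's sympy.physics.wigner module (an assumed tool choice, not one referenced by the article) to compute a 6-j symbol with half-integer arguments and to verify the column-permutation and upper-lower interchange symmetries stated earlier.

```python
# Minimal sketch: evaluate a 6-j symbol exactly with SymPy and check two of
# the symmetry relations described above. Assumes SymPy is installed.
from sympy import Rational
from sympy.physics.wigner import wigner_6j

h = Rational(1, 2)                       # a half-integer angular momentum
j1, j2, j3, j4, j5, j6 = h, h, 1, h, h, 1

value = wigner_6j(j1, j2, j3, j4, j5, j6)
print("{1/2 1/2 1; 1/2 1/2 1} =", value)

# Invariance under a permutation of the columns (here: swap columns 1 and 2).
assert abs(float(wigner_6j(j2, j1, j3, j5, j4, j6)) - float(value)) < 1e-12

# Invariance under interchanging upper and lower arguments in two columns
# (here: columns 1 and 2).
assert abs(float(wigner_6j(j4, j5, j3, j1, j2, j6)) - float(value)) < 1e-12

print("Symmetry checks passed.")
```

Because SymPy returns exact symbolic values, small cases like this are convenient for confirming triangle conditions and symmetry relations before switching to the faster floating-point libraries listed under external links.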
6-j symbol
[ "Physics", "Mathematics" ]
1,113
[ "Mathematical structures", "Theoretical physics", "Monoidal categories", "Quantum mechanics", "Category theory", "Symmetry", "Rotational symmetry" ]
8,925,785
https://en.wikipedia.org/wiki/HD%20Hyundai
HD Hyundai () is one of the largest South Korean conglomerates engaged in shipbuilding, heavy equipment, machinery, and the petroleum industry. HD Hyundai started its shipbuilding business in a small village in Ulsan, South Korea, in 1972 and grew into a global heavy industries company. It is a major supplier in the heavy industries and energy sector, ranging from shipbuilding and marine engineering to oil refining, petrochemicals, and smart energy management businesses. HD Hyundai rebranded its name of Hyundai Heavy Industries Group (HHI Group) to 'HD Hyundai' in 2022 to mark its 50th anniversary. Businesses HD Hyundai operates three core businesses - shipbuilding, heavy equipment, and energy - through HD Korea Shipbuilding & Offshore Engineering, HD Hyundai XiteSolution, and HD Hyundai Oilbank. HD Korea Shipbuilding & Offshore Engineering is a sub-holding company that controls the group's shipbuilding companies, including HD Hyundai Heavy Industries, HD Hyundai Samho, and HD Hyundai Mipo. HD Hyundai XiteSolution is another sub-holding company that oversees heavy equipment business, having HD Hyundai Infracore and HD Hyundai Construction Equipment as subsidiaries. HD Hyundai Oilbank is one of the four major oil refiners in South Korea, along with SK Energy, GS Caltex, and S-Oil. Affiliates The subsidiaries of HD Hyundai Group are as follows: Marine HD Korea Shipbuilding & Offshore Engineering HD Hyundai Heavy Industries HD Hyundai Mipo HD Hyundai Samho HD Hyundai Marine Solution HD Hyundai Engineering & Technology Avikus Energy HD Hyundai Oilbank HD Hyundai Chemical HD Hyundai & Shell Base Oil HD Hyundai OCI HD Hyundai Cosmo HD Hyundai Electric HD Hyundai Energy Solutions Industrial HD Hyundai XiteSolution HD Hyundai Construction Equipment HD Hyundai Infracore HD Hyundai Robotics Support and Service Ulsan HD Football Club Hotel SEAMARQ See also Asan Medical Center Munhwa Ilbo Ulsan HD FC References External links Conglomerate companies of South Korea Chaebol Hyundai Engine manufacturers of South Korea Automotive transmission makers Forklift truck manufacturers Truck manufacturers of South Korea Electrical generation engine manufacturers Gas engine manufacturers Diesel engine manufacturers Marine engine manufacturers Photovoltaics manufacturers Electrical equipment manufacturers Electrical engineering companies of South Korea Electric transformer manufacturers Construction equipment manufacturers of South Korea Companies in the KOSPI 200
HD Hyundai
[ "Engineering" ]
450
[ "Electrical engineering organizations", "Photovoltaics manufacturers", "Engineering companies", "Electrical equipment manufacturers" ]
8,926,228
https://en.wikipedia.org/wiki/Artelinic%20acid
Artelinic acid (or its salt, artelinate) is an experimental drug that is being investigated as a treatment for malaria. It is a semi-synthetic derivative of the natural compound artemisinin. Artelinic acid has a lower rate of neurotoxicity than the related artemisinin derivatives arteether and artemether, but is three times more toxic than artesunate. At present, artelinic acid seems unlikely to enter routine clinical use, because it offers no clear benefits over the artemisinins already available (artesunate and artemether). Artelinic acid has not yet been evaluated for use in humans. References Antimalarial agents Benzoic acids Organic peroxides Trioxanes Heterocyclic compounds with 3 rings
Artelinic acid
[ "Chemistry" ]
162
[ "Organic compounds", "Organic peroxides" ]
8,926,812
https://en.wikipedia.org/wiki/3form
3form Free Knowledge Exchange is one of the earliest examples of human-based computation and human-based genetic algorithm. It uses both human-based selection and three types of human-based innovation (contributing new content, mutation, and recombination), in order to implement collaborative problem-solving between humans. See also LinkedIn Answers Human-based Genetic Algorithm References The Kaieteur Institute for Knowledge Management (2001), Categories of digital knowledge exchanges online Kosorukoff, A (2001), Human-based Genetic Algorithm. IEEE Transactions on Systems, Man, and Cybernetics, SMC-2001, 3464-3469 Hideyuki Takagi (2001), Interactive Evolutionary Computation: Fusion of the Capabilities of EC Optimization and Human Evaluation, Proceedings of the IEEE, vol.89, no. 9, pp. 1275–1296 Kosorukoff, A. & Goldberg, D. E. (2001) Genetic algorithms for social innovation and creativity (Illigal report No 2001005). Urbana, IL: University of Illinois at Urbana-Champaign online Kosorukoff, A, Goldberg D. E. (2002), Genetic algorithm as a form of organization, Proceedings of Genetic and Evolutionary Computation Conference, GECCO-2002, pp 965–972 Ajwani, D et al. (Eds) Fast Track to The Social Web, Digit magazine, August 2007 p. 116 online Gloor, P et al. (2008) MIT Handbook of collective intelligence, Examples of collective intelligence online Javadi, E.; Gebauer, J. "Collaborative Knowledge Creation and Problem Solving: A Systems Design Perspective," System Sciences, 2009. HICSS '09. 42nd Hawaii International Conference on, vol., no., pp. 1–10, 5-8 Jan. 2009 doi: 10.1109/HICSS.2009.111 External links 3form website Knowledge markets Human-based computation
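As a rough illustration of the human-based genetic algorithm described above, the sketch below shows an evolutionary loop in which the innovation step (new content, mutation, or recombination) and the selection step are both delegated to people, while the program only bookkeeps the population. The function names and the console-prompt interface are hypothetical; they are not part of 3form or of the cited papers.

```python
# Hypothetical sketch of a human-based genetic algorithm (HBGA) loop:
# people supply variation and selection; the program only manages the population.
import random

def ask_human_for_variation(population):
    """Ask a contributor to add a new idea, or to mutate/recombine existing ones (stub)."""
    return input(f"Add, edit, or combine one of {population}: ").strip()

def ask_human_to_select(a, b):
    """Ask a contributor which of two candidate solutions is better (stub)."""
    return a if input(f"Which is better? 1) {a}  2) {b} : ").strip() == "1" else b

def hbga(seed_ideas, rounds=3):
    population = list(seed_ideas)
    for _ in range(rounds):
        population.append(ask_human_for_variation(population))   # human innovation
        a, b = random.sample(population, 2)
        winner = ask_human_to_select(a, b)                        # human selection
        population.remove(b if winner == a else a)
    return population

if __name__ == "__main__":
    print(hbga(["idea A", "idea B", "idea C"]))
```

In a real system such as the one the article describes, the prompts would be web forms answered by many contributors rather than a single console session.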
3form
[ "Technology" ]
400
[ "Information systems", "Human-based computation" ]
8,927,344
https://en.wikipedia.org/wiki/9-j%20symbol
In physics, Wigner's 9-j symbols were introduced by Eugene Paul Wigner in 1937. They are related to recoupling coefficients in quantum mechanics involving four angular momenta: Recoupling of four angular momentum vectors Coupling of two angular momenta and is the construction of simultaneous eigenfunctions of and , where , as explained in the article on Clebsch–Gordan coefficients. Coupling of three angular momenta can be done in several ways, as explained in the article on Racah W-coefficients. Using the notation and techniques of that article, total angular momentum states that arise from coupling the angular momentum vectors , , , and may be written as Alternatively, one may first couple and to and and to , before coupling and to : Both sets of functions provide a complete, orthonormal basis for the space with dimension spanned by Hence, the transformation between the two sets is unitary and the matrix elements of the transformation are given by the scalar products of the functions. As in the case of the Racah W-coefficients the matrix elements are independent of the total angular momentum projection quantum number (): Symmetry relations A 9-j symbol is invariant under reflection about either diagonal as well as even permutations of its rows or columns: An odd permutation of rows or columns yields a phase factor , where For example: Reduction to 6j symbols The 9-j symbols can be calculated as sums over triple-products of 6-j symbols where the summation extends over all admitted by the triangle conditions in the factors: . Special case When the 9-j symbol is proportional to a 6-j symbol: Orthogonality relation The 9-j symbols satisfy this orthogonality relation: The triangular delta is equal to 1 when the triad (j1, j2, j3) satisfies the triangle conditions, and zero otherwise. 3n-j symbols The 6-j symbol is the first representative, , of -j symbols that are defined as sums of products of of Wigner's 3-jm coefficients. The sums are over all combinations of that the -j coefficients admit, i.e., which lead to non-vanishing contributions. If each 3-jm factor is represented by a vertex and each j by an edge, these -j symbols can be mapped on certain 3-regular graphs with edges and nodes. The 6-j symbol is associated with the K4 graph on 4 vertices, the 9-j symbol with the utility graph on 6 vertices (K3,3), and the two distinct (non-isomorphic) 12-j symbols with the Q3 and Wagner graphs on 8 vertices. Symmetry relations are generally representative of the automorphism group of these graphs. See also Clebsch–Gordan coefficients 3-j symbol, also called 3-jm symbol Racah W-coefficient 6-j symbol References External links (Gives answer in exact fractions) (Answer as floating point numbers) (accurate; C, fortran, python) (fast lookup, accurate; C, fortran) Rotational symmetry Representation theory of Lie groups Quantum mechanics
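As with the 6-j symbols, these coefficients can be computed exactly in a computer algebra system; the sketch below uses SymPy's sympy.physics.wigner.wigner_9j (an assumed tool choice, not one named in the article) to evaluate a 9-j symbol and confirm its invariance under reflection about the main diagonal and under a cyclic (even) permutation of the rows.

```python
# Minimal sketch: evaluate a 9-j symbol with SymPy and check two of the
# symmetry relations described above. Assumes SymPy is installed.
from sympy.physics.wigner import wigner_9j

# Integer arguments whose rows and columns all satisfy the triangle conditions.
args = (1, 1, 1,
        1, 1, 1,
        1, 1, 2)

value = wigner_9j(*args)
print("9-j value:", value)

# Reflection about the main diagonal (transpose of the 3 x 3 array of j's).
transposed = (args[0], args[3], args[6],
              args[1], args[4], args[7],
              args[2], args[5], args[8])
assert abs(float(wigner_9j(*transposed)) - float(value)) < 1e-12

# A cyclic permutation of the rows is an even permutation.
cycled = args[3:9] + args[0:3]
assert abs(float(wigner_9j(*cycled)) - float(value)) < 1e-12

print("Symmetry checks passed.")
```

The same module also exposes wigner_3j, wigner_6j and racah, so the reduction of a 9-j symbol to sums of triple products of 6-j symbols can be checked numerically on small cases.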
9-j symbol
[ "Physics" ]
638
[ "Theoretical physics", "Quantum mechanics", "Symmetry", "Rotational symmetry" ]
8,927,420
https://en.wikipedia.org/wiki/EDDS
Ethylenediamine-N,N'-disuccinic acid (EDDS) is an aminopolycarboxylic acid. It is a colourless solid that is used as a chelating agent and may offer a biodegradable alternative to EDTA, which is currently used on a large scale in numerous applications. Structure and properties EDDS has two chiral centers and as such three stereoisomers. These are the enantiomeric (R,R) and (S,S) isomers and the achiral meso (R,S) isomer. As a biodegradable replacement for EDTA, only the (S,S) stereoisomer is of interest. The (R,S) and (R,R) stereoisomers are less biodegradable, whereas the (S,S) stereoisomer has been shown to be very effectively biodegraded even in highly polluted soils. Synthesis EDDS was first synthesized from maleic acid and ethylenediamine. Some microorganisms have been manipulated for industrial-scale synthesis of (S,S)-EDDS from ethylenediamine and fumaric acid or maleic acid, which proceeds as follows: From aspartic acid (S,S)-EDDS is produced stereospecifically by the alkylation of ethylene dibromide with L-aspartic acid. Racemic EDDS is produced by the reaction of ethylenediamine with fumaric acid or maleic acid. Coordination chemistry In comparing the effectiveness of (S,S)-EDDS versus EDTA as chelating agents for iron(III): because of the lower stability of [Fe((S,S)-EDDS)]−, its useful range is narrower, roughly 3 < pH < 9 for (S,S)-EDDS compared with 2 < pH < 11 for EDTA. However, this range is sufficient for most applications. Another comparison that can be made between (S,S)-EDDS and EDTA is the structure of the chelated complex. EDTA's six donor sites form five five-membered chelate rings around the metal ion, four NC2OFe rings and one C2N2Fe ring. The C2N2Fe ring and two of the NC2OFe rings define a plane, and two NC2OFe rings are perpendicular to the plane that contains the C2-symmetry axis. The five-membered rings are slightly strained. EDDS's six donor sites form both five- and six-membered chelate rings around the metal ion: two NC2OFe rings, two NC3OFe rings, and one C2N2Fe ring. Studies of the crystal structure of the Fe[(S,S)-EDDS]− complex show that the two six-membered NC3OFe rings project out of the plane of the complex, reducing the equatorial ring strain that exists in the Fe[EDTA]− complex. The complex also has C2 symmetry. Uses (S,S)-EDDS is a biodegradable chelating agent that offers an alternative to EDTA, of which 80 million kilograms are produced annually. Under natural conditions, EDTA has been found to convert to ethylenediaminetriacetic acid and then cyclize to the diketopiperazine, which accumulates in the environment as a persistent organic pollutant. (S,S)-EDDS was developed commercially as a biodegradable chelator and stabilizing agent in detergent and cosmetic formulations. When EDDS is applied in excess in chemical-enhanced soil remediation (e.g., when applied for ex-situ soil washing), higher extraction efficiency for heavy metals can be achieved and the amount extracted depends less on the EDDS dosage. On the other hand, during soil remediation which involves continuous flushing, metal extraction is often limited by the amount of EDDS. Under EDDS deficiency, initial unselective extraction of heavy metals was observed, followed by heavy metal exchange and re-adsorption of the heavy metals that have lower stability constants with EDDS. External links Sigma Aldrich page on EDDS, containing a link to an MSDS References Chelating agents Amino acids
EDDS
[ "Chemistry" ]
895
[ "Amino acids", "Biomolecules by chemical classification", "Chelating agents", "Process chemicals" ]
8,928,247
https://en.wikipedia.org/wiki/Cladoptosis
Cladoptosis (from Ancient Greek klados, "branch", and ptosis, "falling"; sometimes pronounced with the p silent) is the regular shedding of branches. It is the counterpart for branches of the familiar process of regular leaf shedding by deciduous trees. As in leaf shedding, an abscission layer forms, and the branch is shed cleanly. Functions of cladoptosis Cladoptosis is thought to have three possible functions: self-pruning (i.e. programmed plant senescence), drought response (characteristic of xerophytes) and liana defence. Self-pruning is the shedding of branches that are shaded or diseased, which are potentially a drain on the resources of the tree. Drought response is similar to the leaf-fall response of drought-deciduous trees; however, leafy shoots are shed in place of leaves. Western red cedar (Thuja plicata) provides an example, as do other members of the family Cupressaceae. In tropical forests, infestation of tree canopies by woody climbers or lianas can be a serious problem. Cladoptosis – by giving a clean bole with no support for climbing plants – may be an adaptation against lianas, as in the case of Castilla. See also Abscission Marcescence: the opposite phenomenon – withered branches (or leaves) stay on References Further reading External links Cladoptosis in Thuja - UBC Botanical Garden Plant physiology Plant anatomy
Cladoptosis
[ "Biology" ]
305
[ "Plant physiology", "Plants" ]
8,928,339
https://en.wikipedia.org/wiki/Life%20annuity
A life annuity is an annuity, or series of payments at fixed intervals, paid while the purchaser (or annuitant) is alive. The majority of life annuities are insurance products sold or issued by life insurance companies however substantial case law indicates that annuity products are not necessarily insurance products. Annuities can be purchased to provide an income during retirement, or originate from a structured settlement of a personal injury lawsuit. Life annuities may be sold in exchange for the immediate payment of a lump sum (single-payment annuity) or a series of regular payments (flexible payment annuity), prior to the onset of the annuity. The payment stream from the issuer to the annuitant has an unknown duration based principally upon the date of death of the annuitant. At this point the contract will terminate and the remainder of the fund accumulated is forfeited unless there are other annuitants or beneficiaries in the contract. Thus a life annuity is a form of longevity insurance, where the uncertainty of an individual's lifespan is transferred from the individual to the insurer, which reduces its own uncertainty by pooling many clients. History The instrument's evolution has been long and continues as part of actuarial science. Ulpian is credited with generating an actuarial life annuity table between AD 211 and 222. Medieval German and Dutch cities and monasteries raised money by the sale of life annuities, and it was recognized that pricing them was difficult. The early practice for selling this instrument did not consider the age of the nominee, thereby raising interesting concerns. These concerns got the attention of several prominent mathematicians over the years, such as Huygens, Bernoulli, de Moivre and others: even Gauss and Laplace had an interest in matters pertaining to this instrument. It seems that Johan de Witt was the first writer to compute the value of a life annuity as the sum of expected discounted future payments, while Halley used the first mortality table drawn from experience for that calculation. Meanwhile, the Paris Hôtel-Dieu offered some fairly priced annuities that roughly fit the Deparcieux table discounted at 5%. Continuing practice is an everyday occurrence with well-known theory founded on robust mathematics, as witnessed by the hundreds of millions worldwide who receive regular remuneration via pension or the like. The modern approach to resolving the difficult problems related to a larger scope for this instrument applies many advanced mathematical approaches, such as stochastic methods, game theory, and other tools of financial mathematics. Types Defined benefit pension plans Defined benefit pension plans are a form of life annuity typically provided by employers or governments (such as Social Security in the United States). The size of payouts is usually determined based on the employee's years of service, age and salary. Individual annuity Individual annuities are insurance products marketed to individual consumers. With the complex selection of options available, consumers can find it difficult to decide rationally on the right type of annuity product for their circumstances. 
Deferred annuity There are two phases for a deferred annuity: the accumulation or deferral phase in which the customer deposits (or pays premiums) and accumulates money into an account; the distribution or annuitization phase in which the insurance company makes income payments until the death of the annuitants named in the contract Deferred annuities grow capital by investment in the accumulation phase (or deferral phase) and make payments during the distribution phase. A single premium deferred annuity (SPDA) allows a single deposit or premium at the issue of the annuity with only investment growth during the accumulation phase. A flexible premium deferred annuity (FPDA) allows additional payments or premiums following the initial premium during the accumulation phase. The phases of an annuity can be combined in the fusion of a retirement savings and retirement payment plan: the annuitant makes regular contributions to the annuity until a certain date and then receives regular payments from it until death. Sometimes there is a life insurance component added so that if the annuitant dies before annuity payments begin, a beneficiary gets either a lump sum or annuity payments. Immediate annuity An annuity with only a distribution phase is an immediate annuity, single premium immediate annuity (SPIA), payout annuity, or income annuity. Such a contract is purchased with a single payment and makes payments until the death of the annuitant(s). Fixed and variable annuity Annuities that make payments in fixed amounts or in amounts that increase by a fixed percentage are called fixed annuities. Variable annuities, by contrast, pay amounts that vary according to the investment performance of a specified set of investments, typically bond and equity mutual funds. Variable annuities are used for many different objectives. One common objective is deferral of the recognition of taxable gains. Money deposited in a variable annuity grows on a tax-deferred basis, so that taxes on investment gains are not due until a withdrawal is made. Variable annuities offer a variety of funds ("subaccounts") from various money managers. This gives investors the ability to move between subaccounts without incurring additional fees or sales charges. Variable annuities have been criticized for their high commissions, contingent deferred sale charges, tax deferred growth, high taxes on profits, and high annual costs. Sales abuses became so prevalent that in November 2007, the Securities and Exchange Commission approved FINRA Rule 2821 requiring brokers to determine specific suitability criteria when recommending the purchase or exchange (but not the surrender) of deferred variable annuities. Guaranteed annuity A pure life annuity ceases to make payments on the death of the annuitant. A guaranteed annuity or life and certain annuity, makes payments for at least a certain number of years (the "period certain"); if the annuitant outlives the specified period certain, annuity payments then continue until the annuitant's death, and if the annuitant dies before the expiration of the period certain, the annuitant's estate or beneficiary is entitled to collect the remaining payments certain. The tradeoff between the pure life annuity and the life-with-period-certain annuity is that in exchange for the reduced risk of loss, the annuity payments for the latter will be smaller. Joint annuity Joint-life and joint-survivor annuities make payments until the death of one or both of the annuitants respectively. 
For example, an annuity may be structured to make payments to a married couple, such payments ceasing on the death of the second spouse. In joint-survivor annuities, sometimes the instrument reduces the payments to the second annuitant after death of the first. Impaired life annuity There has also been a significant growth in the development of enhanced or impaired annuities. These involve improving the terms offered due to a medical diagnosis which is severe enough to reduce life expectancy. A process of medical underwriting is involved and the range of qualifying conditions has increased substantially in recent years. Both conventional annuities and Purchase Life Annuities can qualify for impaired terms. Valuation Valuation is the calculation of economic value or worth. Valuation of an annuity is calculated as the actuarial present value of the annuity, which is dependent on the probability of the annuitant living to each future payment period, as well as the interest rate and timing of future payments. Life tables provide the probabilities of survival necessary for such calculations. Annuities by region United States With a "single premium" or "immediate" annuity, the "annuitant" pays for the annuity with a single lump sum. The annuity starts making regular payments to the annuitant within a year. A common use of a single premium annuity is as a destination for roll-over retirement savings upon retirement. In such a case, a retiree withdraws all of the money he/she has saved during working life in, for example, an Individual Retirement Account (IRA), and uses some or all of the money to buy an annuity whose payments will replace the retiree's wage payments for the rest of his/her life. The advantage of such an annuity is that the annuitant has a guaranteed income for life, whereas if the retiree were instead to withdraw money regularly from the retirement account (income drawdown), he/she might run out of money before death, or alternatively not have as much to spend while alive as could have been possible with an annuity purchase. Another common use for an income annuity is to pay recurring expenses, such as assisted living expenses, mortgage or insurance premiums. The disadvantage of such an annuity is that the election is irrevocable and, because of inflation, a guaranteed income for life is not the same thing as guaranteeing a comfortable income for life. United Kingdom In the United Kingdom conversion of pension income into an annuity was compulsory by the age of 75 until new legislation was introduced by the coalition government in April 2011. The new rules allow individuals to delay the decision to purchase an annuity indefinitely. The rules (known as the 'pension freedoms') also mean that from the age of 55, people with money in a 'money purchase' or 'defined contribution' pension scheme have more choice and flexibility in accessing their pension savings. They can now choose from a variety of products, including lifetime annuities, fixed term annuities and flexi-access drawdown, or they can take all of their pension savings as cash. In the UK there are a large market of annuities of different types. The most common are those where the source of the funds required to buy the annuity is from a pension scheme. Examples of these types of annuity, often referred to as a Compulsory Purchase Annuity, are conventional annuities, with profit annuities and unit linked, or "third way" annuities. Annuities purchased from savings (i.e. 
not from a pension scheme) are referred to as Purchase Life Annuities and Immediate Vesting Annuities. In October 2009, the International Longevity Centre-UK published a report on Purchased Life Annuities (Time to Annuitise). In the UK it has become common for life companies to base their annuity rates on an individual's location. Legal & General were the first company to do this in 2007. Canada In Canada the most common type of annuity is the life annuity, which is normally purchased by persons at their retirement age with tax-sheltered funds or with savings funds. The monthly payments from annuities with tax-sheltered funds are fully taxable when withdrawn as neither the capital or return thereon has been taxed in any way. Conversely income from annuities purchased with savings funds is divided between the return of capital and interest earned, with only the latter being taxable. An annuity can be a single life annuity or a joint life annuity where the payments are guaranteed until the death of the second annuitant. It is regarded as ideal for retirees as it is the only income of any financial product that is fully guaranteed. In addition, while the monthly payments are for the upkeep and enjoyment of the annuitants, any guaranteed payments on non-registered annuities are continued to beneficiaries after the second death. This way the balance of the guaranteed payments supports family members and becomes a two-generation income. Internationally Some countries developed more options of value for this type of instrument than others. However, a 2005 study reported that some of the risks related to longevity are poorly managed "practically everywhere" due to governments backing away from defined benefit promises and insurance companies being reluctant to sell genuine life annuities because of fears that life expectancy will go up. Longevity insurance is now becoming more common in the UK and the U.S. (see Future of annuities, below) while Chile, in comparison to the U.S., has had a very large life annuity market for 20 years. Future of annuities It is expected that the aging of the baby boomer generation in the US will increase the demand for this type of instrument and for it to be optimized for the annuitant. This growing market will drive improvements necessitating more research and development of instruments and increase insight into the mechanics involved on the part of the buying public. An example of increased scrutiny and discussion is that related to privatization of part of the U.S. Social Security Trust Fund. In late 2010, discussions related to cutting Federal taxes raised anew the following concern: how much would an annuity cost a retiree if he or she had to replace his or her Social Security income? Assuming that the average benefit from Social Security is $14,000 per year, the replacement cost would be about $250,000 for a 66-year-old individual. The figures are based upon the individual receiving an inflation-adjusted stream that would pay for life and be insured. European Court of Justice ruling In March 2011 a European Court of Justice ruling was made that prevents annuity providers from setting different premiums for men and women. Annuity rates for men are generally lower than those for women because men, on average, have shorter life expectancies. The change means that either annuity rates for men will rise, annuity rates for women will fall, or a combination of both. In the UK any annuities that are taken out after 21 December 2012 will have to comply with the ruling. 
See also Annuity (European financial arrangements)#Life annuity Certificate of life Tontine Life estate References External links Math and spreadsheet for purchase and deferral decision Variable Annuities Interview- Legal Perspective Retirement Annuities Actuarial science
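The valuation described above, an actuarial present value built from survival probabilities and a discount rate, can be illustrated with a short calculation. The sketch below uses made-up one-year survival probabilities and an assumed flat interest rate; it is a toy model, not a method or table taken from the article.

```python
# Minimal sketch of the actuarial present value (APV) of a whole life annuity
# paying 1 per year in arrears: APV = sum over t of v**t * tpx,
# where v = 1/(1 + i) and tpx is the probability of surviving t years.
# The survival probabilities below are illustrative, not from a real life table.

def annuity_apv(one_year_survival_probs, interest_rate):
    v = 1.0 / (1.0 + interest_rate)
    apv = 0.0
    alive = 1.0
    for t, p in enumerate(one_year_survival_probs, start=1):
        alive *= p                 # tpx: probability of still being alive at time t
        apv += (v ** t) * alive
    return apv

# Hypothetical annuitant modelled over 20 years, with survival worsening each year.
survival = [0.99 - 0.01 * k for k in range(20)]
apv_factor = annuity_apv(survival, interest_rate=0.03)

annual_payment = 14_000            # e.g. an income stream to be replaced
print(f"APV factor: {apv_factor:.2f}")
print(f"Indicative single premium: ${annual_payment * apv_factor:,.0f}")
```

A real insurer would use a published mortality table, add expense and profit loadings, and possibly index the payments for inflation, so quoted premiums differ substantially from a toy figure like this.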
Life annuity
[ "Mathematics" ]
2,867
[ "Applied mathematics", "Actuarial science" ]
8,928,525
https://en.wikipedia.org/wiki/LocalTalk-to-Ethernet%20bridge
A LocalTalk-to-Ethernet bridge is a network bridge that joins AppleTalk networks running on two different kinds of link – LocalTalk, the lower layers AppleTalk originally used, and Ethernet. This was an important class of products in the late 1980s and early 1990s, before Ethernet support became universal on the Mac lineup. Some LocalTalk–Ethernet bridges carried only AppleTalk traffic, while others were also able to carry other protocols. LocalTalk only carried AppleTalk traffic directly, but MacIP was a protocol that tunneled Internet Protocol (IP) traffic in AppleTalk. A LocalTalk–Ethernet bridge supporting MacIP allowed e.g. any Macintosh without an Ethernet port to be part of an IP network, such as the Internet. Examples Hardware devices: Asante: AsanteTalk Cayman Systems: GatorBox Compatible Systems: Ether Route/TCP, Ether Route II, RISC Router 3000E Dayna Communications: EtherPrint, EtherPrint Plus, EtherPrint-T, EtherPrint-T Plus Farallon: EtherPrint, EtherWave LocalTalk Adapter, InterRoute/5, StarRouter, EtherMac iPrint Adapter LT FOCUS Enhancements EtherLAN PRINT Hayes Inter-bridge Kinetics: FastPath - in later years, available from Shiva Networks Sonic Systems: microPrint, microBridge TCP/IP Transware: EtherWay Tribe Computer Works: TribeStar Webster Computer Corporation: MultiGate, MultiPort Gateway, MultiPort/LT Software in MacTCP era (<1995): Apple IP Gateway from Apple Computer SuperBridge/TCP from Sonic Systems Software in Open Transport era (>1995): Internet Gateway from Vicomsoft IPNetRouter from Sustainable Softworks LocalTalk Bridge from Apple Computer Other Software macipgw Netatalk References External links Oxford University resource regarding DDP-IP Gateways LocalTalk Bridge v2.1 download (CNET) Sustainable Softworks IPNetRouter Webster MultiPort/LT guide Webster Computer Corporation Multiport Gateway - Software Version 4.7 & documentation Asante AsantéTalk (Product Info) Asante AsantéTalk (Online Shop) Sonic MicroPrint Manual Sonic MicroPrint Software Usenet post regarding successful use of the AsanteTalk bridge with an Apple IIgs macipgw project on Sourceforge Apple Inc. hardware Network protocols Networking hardware Physical layer protocols
LocalTalk-to-Ethernet bridge
[ "Engineering" ]
478
[ "Computer networks engineering", "Networking hardware" ]
8,929,654
https://en.wikipedia.org/wiki/Print%20%28magazine%29
Print is an American design and culture website that began as Print, A Quarterly Journal of the Graphic Arts, in 1940, and continued publishing a physical edition through the end of 2017 as Print. As a printed publication, Print was a general-interest magazine, written by cultural reporters and critics who looked at design in its social, political, and historical contexts, from newspapers and book covers to Web-based motion graphics, from corporate branding to indie-rock posters. During its run, Print won five National Magazine Awards and a number of Folio: Eddies, including Best Full Issue in its final year. Print ceased publication in 2017, with a promise to focus the brand on "a robust and thriving online community." Its publisher, F+W Media, declared bankruptcy in 2019, and a group of independent partners subsequently purchased PRINT from the company that arose out of F+W, Peak Media Properties. Founding The journal was founded by William Edwin Rudge to demonstrate “the far reaching importance of the graphic arts” including art prints, commercial printing, wallpaper, etc. Contents were eclectic covering typography, book making, book printing, fine prints as well as the trade journal aspects of printing candy bar wrappers. Initially the publication included original prints such as the frontispiece for Vol 1, #1 (Jun 1940) a two color woodcut by Hans Alexander Mueller and Vol 1, #3 (December 1940) a black and white wood engraving by Paul Landacre. By Volume 8 (1953) the focus of the periodical had shifted to a trade journal. Name changes Vol 1, #1 (Jun 1940) Print: A Quarterly Journal of the Graphic Arts Vol 3, #2 (Summer 1942) combined with The Printing Art. An Illustrated Monthly Magazine of the Art of Printing and of the Allied Arts but continued under Print: A Quarterly Journal of the Graphic Arts Until Vol 7, #1 (Aug 1951) Print: combining: Print, A Quarterly Journal of Graphic Arts, Vol. VII, Number 1 and The Print Collector's Quarterly, Volume XXX, Number 4. Vol 7, #2 (Jan 1952) Print, The Magazine of the Graphic Arts - until Vol 9, #2 (Oct/Nov 1954) Print - until Vol 11, #4 (Jan/Feb 1958) Print, The Magazine of Visual Communication - until Vol 12, #1 (July/Aug 1958) Print, America's Graphic Design Magazine at least until May/June 2005 Vol 59, #3. References Bimonthly magazines published in the United States Communication design Defunct magazines published in the United States Design magazines Graphic design Magazines established in 1940 Magazines disestablished in 2017 Magazines published in Austin, Texas Quarterly magazines published in the United States Visual arts magazines published in the United States
Print (magazine)
[ "Engineering" ]
562
[ "Design magazines", "Design", "Communication design" ]
8,929,820
https://en.wikipedia.org/wiki/First%20International%20Computer
First International Computer, Inc. (FIC; ) is a Taiwanese original equipment manufacturer and system integrator for automotive electronics and smart building controls. FIC provides design consultancy and supply chain management services for automotive electronic suppliers worldwide. History Founded in 1979 by Dr. Ming-J Chien in Taipei, Taiwan, FIC was a well-known computer and component manufacturer worldwide from 1979 to 2010. In 2004, Mr. Leo Chien, the son of Dr. Ming-J Chien, joined the group, and he became COO of FIC in 2008. In 2011, drawing on his analysis of foreseeable market demand, he began to lead the group toward a new line of business, gradually moving the team in the direction of automotive electronic design and manufacturing. Over the following years he integrated the group's relevant technical resources to support the automotive electronic design business more comprehensively, and he became CEO of FIC in 2016. FIC is publicly listed on the Taiwan Stock Exchange (TSE 3701). 1979: Charlene Wang and Ming Chien found the company as a sales agent for mainframe and microcomputers. 1983: The company begins assembling its first PC systems under the Leo brand. 1987: The company enters motherboard manufacturing with a large-scale production facility in Hsien-Tien. 1989: First International begins assembling PCs with Intel processors. 1991: U.S. and European subsidiaries are opened and production of the first in-house personal computer design begins. 1994: A configuration plant is opened in the Netherlands. 1996: A manufacturing and configuration plant is opened in Austin, Texas. 1997: A plant is opened in the Czech Republic. 1998: A plant is opened in Brazil. 1999: A large-scale production facility is opened in Guangzhou, in mainland China. 2002: A new manufacturing headquarters is set up in China. 2003: Created the first AIO PC and became the e-book OEM partner of Panasonic. 2004: Transformed into a holding company, known as FICG. 2005: Became the notebook (NB) ODM partner of Fujitsu Siemens. 2007: Launched a 7” UMPC. 2008: Obtained value-added notebook orders from Fujitsu. 2010: Announced a reseller agreement with Tridium to provide solutions in Green House & Environmental Controls. 2011: Focused on the automotive electronic design business and spun off the IPC business unit into a new company, Ubiqconn Technology. 2012: Signed a Letter of Intent on healthcare business with NTT DATA Corporation. 2014: Ubiqconn is selected as in-vehicle system provider for the 2014 FIFA World Cup in Brazil. 2016: Expanded its factory-installed products in automotive solutions. 2021: FIC's Green System passed Taipower's cloud DNP3.0 certification. Service Categories First International Computer services include: DMS (Design Manufacturing Service), automotive electronic design and supply chain management, and smart building & traffic IoT system integration. For automotive electronics design solutions, the design services include AR HUD design, digital instrument cluster design, OBDII design, ADAS design, car ECU design, infotainment display design, telematics design, BMS design and smart fleet management. Company Perspectives As Computer, Communication and Consumer Electronics (3C) markets have converged, FIC has adapted and developed to become a respected producer of 3C-related products. Many new and innovative products have joined the FIC range and the company is no longer simply known for its award-winning range of motherboards. 
In 2011, FIC started to focus its core business on automotive electronic design, smart building and traffic automation systems, and green energy system integration. In recent years, FIC has become a DMS (Design Manufacturing Service)/ODM solution provider for global Tier 1 and Tier 2 automotive suppliers. See also List of companies of Taiwan 1980 establishments in Taiwan Computer companies of Taiwan Computer hardware companies Companies established in 1980 Electronics companies of Taiwan Motherboard companies Companies based in Taipei Taiwanese brands
First International Computer
[ "Technology" ]
827
[ "Computer hardware companies", "Computers" ]
9,611,129
https://en.wikipedia.org/wiki/RAF%20kinase
RAF kinases are a family of three serine/threonine-specific protein kinases that are related to retroviral oncogenes. The mouse sarcoma virus 3611 contains a RAF kinase-related oncogene that enhances fibrosarcoma induction. RAF is an acronym for Rapidly Accelerated Fibrosarcoma. RAF kinases participate in the RAS-RAF-MEK-ERK signal transduction cascade, also referred to as the mitogen-activated protein kinase (MAPK) cascade. Activation of RAF kinases requires interaction with RAS-GTPases. The three RAF kinase family members are: A-RAF B-RAF c-Raf References EC 2.7.11
RAF kinase
[ "Chemistry" ]
150
[ "Biochemistry stubs", "Protein stubs" ]
9,611,593
https://en.wikipedia.org/wiki/Neurotrophin-4
Neurotrophin-4 (NT-4), also known as neurotrophin-5 (NT-5), is a protein that in humans is encoded by the NTF4 gene. It is a neurotrophic factor that signals predominantly through the TrkB receptor tyrosine kinase. NT-4 was first discovered and isolated from Xenopus and viper in 1991 by Finn Hallbook et al. See also Tropomyosin receptor kinase B § Agonists References Further reading External links Neurotrophic factors Peptide hormones Growth factors Developmental neuroscience Proteins TrkB agonists
Neurotrophin-4
[ "Chemistry" ]
130
[ "Biomolecules by chemical classification", "Growth factors", "Signal transduction", "Molecular biology", "Proteins", "Neurochemistry", "Neurotrophic factors" ]
9,611,617
https://en.wikipedia.org/wiki/Landslide%20mitigation
Landslide mitigation refers to several human-made activities on slopes with the goal of lessening the effect of landslides. Landslides can be triggered by many, sometimes concomitant causes. In addition to shallow erosion or reduction of shear strength caused by seasonal rainfall, landslides may be triggered by anthropic activities, such as adding excessive weight above the slope, digging at mid-slope or at the foot of the slope. Often, individual phenomena join to generate instability over time, which often does not allow a reconstruction of the evolution of a particular landslide. Therefore, landslide hazard mitigation measures are not generally classified according to the phenomenon that might cause a landslide. Instead, they are classified by the sort of slope stabilization method used: Geometric methods, in which the geometry of the hillside is changed (in general the slope); Hydrogeological methods, in which an attempt is made to lower the groundwater level or to reduce the water content of the material Chemical and mechanical methods, in which attempts are made to increase the shear strength of the unstable mass or to introduce active external forces (e.g. anchors, rock or ground nailing) or passive (e.g. structural wells, piles or reinforced ground) to counteract the destabilizing forces. Each of these methods varies somewhat with the type of material that makes up the slope. Rock slopes Reinforcement measures Reinforcement measures generally consist of the introduction of metal elements which increase the shear strength of the rock and to reduce the stress release created when the rock is cut. Reinforcement measures are made up of metal rock nails or anchors. Anchorage subjected to pretensioning is classified as active anchorage. Passive anchorage, not subjected to pretensioning, can be used both to nail single unstable blocks and to reinforce large portions of rock. Anchorage can also be used as pre-reinforcement elements on a scarp to limit hillside decompression associated with cutting. Parts of an anchorage include: the header: the set of elements (anchor plate, blocking device, etc.) that transmit the traction strength of the anchor to the anchored structure or to the rock the reinforcement: part of the anchor, concreted and otherwise, placed under traction; can be constituted by a metal rod, a metal cable, a strand, etc. the length of the foundation: the deepest portion of the anchor, fixed to the rock with chemical bonds or mechanical devices, which transfer the load to the rock itself the free length: the non-concreted length. When the anchorage acts over a short length it is defined as a bolt, which is not structurally connected to the free length, made up of an element resistant to traction (normally a steel bar of less than 12 m protected against corrosion by a concrete sheath). The anchorage device may be connected to the ground by chemical means, mechanical expansion or concreting. In the first case, polyester resin cartridges are placed in a perforation to fill the ring space around the end part of the bolt. The main advantage of this type of anchorage lies in its simplicity and in the speed of installation. The main disadvantage is in its limited strength. In the second case, the anchorage is composed of steel wedges driven into the sides of the hole. The advantage of this type of anchorage lies in the speed of installation and in the fact that the tensioning can be achieved immediately. 
The main disadvantage with this type of anchorage is that it can only be used with hard rock, and the maximum traction force is limited. In the third case, the anchorage is achieved by concreting the whole metal bar. This is the most-used method since the materials are cheap and installation is simple. Injected concrete mixes can be used in many different rocks and grounds, and the concrete sheath protects the bar from corrosion. The concrete mixture is generally made up of water and cement in the ratio W/C = 0.40-0.45, producing a mixture sufficiently fluid to allow pumping into the hole while at the same time providing high mechanical strength when set. As far as the working mechanism of a rock nail is concerned, the strains of the rock induce a stress state in the nail composed of shear and traction stress, due to the roughness of the joints, to their opening and to the direction of the nail, which is generally non-orthogonal to the joint itself. The execution phases of setting up the nail provide for: formation of any header niche and perforation; setting up of a reinforcement bar (e.g. a 4–6 m long FeB44k bar); concrete injection of the bar; sealing of the header or of the top part of the hole. It is in any case advisable to close up and cement any cracks in the rock to prevent pressure caused by water during freeze-thaw cycles from producing progressive breakage in the reinforcement system set up. For this purpose, the procedure provides for: cleaning out and washing of the cracks; plastering of the crack; placing of the injection tubes at suitable spacings, parallel to the crack, through which the concrete mix is injected; sequential injection of the mixture from bottom to top and at low pressure (1-3 atm) until refusal, or until no flow-back of the mixture is noted from the tubes placed higher up. The injection mixtures have approximately the following composition: cement 10 kg; water 65 l; fluidity and anti-shrinkage additive or bentonite 1-5 kg. Shotcrete As defined by the American Concrete Institute, shotcrete is mortar or concrete conveyed through a hose and pneumatically projected at high velocity onto a surface. Shotcrete is also called spray-concrete, or spritzbeton (German). Drainage The presence of water within a rocky hillside is one of the major factors leading to instability. Knowledge of the water pressure and of the runoff mode is important to stability analysis, and to planning measures to improve hillside stability. Hoek and Bray (1981) provide a scheme of possible measures to reduce not only the amount of water, which is itself negligible as a cause of instability, but also the pressure applied by the water. The proposed scheme was elaborated taking three principles into account: Preventing water entering the hillside through open traction cracks or discontinuities Reducing water pressure in the vicinity of potential breakage surfaces through selective shallow and sub-shallow drainage. Placing drainage in order to reduce water pressure in the immediate vicinity of the hillside. The measures that can be carried out to reduce the effects of water can be shallow or deep. Shallow drainage work mainly intercepts surface runoff and keeps it away from potentially unstable areas. In reality, on rocky hillsides this type of measure alone is usually insufficient to stabilise a hillside. Deep drainage is the most effective. Sub-horizontal drainage is very effective in reducing pore-pressure along crack surfaces or potential breakage surfaces. 
In rocks, the choice of drain spacing, slope, and length depends on the hillside geometry and, more importantly, on the structural formation of the mass. Features such as the position, spacing, opening and persistence of the discontinuities, together with the mechanical characteristics of the rock, condition the way water runs off inside the mass. Therefore, an efficient result can be obtained only by intercepting the discontinuities that carry most of the drainage. Sub-horizontal drains are accompanied by surficial collectors which gather the water and take it away through networks of small surface channels. Vertical drainage is generally associated with sunken pumps which have the task of draining the water and lowering the groundwater level. The use of continuous-cycle pumps implies very high running costs, which restricts the use of this technique to limited periods. Drainage galleries are rather different in terms of efficiency. They are considered to be the most efficient drainage system for rocks, even though they have the drawback of requiring very high technological and financial investment. In particular, used in rocks this technique can be highly efficient in lowering water pressure. Drainage galleries can be associated with a series of radial drains which augment their efficiency. The positioning of this type of work is of course connected to the local morphological, geological and structural conditions. Geometry modification This type of measure is used in those cases in which, below the material to be removed, the rock face is sound and stable (for example, unstable material at the top of the hillside, rock blocks thrusting out from the hillside profile, vegetation that can widen the rock joints, or rock blocks isolated by the joints). Detachment measures are carried out where infrastructure or the passage of people at the foot of the hillside creates risk conditions. Generally this type of measure can solve the problem by eliminating the hazard. However, it should be ensured that once the measure is carried out, the problem does not re-emerge in the short term. In fact, where the rock is heavily cracked, the shallower rock portions can lose mechanical coherence, sometimes encouraged by climatic extremes, causing the isolation of unstable blocks. The measure can be effected in various ways, which range from demolition with pickaxes to the use of explosives. In the case of high and/or not easily accessible faces it is necessary to employ specialists in rope-access (acrobatic) work. When explosives are used, controlled demolition is sometimes needed, with the aim of minimising or nullifying the undesired effects of the explosion of the charges and safeguarding the integrity of the surrounding rock. Controlled demolition is based on the drilling of holes placed at a short distance from each other and parallel to the scarp to be demolished. The diameter of the holes generally varies from 40 to 80 mm; the spacing of the holes is generally about 10 to 12 times the diameter. The charge fuse times are set so that the charges at the outer edges explode first and the more internal ones successively, so that the area of the operation is delimited. Protection measures The protection of natural and quarry faces can have two different aims: protecting the rock from alteration or weathering, and protecting infrastructure and towns from rockfalls. Identification of the cause of alteration or the possibility of rockfall allows mitigation measures to be tailored to individual sites. 
The most-used passive protection measures are boulder-gathering trenches at the foot of the hillside, metal containment nets, and boulder barriers. Boulder barriers are generally composed of suitably rigid metal nets. Various structural types are on the market, for which the manufacturers specify the kinetic energy of absorption based on an elemental analysis of the structure under projectile collision conditions. Another type of boulder containment barrier is the earth embankment, sometimes reinforced with geo-synthetics (reinforced ground). The advantages of such earthworks over nets are: easier maintenance, higher absorption of kinetic energy, and lower environmental impacts. Soil slopes Geometric modification The operation of re-profiling a slope with the aim of improving its stability, can be achieved by either: Lowering the angle of the slope, or Positioning infill at the foot of the slope Slope angles can be reduced by digging out the brow of the slope, usually in a step-wise fashion. This method is effective for correcting shallow forms of instability, where movement is limited to layers of ground near the surface and when the slopes are higher than 5m. Steps created by this method may also reduce surface erosion. However, caution is necessary to avoid the onset of local breakage following the cuts. In contrast, infill at the foot of the slope has a stabilising effect on a translational or deep rotational landslide, in which the landslide surface at the top submerges and describes a sub-vertical surface that re-emerges in the area at the foot of the slope. The process of infill at the foot of the slope may include construction of berms, gravitational structures such as gabions, or reinforced ground (i.e., concrete blocks). The choice between reducing the slope or infilling at the foot is usually controlled by location-specific constraints at the top or at the foot of the slope. In cases of slope stabilisation where there are no constraints (usually natural slopes) a combination of slope reduction and infilling at the foot of the slope is adopted to avoid heavy work of just one type. In the case of natural slopes the choice of re-profiling scheme is not as simple as that for artificial slopes. The natural profile is often highly irregular with large areas of natural creep, so that its shallow development can make some areas unserviceable as a cutting or infill point. Where the buried shapes of older landslides are complicated, depositing infill material in one area can trigger a new landslide. When planning this type of work the stepping effect of the cuts and infill should be taken into account: their beneficial influence on the increase in safety factor will be reduced in relationship to the size of the landslide under examination. It is very important to ensure that neither the cuts nor the infill mobilise any existing or potential creep plane(s). Usually, infilling at the foot of the landslide is cheaper than cutting at the top. Moreover, in complex and compound landslides, infill at the foot of the slope, at the tip of the foot itself, has a lesser probability of interfering with the interaction of the individual landslide elements. An important aspect of stabilisation work that changes the morphology of the slope is that cuts and infill generate non-drained charge and discharge stresses. In the case of positioning infill, the safety factor SF, will be less in the short term than in the long term. 
In the case of a cut in the slope, SF will be less in the long term than in the short term. Therefore, in both cases the SF must be calculated in both the short and long terms. Finally, the effectiveness of infill increases with time so long as it is associated with an appropriate infill drainage system, achieved with an underlying drainage cover or appropriate shallow drainage. More generally, therefore, re-profiling works are combined with and complemented by surficial protection of the slope against erosion and by regulation of rainwater through drainage systems made up of ditches and small channels (clad or unclad, and possibly prefabricated) that carry away the water collected. These surficial water regulation systems are designed by modelling the land itself around the body of the landslide. These provisions serve to prevent circulating water from penetrating the landslide body or any cracks or fissures, which would further decrease the shear strength of the ground. Surface erosion control Water near the surface of the hillside can cause the erosion of surface material due to runoff. This process tends to weaken the slope by removing material and by triggering excess pore pressures due to the water flow. For defense against erosion, several solutions may be used. The following measures share the superficial character of their installation and a low environmental impact. Geomats are anti-eroding biomats or bionets: purpose-made synthetic products for the protection and grassing of slopes subject to surface wash. Geomats provide two main erosion control mechanisms: containment and reinforcement of the surficial ground, and protection from the impact of raindrops. Geogrids made of geosynthetic materials may also be used. Steel wire mesh may be used for soil and rock slope stabilization: after leveling, the surface is covered by a steel-wire mesh, which is fastened to the slope and tensioned; it is a cost-effective approach. Wicker or brushwood mats are made of vegetable material. Very long and flexible willow branches can be used, which are then covered with infill soil. Alternating stakes of different woody species are woven to form a barrier against the downward drag of the material eroded by free water on the surface. Coir (coconut fiber) geotextiles are used globally for bioengineering and slope stabilization applications because of the mechanical strength necessary to hold soil together. Coir geotextiles last for 3–5 years depending on the weight, and as the product degrades it converts into humus, which enriches the soil. Draining techniques Drainage systems reduce the water level inside a potentially unstable hillside, which leads to a reduction in pore water pressures in the ground and an increase in the shear strength within the slope (a simple numerical illustration of this effect is sketched below, after the description of the main deep-drainage techniques). The reduction in pore pressure by drainage can be achieved by shallow and/or deep drains, depending on the hillside morphology, the predicted kinematics of movement and the depth of the creep surfaces. Usually, shallow drainage is adopted where the potential hillside movement is shallow, affecting a depth of 5–6 m. Where there are deeper slippage surfaces, deep drainage must be introduced, but shallow drainage systems may also be installed, with the aim of running off surface water. Shallow drainage Shallow drainage is facilitated through trenches. Traditional drainage trenches are cut in an unbroken length and filled with highly permeable, granular, draining material. Shallow drainage trenches may also be equipped with geocomposites. 
The scarped sides of the trenches are covered with geocomposite panels. The bottom of the trenches houses a drainage tube placed in continuity to the geocomposite canvas. Deep drainage Deep drainage modifies the filtration routes in the ground. Often more expensive than shallow drains, deep drains are usually more effective because they directly remove the water that induces instability within the hillside. Deep drainage in earth slopes can be achieved in several ways: Large diameter drainage wells with sub-horizontal drains These systems can serve a structural function, a drainage function, or both. The draining elements are microdrains, perforated and positioned sub-horizontally and fanned out, oriented uphill to favour water discharge by gravity. The size of the wells is chosen with the aim of allowing the insertion and functioning of the perforation equipment for the microdrains. Generally, the minimum internal diameter is greater than 3.5 m for drains with a length of 20 to 30 m. Longer drains require wells with a diameter of up to 8–10 m. To determine the network of microdrains planners take into consideration the makeup of the subsoil and the hydraulic regime of the slope. The drainage in these wells is passive, realised by linking the bottom of adjacent wells by sub-horizontal perforations (provided with temporary sheathing pipes) in which the microdrains are placed at a gradient of about 15-20° and are equipped with microperforated PVC pipes, protected by non-filtering fabric along the draining length. Once the drain is embedded in the ground, the temporary sheathing is completely removed and the head of the drain is cemented to the well. In this way a discharge line is created linking all the wells emerging to the surface downhill, where the water is discharged naturally without the help of pumps. The wells are placed at such a distance apart that the individual collecting areas of the microdrains, appertaining to each well, are overlaid. In this way all the volume of the slope involved with the water table is drained. Medium-diameter drainage wells linked at the bottom. The technique involves the dry cutting with temporary sheathing pipes, of aligned drainage wells, with a diameter of 1200–1500 mm., positioned at an interaxis of 6–8 m., their bottoms linked together to a bottom tube for the discharge of drained water. In this way the water discharge takes place passively, due to gravity by perforated pipes with mini-tubes, positioned at the bottom of the wells themselves. The linking pipes, generally made of steel, are blind in the linking length and perforated or windowed in the length corresponding to the well. The wells have a concrete bung at the bottom and are filled, after withdrawal of the temporary sheathing pipe, with dry draining material and are closed with an impermeable clay bung. In normal conditions, these wells reach a depth of 20–30 m, but, in especially favourable cases, may reach 50 m. Some of these wells have drainage functions across their whole section and others can be inspected. The latter serve for maintenance of the whole drainage screen. Such wells that can be inspected are also a support point for the creation of new drainage wells and access for the installation, also on a later occasion, for a range of sub-horizontal drains at the bottom or along the walls of the wells themselves, with the purpose of increasing the drainage capacity of the well. 
Isolated wells fitted with drainage pumps This system provides for the installation of a drainage pump in each well. The distribution of the wells is established according to the permeability of the land to be drained and the lowering of the water pressure to be achieved. The use of isolated wells with drainage pumps leads to high operational costs and imposes a time-consuming programme of control and maintenance. Deep drainage trenches Deep drainage trenches consist of unbroken cuts with a small cross-section that can be lined at the bottom with a geofabric canvas having a primary filter function. They are filled with draining material that has a filtering function and exploit passive drainage to carry away the drained water downhill. The effectiveness of these systems depends on the geometry of the trench and on the continuity of the draining material along the whole trench. As far as the geometry of the cut is concerned, attention should be paid to the slope given to the bottom of the cut. In fact, deep drainage trenches do not have bottom piping except in the end part of the trench, downhill, where the depth of the cut is reduced until ground level is reached. Drainage galleries fitted with microdrains Drainage galleries constitute a rather expensive stabilisation provision for large, deep landslide movements, used where the ground is unsuitable for cutting trenches or drainage wells and where it is impossible to work on the surface owing to a lack of space for the work machinery. Their effectiveness is due to the extensiveness of the area to be drained. Moreover, these drainage systems must be installed in the stable part of the slope. Drainage systems made up of microdrains are placed inside galleries, with lengths that can reach 50–60 m. The sizes of the galleries are conditioned by the need to insert the drain perforation equipment. For this reason the minimum transversal internal size of the galleries varies from a minimum of 2 m, when using special reduced-size equipment, to at least 3.5 m, when using traditional equipment. Siphon drain This is a technique conceived and developed in France, which works like the system of isolated drainage wells but overcomes the inconvenience of installing a pump for each well. Once motion is triggered in the siphon tube, provided no air enters the loop, the flow of water is uninterrupted. For this reason, the two ends of the siphon tube are submerged in the water of two permanent storage tanks. This drain is created vertically starting from ground level but can also be sub-vertical or inclined. The diameter of the well can vary from 100 to 300 mm. Inside, a PVC pipe or a perforated or microperforated steel pipe is placed, and the well is filled with draining material. The siphon drain in this way carries off drainage water by gravity without the need for drainage pumps or pipes linking the bottom of each well. This system proves to be economically advantageous and relatively simple to set up, but requires a programme of controls and maintenance. Microdrains Microdrains are a drainage system that is simple to create and has contained costs. They consist of small-diameter perforations, made from surface locations, in trenches, in wells or in galleries. The microdrains are set to work in a sub-horizontal or sub-vertical position, according to the type of application. 
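A simple numerical check of why these drainage measures work is sketched below (Python; the infinite-slope model and all parameter values are illustrative assumptions chosen here, not data taken from this article). It computes the factor of safety of a planar slip surface before and after drainage lowers the pore water pressure.

import math

def factor_of_safety(c_eff, phi_deg, gamma, depth, beta_deg, pore_pressure):
    """Infinite-slope factor of safety on a planar slip surface.
    c_eff: effective cohesion [kPa]; phi_deg: effective friction angle [deg];
    gamma: soil unit weight [kN/m3]; depth: slip-surface depth [m];
    beta_deg: slope angle [deg]; pore_pressure: pore water pressure [kPa]."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal_stress = gamma * depth * math.cos(beta) ** 2 - pore_pressure
    shear_stress = gamma * depth * math.sin(beta) * math.cos(beta)
    return (c_eff + normal_stress * math.tan(phi)) / shear_stress

# Illustrative values only: a 5 m deep slip surface in a 25-degree slope.
fs_wet = factor_of_safety(5.0, 28.0, 19.0, 5.0, 25.0, pore_pressure=30.0)
fs_drained = factor_of_safety(5.0, 28.0, 19.0, 5.0, 25.0, pore_pressure=10.0)
print(f"FS with high pore pressure: {fs_wet:.2f}")      # about 0.84
print(f"FS after drainage:          {fs_drained:.2f}")  # about 1.13

In this hypothetical example, lowering the pore pressure on the slip surface from 30 kPa to 10 kPa raises the factor of safety from below 1 to above 1, which is exactly the effect the deep-drainage systems described above aim to achieve.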
Drainage anti-slide pile (DASP) The Drainage anti-slide pile (DASP) is a reinforced concrete structure with a hollow upper section and a solid lower section, designed to resist slope deformation. The hollow part is filled with compacted, high-permeability gravels and can drain water via a vertical drain-pipe or sub-horizontal pipes connected to the slope surface. Reinforcement measures Stabilization of a hillside by increasing the mechanical strength of the unstable ground, can be achieved in two ways: Insertion of reinforcement elements into the ground The improvement of the mechanical characteristics of the ground through chemical, thermal, or mechanical treatment. Insertion of reinforcement elements into the ground Types of mechanical reinforcement include: Large diameter wells supported by one or more crowns of consolidated and possibly Reinforced Earth columns Anchors Networks of micropiles Soil nailing Geogrids for reinforced ground Cellular faces Large diameter wells To guarantee slope stability it may be necessary to insert very rigid, strong elements. These elements are large diameter full section or ring section reinforced concrete wells with circular or elliptical cross-sections. The depth of the static wells can reach 30-40m. Often the static stabilising action of the wells is integrated with a series of microdrains laid out radially on several levels, reducing pore-pressures. Anchors Stabilising an unstable slope also can be achieved by the application of active forces to the unstable ground. These forces increase the normal stress and therefore resistance to friction along the creeping surface. Anchors can be applied for this purpose, linked at the surface to each other by a beam frame, which is generally made of reinforced concrete. The anchors are fixed in a place known to be stable. They are usually installed with orthogonal axes to the slope surface and therefore, at first, approximately orthogonal to the surface of the creep. Sometimes anchorage problems occur, as in the case of silty-clayey ground. Where there is water or the anchors are embedded in a clayey sub-layer, the adherence of the anchor to the ground must be confirmed. The surface contained within the grid of the beam frame should also be protected, using geofabrics, in order to prevent erosion from removing the ground underlying the beam frame. Networks of micropiles This solution requires the installation of a series of micropiles that make up a three-dimensional grid, variably tilted and linked at the head by a rigid reinforced concrete mortise. This structure constitutes a reinforcement for the ground, inducing an intrinsic improvement of the ground characteristics incorporated in the micropiles. This type of measure is used in cases of smaller landslides. The effectiveness of micropiles is linked to the insertion of micropiles over the entire landslide area. In the case of rotational landslides in soft clay, the piles contribute to increasing the resisting moment by friction on the upper part of the pile shaft found in the landslide. In the case of suspended piles, strength is governed by the part of the pile offering the least resistance. In practice, those piles in the most unstable area of the slope are positioned first, in order to reduce any possible lateral ground displacements. 
Preliminary design of the micropiles is usually entrusted to computer codes that carry out numerical simulations; the simplifications in these models, however, require a rather precise characterization of the potential landslide material. Nailing The soil nailing technique, applied to temporarily and/or permanently stabilise natural slopes and artificial scarps, is based on a fundamental principle of construction engineering: mobilizing the intrinsic mechanical characteristics of the ground, such as cohesion and the angle of internal friction, so that the ground actively collaborates with the stabilisation work. Nailing, on a par with anchors, induces normal stress, thereby increasing friction and stability within the hillside. One nailing method is rapid response diffuse nailing (CLOUJET), where the nails are embedded in the ground by means of an expanded bulb obtained by injecting mortar at high pressure into the anchorage area. Drainage is important to the CLOUJET method, since the hydraulic regime, considered in the form of pore-pressure applied normally to the fractured surfaces, directly influences the characteristics of the system. The drained water, collected both through fabric and by means of pipes embedded in the ground, flows together at the foot of the slope into a collector installed parallel to the direction of the face. Another nailing system is the soil nail and root technology (SNART). Here, steel nails are inserted very rapidly into a slope by percussion, vibration or screw methods. Grid spacing is typically 0.8 to 1.5 m, nails are 25 to 50 mm in diameter and may be as long as 20 m. Nails are installed perpendicular to and through the failure plane, and are designed to resist bending and shear (rather than tension) using geotechnical engineering principles. Potential failure surfaces less than 2 m deep normally require the nails to be wider near the top, which may be achieved with steel plates fastened at the nail heads. Plant roots often form an effective and aesthetic facing to prevent soil loss between the nails. Geogrids Geogrids are synthetic materials used to reinforce the ground. The insertion of geosynthetic reinforcements (generally in the direction in which the deformation has developed) has the function of conferring greater stiffness and stability upon the ground, increasing its capacity to sustain greater deformations without fracturing. Cellular faces Cellular faces, also known as "crib faces", are special supporting walls made of header grids prefabricated in reinforced concrete or wood (treated with preservatives). The headers have a length of about 1–2 m and the wall can reach 5 m in height. Compacted granular material is inserted in the spaces of the grid. The modularity of the system confers notable flexibility of use, both in terms of adaptability to the ground morphology and because the structure does not require a deep foundation other than a laying plane of lean concrete used to make the support plane of the whole structure regular. Vegetation may be planted in the grid spaces, camouflaging the structure. Chemical, thermal and mechanical treatments A variety of treatments may be used to improve the mechanical characteristics of the soil volume affected by landslides. Among these, the technique of jet-grouting is frequently used, often as a substitute for and/or a complement to the structural measures discussed previously. 
The phases of jet-grouting work are: Perforation phase: a string of drill rods is inserted into the ground, with destructive (non-core) drilling, down to the treatment depth required by the project. Extraction and programmed injection phase: the mixture is injected at very high pressure while the drill-rod string is extracted. It is in this phase that, by keeping the jet directed a certain way for a certain interval of time and by controlling the speed of extraction and rotation of the rod string, volumes of ground can be treated in the shape and size desired. The high-energy jet produces a mixing of the ground and a continuous and systematic "claquage" (hydraulic fracturing), with only a local effect within the radius of action and without provoking deformations at the surface that could have negative consequences for the stability of adjacent constructions. The projection of the mixture at high speed through the nozzles, exploiting the high energy in play, allows the natural arrangement and mechanical characteristics of the ground to be modified in the desired direction and in accordance with the mixture used (cement, bentonite, water, chemical mixtures, etc.). Depending on the characteristics of the natural ground, the type of mixture used and the work parameters, compression strengths from 1 to 500 kgf/cm² (100 kPa to 50 MPa) can be obtained in the treated area. The creation of massive consolidated ground elements of various shapes and sizes (buttresses and spurs) within the mass to be stabilised is achieved by suitably adjusting the injection parameters. In this way the following can be obtained: thin diaphragms, horizontal and vertical cylinders of various diameters and, in general, almost any geometrical shape. Another method for improving the mechanical characteristics of the ground is thermal treatment of potentially unstable hillsides made up of clayey materials. Historically, unstable clayey slopes along railways were hardened by lighting wood or coal fires within holes dug into the slope. In large-diameter holes (from 200 to 400 mm), about 0.8-1.2 m apart and horizontally interconnected, burners were introduced to form cylinders of hardened clay. The temperatures reached were around 800 °C. These clay cylinders worked like piles, giving greater shear strength to the creep surface. This system was useful for surface creep, as in the case of an embankment. In other cases the depth of the holes or the amount of fuel required either ruled out this technique or made it ineffective. Other stabilisation attempts were made using electro-osmotic treatment of the ground. This type of treatment is applicable only in clayey grounds. It consists of subjecting the material to the action of a continuous electrical field by introducing pairs of electrodes embedded in the ground. When current is applied, these electrodes cause the migration of the ion charges in the clay. The pore water is dragged along by the migrating ions and collects in the cathode areas. In this way a reduction in water content is achieved. Moreover, by a suitable choice of the anodic electrode, a structural transformation of the clay can be induced: the ions freed by the anode trigger a series of chemo-physical reactions that improve the mechanical characteristics of the unstable ground. This stabilisation method, however, is effective only in homogeneous clayey grounds. 
This condition is hard to find in unstable slopes; therefore, after some applications, electro-osmotic treatment has been abandoned. See also , mitigation of a similar disaster type Rockfall protection embankment References Bomhad E. N. (1986). Stabilità dei pendii, Dario Flaccovio Editore, Palermo. Cruden D. M. & Varnes D. J. (1996). Landslide types and processes. In "Landslides – Investigation and Mitigation", Transportation Research Board Special Report n. 247, National Academy Press, Washington DC, 36–75. Fell R. (1994). Landslide risk assessment and acceptable risk, Can. Geotech. J., vol. 31, 261–272. Giant G. (1997). Caduta di massi – Analisi del moto e opere di protezione, Hevelius edizioni, Naples. Hungr O. (1981). Dynamics of rock avalanches and other types of mass movements. PhD Thesis, University of Alberta, Canada. Peck R. B. (1969). Advantages and limitations of the observational method in applied soil mechanics, Geotechnique 19, n. 2, 171–187. Tambura F. (1998). Stabilizzazione di pendii – Tipologie, tecnologie, realizzazioni, Hevelius edizioni, Naples. Tanzini M. (2001). Fenomeni franosi e opere di stabilizzazione, Dario Flaccovio Editore, Palermo. Terzaghi K. & Peck R. B. (1948). Soil mechanics in engineering practice, New York, Wiley. Coir Green (1998). "Erosion Control – Soil Erosion". Landslide analysis, prevention and mitigation
Landslide mitigation
[ "Environmental_science" ]
7,171
[ "Environmental soil science", " prevention and mitigation", "Landslide analysis" ]
9,612,092
https://en.wikipedia.org/wiki/NacNac
NacNac is a class of anionic bidentate ligands. 1,3-Diketimines are often referred to as "HNacNac", a modification of the abbreviation Hacac used for 1,3-diketones. These species can exist as a mixture of tautomers. Preparation of ligands and complexes Acetylacetone and related 1,3-diketones condense with primary alkyl- or arylamines resulting in replacement of the carbonyl oxygen atoms with NR groups, where R = aryl, alkyl. To prepare 1,3-diketimines from bulky amines, e.g. 2,4,6-trimethylanilines, prolonged reaction times are required. 2,6-Diisopropylaniline is a common bulky building block. Deprotonation of HNacNac compounds affords anionic bidentate ligands that form a variety of coordination complexes. Some derivatives with large R groups can be used to stabilize low-valent main group and transition metal complexes. Unlike the situation for the acetylacetonates, the steric properties of the coordinating atoms in NacNac− ligands are adjustable by changes in the R substituent. Attachment to a metal center is usually carried out by initial deprotonation of HNacNac with n-butyllithium; the lithium derivative is then treated with a metal chloride to eliminate lithium chloride. In some cases, HNacNacs also serve as charge-neutral 1,3-diimine ligands. Related NacNac ligands NacNac ligands are diimine analogues of acetylacetonate ligands. An intermediate class of ligands is derived from monoimino-ketones. The first Dipp-NacNac ligand was synthesized by Dr. Francis S. Mair in 1998. See also Diimine References Ligands Coordination chemistry
NacNac
[ "Chemistry" ]
405
[ "Ligands", "Coordination chemistry" ]
9,612,212
https://en.wikipedia.org/wiki/Flame%20rectification
Flame rectification is a phenomenon in which a flame can act as an electrical rectifier. The effect is commonly described as being caused by the greater mobility of electrons relative to that of positive ions within the flame, and the asymmetric nature of the electrodes used to detect the phenomenon. This effect is used by rectification flame sensors to detect the presence of flame. The rectifying effect of the flame on an AC voltage allows the presence of flame to be distinguished from a resistive leakage path. One experimental study suggested that the effect is caused by the ionization process occurring mostly at the base of the flame, making it more difficult for the electrode further from the base of the flame to attract positive ions from the burner, yet leaving the electron current largely unchanged with distance because of the greater mobility of the electron charge carriers. See also Flame detection Flame supervision device References External links A video of a flame being used as a rectifier in a simple AM radio Using a flame as a triode amplifier Plasma technology and applications
Flame rectification
[ "Physics" ]
214
[ "Plasma technology and applications", "Plasma physics" ]
9,612,222
https://en.wikipedia.org/wiki/Travel%20plan
A travel plan is a package of actions designed by a workplace, school or other organisation to encourage safe, healthy and sustainable travel options. By reducing car travel, travel plans can improve health and wellbeing, free up car parking space, and make a positive contribution to the community and the environment. Every travel plan is different, but most successful plans have followed a structured process in their development. The term has now largely replaced green transport plan as the accepted UK term for the concept that first emerged in the US in the 1970s (as site-based transportation demand management) and subsequently transferred to the Netherlands in 1989, where the terms company or commuter mobility management were applied. Features The following common features underpin the site travel plan concept: Travel plans are not really an instrument themselves but a delivery mechanism or strategy for other mostly transport-focused measures. Travel plans are delivered by an additional 'agent' that is not a part of the 'traditional' transport policy institutional structure. Travel plans are initiated in one of two ways, either by the organisation or by the government. Travel plans seek to deliver transport and related benefits to the community at large as well as directly to the participating organisation. Travel plans are, to some extent, site-specific and tailored to the specific contextual circumstances. Travel plans deliver, to some extent, a package or a strategy of a wide variety of transport instruments. They can work well as a 'package approach', allowing complementary tools to be implemented in one go, which means effective but unpopular tools (such as parking restrictions) can be introduced alongside popular but expensive tools (like bus subsidies) to deliver the required benefits whilst cancelling out the negative impacts. Next, the use of the additional 'agent' such as a workplace, school or even a football club means that travel plans replace the largely negative relationship between local authorities and citizens with a more positive relationship (such as between employer and employee or between school and parent/pupil). Finally, the site-specific nature of travel plans means they are developed at the neighbourhood level and focus directly on the transport needs of the users in that local area. The concept works by developing balanced packages of user-focused transport tools in a partnership that seeks to provide meaningful benefits to each of the stakeholders involved: improved travel choices to the individuals; cost savings, happier and healthier staff, and better company image to the implementing organisations; additional business opportunities to service providers; and congestion reduction and improved air quality to the government. Workplace The UK Department for Transport defines workplace travel plans as a package of measures produced by employers to encourage staff to use alternatives to single-occupancy car use. The first travel Plans in the UK were adopted in Nottingham by Nottinghamshire County Council in 1995. Travel plans are now common in the UK, and are starting to become more common in many places throughout Europe as well as in Australia and New Zealand. A workplace can choose to develop a travel plan at any time or may be required to develop a travel plan as a condition of planning consent for an expansion or new development. 
Typical actions in a workplace travel plan include improving facilities for pedestrians and cyclists (showers, lockers and cycle parking), promotion and subsidy of public transport, encouraging carpooling, working from home, and teleconferencing. School Making it safer and easier for children to walk, cycle or catch public transport to school has long-term health benefits, reduces air pollution and traffic congestion, and helps children arrive at school awake, refreshed and ready to learn. Because of the many benefits, local councils in the UK, Australia and New Zealand are actively involved in helping schools to develop and implement travel plans. In Canada, a national pilot project running from 2010 to 2012 is designed to bring stakeholders together to build school travel plans collaboratively. Typical actions in a school travel plan include promoting the health benefits of walking, providing more or better pedestrian crossings, tighter enforcement of parking and traffic rules around the school, providing cycle training, and setting up a walking school bus. School travel planning groups like Green Communities Canada also work on a policy level to encourage multi-tiered governmental policies that support active travel. Framework A framework travel plan may be used for speculative development such as a business park where the occupiers of buildings are not known or where there will be multiple occupiers (such as a shared office block). Other organisations There are many examples of successful travel plans for tertiary campuses. Successful tertiary travel plans are usually prepared with the assistance of the local public transport agency. As well as the initiatives listed for school or workplace travel plans, tertiary travel plans can include a U-pass system for student travel on public transport. The development of travel plans for hospitals is a relatively new and interesting field of travel planning. Planning consent A real-estate developer may be required to provide a travel plan as a condition to gaining planning consent. A typical travel plan for a new development will provide for the promotion of sustainable transport through marketing initiatives and for contributions to public transport and to walking and cycling infrastructure. In the UK, a travel plan can form part of a Section 106 agreement, under the Town and Country Planning Act 1990. See also Transportation planning Travel behavior Travel blending Transportation Demand Management References External links UK workplace travel plan guidance from Department for Transport Additional UK workplace travel plan guidance from Transport for London UK school travel plan guidance from Department for Transport Travel Plan guidance for Australia from TravelSmart Travel Plan guidance for New Zealand from Land Transport NZ Additional Travel Plan guidance for New Zealand from TravelWise Victoria Transport Policy Institute Transportation planning Sustainable transport Travel
Travel plan
[ "Physics" ]
1,122
[ "Physical systems", "Transport", "Sustainable transport", "Travel" ]
9,612,488
https://en.wikipedia.org/wiki/Misiurewicz%20point
In mathematics, a Misiurewicz point is a parameter value in the Mandelbrot set (the parameter space of complex quadratic maps) and also in real quadratic maps of the interval for which the critical point is strictly pre-periodic (i.e., it becomes periodic after finitely many iterations but is not periodic itself). By analogy, the term Misiurewicz point is also used for parameters in a multibrot set where the unique critical point is strictly pre-periodic. This term makes less sense for maps in greater generality that have more than one free critical point because some critical points might be periodic and others not. These points are named after the Polish-American mathematician Michał Misiurewicz, who was the first to study them. Mathematical notation A parameter is a Misiurewicz point if it satisfies the equations: and: so: where: is a critical point of , and are positive integers, denotes the -th iterate of . Name The term "Misiurewicz point" is used ambiguously: Misiurewicz originally investigated maps in which all critical points were non-recurrent; that is, in which there exists a neighbourhood for every critical point that is not visited by the orbit of this critical point. This meaning is firmly established in the context of the dynamics of iterated interval maps. Only in very special cases does a quadratic polynomial have a strictly periodic and unique critical point. In this restricted sense, the term is used in complex dynamics; a more appropriate one would be Misiurewicz–Thurston points (after William Thurston, who investigated post-critically finite rational maps). Quadratic maps A complex quadratic polynomial has only one critical point. By a suitable conjugation any quadratic polynomial can be transformed into a map of the form which has a single critical point at . The Misiurewicz points of this family of maps are roots of the equations: Subject to the condition that the critical point is not periodic, where: k is the pre-period n is the period denotes the n-fold composition of with itself i.e. the nth iteration of . For example, the Misiurewicz points with k= 2 and n= 1, denoted by M2,1, are roots of: The root c= 0 is not a Misiurewicz point because the critical point is a fixed point when c= 0, and so is periodic rather than pre-periodic. This leaves a single Misiurewicz point M2,1 at c = −2. Properties of Misiurewicz points of complex quadratic mapping Misiurewicz points belong to, and are dense in, the boundary of the Mandelbrot set. If is a Misiurewicz point, then the associated filled Julia set is equal to the Julia set and means the filled Julia set has no interior. If is a Misiurewicz point, then in the corresponding Julia set all periodic cycles are repelling (in particular the cycle that the critical orbit falls onto). The Mandelbrot set and Julia set are locally asymptotically self-similar around Misiurewicz points. Types Misiurewicz points in the context of the Mandelbrot set can be classified based on several criteria. One such criterion is the number of external rays that converge on such a point. Branch points, which can divide the Mandelbrot set into two or more sub-regions, have three or more external arguments (or angles). Non-branch points have exactly two external rays (these correspond to points lying on arcs within the Mandelbrot set). These non-branch points are generally more subtle and challenging to identify in visual representations. End points, or branch tips, have only one external ray converging on them. 
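Returning to the defining relation f_c^(k+n)(0) = f_c^(k)(0), it can also be checked numerically. The sketch below (Python with NumPy; the function names and the tolerance are illustrative choices made here, not standard terminology) recovers M2,1 = −2 by finding the roots of f_c^(3)(0) = f_c^(2)(0) and discarding parameters whose critical orbit is actually periodic.

import numpy as np

def critical_orbit(c, steps):
    # Orbit of the critical point z = 0 under f_c(z) = z^2 + c.
    z = 0.0 + 0.0j
    orbit = [z]
    for _ in range(steps):
        z = z * z + c
        orbit.append(z)
    return orbit

def is_misiurewicz(c, k, n, tol=1e-9):
    # True if f_c^(k+n)(0) == f_c^(k)(0) and the pre-period is exactly k,
    # i.e. the critical orbit is strictly pre-periodic (c = 0 is rejected).
    orbit = critical_orbit(c, k + n)
    if abs(orbit[k + n] - orbit[k]) > tol:
        return False
    return all(abs(orbit[j + n] - orbit[j]) > tol for j in range(k))

# M_{2,1}: f_c^(3)(0) = f_c^(2)(0) reduces to c^4 + 2c^3 = 0.
candidates = np.roots([1.0, 2.0, 0.0, 0.0, 0.0])   # roots: 0 (triple) and -2
points = sorted({round(complex(c).real, 6) for c in candidates
                 if is_misiurewicz(complex(c), k=2, n=1)})
print(points)   # expected output: [-2.0]

The same check can be applied to other candidate parameters, for example ones read off from plots of the Mandelbrot set, before considering how the corresponding points are classified.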
Another criterion for classifying Misiurewicz points is their appearance within a plot of a subset of the Mandelbrot set. Misiurewicz points can be found at the centers of spirals as well as at points where two or more branches meet. According to the Branch Theorem of the Mandelbrot set, all branch points of the Mandelbrot set are Misiurewicz points. Most Misiurewicz parameters within the Mandelbrot set exhibit a "center of a spiral". This occurs due to the behavior at a Misiurewicz parameter where the critical value jumps onto a repelling periodic cycle after a finite number of iterations. At each point during the cycle, the Julia set exhibits asymptotic self-similarity through complex multiplication by the derivative of this cycle. If the derivative is non-real, it implies that the Julia set near the periodic cycle has a spiral structure. Consequently, a similar spiral structure occurs in the Julia set near the critical value, and by Tan Lei's theorem, also in the Mandelbrot set near any Misiurewicz parameter for which the repelling orbit has a non-real multiplier. The visibility of the spiral shape depends on the value of this multiplier. The number of arms in the spiral corresponds to the number of branches at the Misiurewicz parameter, which in turn equals the number of branches at the critical value in the Julia set. Even the principal Misiurewicz point in the 1/3-limb, located at the end of the parameter rays at angles 9/56, 11/56, and 15/56, is asymptotically a spiral with infinitely many turns, although this is difficult to discern without magnification. External arguments External arguments of Misiurewicz points, measured in turns are: Rational numbers Proper fractions with an even denominator Dyadic fractions with denominator and finite (terminating) expansion: Fractions with a denominator and repeating expansion: The subscript number in each of these expressions is the base of the numeral system being used. Examples of Misiurewicz points of complex quadratic mapping End points Point is considered an end point as it is a tip of a filament, and the landing point of the external ray for the angle 1/6. Its critical orbit is . Point is considered an end point as it is the endpoint of the main antenna of the Mandelbrot set. and the landing point of only one external ray (parameter ray) of angle 1/2. It is also considered an end point because its critical orbit is , following the Symbolic sequence = C L R R R ... with a pre-period of 2 and period of 1. Branch points Point is considered a branch point because it is a principal Misiurewicz point of the 1/3 limb and has 3 external rays: 9/56, 11/56 and 15/56. Other points These are points which are not-branch and not-end points. Point is near a Misiurewicz point . This can be seen because it is a center of a two-arms spiral, the landing point of 2 external rays with angles: and where the denominator is , and has a preperiodic point with pre-period and period . Point is near a Misiurewicz point , as it is the landing point for pair of rays: , and has pre-period and period . See also Arithmetic dynamics Feigenbaum point Dendrite (mathematics) References Further reading Michał Misiurewicz (1981), "Absolutely continuous measures for certain maps of an interval" (in French). Publications Mathématiques de l'IHÉS, 53 (1981), p. 17-51 External links Preperiodic (Misiurewicz) points in the Mandelbrot set by Evgeny Demidov M & J-sets similarity for preperiodic points. Lei's theorem by Douglas C. Ravenel Misiurewicz Point of the logistic map by J. 
C. Sprott Fractals Systems theory Dynamical systems
Misiurewicz point
[ "Physics", "Mathematics" ]
1,639
[ "Functions and mappings", "Mathematical analysis", "Mathematical objects", "Fractals", "Mathematical relations", "Mechanics", "Dynamical systems" ]
9,612,499
https://en.wikipedia.org/wiki/Slot%20%28computer%20architecture%29
A slot comprises the operation issue and data path machinery surrounding a set of one or more execution units (also called functional units (FUs)) which share these resources. The term slot is common for this purpose in very long instruction word (VLIW) computers, where the relationship between an operation in an instruction and the pipeline that executes it is explicit. In dynamically scheduled machines, the concept is more commonly called an execute pipeline. Modern conventional central processing units (CPUs) have several compute pipelines, for example: two arithmetic logic units (ALUs), one floating point unit (FPU), one SIMD unit (e.g. SSE or MMX), and one branch unit. Each of these can issue one instruction per basic instruction cycle, but can have several instructions in process. These pipelines are what correspond to slots. The pipelines may have several FUs, such as an adder and a multiplier, but only one FU in a pipeline can be issued to in a given cycle. The FU population of a pipeline (slot) is a design option in a CPU. References Computer architecture
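To make the idea concrete, the following is a small illustrative model (Python; the slot names and operation sets are invented for the example and do not describe any particular CPU) in which each slot owns several functional units but can be issued at most one operation per cycle.

# Hypothetical machine: each slot groups several functional units (FUs),
# but only one FU per slot can be issued to in a given cycle.
SLOTS = {
    "alu0":   {"add", "sub", "and", "or"},
    "alu1":   {"add", "sub", "and", "or"},
    "fpu":    {"fadd", "fmul"},
    "simd":   {"vadd", "vmul"},
    "branch": {"beq", "jmp"},
}

def issue(instruction_word):
    # One VLIW-style instruction word: a mapping slot -> operation (or None).
    for slot, op in instruction_word.items():
        if op is None:
            continue                      # this slot stays idle this cycle
        if op not in SLOTS[slot]:
            raise ValueError(f"slot {slot!r} has no FU for operation {op!r}")
        print(f"issue {op} to {slot}")

# All operations in one word are issued in the same cycle, one per slot.
issue({"alu0": "add", "alu1": "sub", "fpu": "fmul", "simd": None, "branch": None})

In this sketch, the design option mentioned above, namely the FU population of each slot, corresponds simply to the contents of the sets in SLOTS.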
Slot (computer architecture)
[ "Technology", "Engineering" ]
225
[ "Computing stubs", "Computers", "Computer engineering", "Computer architecture" ]
9,613,511
https://en.wikipedia.org/wiki/Digital%20economy
The digital economy is a portmanteau of digital computing and economy, and is an umbrella term that describes how traditional brick-and-mortar economic activities (production, distribution, trade) are being transformed by the Internet and World Wide Web technologies. The digital economy is backed by the spread of information and communication technologies (ICT) across all business sectors to enhance productivity. A phenomenon referred to as the Internet of Things (IoT) is increasingly prevalent, as consumer products are embedded with digital services and devices. According to the WEF, 70% of the global economy will be made up of digital technology over the next 10 years (from 2020 onwards). This is a trend accelerated by the COVID-19 pandemic and the tendency to go online. The future of work, especially since the COVID-19 pandemic, also contributed to the digital economy. More people are now working online, and with the increase of online activity that contributes to the global economy, companies that support the systems of the Internet are more profitable. Digital transformation of the economy alters conventional notions about how businesses are structured, how consumers obtain goods and services, and how states need to adapt to new regulatory challenges. The digital economy has the potential to shape economic interactions between states, businesses and individuals profoundly. The emergence of the digital economy has prompted new debates over privacy rights, competition, and taxation, with calls for national and transnational regulations of the digital economy. Definition The digital economy, also referred to as the new economy, refers to an economy in which digital computing technologies are used in economic activities. The term digital economy came into use during the early 1990s. For example, many academic papers were published by New York University’s Center for Digital Economy Research. The term was the title of Don Tapscott's 1995 book, The Digital Economy: Promise and Peril in the Age of Networked Intelligence. According to Thomas Mesenbourg (2001), three main components of the digital economy concept can be identified: E-business infrastructure (hardware, software, telecom, networks, human capital, etc.), E-business (how business is conducted, any process that an organization conducts over computer-mediated networks), E-commerce (transfer of goods, for example when a book is sold online). Bill Imlah states that new applications are blurring these boundaries and adding complexity, for example, social media and Internet search. In the last decade of the 20th century, Nicholas Negroponte (1995) used a metaphor of shifting from processing atoms to processing bits: "The problem is simple. When information is embodied in atoms, there is a need for all sorts of industrial-age means and huge corporations for delivery. But suddenly, when the focus shifts to bits, the traditional big guys are no longer needed. Do-it-yourself publishing on the Internet makes sense. It does not for a paper copy." The digital economy is variously known as the Internet Economy, Web Economy, Cryptoeconomy, and New Economy. Since the digital economy is continuously replacing and expanding the traditional economy, there is no clear delineation between the two integrated economy types. 
The digital economy results from billions of daily online transactions (data exchanges) among people, organizations (businesses, educational institutions, non-profits), and distributed computing devices (servers, laptops, smartphones, etc.) enabled by Internet, World Wide Web, and blockchain technologies. Development of the concept There are varied definitions of the digital economy. There are multiple similar concepts for broadly the same phenomenon. According to the OECD, the Digital Economy can be defined in three different approaches: Bottom-up approach: characterizing industries’ and firms‘ output or production processes to decide whether they should be included in the Digital Economy, Top-down or trend-based approach: first identifying the key trends driving the digital transformation and then analyzing the extent to which these are reflected in the real economy, Flexible or tiered approach: breaking the Digital Economy into core and non-core components, and thereby finding a compromise between adaptability and the need to arrive at some common ground on the meaning of the term. Bottom-up definition Bottom-up definitions define the Digital Economy as the aggregate of a specific indicator for a set of industries identified as actors in the Digital Economy. Whether an industry is considered an actor depends on the nature of the products (narrow) or the proportion of digital inputs used in production processes (broad). Hence, from a bottom-up and narrow perspective, the Digital Economy is "all industries or activities that directly participate in producing, or crucially reliant on digital inputs." For instance, McKinsey adds up the economic outputs of the ICT sector and e-commerce market in terms of online sales of goods and consumer spending on digital equipment. While this definition is adept at measuring the impact of digitalization on economic growth, it only focuses on the nature of output and offers an incomplete view of the Digital Economy's development. In a bottom-up and broad perspective, the Digital Economy is "all industries using digital inputs as part of their production process". Examples of digital inputs include digital infrastructure, equipment, and software but can include data and digital skills. Top-down definition Top-down definitions identify broad trends at play in the digital transformation and define the Digital Economy as the result of their combined impact on value creation. These include such spillovers as changes in labor market demand and regulations, platform economy, sustainability, and equality. Unlike the bottom-up definition, the top-down definition has units of analysis extending beyond firms, industries, and sectors to include individuals, communities, and societies. While the latter definition is more inclusive, the IMF notes that it is subjective, qualitative, and open-ended, thus limiting meaningful comparative analysis. Flexible definition To reconcile the bottom-up and top-down definitions of the Digital Economy, Bukht and Heeks stated that the Digital Economy consists of all sectors making extensive use of digital technologies (i.e. their existence depends on digital technologies), as opposed to sectors making intensive use of digital technologies (i.e. simply employing digital technologies to increase productivity). Under this definition, the Digital Economy is stratified into three nested tiers: Core: comprising the digital sector and associated core technologies. 
Examples include hardware manufacturing, software and IT consulting, information services, and telecommunications, Narrow scope: the digital economy comprising digital services and the platform-based economy, Broad scope: the digitalized economy comprising digitalized sectors such as e-Business, e-Commerce, advanced manufacturing, precision agriculture, algorithmic economy, sharing economy, and gig economy. These digitalized sectors phenomenologically give rise to the Fourth Industrial Revolution. Elements of the digital economy The Digital Economy consists of all sectors making extensive use of digital technologies (i.e. their existence depends on digital technologies). However, digitalization spans many economic sectors, making it far from trivial precisely delimit the digital economy within the entire societal economy. A narrow definition would typically just encompass core digital sectors that refers to the provisioning of digital technologies, products, services, infrastructure, and solutions, as well as all forms of economic activities that are completely dependent on digital technologies and data elements. This includes key sectors like information and communication technology (ICT), but also other economic activities such as internet finance and digital commerce that are not seen as a part of the ICT-sector. Broader definitions also include industrial digitalization, i.e. the production quantity and efficiency improvement brought about by the application of digital technology in traditional industries, as an important extension of the digital economy into the wider societal economy. Examples of industrial digitalization in traditional sectors include remote sensing, automated farming equipment, GPS-route optimization, etc. However, few studies include industrial digitalization in the digital economy. Information technology The information technology (IT) sector of the U.S. now makes up about 8.2% of the country's GDP and accounts for twice its share of the GDP as compared to the last decade. 45% of spending on business equipment are investments in IT products and services, which is why companies such as Intel, Microsoft, and Dell have grown from $12 million in 1987 to more than half a billion in 1997. The widespread adoption of ICT combined with the rapid decline in price and increase in the performance of these technologies, has contributed to the development of new activities in the private and public sectors. These new technologies provide market reach, lower costs, and new opportunities for products and services that were not needed before. This changes the way multinational enterprises (MNE) and startups design their business models. Digital platforms A digital platform operator is an entity or person offering an online communication service to the public based on computer algorithms used to classify content, goods, or services offered online, or the connection of several parties for the sale of goods, the provision of a service, or the exchange or sharing of content, goods, and services. Most of the largest digital platform companies are located in either the United States or China. Digital trade In the U.S. in the 1990s, the Clinton Administration proposed The Framework for Global Electronic Commerce. It contained the promotion of five principles used to guide the U.S. government's actions towards electronic commerce so that the digital economy's growth potential remains high. 
These five principles include the leadership of the private sector, the government avoiding undue restrictions on e-commerce, limited government involvement, the government's recognition of the Internet's unique qualities, and the facilitation of e-commerce on a global basis. Governments have primarily restricted digital trade through three means: Data flow restrictions: regulations that require that companies store data (e.g. personal information, business records, financial data, government data) in a particular country or go through a process before transferring the data abroad. For example, the EU's GDPR law only permits transfers of data on EU individuals to countries that have implemented certain data privacy safeguards and been certified by the EU. Data localization requirements: regulations that require that data be stored on servers within a country. Digital services taxes: taxes on revenues from the sale of digital services or goods (e.g. online sales, digital advertising, e-commerce, data, streaming). By 2022, 29 countries had digital service taxes. Gig economy Gig work is labor that consists of temporary and flexible jobs usually done over delivery apps and rideshare services such as Grubhub, Uber, Lyft, and Uber Eats. It can be desirable to those who want more flexibility in their schedule and can allow workers to make additional income outside of their traditional jobs. Most gig work supplements workers' traditional jobs. The full size of the gig economy and the number of workers are not yet known. Katz and Krueger estimated that only 0.5% of gig workers make most of their income off of platforms like Uber, Lyft, Grubhub, and DoorDash. Since these workers are considered independent contractors, the companies are not responsible for giving their workers the benefits packages they would provide to regular full-time employees. This has resulted in the formation of unions among gig and platform workers and various reforms within the industry. Blockchain and tokenized equity-sharing gig-economy platforms and applications are being developed to accelerate the gig economy as a full-fledged contributor to the digital economy using new technologies. Impact on retail The digital economy has had a substantial impact on retail sales of consumer product goods. One effect has been the fast proliferation of retailers with no physical presence, such as eBay or Amazon. Additionally, traditional retailers such as Walmart and Macy's have restructured their businesses to adapt to a digital economy. Some retailers, like Forever 21, have declared bankruptcy as a result of their failure to anticipate and adapt to a digital economy. Others, such as Bebe Stores, have worked with outside vendors to convert their business to one that is exclusively digital. These vendors, such as IBM and Microsoft, have enabled smaller retailers to compete with large, established multinational brands. Key features Mobility Mobility of intangibles Both the development and the exploitation of intangible assets are key features of the digital economy. This investment in and development of intangibles such as software is a core contributor to value creation and economic growth for companies in the digital economy. In the early 2000s, companies started substantially increasing the amount of capital allocated to intangibles such as branding, design, and technology rather than to hardware, machinery, or property.
Mobility of business functions Advancements in information and communication technologies (ICT) have significantly reduced the cost associated with the organization and coordination of complex activities over a long period. Some businesses are increasingly able to manage their global operations on an integrated basis from a central location that is geographically separate from the locations in which the operations are carried out, and where their suppliers or customers are. This has allowed businesses to expand access to remote markets and provide goods and services across borders. Reliance on data The digital economy relies on the collection of personal data. In 1995, the Data Protection Directive (Directive 95/46/EC, art. 2) defined personal data as "any information relating to a natural person who can be identified by reference to his identification number or to information which is specific to him". At that time, this regulation emerged in response to the need to integrate the European market. By adopting common European data protection standards, the EU was able to harmonize conflicting national laws that were emerging as a trade barrier, inhibiting commerce in Europe. For this reason, GDPR and its predecessor were viewed as internal market instruments, facilitating the creation of a digital single market by allowing an unhindered flow of data within the entire common market. Due to its ability to bridge the information asymmetry between supply and demand, data now has an economic value. When platforms compile personal data, they gather preferences and interests, which allows companies to target consumers through advertising. Algorithms classify, reference, and prioritize the preferences of individuals to better predict their behavior. By offering free access to platforms in exchange for the collection of personal data, platforms make their content non-rival. The intangibility of content thus tends to give this widely accessible information the character of a collective good, benefiting the public good by creating a digital public space. The McKinsey Global Institute Report (2014) notes five broad ways in which leveraging big data can create value for businesses: Creating transparency by making data more easily accessible to stakeholders with the capacity to use the data, Managing performance by enabling experimentation to analyze variability in performance and understand its root causes, Segmenting populations to customize products and services, Improving decision making by replacing or supporting human decision making with automated algorithms, Improving the development of new business models, products, and services. In 2011, the Boston Consulting Group estimated that personal data collected in Europe was worth 315 billion euros. Network effect The network effect occurs when the value of a product or service to the user increases as the number of other users of the same product or service grows. For instance, WhatsApp provides a free platform for communicating with friends and contacts. Its utility relies on the fact that a substantial share of one's friends and colleagues are already users. Multi-sided market The digital market can be labeled a ‘multi-sided’ market. The notion, developed by French Nobel prize laureate Jean Tirole, is based on the idea that platforms are ‘two-sided’. This can explain why some platforms can offer free content, with customers on one side and the software developers or advertisers on the other.
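As a purely illustrative sketch of this two-sided structure (all figures hypothetical and not drawn from any cited source), the snippet below models a platform whose user side pays nothing, whose value to users grows with the number of other users, and whose revenue comes entirely from advertisers paying for user attention:

```python
# Toy model of a two-sided platform: a free user side and a paying advertiser side.
# All numbers are hypothetical and serve only to illustrate the mechanism.

def user_side_value(n_users: int, value_per_connection: float = 0.01) -> float:
    """Metcalfe-style proxy: total user-side value grows with the number of
    possible pairwise connections, n * (n - 1) / 2 (the network effect)."""
    return value_per_connection * n_users * (n_users - 1) / 2

def advertiser_revenue(n_users: int, hours_per_user: float,
                       price_per_user_hour: float = 0.05) -> float:
    """The platform monetizes attention: advertisers pay in proportion to the
    user-hours they can reach, a positive externality created by the users."""
    return n_users * hours_per_user * price_per_user_hour

if __name__ == "__main__":
    for n in (1_000, 10_000, 100_000):
        print(f"{n:>7} users: user-side value ~ {user_side_value(n):>12,.0f}, "
              f"ad revenue ~ {advertiser_revenue(n, hours_per_user=20):>9,.0f}")
```

The only point of the sketch is the asymmetry it encodes: the value and the externalities arise on the user side, while the revenue is collected on the advertiser side, which is what the multi-sided-market analysis below describes.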
In a market where multiple groups of persons interact through platforms acting as intermediaries, the decisions of each group affect the outcomes of the other groups through positive or negative externalities. When users spend time on a page or click on links, this creates a positive externality for the advertiser displaying a banner there. Digital multinational enterprises (MNEs) do not collect revenue from the user side but from the advertiser side, through the sale of online advertising. Response Given its expected broad impact, traditional firms are actively assessing how to respond to the changes brought about by the digital economy. For corporations, the timing of their response is of the essence. Banks are trying to innovate and use digital tools to improve their traditional business. Governments are investing in infrastructure. In 2013, the Australian National Broadband Network, for instance, aimed to provide fiber-based broadband with download speeds of up to 1 gigabit per second to 93% of the population over ten years. Digital infrastructure is essential for leveraging investment in digital transformation. According to a survey conducted in 2021, 16% of EU enterprises regard access to digital infrastructure as a substantial barrier to investment. Some companies have responded to the regulatory challenge posed by the digital economy through tax avoidance. Due to the immaterial nature of digital activities, these digital multinational enterprises (MNEs) are extremely mobile, which allows them to optimize tax avoidance. They can carry out high volumes of sales from a low-tax jurisdiction. Concretely, governments face MNE fiscal optimization, with companies locating their activity in the countries where tax is the lowest. On the other hand, companies can undergo double taxation for the same activity or be confronted with legal and tax vagueness. The Conseil National du Numérique concluded that the corporate tax shortfall attributable to Apple, Google, Amazon, and Facebook was worth approximately 500 million euros in 2012. According to 55% of businesses surveyed in the European Investment Bank's Investment Survey in 2021, the COVID-19 pandemic has increased the demand for digitalization. 46% of businesses report that they have grown more digital. 34% of enterprises that do not yet utilise advanced digital technology saw the COVID-19 crisis as a chance to focus on digitisation. Firms that have incorporated innovative digital technology are more positive about their industry's prospects and overall economic conditions in the recovery from the COVID-19 pandemic. There is, however, a discrepancy between businesses in more developed locations and those in less developed regions. Businesses in poorer regions are more concerned about the pandemic's consequences. Companies in affected areas anticipate long-term effects on their supply chain from the outbreak. A larger proportion of businesses anticipate permanent employment losses as a result of the digital transformation brought on by COVID-19. During the pandemic, 53% of enterprises in the European Union that had previously implemented advanced digital technology invested more to become more digital. 34% of non-digital EU organizations viewed the crisis as a chance to begin investing in their digital transformation. 38% of firms reported in a survey that they focused on basic digital technologies, while 22% focused on advanced technologies (such as robotics and AI).
Organizations that invested in both advanced and basic digital technologies were found most likely to outperform during the pandemic. After the COVID-19 outbreak, the share of non-digital enterprises that downsized was also greater than the share of non-digital firms that had positive job growth. Non-digital companies had a negative net employment balance. Small and medium-sized businesses are falling behind large businesses. Only 30% of microenterprises in the European Union claimed to have taken action to advance digitalization in 2022, compared to 63% of major businesses. The proportion of EU enterprises employing advanced digital technology increased from 2021 to 2022, reaching 69%, compared with 71% in the United States. One in two American businesses surveyed and 42% of European businesses increased their investments in digitalization in response to the pandemic in 2022. In Europe, 31% of people work for companies that are non-digital, compared to 22% of people in the United States. This is also due to the fact that the European Union has many more small businesses than the United States. Smaller businesses are less digital, which has repercussions for the employees they hire. Non-digital enterprises tend to pay lower wages and are less likely to create new employment. They have also been less inclined to train their employees throughout the pandemic. Enterprises in the EU have lower adoption rates for the Internet of Things than firms in the US, and this lower use of technologies connected to the Internet of Things drives much of the variation in adoption rates between the European Union and the United States. In Eastern and Central Europe, manufacturing enterprises were the most likely to have implemented various digital technologies (47%) during and after the COVID-19 pandemic, while construction firms were the least likely (14%). Large enterprises (49% versus 27%) were more likely than SMEs to employ various technologies at the same time. Enterprises in these regions excel at robotics (49%), the Internet of Things (42%), and platform implementation (38%). A cashless society describes an economic state in which transactions no longer use physical currency (such as banknotes and coins) as the medium. Transactions which would historically have been undertaken with cash are often now undertaken electronically. EU digital area Remaining barriers to fulfill the Digital Single Market The Digital Single Market (DSM) was included as part of the Single Market Act initiatives adopted by the European Commission (EC). The question had already come up in 1990 and was raised again in 2010, emerging at a sensitive moment in the aftermath of the 2008 crisis and used as a catalyst for action. The crisis created an opportunity to place the Single Market at the front of the European agenda; the initiative aimed to resolve two issues: financial supervision and economic coordination. This gave a new dimension to the Market. The proposal for the DSM had been made under the Commission strategy entitled "Digital Agenda for Europe" in the political guidelines of the second Barroso Commission, and it pointed out the need to eliminate barriers in order to implement a European digital market as an attempt to relaunch the Single Market. This strategy was similar to the one used for the Internal Market in 1985 and focused on one of the weaknesses of the latter, namely the fragmentation of national digital markets.
Building on the Monti report, the communication 'Towards a Single Market Act' detailed 50 proposals to reform the SM by the end of 2012. But the DSM was only adopted in 2015, and the proposal for a directive of the European Parliament and the Council was made in September 2016. The DSM is presented as a key priority for the economy of the Union; even though there have been several attempts to deepen integration, obstacles remain. The creation of the DSM constitutes a catalyst to resolve several issues and was supposed to have a widespread multiplier effect throughout sectors across the EU. The European Commission faced several obstacles. The Commission sought to transform the Single Market deeply, but it lacked the political support to enhance the impact of its decisions. The low salience of the issue was a causal factor explaining the limits of the Commission's commitment to reforming the Single Market. Even though the member states approved the DSM, and the definition of the DSM was accepted by the European institutions as a key priority, only one proposal was adopted at the end of 2012. Despite being a priority in SMA I and II, and despite the DSM's potential as a 'blockbuster' for economic gains, legislative initiatives failed because of the high cost of implementation measures and citizen protests against the rescue of sovereign-debt countries and bank bail-outs. The slow adoption of the proposal is partly due to member states' protectionist temptations after the economic crisis. Each state wanted to put forward its own preferences and legislation concerning this field. With regard to artificial intelligence (AI), the Commission adopted various initiatives with no meaningful coordination. The more pervasive the digital ecosystem becomes, the more sector-specific regulatory frameworks may need to be merged into general regimes. Though the Commission used the crisis as a window of opportunity, this did not allow it to go further in implementing a far-reaching transformation of the Single Market. The crisis context pushed political actors to move forward to better manage the crisis, but did not permit the Commission to fully implement the DSM. Current challenges One of the key priorities of the EU is to guarantee fair competition. Yet, within the digital market, competition may be distorted. The stronger the network effects, the higher the barriers to entry (the difficulty for a new entrant to enter the market and compete). Vertical or horizontal mergers and acquisitions take place in closed ecosystems. In order to limit this problem in the digital ecosystem, the EU aims to qualify certain practices as either an "abuse of a dominant position" or a "cartel", both of which run against competition within the Single Market. Digital companies such as the GAFA prosper thanks to the various free services they make available to consumers, which appear beneficial to consumers but less so to firms in potential competition. It may be difficult for regulators to sanction firms such as the GAFA, due to the jobs and services they provide worldwide. Challenges for the regulator Certain challenges may exist for regulators. One example is in identifying and defining platforms. Member states lack coordination and may act independently of the regulator, who cannot have a global view of the market. Also, tax avoidance by digital MNEs has become a growing concern for most European governments, including the European Commission.
Attracting foreign investment is less and less seen as a relevant reason to implement tax cuts. Aside from the fiscal revenue shortfall, this issue has taken a political turn in recent years since some people and politicians feel that, in a time of financial crisis, these highly profitable firms do not contribute to the national effort. Strength within the EU digital policy The digital market is characterized by its heterogeneity. The European market is in a difficult position to compete with other advanced countries within the digital world (such as the US or China). There are currently no European digital champions. The European digital market is divided by regulations, standards, usages, and languages. The member states cannot meet the demand, or support innovation (R&D), given that the digital environment is by nature global. As noted by the European Parliament, the Digital Single Market could bring about 415 billion euros to the EU economy and could be considered an incentive to further deepen EU integration (EP opinion, 2014). Mechanisms of control The EU exercises ex-post control (in cases of abuse of dominance, for example) and appears very cautious in matters of competition (an exclusive competence). The EU sanctions cartel behavior and examines mergers in order to preserve competition and protect small and medium enterprises (SMEs) entering the market. Within the digital market, mergers often create digital firm dominance, thus possibly preventing the emergence of European equivalents. Moreover, regulation could in theory protect people working in or for the digital sector (such as Uber drivers, a recent case in France), which could present an opportunity. However, the EU may need to be cautious that regulation does not itself create barriers to market entry. European Commission versus Google In 2017, the EC fined Google €2.42 billion for abusing its dominant position as a search engine by giving an illegal advantage to Google Shopping. The EC aimed to pave the way toward relief for firms suffering from this abuse of a dominant position. Moreover, it sought to prove that the EC's strategy does work and that companies may be fined at high rates. Juncker Commission The digital economy had been a concern for the Commission since the first Barroso Commission. Yet it was only under the Juncker Commission that the DSM strategy was adopted, on 6 May 2015, ranked as the second of the 10 priorities for the new Commission's mandate. The strategy emphasized three policy pillars: improving access to digital goods and services, creating an environment where digital networks and services can prosper, and digital as a driver of growth. As a key priority, the newly elected Commission President Juncker put Andrus Ansip, a vice-president of the Commission, in charge of the DSM. The decision to approach the DSM from a different point of view also reflects the fact that the digital space is in constant evolution, with the growing importance of online platforms and changing market shares. The DSM was a priority because of its economic importance; total EU e-commerce reached €240 billion in 2011, of which €44 billion was cross-border trade between member states. Impacts Economy According to a 2016 estimate, the digital economy represented $11.5 trillion, or 15.5% of global GDP (18.4% of GDP in developed economies and 10% in developing economies on average).
It found that the digital economy had grown two and a half times faster than global GDP over the previous 15 years, almost doubling in size since 2000. Most of the value in the digital economy was produced in only a few economies: the United States (35%), China (13%) and Japan (8%). The EU together with Iceland, Liechtenstein and Norway accounted for another 25%. Some scholars have argued that the digital economy entails unequal economic exchanges where users and consumers provide value to digital firms in the form of data but are not compensated for doing so. Energy The Digital Economy uses a tenth of the world's electricity. The move to the cloud has also caused the rise in electricity use and carbon emissions. A server room at a data center can use, on average, enough electricity to power 180,000 homes. The Digital Economy can be used for mining Bitcoin which, according to Digiconomist, uses an average of 70.69 TWh of electricity per year. The number of households that can be powered using the amount of power that bitcoin mining uses is around 6.5 million in the US. Privacy rights Data gathering and tracking of individual behaviors by digital firms has implications for privacy rights. Data collected on individuals can be analyzed and monetized by technology firms without compensation to users. The data is not only used to predict behaviors, but influence behavior. The data collected is at risk of breaches where personal information can be intentionally or inadvertently exposed. Taxation The digital economy has implications for international tax rules. Digital technology companies produce goods that are not necessarily tied to specific geographical locations, which complicates taxation of those companies. Digital technology can therefore enable tax evasion and tax avoidance. Competition and antitrust The digital economy is characterized by network effects, rapid development of economies of scale, first-mover advantages and winner-takes-all dynamics, which make it possible for a small number of firms to gain a dominant market position and impede entry by potential competitors. These dynamics create concerns about market power, which could enable firms to charge higher prices and pay lower wages than if they experienced competition. Market power could also lead to outsized political influence by dominant technology firms, leading to deregulation. In some cases, digital platform companies can pit their users against governments, thus discouraging stringent regulations. Job displacement and offshoring By increasing automation of tasks previously performed by human workers, the digital economy has the potential to cause job displacement. Whether automation causes net job displacement depends on whether the gains from automation lead to greater consumer demand (by lowering prices for goods and services, and increasing household incomes) and whether the introduction of new labor-intensive tasks will create new jobs. Digital technology has facilitated the spread of global value chains and made it easier for capital in developed countries to access labor in the developing world, which can lead to greater offshoring and potentially harm low-skilled workers in developed countries. Labor rights The rise of digital platform companies has implications for the nature of work (in particular in the gig economy) and labor rights. 
Gig workers are generally classified as ‘independent workers’ (with temporary, off-site, autonomous contracts), which challenges the application of labor and occupational health and safety law. As a result, online platforms encourage the flexibilization of jobs and a higher volatility of the labor market, as opposed to traditional companies. Gig economy companies such as Deliveroo and Uber hire self-employed drivers who sign a contract with the digital platform even though the way they work is similar to that of a regular employee. Yet, for the first time, in March 2020, France's top court (Cour de cassation) ruled that an Uber driver could not qualify as a ‘self-employed’ contractor because he could not build his own clientele or set his prices, which established a relationship of subordination to the company. See also Digital currency E-commerce Online shopping Knowledge economy Cryptoeconomics References Cashless society Economic systems Information economy E-commerce Electronics industry Supply chain management
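As a rough plausibility check on the figures quoted in the Energy subsection above (70.69 TWh per year for Bitcoin mining, said to correspond to roughly 6.5 million US households), the short sketch below divides one by the other; the household-consumption benchmark it is compared against (about 10,500–11,000 kWh per year) is an assumption of this illustration, not a figure from the article:

```python
# Plausibility check on the Bitcoin energy figures quoted in the Energy subsection.
BTC_TWH_PER_YEAR = 70.69   # annual consumption cited from Digiconomist
US_HOUSEHOLDS = 6.5e6      # number of US households said to be powerable

kwh_per_household = BTC_TWH_PER_YEAR * 1e9 / US_HOUSEHOLDS   # 1 TWh = 1e9 kWh
print(f"Implied usage: {kwh_per_household:,.0f} kWh per household per year")
# ~10,900 kWh/year, close to typical US household consumption (assumed benchmark),
# so the two figures quoted in the article are mutually consistent.
```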
Digital economy
[ "Technology" ]
6,694
[ "Information and communications technology", "Information technology", "E-commerce", "Electronics industry" ]
9,613,870
https://en.wikipedia.org/wiki/Susan%20Solomon
Susan Solomon is an American atmospheric chemist, working for most of her career at the National Oceanic and Atmospheric Administration (NOAA). In 2011, Solomon joined the faculty at the Massachusetts Institute of Technology, where she serves as the Ellen Swallow Richards Professor of Atmospheric Chemistry & Climate Science. Solomon, with her colleagues, was the first to propose the chlorofluorocarbon free radical reaction mechanism that is the cause of the Antarctic ozone hole. Her most recent book, Solvable: how we healed the earth, and how we can do it again (2024) focuses on solutions to current problems, as do books by data scientist Hannah Ritchie, marine biologist, Ayana Elizabeth Johnson and climate scientist Katharine Hayhoe. Solomon is a member of the U.S. National Academy of Sciences, the European Academy of Sciences, and the French Academy of Sciences. In 2002, Discover magazine recognized her as one of the 50 most important women in science. In 2008, Solomon was selected by Time magazine as one of the 100 most influential people in the world. She also serves on the Science and Security Board for the Bulletin of the Atomic Scientists. Biography Early life Solomon was born in Chicago, Illinois. Her interest in science began as a child watching The Undersea World of Jacques Cousteau. In high school she placed third in a national science competition, with a project that measured the percentage of oxygen in a gas mixture. Solomon received a B.S. degree in chemistry from the Illinois Institute of Technology in 1977. She then received an M.S. in chemistry in 1979 followed by a Ph.D. in 1981 in atmospheric chemistry, both from the University of California, Berkeley. Personal life Solomon married Barry Sidwell in 1988. She is Jewish. Work Solomon was the head of the Chemistry and Climate Processes Group of the National Oceanic and Atmospheric Administration Chemical Sciences Division until 2011. In 2011, she joined the faculty of the Department of Earth, Atmospheric and Planetary Sciences at the Massachusetts Institute of Technology. Books The Coldest March: Scott's Fatal Antarctic Expedition, Yale University Press, 2002 – Depicts the tale of Captain Robert Falcon Scott's failed 1912 Antarctic expedition, specifically applying the comparison of modern meteorological data with that recorded by Scott's expedition in an attempt to shed new light on the reasons for the demise of Scott's polar party. Aeronomy of the Middle Atmosphere: Chemistry and Physics of the Stratosphere and Mesosphere, 3rd Edition, Springer, 2005 – Describes the atmospheric chemistry and physics of the middle atmosphere from altitude. The Ozone Hole Solomon, working with colleagues at the NOAA Earth System Research Laboratories, postulated the mechanism that the Antarctic ozone hole was created by a heterogeneous reaction of ozone and chlorofluorocarbons free radicals on the surface of ice particles in the high altitude clouds that form over Antarctica. In 1986 and 1987 Solomon led the National Ozone Expedition to McMurdo Sound, where the team gathered the evidence to confirm the accelerated reactions. Solomon was the solo leader of the expedition, and the only woman on the team. Her team measured levels of chlorine oxide 100 times higher than expected in the atmosphere, which had been released by the decomposition of chlorofluorocarbons by ultraviolet radiation. Solomon later showed that volcanoes could accelerate the reactions caused by chlorofluorocarbons, and so increase the damage to the ozone layer. 
Her work formed the basis of the U.N. Montreal Protocol, an international agreement to protect the ozone layer by regulating damaging chemicals. Solomon has also presented some research which suggests that implementation of the Montreal Protocols is having a positive effect. For her critical contribution to saving the ozone layer, Solomon was a winner of the 2021 Future of Life Award along with Joe Farman and Stephen O. Andersen. Jim Hansen, former Director of the NASA Goddard Institute for Space Studies and Director of Columbia University's Program on Climate Science, Awareness and Solutions said, "In Farman, Solomon and Andersen we see the tremendous impact individuals can have not only on the course of human history, but on the course of our planet's history. My hope is that others like them will emerge in today's battle against climate change." Professor Guus Velders, a climate scientist at Utrecht University said, "Susan Solomon is a deserving recipient of the Future of Life Award. Susan not only explained the processes behind the formation of the ozone hole, she also played an active role as an interface between the science and policy of the Montreal Protocol." The Coldest March – A book Using research work conducted by English explorer and navy officer Robert Falcon Scott, Solomon also wrote and spoke about Scott's 1911 expedition inThe Coldest March: Scott's Fatal Antarctic Expedition to counter a longstanding argument that blamed Scott for his and his crew's demise during that expedition. Scott attributed his death to unforeseen weather conditions – a claim that has been contested by British journalist and author Roland Huntford. Huntford claimed that Scott was a prideful and under-prepared leader. Solomon has defended Scott and said that "modern data side squarely with Scott", describing the weather conditions in 1911 as unusual. In the voluminous book (778 pages, 150+21 figures, 1444 references, 23 maps, 39 tables and 2 schemes) recently published by Dr. Krzysztof Sienicki, a theoretical physicist, Chapter 4 of this book examines Dr. Susan Solomon's analysis of the Terra Nova Expedition and demonstrates numerous errors and misrepresentations in her work. Below is a concise summary of the key findings including the key errors and criticisms: 1. Data Manipulation and Cherry-Picking: Dr. Solomon is accused of selectively presenting temperature data to falsely suggest that Captain Roald Amundsen experienced more favorable conditions than Captain Scott. Specifically, she omitted data points that contradicted her argument, such as temperatures above the long-term mean (pages 174–179 and 227–244), 2. Fabrication of Meteorological Data: The chapter claims that Solomon fabricated temperature data to support her thesis of an "Extreme Cold Snap." She is accused of falsifying temperature trends and extending analysis periods to include unrelated warm days to "warm up" data(pages 248–182 and 702–715), 3. Logical Fallacies: Solomon is critiqued for employing the Gambler’s fallacy, cherry-picking, and affirming the consequent to support her conclusions about the weather conditions faced by Captain Scott (pages 165–198 and 210–229), 4. Misrepresentation of Statistical Methods: Solomon allegedly failed to conduct proper statistical error analysisStatistics, hypothesis testing, and probability distribution analysis, which undermines the credibility of her conclusions (pages 192–200 and 700–710), 5. 
Misinterpretation of Historical Data: Solomon is accused of attributing modern weather station data incorrectly to the conditions of 1912. This includes comparing non-interchangeable geographical locations and inaccurately interpreting automated weather station readings (pages 165–170 and 255–289), 6. Subjective Assessments and Bias: The chapter accuses Solomon of dismissing Captain Scott's responsibility by attributing his failures solely to luck and weather, which is labeled as an overly subjective and biased approach (pages 179–181 and 220–223), 7. Errors in Critical Figures and Tables: The document identifies discrepancies in Solomon's figures and tables, noting that none of them accurately represent the true meteorological data from the Terra Nova expedition (pages 178–211 and 702–711). For a summary of Solomon's errors and manipulations, see also Chapter 17 (p. 658) and the following sections: Appendix 2 (p. 658): Errors and Fallacies in Drs. Solomon and Stearns' paper, "On the Role of the Weather in the Deaths of R. F. Scott and his Companions." Appendix 3 (p. 668): Data Dragging and Fabrication in Dr. Solomon's book, "The Coldest March: Scott's Fatal Antarctic Expedition." Intergovernmental Panel on Climate Change Solomon served on the Intergovernmental Panel on Climate Change. She was a contributing author for the Third Assessment Report. She was also co-chair of Working Group I for the Fourth Assessment Report. Awards 1991 – Henry G. Houghton Award for research in physical meteorology, awarded by the American Meteorological Society 1994 – Solomon Saddle (), a snow saddle at about elevation, named in her honor 1994 – Solomon Glacier (), an Antarctic glacier named in her honor 1999 – National Medal of Science, awarded by the President of the United States 2000 – Carl-Gustaf Rossby Research Medal, awarded by the American Meteorological Society 2004 – Blue Planet Prize, awarded by the Asahi Glass Foundation 2006 – V. M. Goldschmidt Award 2006 – Inducted into the Colorado Women's Hall of Fame 2007 – William Bowie Medal, awarded by the American Geophysical Union 2007 — Prix Georges Lemaître 2007 – As a member of IPCC, which received half of the Nobel Peace Prize in 2007, she shared a stage receiving the prize with Al Gore (who received the other half). 2008 – Grande Médaille (Great Medal) of the French Academy of Sciences 2008 – Foreign Member of the Royal Society 2008 – Member of the American Philosophical Society 2009 – Volvo Environment Prize, awarded by the Royal Swedish Academy of Sciences 2009 – Inducted into the National Women's Hall of Fame 2010 – Service to America Medal, awarded by the Partnership for Public Service 2012 – Vetlesen Prize, for work on the ozone hole, shared with Jean Jouzel. She was the first woman to receive this prize. 2013 – BBVA Foundation Frontiers of Knowledge Award in the Climate Change category 2015 – Honorary Doctorate (honoris causa) from Brown University. 2017 – Arthur L. 
Day Prize and Lectureship by the National Academy of Sciences for substantive work in atmospheric chemistry and climate change 2018 – Bakerian Lecture 2018 – Crafoord Prize in Geosciences 2019 – Made one of the members of the inaugural class of the Government Hall of Fame 2021 – On 31 July she was appointed as ordinary Member of the Pontifical Academy of Sciences 2021 – 2021 Future of Life Award (Ozone Layer) 2021 – NAS Award for Chemistry in Service to Society 2023 – Honorary Doctorate from Duke University 2023 – Female Innovator Prize from the VinFuture Foundation References External links Oral History Interview with Susan Solomon. (1997-09-05). American Meteorological Society Oral History Project. UCAR Archives. 1956 births Living people American geophysicists Atmospheric chemists American women chemists Illinois Institute of Technology alumni UC Berkeley College of Chemistry alumni Carl-Gustaf Rossby Research Medal recipients Members of the French Academy of Sciences Foreign members of the Royal Society Members of the United States National Academy of Sciences National Oceanic and Atmospheric Administration personnel National Medal of Science laureates 20th-century American women scientists 21st-century American women scientists Women geophysicists 20th-century American chemists 21st-century American scientists Members of Academia Europaea Intergovernmental Panel on Climate Change contributing authors Recipients of the V. M. Goldschmidt Award Vetlesen Prize winners
Susan Solomon
[ "Chemistry" ]
2,291
[ "Geochemists", "Recipients of the V. M. Goldschmidt Award" ]
9,614,445
https://en.wikipedia.org/wiki/Oxytocin%20receptor
The oxytocin receptor, also known as OXTR, is a protein which functions as receptor for the hormone and neurotransmitter oxytocin. In humans, the oxytocin receptor is encoded by the OXTR gene which has been localized to human chromosome 3p25. Function and location The OXTR protein belongs to the G-protein coupled receptor family, specifically Gq, and acts as a receptor for oxytocin. Its activity is mediated by G proteins that activate several different second messenger systems. Oxytocin receptors are expressed by the myoepithelial cells of the mammary gland, and in both the myometrium and endometrium of the uterus at the end of pregnancy. The oxytocin-oxytocin receptor system plays an important role as an inducer of uterine contractions during parturition and of milk ejection. OXTR is also associated with the central nervous system. The gene is believed to play a major role in social, cognitive, and emotional behavior. A decrease in OXTR expression by methylation of the OXTR gene is associated with Callous and unemotional traits in adolescence, rigid thinking in anorexia nervosa, problems with facial and emotional recognition, and difficulties in the affect regulation. A reduction in this gene is believed to lead to prenatal stress, postnatal depression, and social anxiety. Further research must be gathered before concluding these findings, however strong evidence is pointing in this direction. Studies on OXTR methylation—which downregulates oxytocin mechanisms—suggest this process is associated with increased gray matter density in the amygdala, implicating OXTR regulation in stress and parasympathetic regulation. In some mammals, oxytocin receptors are also found in the kidney and heart. Mesolimbic dopamine pathways The oxytocinergic circuit projecting from the paraventricular hypothalamic nucleus (PVN) innervates the ventral tegmental area (VTA) dopaminergic neurons that project to the nucleus accumbens, i.e., the mesolimbic pathway. Activation of the PVN→VTA projection by oxytocin affects sexual, social, and addictive behavior via this link to the mesolimbic pathway; specifically, oxytocin exerts a prosexual and prosocial effect in this region. Polymorphism The receptors for oxytocin (OXTR) have genetic differences with varied effects on individual behavior. The polymorphism (rs53576) occurs on the third intron of OXTR in three types: GG, AG, AA. The GG allele is connected with oxytocin levels in people . A-allele carrier individuals are associated with more sensitivity to stress, fewer social skills, and more mental health issues than the GG-carriers. In a study looking at empathy and stress, individuals with the allele GG scored higher than A-carrier individuals in a "Reading the Mind in the Eyes" test. GG carriers, with their naturally higher levels of oxytocin , were better able to distinguish between emotions. A-allele carriers responded with more stress to stressful situations than GG-allele carriers. A-allele carriers had lower scores on psychological resources, like optimism, mastery, and self-esteem, than GG individuals when measured with factor analysis for depressive symptomology and psychological resources, along with the Beck Depression Inventory. A-allele carriers had higher depressive symptomology and lower psychological resources than GG individuals. A-allele individuals scored lower in human sociality than GG people on a Tridimensional Personality Questionnaire. 
AA individuals had the lowest amygdala activation while processing emotionally salient information and those with GG had the highest activity when tested using BOLD during an fMRI. On the other hand, variations at the CD38 rs3796863 and OXTR rs53576 loci were not associated with psychosocial characteristics of adolescents assessed with the Strengths and Difficulties Questionnaire (SDQ); in studies with a similar design, authors recommend replication with larger samples and greater power to detect small effects, especially in age–sex subgroups of adolescents. The frequency of the A allele varies among ethnic groups, being significantly more common among East Asians than Europeans. Some evidence suggests an association between OXTR gene polymorphism, IQ, and autism spectrum disorder (ASD). Studies have done research focusing on variants in the third intron of the gene, a region that is strongly correlated with personality traits and ASD. OXTR knockout mice have shown abnormal behaviors such as social impairments and aggressiveness. These abnormalities can be reduced with oxytocin or oxytocin agonist administration. Overall, the study suggests that rare variants are considerably more abundant in individuals with ASD compared to that of a normal individual, however further research with larger sample sizes must be completed before concluding any information. Ligands Several selective ligands for the oxytocin receptor have recently been developed, but close similarity between the oxytocin and related vasopressin receptors make it difficult to achieve high selectivity with peptide derivatives. However the search for a druggable, non-peptide template has led to several potent, highly selective, orally bioavailable oxytocin antagonists. Oxytocin receptor agonists have also been developed. Agonists Peptide Carbetocin Demoxytocin Lipo-oxytocin-1 Merotocin Oxytocin Non-peptide LIT-001 — improved social deficits in mice; non-selective over vasopressin receptors TC OT 39 – non-selective over vasopressin receptors WAY-267,464 – anxiolytic in mice; possibly non-selective over vasopressin receptors Antagonists Peptide Atosiban Barusiban Non-peptide Epelsiban L-368,899 (CAS# 148927-60-0) L-371,257 (CAS# 162042-44-6) – peripherally selective (i.e. poor blood brain barrier penetration, few central effects) L-372,662 Nolasiban Retosiban (GSK-221,149) SSR-126,768 WAY-162,720 – centrally active following peripheral administration References External links G protein-coupled receptors Genes on human chromosome 3
Oxytocin receptor
[ "Chemistry" ]
1,355
[ "G protein-coupled receptors", "Signal transduction" ]
9,614,993
https://en.wikipedia.org/wiki/Chromatographic%20response%20function
Chromatographic response function, often abbreviated to CRF, is a coefficient which measures the quality of the separation obtained in a chromatographic run. The CRF concept was created during the development of separation optimization, to compare the quality of many simulated or real chromatographic separations. Many CRFs have been proposed and discussed. In high-performance liquid chromatography the CRF is calculated from various parameters of the solute peaks (such as width, retention time, and symmetry). In TLC the CRFs are based on the placement of the spots, measured as RF values. Examples in thin layer chromatography The CRFs in thin layer chromatography characterize the equal-spreading of the spots. The ideal case, in which the RF values of the spots are uniformly distributed over the <0,1> range (for example 0.25, 0.5 and 0.75 for three solutes), should be scored as the best situation possible. The simplest criteria are the minimum difference between sorted RF values, min(RF(i+1) − RF(i)), and the product of such differences, ∏(RF(i+1) − RF(i)) (Wang et al., 1996). Another function is the multispot response function (MRF) developed by De Spiegeleer et al. (Analytical Chemistry 59(1), 62–64, 1987). It is also based on the product of differences. This function always lies between 0 and 1. When two RF values are equal it is equal to 0; when all RF values are equally spread it is equal to 1. The L and U values – the lower and upper limits of RF – make it possible to avoid the band region. The last example of a coefficient sensitive to the minimal distance between spots is the retention distance (Komsta et al., 2007). The second group comprises criteria insensitive to the minimal difference between RF values (if two compounds are not separated, such CRF functions will not indicate it). They are equal to zero in the equal-spread state and increase as the situation gets worse. They include: separation response (Bayne et al., 1987), performance index (Gocan et al., 1991), informational entropy (Gocan et al., 1991, second reference), and retention uniformity (Komsta et al., 2007). In all of the above formulas, n is the number of compounds separated, RF(1...n) are the retention factors of the compounds sorted in non-descending order, RF(0) = 0 and RF(n+1) = 1. References Q.S. Wang, B.W. Yan, J. Planar Chromatogr. 9 (1996) 192. B.J.M. de Spiegeleer, P.H.M. de Moerloose, G.A.S. Sleghers, Anal. Chem. 59 (1987) 62. C.K. Bayne, C.Y. Ma, J. Liq. Chromatogr. 10 (1987) 3529. S. Gocan, M. Mihaly, Stud Univ B-B Chemia, 1 (1991) 18. S. Gocan, J. Planar Chromatogr. 4 (1991) 169. Ł. Komsta, W. Markowski, G. Misztal, J. Planar Chromatogr. 20 (2007) 27. See also Chromatography Chromatography
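To make the equal-spreading idea above concrete, here is a minimal illustrative sketch (not taken from any of the cited papers; the helper function name is hypothetical) that computes the two simplest criteria – the minimum difference and the product of differences – for a set of RF values, using the convention RF(0) = 0 and RF(n+1) = 1:

```python
# Illustrative computation of the two simplest TLC response criteria described
# above: the minimum difference and the product of differences between sorted
# RF values, with RF(0) = 0 and RF(n+1) = 1 appended as in the article.
from math import prod

def simple_crf_criteria(rf_values):
    rf = [0.0] + sorted(rf_values) + [1.0]
    diffs = [b - a for a, b in zip(rf, rf[1:])]
    return min(diffs), prod(diffs)

# Perfectly spread spots score best; two nearly co-migrating spots drive both toward 0.
print(simple_crf_criteria([0.25, 0.50, 0.75]))   # (0.25, 0.00390625)
print(simple_crf_criteria([0.40, 0.41, 0.90]))   # (0.01, 0.000196)
```

Perfectly spread spots (0.25, 0.5, 0.75) maximize both criteria, while two nearly co-migrating spots drive both toward zero, which is exactly the behaviour the first group of CRFs is designed to penalize.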
Chromatographic response function
[ "Chemistry" ]
714
[ "Chromatography", "Separation processes" ]
9,615,240
https://en.wikipedia.org/wiki/Nitazoxanide
Nitazoxanide, sold under the brand name Alinia among others, is a broad-spectrum antiparasitic and broad-spectrum antiviral medication that is used in medicine for the treatment of various helminthic, protozoal, and viral infections. It is indicated for the treatment of infection by Cryptosporidium parvum and Giardia lamblia in immunocompetent individuals and has been repurposed for the treatment of influenza. Nitazoxanide has also been shown to have in vitro antiparasitic activity and clinical treatment efficacy for infections caused by other protozoa and helminths; evidence suggested that it possesses efficacy in treating a number of viral infections as well. Chemically, nitazoxanide is the prototype member of the thiazolides, a class of drugs which are synthetic nitrothiazolyl-salicylamide derivatives with antiparasitic and antiviral activity. Tizoxanide, an active metabolite of nitazoxanide in humans, is also an antiparasitic drug of the thiazolide class. Nitazoxanide tablets were approved as a generic medication in the United States in 2020. Uses Nitazoxanide is an effective first-line treatment for infection by Blastocystis species and is indicated for the treatment of infection by Cryptosporidium parvum or Giardia lamblia in immunocompetent adults and children. It is also an effective treatment option for infections caused by other protozoa and helminths (e.g., Entamoeba histolytica, Hymenolepis nana, Ascaris lumbricoides, and Cyclospora cayetanensis). Chronic hepatitis B Nitazoxanide alone has shown preliminary evidence of efficacy in the treatment of chronic hepatitis B over a one-year course of therapy. Nitazoxanide 500 mg twice daily resulted in a decrease in serum HBV DNA in all of 4 HBeAg-positive patients, with undetectable HBV DNA in 2 of 4 patients, loss of HBeAg in 3 patients, and loss of HBsAg in one patient. Seven of 8 HBeAg-negative patients treated with nitazoxanide 500 mg twice daily had undetectable HBV DNA and 2 had loss of HBsAg. Additionally, nitazoxanide monotherapy in one case and nitazoxanide plus adefovir in another case resulted in undetectable HBV DNA, loss of HBeAg and loss of HBsAg. These preliminary studies showed a higher rate of HBsAg loss than any currently licensed therapy for chronic hepatitis B. The similar mechanism of action of interferon and nitazoxanide suggest that stand-alone nitazoxanide therapy or nitazoxanide in concert with nucleos(t)ide analogs have the potential to increase loss of HBsAg, which is the ultimate end-point of therapy. A formal phase 2 study is being planned for 2009. Chronic hepatitis C Romark initially decided to focus on the possibility of treating chronic hepatitis C with nitazoxanide. The drug garnered interest from the hepatology community after three phase II clinical trials involving the treatment of hepatitis C with nitazoxanide produced positive results for treatment efficacy and similar tolerability to placebo without any signs of toxicity. A meta-analysis from 2014 concluded that the previous held trials were of low-quality and withheld with a risk of bias. The authors concluded that more randomized trials with low risk of bias are needed to determine if Nitazoxanide can be used as an effective treatment for chronic hepatitis C patients. Contraindications Nitazoxanide is contraindicated only in individuals who have experienced a hypersensitivity reaction to nitazoxanide or the inactive ingredients of a nitazoxanide formulation. 
Adverse effects The side effects of nitazoxanide do not significantly differ from a placebo treatment for giardiasis; these symptoms include stomach pain, headache, upset stomach, vomiting, discolored urine, excessive urinating, skin rash, itching, fever, flu syndrome, and others. Nitazoxanide does not appear to cause any significant adverse effects when taken by healthy adults. Overdose Information on nitazoxanide overdose is limited. Oral doses of 4 grams in healthy adults do not appear to cause any significant adverse effects. In various animals, the oral LD50 is higher than 10 . Interactions Due to the exceptionally high plasma protein binding (>99.9%) of nitazoxanide's metabolite, tizoxanide, the concurrent use of nitazoxanide with other highly plasma protein-bound drugs with narrow therapeutic indices (e.g., warfarin) increases the risk of drug toxicity. In vitro evidence suggests that nitazoxanide does not affect the CYP450 system. Pharmacology Pharmacodynamics The anti-protozoal activity of nitazoxanide is believed to be due to interference with the pyruvate:ferredoxin oxidoreductase (PFOR) enzyme-dependent electron-transfer reaction that is essential to anaerobic energy metabolism. PFOR inhibition may also contribute to its activity against anaerobic bacteria. It has also been shown to have activity against influenza A virus in vitro. The mechanism appears to be by selectively blocking the maturation of the viral hemagglutinin at a stage preceding resistance to endoglycosidase H digestion. This impairs hemagglutinin intracellular trafficking and insertion of the protein into the host plasma membrane. Nitazoxanide modulates a variety of other pathways in vitro, including glutathione-S-transferase and glutamate-gated chloride ion channels in nematodes, respiration and other pathways in bacteria and cancer cells, and viral and host transcriptional factors. Pharmacokinetics Following oral administration, nitazoxanide is rapidly hydrolyzed to the pharmacologically active metabolite, tizoxanide, which is 99% protein bound. Tizoxanide is then glucuronide conjugated into the active metabolite, tizoxanide glucuronide. Peak plasma concentrations of the metabolites tizoxanide and tizoxanide glucuronide are observed 1–4 hours after oral administration of nitazoxanide, whereas nitazoxanide itself is not detected in blood plasma. Roughly of an oral dose of nitazoxanide is excreted as its metabolites in feces, while the remainder of the dose excreted in urine. Tizoxanide is excreted in the urine, bile and feces. Tizoxanide glucuronide is excreted in urine and bile. Chemistry Acetic acid [2-[(5-nitro-2-thiazolyl)amino]-oxomethyl]phenyl ester is a carboxylic ester and a member of benzamides. It is functionally related to a salicylamide. Nitazoxanide is the prototype member of the thiazolides, which is a drug class of structurally-related broad-spectrum antiparasitic compounds. Nitazoxanide belongs to the class of drugs known as thiazolides. It is a broad-spectrum anti-infective drug that significantly modulates the survival, growth, and proliferation of a range of extracellular and intracellular protozoa, helminths, anaerobic and microaerophilic bacteria, in addition to viruses. Nitazoxanide is a light yellow crystalline powder. It is poorly soluble in ethanol and practically insoluble in water. The molecular formula of Nitazoxanide is C12H9N3O5S and its molecular weight is 307.28 g/mol2. 
Tizoxanide, an active metabolite of nitazoxanide in humans, is also an antiparasitic drug of the thiazolide class. IUPAC Name: [2-[(5-nitro-1,3-thiazol-2-yl)carbamoyl]phenyl] acetate Canonical SMILES: CC(=O)OC1=CC=CC=C1C(=O)NC2=NC=C(S2)[N+](=O)[O-] MeSH Synonyms: 1) 2-(acetolyloxy)-N-(5-nitro-2-thiazolyl)benzamide 2) Alinia 3) Colufase 4) Cryptaz 5) Daxon 6) Heliton 7) Ntz 8) Taenitaz History Nitazoxanide was originally discovered in the 1980s by Jean-François Rossignol at the Pasteur Institute. Initial studies demonstrated activity versus tapeworms. In vitro studies demonstrated much broader activity. Dr. Rossignol co-founded Romark Laboratories, with the goal of bringing nitazoxanide to market as an anti-parasitic drug. Initial studies in the USA were conducted in collaboration with Unimed Pharmaceuticals, Inc. (Marietta, GA) and focused on development of the drug for treatment of cryptosporidiosis in AIDS. Controlled trials began shortly after the advent of effective anti-retroviral therapies. The trials were abandoned due to poor enrollment, and the FDA rejected an application based on uncontrolled studies. Subsequently, Romark launched a series of controlled trials. A placebo-controlled study of nitazoxanide in cryptosporidiosis demonstrated significant clinical improvement in adults and children with mild illness. Among malnourished children in Zambia with chronic cryptosporidiosis, a three-day course of therapy led to clinical and parasitologic improvement and improved survival. In Zambia and in a study conducted in Mexico, nitazoxanide was not successful in the treatment of cryptosporidiosis in advanced infection with human immunodeficiency virus at the doses used. However, it was effective in patients with higher CD4 counts. In the treatment of giardiasis, nitazoxanide was superior to placebo and comparable to metronidazole. Nitazoxanide was successful in the treatment of metronidazole-resistant giardiasis. Studies have suggested efficacy in the treatment of cyclosporiasis, isosporiasis, and amebiasis. Recent studies have also found it to be effective against beef tapeworm (Taenia saginata). Pharmaceutical products Dosage forms Nitazoxanide is currently available in two oral dosage forms: a tablet (500 mg) and an oral suspension (100 mg per 5 ml when reconstituted). An extended-release tablet (675 mg) has been used in clinical trials for chronic hepatitis C; however, this form is not currently marketed or available for prescription. Brand names Nitazoxanide is sold under the brand names Adonid, Alinia, Allpar, Annita, Celectan, Colufase, Daxon, Dexidex, Diatazox, Kidonax, Mitafar, Nanazoxid, Parazoxanide, Netazox, Niazid, Nitamax, Nitax, Nitaxide, Nitaz, Nizonide, Pacovanton, Paramix, Toza, and Zox. Research Nitazoxanide has been in phase 3 clinical trials for the treatment of influenza, owing to its inhibitory effect on a broad range of influenza virus subtypes and its efficacy against influenza viruses that are resistant to neuraminidase inhibitors like oseltamivir. Nitazoxanide is also being researched as a potential treatment for COVID-19, chronic hepatitis B, chronic hepatitis C, and rotavirus and norovirus gastroenteritis. References Further reading Acetate esters Antiparasitic agents Antiviral drugs Nitrothiazoles Salicylamide ethers
Nitazoxanide
[ "Biology" ]
2,621
[ "Antiviral drugs", "Biocides", "Antiparasitic agents" ]
9,615,728
https://en.wikipedia.org/wiki/Dethridge%20wheel
The Dethridge wheel is an irrigation tool that was invented in 1910 by John Stewart Dethridge (1865–1926). It works in a similar way to a traditional water wheel and rotates as water passes through its vanes. The rotations are then measured. The Dethridge wheel was prevalent throughout the 20th century and was used in several countries and regions including Australia, India, Indonesia, Israel, the United States, and parts of Africa. History In 1910, the commissioner of the Victorian State Rivers and Water Supply Commission, John Stewart Dethridge, developed the "Dethridge Direct-Measuring Water Meter" or "Dethridge Wheel". Its initial use was to accurately measure the flow of water at specific irrigation sites in Australia, especially in areas throughout New South Wales and Victoria. Because Australia is vulnerable to drought and water loss, the flow of water had to be regulated to ensure a sustainable and efficient use of water in irrigation. The Dethridge Wheel was used all the way until the 21st century, and several countries have drawn on its technological insights. The Dethridge Wheel was not the only invention developed by John Dethridge, but it is widely regarded as one of the most influential pieces of machinery that he is credited with inventing. Due to its consistent use, many people, especially in Australia, have an appreciation for the tool. In 1965, a memorial was erected in Griffith, New South Wales, Australia to commemorate the Dethridge Wheel and the Murrumbidgee Irrigation Area. The Murrumbidgee Irrigation Area or "MIA" is a portion of the Riverina Area which was established to bring water from local rivers to assist food production. Past use The Dethridge wheel was developed to measure the water delivered through irrigation canals as the water passed onto farmland. It has been described as "simple" by faculty at the Darmstadt University of Applied Sciences as it used uncomplicated technology to measure the volume of water that passed through it. The wheel functioned by allowing water to pass under it, in turn causing the wheel to spin. The wheel's rotations while water was passing through would then be measured, providing accurate data on water use for farmers throughout Australia. The wheel was an influential tool for water usage and sustainability throughout the 20th century. One of the locations where the Dethridge wheel was used continually is the Murray-Darling Basin. The basin encompasses the Murray and Darling rivers, which are among the longest rivers on the Australian continent. In 1950, a quarterly review of irrigated dairy farms in Victoria was released. It mentioned the use of the Dethridge wheel as a water measurement device and said, "Figures for water used must be treated with reserve due to the difficulties associated with the measurement of water delivered to farms, even where the Dethridge wheels are installed". Early first-hand accounts of the Dethridge wheel's use in the 1950s provide evidence of the importance of the wheel as an irrigation tool in Australia. Modern use At the start of the 21st century, the Dethridge wheel began to fall into obsolescence. The importance of water management and water allocation systems was being continually investigated.
Research was beginning to show that tools such as the Dethridge wheel were too inaccurate and took no account of environmental fragility. In 2002, the TCC, or "Total Channel Control", system was implemented in several water canals throughout the northern parts of Victoria, Australia. The TCC was part of a broad effort to modernize irrigation techniques and to bring more sustainability to open-canal water distribution. Channel automation was piloted in 2002 in the Central Goulburn channel system in the Shepparton irrigation region of northern Victoria. Dethridge wheels were replaced by newer technology known as FlumeGate, which was introduced with the Total Channel Control system. FlumeGate aimed to supersede the Dethridge wheel and provide more accurate measurement data for irrigators. Although these new additions were supported by government funding, the new technology reportedly received negative feedback from individual irrigators. The push toward more efficient irrigation techniques was heavily influenced by the Northern Victorian Irrigation Renewals Project (NVIRP), which had recognized the environmental impacts of the existing infrastructure in the late 1990s. Because the Dethridge wheel's design obstructs the flow of water it measures, it creates considerable disturbance, in turn causing a substantial loss of water and affecting water consumption over time. Observations of water consumption taken in northern Victoria found that water distribution in the area was losing about 990 gigalitres each year. Additionally, the Northern Victoria Irrigation Renewals Project worked closely with community groups and government agencies to modernize the Goulburn Murray Irrigation District (GMID). The significance of the Goulburn Murray Irrigation District is supported by irrigation statistics: the district contributes around 30% of Victoria's gross agricultural production. The Dethridge wheel was also studied for its potential as a low-head power generator. In the early 21st century, the transition to environmentally sound technology was encouraged by the Kyoto Protocol of 1997, which aimed to reduce overall greenhouse gas emissions as concerns about climate change grew. The Dethridge wheel was tested to see how it could be used as a hydroelectric power generator for low-head sites in open-channel flow. When tested as a low-head power generator the Dethridge wheel performed much like a Zuppinger wheel, and it was considered effective for producing energy from irrigation flows. Dethridge wheels were replaced by fully automated machinery such as Total Channel Control to prevent loss and leakage in irrigation efforts. Total Channel Control offered water metering and promised an increase in peak water flow rates. Field measurements taken by Goulburn-Murray Water also found that the Dethridge wheel caused irrigators to receive more water than they were entitled to, which calls into question the wheel's cost-effectiveness as an irrigation meter. In 1999, CSIRO Land and Water used the Dethridge wheel as a water meter to measure drainage volumes as part of efforts to improve drainage systems in Griffith, New South Wales. The Griffith City Council aimed to improve the sewage systems around Griffith, which is a major regional city in the Murrumbidgee Irrigation Area.
The Dethridge wheel's use in sewage and waste management in the area is one of many reasons it has held significance in Griffith, New South Wales. In an attempt to improve the performance of the Dethridge wheel and to study its use as a hydraulic device, two researchers, S. Paudel and N. Saenger, developed a "CFD", or computational fluid dynamics, model of the wheel. Paudel and Saenger worked at the Department of Civil Engineering, Darmstadt University of Applied Sciences, and were influential in research on the Dethridge wheel. They developed a three-dimensional CAD model of the wheel and ran tests to measure flow characteristics and other physical aspects. The development of a computational fluid dynamics model is described as tedious because of the freely flowing water and its interaction with a moving object. Channel automation controversy Although the turn of the century largely marked the end of the Dethridge wheel's widespread use, there was still frequent debate over the newer channel automation. The automation, heavily influenced by the Northern Victoria Irrigation Renewals Project, and the channel automation systems introduced in 2002 left irrigators divided. The new FlumeGates introduced at several northern irrigation points promised to improve ordering times and speed up irrigation. Regardless of the improvements or lack thereof, the channel automation received negative feedback and ultimately left the Victorian Government uncertain about how to proceed. Data The Dethridge wheel has been used as a hydroelectric power generator and has featured in experiments across many scientific studies. Its importance as a low-head power generator is supported by statistics on hydropower: hydropower makes up 76% of the world's total renewable and green electricity supply. The Dethridge wheel was found to have an efficiency of 60% when implemented for electricity generation. Research conducted in Indonesia found that the efficiency of the Dethridge wheel when used for hydropower decreases as water discharge increases, an observation attributed to the rotation of the wheel not being in proportion to the discharge of water. References Australian inventions Irrigation Flow meters
Dethridge wheel
[ "Chemistry", "Technology", "Engineering" ]
1,784
[ "Measuring instruments", "Flow meters", "Fluid dynamics" ]
9,616,023
https://en.wikipedia.org/wiki/C%20to%20HDL
C to HDL tools convert C language or C-like computer code into a hardware description language (HDL) such as VHDL or Verilog. The converted code can then be synthesized and translated into a hardware device such as a field-programmable gate array. Compared to software, equivalent designs in hardware consume less power (yielding higher performance per watt) and execute faster with lower latency, more parallelism and higher throughput. However, system design and functional verification in a hardware description language can be tedious and time-consuming, so systems engineers often write critical modules in HDL and other modules in a high-level language and synthesize these into HDL through C to HDL or high-level synthesis tools. C to RTL is another name for this methodology. RTL refers to the register-transfer level representation of a program necessary to implement it in logic. History Early development on C to HDL was done by Ian Page, Charles Sweeney and colleagues at Oxford University in the 1990s, who developed the Handel-C language. They commercialized their research by forming Embedded Solutions Limited (ESL) in 1999, which was renamed Celoxica in September 2000. In 2008, the embedded systems department of Celoxica was sold to Catalytic for $3 million, and the business later merged to become Agility Computing. In January 2009, Mentor Graphics acquired Agility's C synthesis assets. Celoxica continues to trade, concentrating on hardware acceleration to process transactions in the financial sector and other industries. Applications C to HDL techniques are most commonly applied to applications that have unacceptably high execution times on existing general-purpose supercomputer architectures. Examples include bioinformatics, computational fluid dynamics (CFD), financial processing, and oil and gas survey data analysis. Embedded applications requiring high performance or real-time data processing are also an area of use. System-on-chip (SoC) design may also take advantage of C to HDL techniques. C-to-VHDL compilers are very useful for large designs or for implementing code that might change in the future. Designing a large application entirely in HDL may be very difficult and time-consuming; the abstraction of a high-level language for such a large application will often reduce total development time. Furthermore, an application coded in HDL will almost certainly be more difficult to modify than one coded in a higher-level language. If the designer needs to add new functionality to the application, adding a few lines of C code will almost always be easier than remodeling the equivalent HDL code. Flow to HDL tools have a similar aim, but with flow-based rather than C-based design. Example tools SmartHLS (originally LegUp), ANSI C to Verilog tool developed by Microchip Technology, based on the LLVM compiler. CBG CtoV A tool developed 1995-99 by DJ Greaves (University of Cambridge) that instantiated RAMs and interpreted various SystemC constructs and datatypes. C-to-Verilog tool (NISC) from University of California, Irvine Altium Designer 6.9 and 7.0 (a.k.a.
Summer 08) from Altium Nios II C-to-Hardware Acceleration Compiler from Altera Catapult C tool from Mentor Graphics Cynthesizer from Forte Design Systems SystemC from Celoxica (defunct) Handel-C from Celoxica (defunct) DIME-C from Nallatech Impulse C from Impulse Accelerated Technologies Instant-SoC from FPGA-Cores FpgaC which is an open source initiative SA-C programming language Cascade (C to RTL synthesizer) from CriticalBlue Mitrion-C from Mitrionics SPARK (a C-to-VHDL) from University of California, San Diego VLSI/VHDL CAD Group Index of Useful Tools from Case Western Reserve University MyHDL is a Python-subset compiler and simulator to VHDL and Verilog See also Comparison of EDA Software Electronic design automation (EDA) High-level synthesis Silicon compiler Hardware acceleration References External links A good article on Dr Dobbs Journal about ImpulseC. An overview of flows by Daresbury Labs. An Overview of Hardware Compilation and the Handel-C language. Xilinx's ESL initiative, some products listed and C to VHDL tools. Altium's C-to-Hardware Compiler overview. Altera's Nios II C2H Acceleration Compiler White Paper. Hardware description languages Program transformation Hardware acceleration
C to HDL
[ "Technology", "Engineering" ]
925
[ "Electronic engineering", "Hardware acceleration", "Hardware description languages", "Computer systems" ]
9,616,151
https://en.wikipedia.org/wiki/VFinity
VFinity was a privately held software company based in New York City. It was founded in 2000 as Wan Net Technology, a consulting and software customization company, by former Chinese democracy movement leader Shen Tong, and became VFinity in 2004. VFinity sold web-based enterprise systems for digital asset management. Its products were used by broadcasters, archives, educational and financial institutions in the United States and Asia to manage their multimedia assets. The VFinity platform was soft-launched during the National Association of Broadcasters (NAB) show in 2006, and the recognition it won led to a keynote speech by its president Shen Tong at an NAB Super Session the following year. Independent research reports also followed VFinity's attempted entry into various markets: the Frost & Sullivan Hot Company Watch List in 2007, ABI's key players in World Digital Asset Management Markets in 2008, and Forrester's Rich Media Management Software report in 2010. According to this research and other trade publications in higher education, information technology, archive and library science, broadcast technology, and digital asset management, among others, VFinity's main value propositions and differentiation from traditional DAM vendors were its Web 2.0 approach to architecture, metadata, and user experience. VFinity and Shen Tong promoted notions such as "context is king", search-centric extreme ease of use with zero user training, folksonomy (or free tagging) combined with taxonomy (expert tagging), and a zero client (or Web client). The VFinity platform differed from traditional enterprise software, especially in the professional media industry, in its Web-based approach. VFinity's clients included cultural institutions, production and advertisement companies, film archives, national archives, and broadcasters: the TriBeCa Film Festival, the Beijing Olympics, Cathay Financials, National Taiwan University, Brandeis University, a PBS station producing prime time programming, and Bloomberg L.P. References Film and video technology Software companies established in 2000 Information technology management 2000 establishments in New York City American companies established in 2000
VFinity
[ "Technology" ]
398
[ "Information technology", "Information technology management" ]
9,616,402
https://en.wikipedia.org/wiki/Remote%20camera
A remote camera, also known as a trail camera or game camera, is a camera placed by a photographer in areas where the photographer generally cannot be at the camera to snap the shutter. This includes areas with limited access, tight spaces where a person is not allowed, or simply another angle, so that the photographer can simultaneously take pictures of the same moment from different locations. Remote cameras are most widely used in sports photography. 35 mm digital or film cameras and medium format cameras are the most commonly used types. Uses and practices Remote cameras are used by photographers to take more pictures from different angles. Remotes are very popular in sports and wildlife photography. Cameras are often placed at angles or positions that a photographer cannot physically occupy during a shoot. Examples in sport include behind the backboard at a basketball game or overhead in the rafters of an arena during a hockey game. Placement Remote cameras placed in suspended positions are usually mounted with clamps and arms such as the Bogen Super Clamp and Variable Friction Arm, often referred to as "Magic Arms". The camera and lens are connected to the variable friction arm, which is attached to the Super Clamp, which in turn is secured to a fixed item such as a basketball post, hand railing, or rafter. Ground plates or tripods are typically used for remote cameras placed on the ground. Triggering Remote cameras can be fired via hand triggers, sound triggers, radio transmitters (mainly Bluetooth shutters), a built-in self-timer, or a proximity sensor – in which case they are known as camera traps. For remotes that are in close proximity to the photographer, hand or sound triggers can be used. A hand trigger consists of a button or switch that is connected to the camera via a wire and is set to fire the camera's shutter. For remotes that are placed away from the photographer, radio triggering systems such as the Bluetooth shutter button, Pocket Wizards or Flash Wizards are used. A radio trigger consists of a button or switch connected to a radio transmitter or transceiver, which fires a radio receiver or transceiver that is connected to the camera via a wire and trips the camera's shutter. For rocket launches, including the Space Shuttle, remote cameras are triggered by the sound of the launch. Game camera A game camera is a rugged and weather-proof camera designed for extended and unsupervised use outdoors. The images they produce, taken automatically when motion is sensed, are used for game surveillance by hunters, farmers, ranchers and wildlife hobbyists and professionals. These cameras are intended to be strapped to trees or mounted on tripods (or other items), and they are motion-activated. This motion sensor enables the camera to capture images or videos of animals without using up all of its storage space, although numerous photos of waving plants and moving water can still clog up memory cards. These cameras have been instrumental in the rediscovery of multiple species once thought to be extinct or driven out of an area, such as the black-naped pheasant-pigeon, and fishers in Washington state. They have also been used by people endeavouring to take photographs of the non-existent creature Bigfoot (among other cryptids).
They can also be helpful for animal loss/rescue in documenting the presence and species of animals, such as determining whether a runaway dog is returning to its home at night or verifying the species actually eating the food left for a stray/feral cat. See also Camera trap Digital camera Film format Infrared photography Movie camera Selfie stick Video camera References Cameras Photography equipment Optical devices
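To illustrate the motion-activated capture described in the game camera section above, the following is a minimal sketch of a trigger loop with a cooldown period so that repeated motion (for example, from waving plants) does not fill the memory card. The sensor and capture functions are hypothetical placeholders standing in for real motion-sensor and camera interfaces.

import time
import random

def motion_detected() -> bool:
    """Placeholder for a PIR motion-sensor read; randomly simulated here."""
    return random.random() < 0.1

def capture_image(index: int) -> None:
    """Placeholder for the camera's capture call."""
    print(f"captured image_{index:04d}.jpg")

def run_trap(cooldown_s: float = 30.0, max_images: int = 5) -> None:
    """Capture on motion, then ignore further motion during the cooldown."""
    taken = 0
    last_shot = -cooldown_s
    while taken < max_images:
        now = time.monotonic()
        if motion_detected() and now - last_shot >= cooldown_s:
            capture_image(taken)
            taken += 1
            last_shot = now
        time.sleep(0.1)  # poll the sensor ten times per second

if __name__ == "__main__":
    run_trap(cooldown_s=1.0)  # short cooldown so the demo finishes quickly

A real game camera implements the same loop in firmware, with the cooldown and sensitivity exposed as user settings.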
Remote camera
[ "Materials_science", "Technology", "Engineering" ]
738
[ "Glass engineering and science", "Recording devices", "Optical devices", "Cameras" ]
9,617,193
https://en.wikipedia.org/wiki/Mary%20River%20turtle
The Mary River turtle (Elusor macrurus) is an endangered species of short-necked turtle in the family Chelidae. The species is endemic to the Mary River in south-east Queensland, Australia. Although this turtle was known to inhabit the Mary River for nearly 30 years, it was not until 1994 that it was recognised as a new species. There has been a dramatic decrease in its population due to low reproduction rates and increased depredation of nests. Taxonomy and common names The Mary River turtle was first formally described in 1994. The Mary River turtle is also commonly called the green-haired turtle and the punk turtle, due to the algae that grows on its head or shell. Elusor is a monotypic genus representing a very old lineage of turtles that has all but disappeared from the evolutionary history of Australia. Description Unusually for turtles, males are larger than females in this species. Male Mary River turtles are among Australia's largest turtles. Specimens in excess of straight carapace length have been recorded. Hatchlings have a straight carapace length of . Adult Mary River turtles have an elongated, streamlined carapace that can be plain in colour or intricately patterned. Overall colour can vary from rusty red to brown and almost black. The plastron varies from cream to pale pink. The skin colouration is similar to that of the shell and often has salmon pink present on the tail and limbs. The iris can be pale blue. The Mary River turtle uses bimodal respiration, and so is capable of absorbing oxygen via the cloaca whilst underwater. Because of this, it can stay under the water for three days at a time. However, it does regularly come to the surface to breathe air in the usual way. A unique feature of the Mary River turtle is the very large tail of males, which can measure almost two-thirds of the carapace length. Also unusual is that the tail is laterally compressed like a paddle. The tail also has haemal arches, a feature lost in all other Australian chelids. While haemal arches are documented in many cryptodiran species (including big-headed, common snapping, alligator snapping, and Pacific pond turtles), the only other pleurodiran with haemal arches is the mata mata of South America. It is probably a derived feature, but its function is not understood. Another unique feature is the exceptionally long barbels under the mandible. Proportionately, the Mary River turtle has the smallest head and largest hind feet of all the species within the catchment, which contributes to its distinction of being the fastest swimmer. The Mary River turtle is occasionally informally referred to as the green-haired turtle because many specimens are covered with growing strands of algae that resemble hair. It is also sometimes referred to as the bum-breathing turtle due to its use of its cloaca for respiration. Threats The Mary River turtle experiences many threats. Predation of hatchlings by red foxes, wild dogs, and fish occurs especially when the turtle is at the hatchling and juvenile stages of its life. Its greatest threat is the looting of its nests by dogs, foxes, and goannas. The land around the Mary River has been cleared many times, leading to low-quality water and a build-up of silt. Invasive plants along the river bank have also contributed to the lack of breeding success because the plants make it difficult for the Mary River turtle to go ashore and lay its eggs.
The Mary River turtle has the ability to blend into muddy waters and wait for unsuspecting prey to pass. Its algae-covered shell also allows it to stay hidden from predators. Ecology and behaviour Little is known about the ecology and behaviour of the Mary River turtle. It inhabits flowing and well-oxygenated sections of the Mary River basin from Gympie to Maryborough, using terrestrial nest sites. Its habitat consists of riffles and shallow parts that alternate with deeper pools. It prefers to inhabit clear and slow-moving water. The Mary River turtle takes an unusually long time to mature; it has been estimated that females take 25 years, and males 30 years, to become adults. Mature males may be aggressive towards other males or turtles of other species. The species is omnivorous, taking plant matter such as algae as well as bivalves and other small animal prey, such as fish, frogs, and sometimes even ducklings. Conservation In the 1960s and 1970s, the Mary River turtle was popular as a pet in Australia, with about 15,000 sent to shops every year during a 10-year period. They were originally known as the "penny turtle" or "pet shop turtle". This species is currently listed as Critically Endangered under Queensland's Nature Conservation Act 1992 and under the federal Environment Protection and Biodiversity Conservation Act 1999. The international conservation body IUCN lists it as Endangered on the IUCN Red List. It is also listed on the Zoological Society of London's Evolutionarily Distinct and Globally Endangered list, part of the EDGE of Existence programme, where it holds 30th place on the list for reptiles. It is Australia's most endangered freshwater turtle species; it was earlier ranked the second-most endangered freshwater turtle after the western swamp turtle (Pseudemydura umbrina) of Western Australia. The Mary River turtle was listed amongst the world's top 25 most endangered turtle species by the Turtle Conservation Fund in 2003. Australia's first reptile-focused, non-profit conservation organization, the Australian Freshwater Turtle Conservation and Research Association, was the first to breed this species in captivity for release into the wild, in 2007. A purpose-built hatchery was built along the banks of the Mary River in 2019/2020. The hatchery was built to hold nests, and it is planned to be used in the future to grow the Mary River turtle population. See also List of Nature Conservation Act endangered fauna of Queensland References External links Master of Science Thesis, University of Queensland. AFTCRA Inc. The Australian Freshwater Turtle Conservation and Research Association Mary River Turtle – Tiaro & District Landcare Group QLD Gympie Land Care Amonline www.qm.qld.gov.au/features/endangered/animals/river_turtle.asp Image of the Mary River turtle Turtles of Australia EPBC Act endangered biota Nature Conservation Act endangered biota Elusor Mary River (Queensland) Pets in Australia Reptiles described in 1994 EDGE species Endangered fauna of Australia
Mary River turtle
[ "Biology" ]
1,330
[ "EDGE species", "Biodiversity" ]
9,617,268
https://en.wikipedia.org/wiki/Separation%20property%20%28finance%29
A separation property is a crucial element of modern portfolio theory that gives a portfolio manager the ability to separate the process of satisfying investing clients' needs into two separate parts. The first part is the determination of the "optimum risky portfolio". This portfolio is the same for all clients. In one version, it is the portfolio with the highest Sharpe ratio; see mutual fund separation theorem for a discussion of other possibilities. It is the construction of a universal portfolio that is kept separate from the individual needs of each client. The second part is tailoring the use of that portfolio to each client's degree of risk aversion. This is achieved by spanning the desired risk–return range through allocating the client's total investment partly to that universal portfolio and partly to the risk-free asset. See also Markowitz model #Choosing the best portfolio - an expansion of the above Mutual fund separation theorem - relating to the construction of optimal portfolios Fisher separation theorem - discussing an analogous result in corporate finance References Finance theories Mathematical finance Financial economics
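As a rough illustration of the second step, the sketch below uses the standard mean-variance allocation y* = (E[r_p] - r_f) / (A * sigma_p^2), the fraction of wealth a client with risk-aversion coefficient A places in the common risky portfolio, with the remainder in the risk-free asset. All numeric inputs are illustrative assumptions, not recommendations.

# Minimal sketch of the two-part separation: one shared risky portfolio,
# then a client-specific split between it and the risk-free asset.
def risky_fraction(expected_return: float, risk_free: float,
                   volatility: float, risk_aversion: float) -> float:
    """Optimal fraction of wealth in the risky portfolio (mean-variance)."""
    return (expected_return - risk_free) / (risk_aversion * volatility ** 2)

if __name__ == "__main__":
    exp_ret, rf, vol = 0.08, 0.03, 0.20   # shared "optimum risky portfolio" (assumed figures)
    for a in (2.0, 4.0, 8.0):             # clients with increasing risk aversion
        y = risky_fraction(exp_ret, rf, vol, a)
        print(f"A={a:>3}: {y:.0%} risky portfolio, {1 - y:.0%} risk-free asset")

Note that the risky portfolio itself never changes across clients; only the split does, which is the content of the separation property.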
Separation property (finance)
[ "Mathematics" ]
208
[ "Applied mathematics", "Mathematical finance" ]
9,617,564
https://en.wikipedia.org/wiki/Oja%27s%20rule
Oja's learning rule, or simply Oja's rule, named after Finnish computer scientist Erkki Oja, is a model of how neurons in the brain or in artificial neural networks change connection strength, or learn, over time. It is a modification of the standard Hebb's rule that, through multiplicative normalization, solves all stability problems and generates an algorithm for principal components analysis. This is a computational form of an effect which is believed to happen in biological neurons. Theory Oja's rule requires a number of simplifications to derive, but in its final form it is demonstrably stable, unlike Hebb's rule. It is a single-neuron special case of the generalized Hebbian algorithm. However, Oja's rule can also be generalized in other ways to varying degrees of stability and success. Formula Consider a simplified model of a neuron y that returns a linear combination of its inputs x using presynaptic weights w: y(\mathbf{x}) = \sum_j w_j x_j. Oja's rule defines the change in presynaptic weights w, given the output response y of a neuron to its inputs x, to be \Delta \mathbf{w} = \mathbf{w}_{n+1} - \mathbf{w}_n = \eta\, y_n (\mathbf{x}_n - y_n \mathbf{w}_n), where \eta is the learning rate, which can also change with time. Note that the bold symbols are vectors and n defines a discrete time iteration. The rule can also be written for continuous time as d\mathbf{w}/dt = \eta\, y(t)\,(\mathbf{x}(t) - y(t)\,\mathbf{w}(t)). Derivation The simplest learning rule known is Hebb's rule, which states in conceptual terms that neurons that fire together, wire together. In component form as a difference equation, it is written \Delta \mathbf{w} = \eta\, y(\mathbf{x}_n)\, \mathbf{x}_n, or in scalar form with implicit n-dependence, w_i(n+1) = w_i(n) + \eta\, y\, x_i, where y(\mathbf{x}_n) is again the output, this time explicitly dependent on its input vector x. Hebb's rule has synaptic weights approaching infinity with a positive learning rate. We can stop this by normalizing the weights so that each weight's magnitude is restricted between 0, corresponding to no weight, and 1, corresponding to being the only input neuron with any weight. We do this by normalizing the weight vector to be of length one: w_i(n+1) = (w_i(n) + \eta\, y\, x_i) / \big(\sum_j [w_j(n) + \eta\, y\, x_j]^p\big)^{1/p}. Note that in Oja's original paper p = 2, corresponding to quadrature (root sum of squares), which is the familiar Cartesian normalization rule. However, any type of normalization, even linear, will give the same result without loss of generality. For a small learning rate, the equation can be expanded as a power series in \eta; for small \eta the higher-order terms go to zero. We again make the specification of a linear neuron, that is, the output of the neuron is equal to the sum of the product of each input and its synaptic weight. We also specify that our weights normalize to 1, which will be a necessary condition for stability, which, when substituted into our expansion, gives Oja's rule, \Delta w_i = \eta\, y\, (x_i - y\, w_i). Stability and PCA In analyzing the convergence of a single neuron evolving by Oja's rule, one extracts the first principal component, or feature, of a data set. Furthermore, with extensions using the generalized Hebbian algorithm, one can create a multi-Oja neural network that can extract as many features as desired, allowing for principal components analysis. A principal component a_j is extracted from a dataset x through some associated vector q_j, that is a_j = q_j^{\mathsf{T}} x, and we can restore our original dataset by taking x = \sum_j a_j q_j. In the case of a single neuron trained by Oja's rule, we find the weight vector converges to q_1, the first principal component, as time or the number of iterations approaches infinity. We can also define, given a set of input vectors x, that the correlation matrix R = \langle \mathbf{x}\mathbf{x}^{\mathsf{T}} \rangle has an associated principal eigenvector q_1 with eigenvalue \lambda_1.
The variance of the outputs of our Oja neuron then converges, as time iterations proceed, to the principal eigenvalue: \lim_{n \to \infty} \langle y^2 \rangle = \lambda_1. These results are derived using Lyapunov function analysis, and they show that Oja's neuron necessarily converges on strictly the first principal component if certain conditions are met in our original learning rule. Most importantly, our learning rate \eta is allowed to vary with time, but only such that its sum is divergent while its power sum is convergent, that is \sum_n \eta(n) = \infty and \sum_n \eta(n)^p < \infty for some p > 1. Our output activation function is also allowed to be nonlinear and nonstatic, but it must be continuously differentiable in both x and w and have derivatives bounded in time. Applications Oja's rule was originally described in Oja's 1982 paper, but the principle of self-organization to which it is applied is first attributed to Alan Turing in 1952. PCA has also had a long history of use before Oja's rule formalized its use in network computation in 1989. The model can thus be applied to any problem of self-organizing mapping, in particular those in which feature extraction is of primary interest. Therefore, Oja's rule has an important place in image and speech processing. It is also useful as it expands easily to higher dimensions of processing, thus being able to integrate multiple outputs quickly. A canonical example is its use in binocular vision. Biology and Oja's subspace rule There is clear evidence for both long-term potentiation and long-term depression in biological neural networks, along with a normalization effect in both input weights and neuron outputs. However, while there is no direct experimental evidence yet of Oja's rule active in a biological neural network, a biophysical derivation of a generalization of the rule is possible. Such a derivation requires retrograde signalling from the postsynaptic neuron, which is biologically plausible (see neural backpropagation). In this form, w_{ij} is, as before, the synaptic weight between the i-th input and j-th output neurons, x is the input, y is the postsynaptic output, \epsilon is a constant analogous to the learning rate, and the presynaptic and postsynaptic terms are functions that model the weakening of signals over time. Note that the angle brackets denote the average and the ∗ operator is a convolution. By taking the pre- and post-synaptic functions into frequency space and combining integration terms with the convolution, one obtains an arbitrary-dimensional generalization of Oja's rule known as Oja's subspace rule. See also BCM theory Contrastive Hebbian learning Generalized Hebbian algorithm Independent components analysis Principal component analysis Self-organizing map Synaptic plasticity References External links Oja, Erkki: Oja learning rule in Scholarpedia Oja, Erkki: Aalto University Computational neuroscience Artificial neural networks Neural circuitry Biophysics Hebbian theory Management cybernetics
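As a concrete illustration of the convergence described above, here is a minimal NumPy sketch (not part of the original article) that trains a single linear neuron with Oja's rule on synthetic two-dimensional data; the weight vector should align, up to sign, with the leading eigenvector of the data's correlation matrix.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean data with one dominant direction of variance.
n_samples = 5000
x = rng.normal(size=(n_samples, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])

w = rng.normal(size=2)
w /= np.linalg.norm(w)

eta = 0.01
for x_n in x:
    y = w @ x_n                     # linear neuron output
    w += eta * y * (x_n - y * w)    # Oja's rule update

# Compare against the principal eigenvector of the correlation matrix.
C = x.T @ x / n_samples
eigvals, eigvecs = np.linalg.eigh(C)
q1 = eigvecs[:, np.argmax(eigvals)]

print("learned w:", w / np.linalg.norm(w))
print("principal eigenvector (up to sign):", q1)

With a fixed learning rate the weight vector fluctuates around the principal direction; letting the rate decay over iterations, as required by the convergence conditions above, removes the residual fluctuation.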
Oja's rule
[ "Physics", "Biology" ]
1,360
[ "Applied and interdisciplinary physics", "Biophysics" ]
9,617,836
https://en.wikipedia.org/wiki/Broadcast%20Television%20Systems%20Inc.
Broadcast Television Systems (BTS) was a joint venture between Robert Bosch GmbH's Fernseh Division and Philips Broadcast in Breda, Netherlands, formed in 1986. History Philips had been in the broadcast market for many years with a line of PC- and LDK- Norelco professional video cameras and other video products. By the 1980s, the Norelco name was dropped in favour of Philips. Robert Bosch GmbH's Fernseh Division also had a long history going back to the early days of television (1929). BTS's North America headquarters was at first located in Salt Lake City, Utah. It was moved to Simi Valley, California, in 1991, and later returned to Salt Lake City. Also in 1991, BTS Latin America entered into an agreement to provide Televisa SA of Mexico with what was believed to be, up to that time, the largest equipment sale in history. In 1995 Philips Electronics North America Corp. fully acquired BTS Inc., renaming it Philips Broadcast-Philips Digital Video Systems. BTS Inc.'s factory in Darmstadt, Germany, was near the Darmstadt train station and the European Space Operations Centre; it was later moved a short distance to Weiterstadt, Germany. In March 2001, Philips' broadcast video division was sold to Thomson SA, the current owner; the division was called Thomson Multimedia. In 2002, the French electronics giant Thomson SA also acquired the Grass Valley Group from Terry Gooding of San Diego, CA, USA. Grass Valley, Inc., operating as Thomson Grass Valley (a Thomson brand), is headquartered in Nevada City, California. The Thomson Film Division, located in Weiterstadt, including the product line of Spirit DataCine, Bones Work station and LUTher 3D Color Space converter, was sold to Parter Capital Group. The sale was made public on Sept. 9, 2008 and completed on Dec. 1, 2008. The new headquarters is in Weiterstadt, in the former Bosch Fernseh - BTS factory. Parter Capital Group will continue to have worldwide offices to support products from Weiterstadt, Germany. The new name of the company is Digital Film Technology. On October 1, 2012, Precision Mechatronics and DFT were acquired by Prasad Corp, part of Prasad Studios. In 2013 DFT moved from Weiterstadt to Arheilgen-Darmstadt, Germany. Grass Valley operated offices in the cities of all the former acquisitions: Cergy, France (Thomson World Headquarters) Salt Lake City, Utah, USA – from TeleMation Inc. – Bell and Howell – BTS Beaverton, Oregon, USA – from Tektronix Nevada City, California, USA – from Grass Valley Group Breda, Netherlands – from Philips – Norelco – BTS After the financial crisis of 2008, Thomson breached its financial covenants and was forced by its creditors to divest itself of Grass Valley and other manufacturing entities. On January 29, 2009, Thomson announced that it was putting the Grass Valley division up for sale. In 2010, the Grass Valley business unit, not including the head-end and transmission businesses, was acquired by private equity firm Francisco Partners and resumed operating as an independent company on January 1, 2011. Grass Valley still maintains offices worldwide. Grass Valley was sold to Belden on February 6, 2014; Belden also owns Miranda. Products See: Fernseh - for German-made products TeleMation Inc. for SLC products. Philips invented the Plumbicon pick-up video camera tube in 1965; almost all of their color cameras used this award-winning tube. Starting with the LDK 90 camera, Philips used their frame-transfer CCD - charge-coupled device.
Philips' patented Dynamic Pixel Management (DPM) FT-17 CCD technology won awards and was first used in the 1994 LDK10 and LDK10p camera. Philips-BTS product from Breda, Netherlands, professional video camera products: EL-8020 B&W Studio 5 fixed lens LDK2 1970s Norelco LDH10 Norelco LDH20 Norelco LDH-0200 Studio Norelco LDK3 Studio PC-80 Norelco LDK4800 ? Triax repeater ? Camera ? LDK5 1971 Studio 3 tubes Philips LDK6 1982 Studio 3 tubes Norelco/BTS LDK9P BTS CCD 1993 HandHeld LDK10 BTS DPM CCD 1994 LDK10P BTS DPM CCD 1994 LDK11 1976 ENG Backpack Norelco LDK12 ENG LDK13 1971 ENG Backpack Norelco LDK14 1977 ENG 3 tubes Philips LDK15 1974? ENG Norelco LDK20 ~1997 BTS CCD LDK23HS BTS CCD Super slow mo LDK25 Studio LDK26 1982 Studio LDK33 Early Handheld LDK44 1984 Studio/ENG 3 tubes LDK54 Handheld 3 tubes LDK63 LDK65 LDK90 1987 BTS CCD HandHeld LDK91 BTS CCD HandHeld LDK93 BTS CCD HandHeld LDK491 ENG Philips DIODE GUN PLUMBICONS LDK614 LDK6 handheld LDK700 BTS CCD LDK910 BTS CCD Studio LDK9000 BTS CCD HDTV LDM42 B&W 1968 Studio LDM53 B&W Studio PC60 1965 Studio 3 tubes Norelco PC70 1967 Studio 3 tubes Norelco PC80 Studio Norelco LDK3 PC100 Studio Norelco PCP70 Handheld Norelco PCP90 1968 Handheld Norelco VIDEO 80 Handheld Current: LDK 300 CCD Thomson Grassvalley LDK 400 CCD Thomson Grassvalley LDK 500 CCD Thomson Grassvalley 2003 LDK23HS Mk2 CCD Super slow mo Thomson Grassvalley LDK 5000 CCD Thomson Grassvalley TTV 1657D CCD Thomson Grassvalley LDK 20S CCD Thomson Grassvalley LDK 1707 CCD Thomson Grassvalley LDK 4000 CCD HDTV Thomson Grassvalley LDK 5000 CCD HDTV Thomson Grassvalley LDK 6000 CCD HDTV Thomson Grassvalley LDK 6200 CCD HDTV Super SloMo LDK 8000 CCD HDTV/SDTV Philips early VTRs: LDL110 Portable NL1500 cassette LDL110 Portable 1977 NL1702 VR202 VR2350 Awards Outstanding Achievement in Technical/Engineering Development Awards from National Academy of Television Arts & Sciences. 1966-1967 PLUMBICON TUBE - N.V. Philips -Breda 1987-1988 FGS 4000 computer animation system CGI- BTS -SLC, UT 1991-1992 Triaxial cable Technology for Color Television Cameras -N.V. Philips -Breda 1992-1993 Prism Technology for Color Television Cameras -N.V. Philips -Breda 1993-1994 Controlled Edge Enhancement Utilizing Skin Hue KeyingBTS and Ikegami (Joint Award) -Breda 1997-1998 Development of a High Resolution Digital Film Scanner Eastman Kodak and Philips Germany. See Spirit DataCine 2000-2001 Pioneering developments in shared video-data storage systems for use in television video servers - BTS/Philips/Thomson/ - SLC, UT 2002-2003 Technology to simultaneously encode multiple video qualities and the corresponding metadata to enable real-time conformance and / or playout of the higher quality video (nominally broadcast) based on the decisions made using the lower quality proxies Montage. Philips and Thomson. Photo gallery See also Fernseh TeleMation Inc. Philips Robert Bosch GmbH Norelco Grass Valley (company) Thomson SA Professional video camera References tmquest.com onBTS Product thefreelibrary.com On BTS thefreelibrary.com On BTS, ABC'S KGO-TV Picks BTS Video Server to Go All Digital; Also Pilot Program For ABC-OWNED Stations patentmaps.com BTS Patents electronics.zibb.com BTS Trademarks business.highbeam.com BC plans playback for disk-based server; its San Francisco O&O also plots Media Pool project. (owned-and-operated television station; BTS Broadcast Television Systems Inc. 
Media Pool server), Article from: Broadcasting & Cable | May 15, 1995 | McConnell, Chris business.highbeam.com New cable networks going digital. (includes related article on Game Show Network) Article from: Broadcasting & Cable | January 9, 1995 | McConnell, Chris business.highbeam.com Media Pool tests the tapeless waters. (BTS digital disk-based recorder), Article from: Broadcasting & Cable | July 18, 1994 | McConnell, Chris allbusiness.com KGO-TV picks BTS' Media Pool Video Server to provide all-digital system of the future. Las Vegas, Nev.--(Business Wire)--April 11, 1995—KGO-TV, smpte.org 138th SMPTE Technical Conference Technical Papers Program, October 10–11, 1996, Los Angeles, Calif., Media Pool — Flexible Video Server Design for Television Broadcasting, by Charlie Bernstein1 patentstorm.us BTS, System and method for enabling a data/video server] trademarks.justia.com Compositor I - Trademark Details trademarkia.com Compositor I By: BTS-Broadcast Television Systems, Inc. emmyonline.tv National Academy of Television Arts and Sciences, Outstanding Achievement in Technical/Engineering Development Awards, BTS Broadcast Television Systems, Inc. To BTS in recognition of their engineering contribution in 3D computer graphic technology and for development of the FGS 4000 computer animation system. trademarks.justia.com MACH ONE - Trademark Details. trademarkia.com MACH ONE By: BTS-Broadcast Television Systems, Inc. The History of Television, 1942 to 2000, By Albert Abramson, Christopher H. Sterling, Page 304] trademarkia.com PIXELERATOR, By: BTS-Broadcast Television Systems, Inc., High-Speed Rendering Computer Processor For Use In Computer Graphics Application. broadcasting101.ws BTS Logo www.broadcasting101.ws LDK91 Camera broadcasting101.ws LDK910 camera broadcasting101.ws LDK-9000 www.broadcasting101.ws BTS OB Van *tvcameramuseum.org List of BTS cameras tvcameramuseum.org LDK614 tvcameramuseum.org XQ3427B Plumbicon camera tube. tvcameramuseum.org LDK 6 tvcameramuseum.org LDH 200 tvcameramuseum.org LDK 11 tvcameramuseum.org LDK 12 tvcameramuseum.org LDK 13 tvcameramuseum.org LDK 14 tvcameramuseum.org LDK 15 tvcameramuseum.org LDK 2 tvcameramuseum.org LDK 25 tvcameramuseum.org LDK 26 tvcameramuseum.org LDK 3 tvcameramuseum.org LDK 44 tvcameramuseum.org LDK 54 tvcameramuseum.org PC 60 [tvcameramuseum.org] LDK 54 tvcameramuseum.org List of cameras tvcameramuseum.org EL 8000 cam External links BTS Awards Page 8 and 11 Thomson Grassvalley Cameras Camera Museum The Museum of the Broadcast TV camera Philips Cameras, Photos and Specs Fernseh Museum BTS/Thomson/Grass valley Hi-res Photo Archive Cameras: EL-8020 PC-60 PC-70 PC-72 PCP-90 LDH-1 LDK-5 LDK-6 LDK-300 LDK-6200 Electronics companies of the Netherlands Philips Film and video technology Cameras Video storage Film production Technicolor SA
Broadcast Television Systems Inc.
[ "Technology" ]
2,437
[ "Recording devices", "Cameras" ]
9,618,303
https://en.wikipedia.org/wiki/Cary%20Karp
Cary Karp (born 3 April 1947), a retired museum curator based in Sweden, has been instrumental in developing online facilities for museums in the context of the International Council of Museums (ICOM). In particular, he was central in promoting and establishing the .museum top-level domain as President of the international Museum Domain Management Association (MuseDoma). He has also been a principal contributor to the establishment of standards for the registration of internationalized domain names. Background Karp has a PhD in musicology and is Associate Professor of Organology at Uppsala University in Sweden. He has been professionally involved with museums since the late 1960s. He was curator of the musical instrument collections at the Music Museum in Stockholm from 1973 to 1990, especially concerned with conservation, and rose to be the museum's Deputy Director during the 1980s. He was at the Swedish Museum of Natural History from 1990 until his retirement in 2014, first as Director of the Department of Information Technology and then as Director of Internet Strategy and Technology. Contribution to museums and IT Cary Karp has been the Director of Internet Strategy for ICOM. Within the context of IT development for museums internationally, he has been: President of the Museum Domain Management Association (MuseDoma), Chair of the ICOM Advisory Committee Internet Working Group, Computer Interchange of Museum Information (CIMI) Project Manager, a member of the Board of ICOM's International Documentation Committee (CIDOC) and Chair of the CIDOC Internet Working Group, an at-large member of the Internet Corporation for Assigned Names and Numbers (ICANN), and a member of the editorial board of the international journal Archives and Museum Informatics. References External links Biography, Museums and the Web conference, 2001. Guidelines for the Implementation of Internationalized Domain Names, Version 2.1, published by the Internet Corporation for Assigned Names and Numbers (ICANN), 22 February 2006. An IDNA problem in right-to-left scripts by H. Alvestrand and C. Karp, IETF, 13 October 2006. Living people Swedish curators Swedish musicologists Organologists People in information technology International Council of Museums 1947 births
Cary Karp
[ "Technology" ]
426
[ "People in information technology", "Information technology" ]
9,618,336
https://en.wikipedia.org/wiki/False%20loose%20smut
False loose smut is a fungal disease of barley caused by Ustilago nigra. This fungus is very similar to U. nuda, the cause of loose smut, and was first distinguished from it in 1932. Symptoms The disease is not apparent until heading, at which time smutted heads emerge slightly earlier than healthy heads. At first, each smutted head is covered by a delicate, paperlike, grayish membrane. These membranes break shortly after the smutted heads have emerged and expose a dark brown to black, powdery mass of spores. These spores are easily dislodged, leaving only the bare rachis. Disease cycle The disease cycle of Ustilago nigra is similar to that of U. hordei, the cause of covered smut of barley. The teliospores survive on the seed surface or in the soil. In some cases, the teliospores that are deposited under the hull may germinate immediately. The mycelium then grows into the lower layers of the seed and remains dormant until seed germination. Infection of seedlings occurs between germination and emergence. Infection can occur from seed-borne teliospores or from teliospores residing in the soil. Relatively dry soil at temperatures of 15–21 °C is most favorable for infection. The invading mycelium becomes established within the growing point. As the plant enters the boot stage, the mycelium grows rapidly into the floral tissue, which is converted to masses of black teliospores. Teliospores are disseminated by wind or during combining. The teliospores may remain viable for several years. Management The incidence of false loose smut can be reduced by using clean seed, treated seed and resistant cultivars. References External links Extension publications US: Oregon Fungal plant pathogens and diseases Barley diseases Ustilaginomycotina Fungi described in 1932 Fungus species
False loose smut
[ "Biology" ]
395
[ "Fungi", "Fungus species" ]
9,618,570
https://en.wikipedia.org/wiki/Emotional%20affair
The term emotional affair describes a type of relationship between people. The term often describes a bond between two people that mimics or matches the closeness and emotional intimacy of a romantic relationship while not being physically consummated. An emotional affair is sometimes referred to as an affair of the heart. An emotional affair may emerge from a friendship, and progress toward greater levels of personal intimacy and attachment. Examples of specific behaviors include confiding personal information and turning to the other person during moments of vulnerability or need. However, nearly all friendships serve these roles to some degree. The intimacy between the people involved usually stems from a friendship with the confidence to tell each other intimate aspects of themselves, their relationships, or even subjects they would not discuss with their partners. It is disputed whether this is inappropriate. Indeed, forbidding a partner from maintaining and participating in close friendships is a common feature of coercive control. High levels of platonic emotional intimacy in adults may occur without the participants being bound by other intimate relationships or may occur between people in other relationships as a normal course of life. Definition An emotional affair can be defined as: "A relationship between a person and someone other than (their) spouse that affects the level of intimacy, emotional distance and overall dynamic balance in the marriage. The role of an affair is to create emotional distance in the marriage." In this view, neither sexual intercourse nor physical affection is necessary to affect the committed relationship(s) of those involved in the affair. It is theorized that an emotional affair can injure a committed relationship more than a one-night stand or other casual sexual encounters. Such closeness can also be a reaction to a separate injury in the relationship, and indeed can be utilized to resolve the injury and heal the primary relationship. Incidence and prevalence Research by Glass & Wright found that men's extramarital relationships were more sexual and women's more emotional. For both genders, sexual and emotional extramarital involvement occurred in those with the greatest marital dissatisfaction. Chaste and emotionally intimate affairs tend to be more common than sexually intimate affairs. Shirley Glass reported in Not "Just Friends" that, among those who claim to have had an affair, 44% of husbands and 57% of wives indicated they had a strong emotional involvement with the subject of the affair without intercourse. In University of Chicago surveys conducted by the National Opinion Research Center (NORC) between 1990 and 2002, 27% of people who reported being happy in marriage admitted to having an extramarital affair. The meaning and definition of what constitutes infidelity often varies depending on the person asked. Sexual feelings in an emotional affair may be denied to maintain the illusion that it is just a special friendship. Affair surveys are unlikely to explore what is denied, and many people in affair surveys are not honest with themselves or with the interviewer. Along with the possibility that these phenomena are underrepresented, this raises the possibility that they are overrepresented, and the actual prevalence may be lower than indicated. Characteristics This type of affair is often characterized by: Unexpected emotional intimacy.
The partner being unfaithful may spend inappropriate or excessive time with someone of the opposite or same gender (time not shared with the other partner). They may confide more in their new "friend" than in their partner and may share more intimate emotional feelings and secrets with that person than with their existing spouse. Any time an individual invests more emotionally in a relationship with someone besides their partner, the existing partnership may suffer. Deception and secrecy. Those involved may not tell their partners about the amount of time they spend with each other. An individual involved in this type of affair may, for example, tell their spouse that they are doing other activities when they are really meeting with someone else. Or the unfaithful spouse may exclude any mention of the other person while discussing the day’s activities to conceal the rendezvous. Even if no physical intimacy occurs, the deception shows that those involved believe they are doing something that undermines the existing relationship, whether because they feel the action is inherently wrong, or because they fear retribution from an unnecessarily jealous partner. Increased fighting. When a person becomes emotionally involved with someone and does not recognize it as a valid feeling, they may begin to channel their anger and disgust into other relationships, or to interpret different relationships in a dichotomized manner. This person may also rationalize by attributing a cause to something or someone else, which can lead to increased fighting and strain on the relationships. Sexual and emotional chemistry. Sexual and emotional chemistry can present itself based on a physical attraction one might feel for another person. This may or may not lead to physical intimacy. Denial. Denial of the attraction and limerence felt may be exhibited by the cheating partner, but a similar denial and minimisation may also be defensively deployed by the excluded partner to avoid confrontation. Cultural examples In Casanova's Chinese Restaurant, the composer Hugh Moreland, talking of an unlikely couple experiencing love at first sight, denies that they are having an affair: "You can have a passion for someone without having an affair. That is one of the things no one seems able to understand these days...one of those fascinating mutual attractions between improbable people that take place from time to time. I should like to write a ballet around it." Therapy as subset The entrance of a therapist into a couple's dynamics may be problematic. It may be experienced by the non-client partner as the client having an emotional affair with the therapist if the client is perceived as granting the therapist a greater degree of intimacy and confiding than they grant the client's partner. The tendency to create a mate-substitute out of the therapist may be especially acute in incest survivors. See also Notes References Pittman, F. (1989). Private Lies. New York: W. W. Norton Co. Vaughan, P. (1989). The Monogamy Myth. New York: New Market Press. Intimate relationships Emotion
Emotional affair
[ "Biology" ]
1,218
[ "Emotion", "Behavior", "Human behavior" ]
9,619,285
https://en.wikipedia.org/wiki/Ocean%20color
Ocean color is the branch of ocean optics that specifically studies the color of the water and information that can be gained from looking at variations in color. The color of the ocean, while mainly blue, actually varies from blue to green or even yellow, brown or red in some cases. This field of study developed alongside water remote sensing, so it is focused mainly on how color is measured by instruments (like the sensors on satellites and airplanes). Most of the ocean is blue in color, but in some places the ocean is blue-green, green, or even yellow to brown. Blue ocean color is a result of several factors. First, water preferentially absorbs red light, which means that blue light remains and is reflected back out of the water. Red light is most easily absorbed and thus does not reach great depths, usually to less than 50 meters (164 ft). Blue light, in comparison, can penetrate up to 200 meters (656 ft). Second, water molecules and very tiny particles in ocean water preferentially scatter blue light more than light of other colors. Blue light scattering by water and tiny particles happens even in the very clearest ocean water, and is similar to blue light scattering in the sky. The main substances that affect the color of the ocean include dissolved organic matter, living phytoplankton with chlorophyll pigments, and non-living particles like marine snow and mineral sediments. Chlorophyll can be measured by satellite observations and serves as a proxy for ocean productivity (marine primary productivity) in surface waters. In long term composite satellite images, regions with high ocean productivity show up in yellow and green colors because they contain more (green) phytoplankton, whereas areas of low productivity show up in blue. Overview Ocean color depends on how light interacts with the materials in the water. When light enters water, it can either be absorbed (light gets used up, the water gets "darker"), scattered (light gets bounced around in different directions, the water remains "bright"), or a combination of both. How underwater absorption and scattering vary spectrally, or across the spectrum of visible to infrared light energy (about 400 nm to 2000 nm wavelengths) determines what "color" the water will appear to a sensor. Water types by color Most of the world’s oceans appear blue because the light leaving water is brightest (has the highest reflectance value) in the blue part of the visible light spectrum. Nearer to land, coastal waters often appear green. Green waters appear this way because algae and dissolved substances are absorbing light in the blue and red portions of the spectrum. Blue oceans The reason that open-ocean waters appear blue is that they are very clear, somewhat similar to pure water, and have few materials present or very tiny particles only. Pure water absorbs red light with depth. As red light is absorbed, blue light remains. Large quantities of pure water appear blue (even in a white-bottom swimming pool or white-painted bucket). The substances that are present in blue-colored open ocean waters are often very tiny particles which scatter light, scattering light especially strongly in the blue wavelengths. Light scattering in blue water is similar to the scattering in the atmosphere which makes the sky appear blue (called Rayleigh scattering). Some blue-colored clear water lakes appear blue for these same reasons, like Lake Tahoe in the United States. 
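The depth figures quoted above follow from the roughly exponential attenuation of light with depth, I(z) = I0 * exp(-Kd * z). The sketch below computes the depth of the 1% light level for two bands; the diffuse attenuation coefficients are illustrative assumptions chosen only to match the depths mentioned in the text, not measured values.

import math

# Illustrative diffuse attenuation coefficients (per metre); real values vary
# with wavelength and water type.
K_D = {"red (~670 nm)": 0.10, "blue (~450 nm)": 0.023}

def depth_for_fraction(k_d: float, fraction: float) -> float:
    """Depth at which light falls to the given fraction of its surface value."""
    return -math.log(fraction) / k_d

for band, k in K_D.items():
    z = depth_for_fraction(k, 0.01)  # depth of the 1% light level
    print(f"{band}: 1% light level near {z:.0f} m")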
Green oceans Microscopic marine algae, called phytoplankton, absorb light in the blue and red wavelengths, due to their specific pigments like chlorophyll-a. Accordingly, with more and more phytoplankton in the water, the color of the water shifts toward the green part of the spectrum. The most widespread light-absorbing substance in the oceans is chlorophyll pigment, which phytoplankton use to produce carbon by photosynthesis. Chlorophyll, a green pigment, makes phytoplankton preferentially absorb the red and blue portions of the light spectrum . As blue and red light are absorbed, green light remains. Ocean regions with high concentrations of phytoplankton have shades of blue-to-green water depending on the amount and type of the phytoplankton. Green waters can also have a combination of phytoplankton, dissolved substances, and sediments, while still appearing green. This often happens in estuaries, coastal waters, and inland waters, which are called "optically complex" waters because multiple different substances are creating the green color seen by the sensor. Yellow to brown oceans Ocean water appears yellow or brown when large amounts of dissolved substances, sediments, or both types of material are present. Water can appear yellow or brown due to large amounts of dissolved substances. Dissolved matter or gelbstoff (meaning yellow substance) appears dark yet relatively transparent, much like tea. Dissolved substances absorb blue light more strongly than light of other colors. Colored dissolved organic matter (CDOM) often comes from decaying plant matter on land or in marshes, or in the open ocean from marine phytoplankton exuding dissolved substances from their cells. In coastal areas, runoff from rivers and resuspension of sand and silt from the bottom add sediments to surface waters. More sediments can make the waters appear more green, yellow, or brown because sediment particles scatter light energy at all colors. In large amounts, mineral particles like sediment cause the water to turn brownish if there is a massive sediment loading event, appearing bright and opaque (not transparent), much like chocolate milk. Red oceans Ocean water can appear red if there is a bloom of a specific kind of phytoplankton causing a discoloration of the sea surface. These events are called "Red tides." However, not all red tides are harmful, and they are only considered harmful algal blooms if the type of plankton involved contains hazardous toxins. The red color comes from the pigments in the specific kinds of phytoplankton causing the bloom. Some examples are Karenia brevis in the Gulf of Mexico, Alexandrium fundyense in the Gulf of Maine, Margalefadinium polykroides and Alexandrium monilatum in the Chesapeake Bay, and Mesodinium rubrum in Long Island Sound. Ocean color remote sensing Ocean color remote sensing is also referred to as ocean color radiometry. Remote sensors on satellites, airplanes, and drones measure the spectrum of light energy coming from the water surface. The sensors used to measure light energy coming from the water are called radiometers (or spectrometers or spectroradiometers). Some radiometers are used in the field at earth’s surface on ships or directly in the water. Other radiometers are designed specifically for airplanes or earth-orbiting satellite missions. Using radiometers, scientists measure the amount of light energy coming from the water at all colors of the electromagnetic spectrum from ultraviolet to near-infrared. 
From this reflected spectrum of light energy, or the apparent "color," researchers derive other variables to understand the physics and biology of the oceans. Ocean color measurements can be used to infer important information such as phytoplankton biomass or concentrations of other living and non-living material. The patterns of algal blooms observed by satellite over time, across regions up to the scale of the global ocean, have been instrumental in characterizing variability of marine ecosystems. Ocean color data is a key tool for research into how marine ecosystems respond to climate change and anthropogenic perturbations. One of the biggest challenges for ocean color remote sensing is atmospheric correction, or removing the color signal of the atmospheric haze and clouds to focus on the color signal of the ocean water. The signal from the water itself is less than 10% of the total signal of light leaving Earth's surface. History People have written about the color of the ocean over many centuries, including ancient Greek poet Homer's famous "wine-dark sea." Scientific measurements of the color of the ocean date back to the invention of the Secchi disk in Italy in the mid-1800s to study the transparency and clarity of the sea. Major accomplishments were made in the 1960s and 1970s leading up to modern ocean color remote sensing campaigns. Nils Gunnar Jerlov's book Optical Oceanography, published in 1968, was a starting point for many researchers in the next decades. In 1970, George Clarke published the first evidence that chlorophyll concentration could be estimated based on green versus blue light coming from the water, as measured from an airplane over Georges Bank. In the 1970s, scientist Howard Gordon and his graduate student George Maul related imagery from the first Landsat mission to ocean color. Around the same time, a group of researchers, including John Arvesen, Dr. Ellen Weaver, and explorer Jacques Cousteau, began developing sensors to measure ocean productivity, beginning with an airborne sensor. Remote sensing of ocean color from space began in 1978 with the successful launch of NASA's Coastal Zone Color Scanner (CZCS) on the Nimbus-7 satellite. Although CZCS was an experimental mission intended to last only one year as a proof of concept, the sensor continued to generate a valuable time-series of data over selected test sites until early 1986. Ten years passed before other sources of ocean color data became available with the launch of other sensors, in particular the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) in 1997 on board the NASA SeaStar satellite. Subsequent sensors have included NASA's Moderate-resolution Imaging Spectroradiometer (MODIS) on board the Aqua and Terra satellites and ESA's MEdium Resolution Imaging Spectrometer (MERIS) onboard its environmental satellite Envisat. Several new ocean-colour sensors have recently been launched, including the Indian Ocean Colour Monitor (OCM-2) on board ISRO's Oceansat-2 satellite and the Korean Geostationary Ocean Color Imager (GOCI), which is the first ocean colour sensor to be launched on a geostationary satellite, and the Visible Infrared Imager Radiometer Suite (VIIRS) aboard NASA's Suomi NPP. More ocean colour sensors are planned over the next decade by various space agencies, including hyperspectral imagers. Applications Ocean Color Radiometry and its derived products are also seen as fundamental Essential Climate Variables as defined by the Global Climate Observing System.
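Clarke's green-versus-blue observation mentioned above is still the core of modern empirical chlorophyll retrievals: reflectance in a blue band is compared with reflectance in a green band, and that ratio is mapped to a chlorophyll-a concentration. The Python sketch below shows the general shape of such a band-ratio algorithm; the polynomial coefficients, function name, and reflectance values are hypothetical placeholders invented for this illustration (operational missions fit their own coefficients to large field-matchup datasets), so this is a sketch of the approach rather than any mission's actual algorithm.

```python
import math

# Hypothetical polynomial coefficients -- placeholders for illustration only,
# not the operational coefficients of any real mission algorithm.
COEFFS = [0.32, -2.99, 2.72, -1.23, -0.57]

def chlorophyll_band_ratio(rrs_blue, rrs_green, coeffs=COEFFS):
    """Estimate chlorophyll-a (mg/m^3) from remote-sensing reflectances using
    an empirical blue-to-green band-ratio polynomial:
        log10(chl) = a0 + a1*x + a2*x^2 + a3*x^3 + a4*x^4,
    where x = log10(Rrs_blue / Rrs_green)."""
    x = math.log10(rrs_blue / rrs_green)
    log_chl = sum(a * x**i for i, a in enumerate(coeffs))
    return 10.0 ** log_chl

# Clear "blue" water: blue reflectance well above green -> low chlorophyll.
print(chlorophyll_band_ratio(rrs_blue=0.010, rrs_green=0.002))  # ~0.1 mg/m^3
# Greener water: ratio closer to 1 -> higher chlorophyll estimate.
print(chlorophyll_band_ratio(rrs_blue=0.004, rrs_green=0.003))  # ~1 mg/m^3
```

The point of the example is only the qualitative behavior described in this article: as the blue-to-green reflectance ratio falls toward 1, the estimated chlorophyll concentration rises.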
Ocean color datasets provide the only global synoptic perspective of primary production in the oceans, giving insight into the role of the world's oceans in the global carbon cycle. Ocean color data helps researchers map information relevant to society, such as water quality, hazards to human health like harmful algal blooms, bathymetry, and primary production and habitat types affecting commercially-important fisheries. Chlorophyll as a proxy for phytoplankton The most widely used piece of information from ocean color remote sensing is satellite-derived chlorophyll-a concentration. Researchers calculate satellite-derived chlorophyll-a concentration from space based on the central premise that the more phytoplankton is in the water, the greener it is. Phytoplankton are microscopic algae, marine primary producers that turn sunlight into chemical energy that supports the ocean food web. Like plants on land, phytoplankton create oxygen for other life on Earth. Ocean color remote sensing ever since the launch of SeaWiFS in 1997 has allowed scientists to map phytoplankton – and thus model primary production - throughout the world’s oceans over many decades, marking a major advance in knowledge of the Earth system. Other applications Beyond chlorophyll, a few examples of some of the ways that ocean color data are used include: Harmful algal blooms Researchers use ocean color data in conjunction with meteorological data and field sampling to forecast the development and movement of harmful algal blooms (commonly referred to as "red tides," although the two terms are not exactly the same). For example, MODIS data has been used to map Karenia brevis blooms in the Gulf of Mexico. Suspended sediments Researchers use ocean color data to map the extent of river plumes and document wind-driven resuspension of sediments from the seafloor. For example, after hurricanes Katrina and Rita in the Gulf of Mexico, ocean color remote sensing was used to map the effects offshore. Sensors Sensors used to measure ocean color are instruments that measure light at multiple wavelengths (multispectral) or a continuous spectrum of colors (hyperspectral), usually spectroradiometers or optical radiometers. Ocean color sensors can either be mounted on satellites or airplanes, or used at Earth’s surface. Satellite sensors The sensors below are earth-orbiting satellite sensors. The same sensor can be mounted on multiple satellites to give more coverage over time (aka higher temporal resolution). For example, the MODIS sensor is mounted on both Aqua and Terra satellites. Additionally, the VIIRS sensor is mounted on both Suomi National Polar-Orbiting Partnership (Suomi-NPP or SNPP) and Joint Polar Satellite System (JPSS-1, now known as NOAA-20) satellites. 
Coastal Zone Color Scanner (CZCS) Sea-viewing Wide Field-of-view Sensor (SeaWiFS) on OrbView-2 (aka SeaStar) Moderate-resolution Imaging Spectroradiometer (MODIS) on Aqua and Terra satellites Medium Resolution Imaging Spectrometer (MERIS) Polarization and Directionality of the Earth's Reflectances (POLDER) Geostationary Ocean Color Imager(GOCI) on the Communication, Ocean, and Meteorological (COMS) satellite Ocean Color Monitor (OCM) on Oceansat-2 Ocean Color and Temperature Scanner (OCTS) on the Advanced Earth Observing Satellite (ADEOS) Multi Spectral Instrument (MSI) on Sentinel-2A and Sentinel-2B Ocean and Land Colour Instrument (OLCI) on Sentinel-3A and Sentinel-3B Visible Infrared Imaging Radiometer Suite (VIIRS) on Suomi-NPP (SNPP) and NOAA-20 (JPSS1) satellites Operational Land Imager (OLI) on Landsat-8 Hyperspectral Imager for the Coastal Ocean (HICO) on the International Space Station Precursore IperSpettrale della Missione Applicative (PRISMA) Hawkeye on the SeaHawk Cubesat Ocean color instrument (OCI) and 2 polarimeters on the planned Plankton, Aerosol, Cloud, ocean Ecosystem satellite Airborne sensors The following sensors were designed to measure ocean color from airplanes for airborne remote sensing: Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Airborne Ocean Color Imager (AOCI) Portable Remote Imaging Spectrometer (PRISM) flown for the CORALS project on the Tempus Applied Solutions Gulfstream-IV (G-IV) aircraft Headwall Hyperspectral Imaging System (HIS) Coastal Airborne In situ Radiometers (C-AIR) bio-optical radiometer package Compact Airborne Spectrographic Imager (CASI) In situ sensors At Earth’s surface, such as on research vessels, in the water using buoys, or on piers and towers, ocean color sensors take measurements that are then used to calibrate and validate satellite sensor data. Calibration and validation are two types of "ground-truthing" that are done independently. Calibration is the tuning of raw data from the sensor to match known values, such as the brightness of the moon or a known reflection value at Earth’s surface. Calibration, done throughout the lifetime of any sensor, is especially critical to the early part of any satellite mission when the sensor is developed, launched, and beginning its first raw data collection. Validation is the independent comparison of measurements made in situ with measurements made from a satellite or airborne sensor. Satellite calibration and validation maintain the quality of ocean color satellite data. There are many kinds of in situ sensors, and the different types are often compared on dedicated field campaigns or lab experiments called "round robins." In situ data are archived in data libraries such as the SeaBASS data archive. Some examples of in situ sensors (or networks of many sensors) used to calibrate or validate satellite data are: Marine Optical Buoy (MOBY) Aerosol Robotic Network (AERONET) PANTHYR instrument Trios-RAMSES Compact Optical Profiling System (C-OPS) HyperSAS and HyperPro instruments See also Color of water Ocean optics Oceanography Remote sensing Satellite imagery Water clarity Water remote sensing References External links International Ocean Colour Coordinating Group NASA's Ocean Color Home Page Ocean Optics Web Book Oceanography Earth observation Color Biological oceanography Aquatic ecology Marine biology Water Earth sciences Scattering, absorption and radiative transfer (optics)
Ocean color
[ "Physics", "Chemistry", "Biology", "Environmental_science" ]
3,475
[ "Hydrology", " absorption and radiative transfer (optics)", "Applied and interdisciplinary physics", "Oceanography", "Marine biology", "Scattering", "Ecosystems", "Water", "Aquatic ecology" ]
9,619,738
https://en.wikipedia.org/wiki/Valley%20of%20stability
In nuclear physics, the valley of stability (also called the belt of stability, nuclear valley, energy valley, or beta stability valley) is a characterization of the stability of nuclides to radioactivity based on their binding energy. Nuclides are composed of protons and neutrons. The shape of the valley refers to the profile of binding energy as a function of the numbers of neutrons and protons, with the lowest part of the valley corresponding to the region of most stable nuclei. The line of stable nuclides down the center of the valley of stability is known as the line of beta stability. The sides of the valley correspond to increasing instability to beta decay (β− or β+). The decay of a nuclide becomes more energetically favorable the further it is from the line of beta stability. The boundaries of the valley correspond to the nuclear drip lines, where nuclides become so unstable they emit single protons or single neutrons. Regions of instability within the valley at high atomic number also include radioactive decay by alpha radiation or spontaneous fission. The shape of the valley is roughly an elongated paraboloid corresponding to the nuclide binding energies as a function of neutron and atomic numbers. The nuclides within the valley of stability encompass the entire table of nuclides. The chart of those nuclides is also known as a Segrè chart, after the physicist Emilio Segrè. The Segrè chart may be considered a map of the nuclear valley. The region of proton and neutron combinations outside of the valley of stability is referred to as the sea of instability. Scientists have long searched for long-lived heavy isotopes outside of the valley of stability, hypothesized by Glenn T. Seaborg in the late 1960s. These relatively stable nuclides are expected to have particular configurations of "magic" atomic and neutron numbers, and form a so-called island of stability. Description All atomic nuclei are composed of protons and neutrons bound together by the nuclear force. There are 286 primordial nuclides that occur naturally on Earth, each corresponding to a unique number of protons, called the atomic number, Z, and a unique number of neutrons, called the neutron number, N. The mass number, A, of a nuclide is the sum of atomic and neutron numbers, A = Z + N. Not all nuclides are stable, however. According to Byrne, stable nuclides are defined as those having a half-life greater than 10^18 years, and there are many combinations of protons and neutrons that form nuclides that are unstable. A common example of an unstable nuclide is carbon-14, which decays by beta decay into nitrogen-14 with a half-life of about 5,730 years: 14C → 14N + β− + ν̄e. In this form of decay, the original element becomes a new chemical element in a process known as nuclear transmutation, and a beta particle and an electron antineutrino are emitted. An essential property of this and all nuclide decays is that the total energy of the decay products is less than that of the original nuclide. The difference between the initial and final nuclide binding energies is carried away by the kinetic energies of the decay products, often the beta particle and its associated neutrino. The concept of the valley of stability is a way of organizing all of the nuclides according to binding energy as a function of neutron and proton numbers. Most stable nuclides have roughly equal numbers of protons and neutrons, so the line for which Z = N forms a rough initial line defining stable nuclides.
The greater the number of protons, the more neutrons are required to stabilize a nuclide; nuclides with larger values for Z require an even larger number of neutrons, N > Z, to be stable. The valley of stability is formed by the negative of binding energy, the binding energy being the energy required to break apart the nuclide into its proton and neutron components. The stable nuclides have high binding energy, and these nuclides lie along the bottom of the valley of stability. Nuclides with weaker binding energy have combinations of N and Z that lie off of the line of stability and further up the sides of the valley of stability. Unstable nuclides can be formed in nuclear reactors or supernovas, for example. Such nuclides often decay in sequences of reactions called decay chains that take the resulting nuclides sequentially down the slopes of the valley of stability. The sequence of decays take nuclides toward greater binding energies, and the nuclides terminating the chain are stable. The valley of stability provides both a conceptual approach for how to organize the myriad stable and unstable nuclides into a coherent picture and an intuitive way to understand how and why sequences of radioactive decay occur. The role of neutrons The protons and neutrons that comprise an atomic nucleus behave almost identically within the nucleus. The approximate symmetry of isospin treats these particles as identical, but in a different quantum state. This symmetry is only approximate, however, and the nuclear force that binds nucleons together is a complicated function depending on nucleon type, spin state, electric charge, momentum, etc. and with contributions from non-central forces. The nuclear force is not a fundamental force of nature, but a consequence of the residual effects of the strong force that surround the nucleons. One consequence of these complications is that although deuterium, a bound state of a proton (p) and a neutron (n) is stable, exotic nuclides such as diproton or dineutron are unbound. The nuclear force is not sufficiently strong to form either p-p or n-n bound states, or equivalently, the nuclear force does not form a potential well deep enough to bind these identical nucleons. Stable nuclides require approximately equal numbers of protons and neutrons. The stable nuclide carbon-12 (12C) is composed of six neutrons and six protons, for example. Protons have a positive charge, hence within a nuclide with many protons there are large repulsive forces between protons arising from the Coulomb force. By acting to separate protons from one another, the neutrons within a nuclide play an essential role in stabilizing nuclides. With increasing atomic number, even greater numbers of neutrons are required to obtain stability. The heaviest stable element, lead (Pb), has many more neutrons than protons. The stable nuclide 206Pb has Z = 82 and N = 124, for example. For this reason, the valley of stability does not follow the line Z = N for A larger than 40 (Z = 20 is the element calcium). Neutron number increases along the line of beta stability at a faster rate than atomic number. The line of beta stability follows a particular curve of neutron–proton ratio, corresponding to the most stable nuclides. On one side of the valley of stability, this ratio is small, corresponding to an excess of protons over neutrons in the nuclides. These nuclides tend to be unstable to β+ decay or electron capture, since such decay converts a proton to a neutron. 
The decay serves to move the nuclides toward a more stable neutron-proton ratio. On the other side of the valley of stability, this ratio is large, corresponding to an excess of neutrons over protons in the nuclides. These nuclides tend to be unstable to β− decay, since such decay converts neutrons to protons. On this side of the valley of stability, β− decay also serves to move nuclides toward a more stable neutron-proton ratio. Neutrons, protons, and binding energy The mass of an atomic nucleus is given by m(N, Z) = Z·mp + N·mn − EB/c², where mp and mn are the rest mass of a proton and a neutron, respectively, and EB is the total binding energy of the nucleus. The mass–energy equivalence is used here. The binding energy is subtracted from the sum of the proton and neutron masses because the mass of the nucleus is less than that sum. This property, called the mass defect, is necessary for a stable nucleus; within a nucleus, the nucleons are trapped by a potential well. A semi-empirical mass formula states that the binding energy will take the form EB = aV·A − aS·A^(2/3) − aC·Z(Z − 1)/A^(1/3) − aA·(N − Z)²/A + δ(A, Z). The difference between the mass of a nucleus and the sum of the masses of the neutrons and protons that comprise it is known as the mass defect. EB is often divided by the mass number to obtain the binding energy per nucleon for comparisons of binding energies between nuclides. Each of the terms in this formula has a theoretical basis. The coefficients aV, aS, aC, and aA, together with a coefficient aP that appears in the formula for the pairing term δ(A, Z), are determined empirically. The binding energy expression gives a quantitative estimate for the neutron-proton ratio. The energy is a quadratic expression in Z that is minimized when the neutron-proton ratio is N/Z ≈ 1 + (aC/(2aA))·A^(2/3). This equation for the neutron-proton ratio shows that in stable nuclides the number of neutrons is greater than the number of protons by a factor that scales as A^(2/3). The figure at right shows the average binding energy per nucleon as a function of atomic mass number along the line of beta stability, that is, along the bottom of the valley of stability. For very small atomic mass number (H, He, Li), binding energy per nucleon is small, and this energy increases rapidly with atomic mass number. Nickel-62 (28 protons, 34 neutrons) has the highest mean binding energy of all nuclides, while iron-58 (26 protons, 32 neutrons) and iron-56 (26 protons, 30 neutrons) are a close second and third. These nuclides lie at the very bottom of the valley of stability. From this bottom, the average binding energy per nucleon slowly decreases with increasing atomic mass number. The heavy nuclide 238U is not stable, but is slow to decay with a half-life of 4.5 billion years. It has relatively small binding energy per nucleon. For β− decay, nuclear reactions have the generic form X(A, Z) → X′(A, Z + 1) + e− + ν̄e, where A and Z are the mass number and atomic number of the decaying nucleus, and X and X′ are the initial and final nuclides, respectively. For β+ decay, the generic form is X(A, Z) → X′(A, Z − 1) + e+ + νe. These reactions correspond to the decay of a neutron to a proton, or the decay of a proton to a neutron, within the nucleus, respectively. These reactions begin on one side or the other of the valley of stability, and the directions of the reactions are to move the initial nuclides down the valley walls towards a region of greater stability, that is, toward greater binding energy. The figure at right shows the average binding energy per nucleon across the valley of stability for nuclides with mass number A = 125. At the bottom of this curve is tellurium (52Te), which is stable.
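As a brief worked check of the beta-stability line (written in LaTeX, since it is purely algebra): setting the Z-derivative of the semi-empirical binding energy above to zero at fixed A (equivalently, minimizing the nuclear mass when the small neutron–proton mass difference is neglected, and treating Z(Z − 1) ≈ Z²) gives the most stable proton number for each mass number. The coefficient values used below, aC ≈ 0.7 MeV and aA ≈ 23 MeV, are typical textbook fits assumed for illustration; they are not values quoted in this article.

```latex
% Stationary point of E_B with respect to Z at fixed A (so N = A - Z):
\[
  \frac{\partial E_B}{\partial Z}\bigg|_{A}
  \approx -\frac{2 a_C Z}{A^{1/3}} + \frac{4 a_A (A - 2Z)}{A} = 0
  \quad\Longrightarrow\quad
  Z_{\mathrm{stable}} \approx \frac{A}{\,2 + \dfrac{a_C}{2 a_A}\,A^{2/3}\,}.
\]
% Numerical check for A = 125 with a_C \approx 0.7 MeV, a_A \approx 23 MeV:
\[
  Z_{\mathrm{stable}} \approx \frac{125}{2 + 0.015 \times 125^{2/3}}
          = \frac{125}{2.38} \approx 52.6.
\]
```

The result, Z ≈ 52–53 for A = 125, is consistent with tellurium (Z = 52) sitting at the bottom of the A = 125 curve described above.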
Nuclides to the left of 52Te are unstable with an excess of neutrons, while those on the right are unstable with an excess of protons. A nuclide on the left therefore undergoes β− decay, which converts a neutron to a proton and hence shifts the nuclide to the right and toward greater stability. A nuclide on the right similarly undergoes β+ decay, which shifts the nuclide to the left and toward greater stability. Heavy nuclides are susceptible to α decay, and these nuclear reactions have the generic form X(A, Z) → X′(A − 4, Z − 2) + 4He, where 4He is the α particle. As in β decay, the decay product X′ has greater binding energy and is closer to the middle of the valley of stability. The α particle carries away two neutrons and two protons, leaving a lighter nuclide. Since heavy nuclides have many more neutrons than protons, α decay increases a nuclide's neutron-proton ratio. Proton and neutron drip lines The boundaries of the valley of stability, that is, the upper limits of the valley walls, are the neutron drip line on the neutron-rich side, and the proton drip line on the proton-rich side. The nucleon drip lines are at the extremes of the neutron-proton ratio. At neutron–proton ratios beyond the drip lines, no nuclei can exist. The location of the neutron drip line is not well known for most of the Segrè chart, whereas the proton and alpha drip lines have been measured for a wide range of elements. Drip lines are defined for protons, neutrons, and alpha particles, and these all play important roles in nuclear physics. The difference in binding energy between neighboring nuclides increases as the sides of the valley of stability are ascended, and correspondingly the nuclide half-lives decrease, as indicated in the figure above. If one were to add nucleons one at a time to a given nuclide, the process would eventually lead to a newly formed nuclide that is so unstable that it promptly decays by emitting a proton (or neutron). Colloquially speaking, the nucleon has 'leaked' or 'dripped' out of the nucleus, hence giving rise to the term "drip line". Proton emission is not seen in naturally occurring nuclides. Proton emitters can be produced via nuclear reactions, usually utilizing linear particle accelerators (linacs). Although prompt (i.e. not beta-delayed) proton emission was observed from an isomer in cobalt-53 as early as 1969, no other proton-emitting states were found until 1981, when the proton radioactive ground states of lutetium-151 and thulium-147 were observed in experiments at the GSI in West Germany. Research in the field flourished after this breakthrough, and to date more than 25 nuclides have been found to exhibit proton emission. The study of proton emission has aided the understanding of nuclear deformation, masses and structure, and it is an example of quantum tunneling. Two examples of nuclides that emit neutrons are beryllium-13 and helium-5. Since only a neutron is lost in this process, the atom does not gain or lose any protons, and so it does not become an atom of a different element. Instead, the atom becomes a new isotope of the original element, such as beryllium-13 becoming beryllium-12 after emitting one of its neutrons. In nuclear engineering, a prompt neutron is a neutron immediately emitted by a nuclear fission event. Prompt neutrons emerge from the fission of an unstable fissionable or fissile heavy nucleus almost instantaneously. Delayed neutron emission can occur within the same context, following beta decay of one of the fission products.
Delayed neutron emission can occur at times ranging from a few milliseconds to a few minutes after the fission event. The U.S. Nuclear Regulatory Commission defines a prompt neutron as a neutron emerging from fission within 10^−14 seconds. Island of stability The island of stability is a region outside the valley of stability where it is predicted that a set of heavy isotopes with near magic numbers of protons and neutrons will locally reverse the trend of decreasing stability in elements heavier than uranium. The hypothesis for the island of stability is based upon the nuclear shell model, which implies that the atomic nucleus is built up in "shells" in a manner similar to the structure of the much larger electron shells in atoms. In both cases, shells are just groups of quantum energy levels that are relatively close to each other. Energy levels from quantum states in two different shells will be separated by a relatively large energy gap. So when the number of neutrons and protons completely fills the energy levels of a given shell in the nucleus, the binding energy per nucleon will reach a local maximum and thus that particular configuration will have a longer lifetime than nearby isotopes that do not possess filled shells. A filled shell would have "magic numbers" of neutrons and protons. One possible magic number of neutrons for spherical nuclei is 184, and some possible matching proton numbers are 114, 120 and 126. These configurations imply that the most stable spherical isotopes would be flerovium-298, unbinilium-304 and unbihexium-310. Of particular note is 298Fl, which would be "doubly magic" (both its proton number of 114 and neutron number of 184 are thought to be magic). This doubly magic configuration is the most likely to have a very long half-life. The next lighter doubly magic spherical nucleus is lead-208, the heaviest known stable nucleus and most stable heavy metal. Discussion The valley of stability can be helpful in interpreting and understanding properties of nuclear decay processes such as decay chains and nuclear fission. Radioactive decay often proceeds via a sequence of steps known as a decay chain. For example, 238U decays to 234Th, which decays to 234mPa, and so on, eventually reaching 206Pb. With each step of this sequence of reactions, energy is released and the decay products move further down the valley of stability towards the line of beta stability. 206Pb is stable and lies on the line of beta stability. The fission processes that occur within nuclear reactors are accompanied by the release of neutrons that sustain the chain reaction. Fission occurs when a heavy nuclide such as uranium-235 absorbs a neutron and breaks into nuclides of lighter elements such as barium or krypton, usually with the release of additional neutrons. Like all nuclides with a high atomic number, these uranium nuclei require many neutrons to bolster their stability, so they have a large neutron-proton ratio (N/Z). The nuclei resulting from fission (fission products) inherit a similar N/Z, but have atomic numbers that are approximately half that of uranium.
Isotopes with the atomic number of the fission products and an N/Z near that of uranium or other fissionable nuclei have too many neutrons to be stable; this neutron excess is why multiple free neutrons but no free protons are usually emitted in the fission process, and it is also why many fission product nuclei undergo a long chain of β− decays, each of which converts a nucleus N/Z to (N − 1)/(Z + 1), where N and Z are, respectively, the numbers of neutrons and protons contained in the nucleus. When fission reactions are sustained at a given rate, such as in a liquid-cooled or solid fuel nuclear reactor, the nuclear fuel in the system produces many antineutrinos for each fission that has occurred. These antineutrinos come from the decay of fission products that, as their nuclei progress down a β− decay chain toward the valley of stability, emit an antineutrino along with each β− particle. In 1956, Reines and Cowan exploited the (anticipated) intense flux of antineutrinos from a nuclear reactor in the design of an experiment to detect and confirm the existence of these elusive particles. See also Alpha decay Gamma decay Neutron emission Proton emission Cluster decay Stable nuclide Nuclear shell model Nuclear drip line References External links The Live Chart of Nuclides - IAEA with filter on decay type The Valley of Stability (video) – a virtual "flight" through 3D representation of the nuclide chart, by CEA (France) The nuclear landscape: The variety and abundance of nuclei – Chapter 6 of the book Nucleus: A trip into the heart of matter by Mackintosh, Ai-Khalili, Jonson, and Pena describes the valley of stability and its implications (Baltimore, Maryland:The Johns Hopkins University Press), 2001. Nuclear physics Radioactivity Isotopes
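Tying the fission-product discussion above back to the beta-stability line: the short Python sketch below uses the same semi-empirical estimate of the most stable proton number, Z_stable(A) ≈ A / (2 + (aC/2aA)·A^(2/3)), worked out earlier to estimate how far a typical heavy fission fragment sits from the valley floor, and hence roughly how many β− decays its chain needs. The mass number chosen (A = 140) and the coefficient values are illustrative assumptions, not data taken from this article.

```python
# Assumed textbook values of the semi-empirical coefficients (MeV).
A_C = 0.71   # Coulomb term coefficient
A_A = 23.7   # asymmetry term coefficient

def z_stable(a):
    """Estimate of the most stable proton number for mass number a,
    from minimizing the semi-empirical mass formula at fixed a."""
    return a / (2.0 + (A_C / (2.0 * A_A)) * a ** (2.0 / 3.0))

# A heavy fragment of mass number 140 that inherits the N/Z of uranium-236
# (92 protons, 144 neutrons, N/Z ~ 1.57) starts near Z ~ 54-55.
fragment_A = 140
initial_Z = fragment_A / (1.0 + 144.0 / 92.0)

print(f"initial Z ~ {initial_Z:.1f}, stable Z ~ {z_stable(fragment_A):.1f}")
print(f"roughly {z_stable(fragment_A) - initial_Z:.0f} beta-minus decays to reach stability")
```

The estimate of roughly four β− steps for an A = 140 fragment is consistent with the picture above of neutron-rich fission products undergoing a chain of β− decays, each of which emits an antineutrino, on their way back to the line of beta stability.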
Valley of stability
[ "Physics", "Chemistry" ]
4,051
[ "Isotopes", "Radioactivity", "Nuclear physics" ]
9,620,408
https://en.wikipedia.org/wiki/Laurence%20Chisholm%20Young
Laurence Chisholm Young (14 July 1905 – 24 December 2000) was a British mathematician known for his contributions to measure theory, the calculus of variations, optimal control theory, and potential theory. He was the son of William Henry Young and Grace Chisholm Young, both prominent mathematicians. He moved to the US in 1949 but never sought American citizenship. The concept of Young measure is named after him: he also introduced the concept of the generalized curve and a concept of generalized surface, which later evolved into the concept of varifold. The Young integral is also named after him and has since been generalised in the theory of rough paths. Life and academic career Laurence Chisholm Young was born in Göttingen, the fifth of the six children of William Henry Young and Grace Chisholm Young. He held professorships at the University of Cape Town, South Africa, and at the University of Wisconsin-Madison. He was also a chess grandmaster. Selected publications Books Available from the Internet Archive. Papers A memoir presented by Stanisław Saks at the session of 16 December 1937 of the Warsaw Society of Sciences and Letters; the free PDF copy is made available by the RCIN – Digital Repository of the Scientific Institutes. See also Bounded variation Caccioppoli set Measure theory Varifold Notes References Biographical and general references Including a reply by L. C. Young himself (pages 109–112). Scientific references One of the most complete monographs on the theory of Young measures, strongly oriented to applications in continuum mechanics of fluids. A thorough scrutiny of Young measures and their various generalizations is in Chapter 3, from the perspective of convex compactifications. An extended version with a list of Almgren's publications. External links Obituary on University of Wisconsin web site 20th-century British mathematicians Alumni of Trinity College, Cambridge Mathematical analysts Scientists from Göttingen 1905 births 2000 deaths Variational analysts British historians of mathematics Instituto Nacional de Matemática Pura e Aplicada researchers
Laurence Chisholm Young
[ "Mathematics" ]
415
[ "Mathematical analysis", "Mathematical analysts" ]
9,621,772
https://en.wikipedia.org/wiki/Lithia%20water
Lithia water is defined as a type of mineral water characterized by the presence of lithium salts (such as the carbonate, chloride, or citrate of lithium). Natural lithia mineral spring waters are rare, and there are few commercially bottled lithia water products. Between the 1880s and World War I, the consumption of bottled lithia mineral water was popular. One of the first commercially sold lithia waters in the United States was bottled at Lithia Springs, Georgia, in 1888. During this era, there was such a demand for lithia water that there was a proliferation of bottled lithia water products. However, only a few were natural lithia spring waters. Most of the bottled lithia water brands added lithium bicarbonate to spring water and called it lithia water. With the start of World War I and the formation of the new US government food safety agency, mineral water bottlers were under scrutiny. The new agency posted large fines against mineral water bottlers for mislabeled, misrepresented and adulterated products. These government actions and their publicity, along with public works that made clean tap water readily accessible, caused the American public to lose confidence and interest in bottled mineral water. Lithia water contains various lithium salts, including lithium citrate. An early version of Coca-Cola available in pharmacies' soda fountains called Lithia Coke was a mixture of Coca-Cola syrup and Bowden lithia spring water. The soft drink 7Up was named "Bib-Label Lithiated Lemon-Lime Soda" when it was formulated in 1929 because it contained lithium citrate. The beverage was a patent medicine marketed as a cure for hangover. Lithium citrate was removed from 7Up in 1948. Notable brands Lithia Spring Water, a brand of bottled natural lithia water sourced from Lithia Springs, Georgia, USA, since 1888 Londonderry Lithia, a brand of bottled lithia water produced during the late 19th and early 20th centuries Buffalo Lithia Water, a brand of bottled lithia water sourced from Buffalo Lithia Springs, Virginia Gerolsteiner, a natural sparkling water that lists 0.13 ppm lithium in its analysis See also Ashland, Oregon, where lithia water is piped to a public water fountain established as an attempt to draw tourists during the lithia water heyday. References Drinking water Lithium Mineral water Alternative medical treatments
Lithia water
[ "Chemistry" ]
486
[ "Mineral water", "Lithia water" ]
9,622,068
https://en.wikipedia.org/wiki/INK4
INK4 is a family of cyclin-dependent kinase inhibitors (CKIs). The members of this family (p16INK4a, p15INK4b, p18INK4c, p19INK4d) are inhibitors of CDK4 (hence their name, INhibitors of CDK4) and of CDK6. The other family of CKIs, the CIP/KIP proteins, is capable of inhibiting all CDKs. Enforced expression of INK4 proteins can lead to G1 arrest by promoting redistribution of Cip/Kip proteins and blocking cyclin E-CDK2 activity. In cycling cells, there is a reassortment of Cip/Kip proteins between CDK4/6 and CDK2 as cells progress through G1. Their function, inhibiting CDK4/6, is to block progression of the cell cycle beyond the G1 restriction point. In addition, INK4 proteins play roles in cellular senescence, apoptosis and DNA repair. INK4 proteins are tumor suppressors, and loss-of-function mutations lead to carcinogenesis. INK4 proteins are highly similar in terms of structure and function, with up to 85% amino acid similarity. They contain multiple ankyrin repeats. Genes The INK4a/ARF/INK4b locus encodes three genes (p15INK4b, ARF, and p16INK4a) in a 35-kilobase stretch of the human genome. P15INK4b is encoded in its own reading frame, physically separated from p16INK4a and ARF. P16INK4a and ARF have different first exons that are spliced to the same second and third exon. While those second and third exons are shared by p16INK4a and ARF, the proteins are encoded in different reading frames, meaning that p16INK4a and ARF are not isoforms, nor do they share any amino acid homology. Evolution Polymorphisms of the p15INK4b/p16INK4a homolog were found to segregate with melanoma susceptibility in Xiphophorus, indicating that INK4 proteins have been involved in tumor suppression for over 350 million years. Furthermore, the older INK4-based system has been further bolstered by the more recent evolutionary addition of the ARF-based anti-cancer response. Function INK4 proteins are cell-cycle inhibitors. When they bind to CDK4 and CDK6, they induce an allosteric change that leads to the formation of CDK-INK4 complexes rather than CDK-cyclin complexes. This leads to an inhibition of retinoblastoma (Rb) phosphorylation downstream. Therefore, the expression of p15INK4b or p16INK4A keeps the Rb-family proteins hypophosphorylated. This allows the hypophosphorylated Rb to repress transcription of S-phase genes, causing cell cycle arrest in the G1 phase. Subsets P16INK4a P16 is formed from four ankyrin repeat (AR) motifs that exhibit a helix-turn-helix conformation, except that the first helix in the second AR consists of four residues. P16 regulation involves epigenetic control and multiple transcription factors. PRC1, PRC2, YY1, and Id1 play a role in the suppression of p16INK4A expression, while the transcription factors CTCF, Sp1, and Ets activate p16INK4A transcription. In knockout experiments, it was found that mice lacking just p16INK4a were more prone to spontaneous cancers. Mice lacking both p16INK4a and ARF were found to be even more tumor prone than the mice lacking just p16INK4a. P15INK4b P15 is also formed from four ankyrin repeat (AR) motifs. Expression of P15INK4b is induced by TGF-β, indicating its role as a potential downstream effector of TGF-β-mediated growth arrest. P18INK4c P18INK4c has been shown to play an important role in modulating TCR-mediated T cell proliferation. The loss of p18INK4c in T cells reduced the requirement of CD28 costimulation for efficient T cell proliferation. Other INK4 family members did not affect this process.
Furthermore, it was shown that p18INK4c preferentially inhibits CDK6, but not CDK4, activity in activated T cells, suggesting that p18INK4c may set an inhibitory threshold in resting T cells. Clinical significance Role in cancer Cells containing oncogenic mutations in vivo often respond by activating the INK4A/ARF/INK4B locus that encodes the INK4 tumor suppressor proteins. The unusual genomic arrangement of the INK4a/ARF/INK4b locus functions as a weakness in our anti-cancer defenses, because three crucial regulators of the RB and p53 pathways (the latter regulated by ARF) are vulnerable to a single, small deletion. This observation yields two possible opposing conclusions: either tumor formation does not provide any evolutionary selection pressure, because the overlapping INK4a/ARF/INK4b locus is not selected against, or tumorigenesis provides such a strong pressure that an entire group of genes has been selected for at the INK4a/ARF/INK4b locus to prevent cancer. The response of the INK4a/ARF/INK4b locus efficiently prevents cancers that could arise from the constant oncogenic mutations that occur in long-lived mammals. When the INK4a/ARF/INK4b locus was overexpressed, the mice demonstrated a 3-fold reduction in the incidence of spontaneous cancers. This evidence further indicated that the INK4a/ARF/INK4b locus in mice plays a role in tumor suppression. Role in aging The INK4 family has been implicated in the aging process. The expression of p16INK4a increases with aging in many tissues of rodents and humans. It was also shown that INK4a/ARF-deficient animals show an increased age-related decline in T-cell responsiveness to CD3 and CD28, which is a hallmark of aging. Furthermore, neural stem cells from Bmi-1-deficient animals demonstrate increased INK4a/ARF expression and impaired regenerative potential. The phenotype, however, can be rescued by p16INK4a deficiency, implying that while p16INK4a can potentially be used as a biomarker of physiologic, rather than chronologic, age, it is also an effector of aging. The mechanism by which it does this is by limiting the self-renewal capacity of disparate tissues such as lymphoid organs, bone marrow, and the brain. Regulation of INK4 expression Initially, it was thought that each INK4 family member was structurally redundant and equally potent. It was later found, however, that INK4 family members are differentially expressed during mouse development. The diversity in expression pattern indicates that the INK4 gene family may have cell lineage-specific or tissue-specific functions. Evidence has shown that INK4a/ARF expression increases at an early stage of tumorigenesis, but the precise cancer-relevant stimuli that induce the expression of the locus are unknown. Expression of p15INK4b does not correlate with p16INK4a in many normal rodent tissues. Induction and repression of p15INK4b have, however, been noted in response to a few signaling events, such as RAS activation, that also induce INK4/ARF expression. RAS activation might lead to increased INK4/ARF expression, potentially through ERK-mediated activation of Ets1/2 inducing p16INK4. A few repressors of INK4a/ARF/INK4b expression have been identified as well. T box proteins and the polycomb group have been shown to repress p16INK4a, p15INK4b, and ARF. References Protein families Cell cycle Tumor suppressor genes
INK4
[ "Biology" ]
1,688
[ "Protein families", "Cell cycle", "Cellular processes", "Protein classification" ]
9,622,106
https://en.wikipedia.org/wiki/Engels%20Maps
Engels Maps is a map company in the Ohio Valley with particular concentration on the Cincinnati-Dayton region. It also produces chamber of commerce maps. Publications It has three semi-annual publications that form its foundation: the Cincinnati Engels Guide, the Dayton Engels Guide, and the Indianapolis Engels Guide. Their maps are also found in the Cincinnati Bell Yellow Pages and the Dayton WorkBook. Corporate history Engels Maps was founded by Judson Engels in 1994. External links Engels Maps – http://cincinnati.citysearch.com/profile/4343456/fort_thomas_ky/engels_maps_guide.html Target Marketing – http://www.macraesbluebook.com/search/company.cfm?company=838024 http://engelsmaps.com Geodesy Companies based in Kentucky American companies established in 1994 Map companies of the United States Campbell County, Kentucky 1994 establishments in Kentucky Software companies of the United States Software companies established in 1994
Engels Maps
[ "Mathematics" ]
214
[ "Applied mathematics", "Geodesy" ]
9,622,293
https://en.wikipedia.org/wiki/List%20of%20orbits
This is a list of types of gravitational orbit classified by various characteristics. Common abbreviations List of abbreviations of common Earth orbits List of abbreviations of other orbits Classifications The following is a list of types of orbits: Centric classifications Galactocentric orbit: An orbit about the center of a galaxy. The Sun follows this type of orbit about the Galactic Center of the Milky Way. Heliocentric orbit: An orbit around the Sun. In the Solar System, all planets, comets, and asteroids are in such orbits, as are many artificial satellites and pieces of space debris. Moons, by contrast, are not in a heliocentric orbit but rather orbit their parent object. Geocentric orbit: An orbit around the planet Earth, such as that of the Moon or of artificial satellites. Selenocentric orbit (named after Selene): An orbit around Earth's Moon. Areocentric orbit (named after Ares): An orbit around the planet Mars, such as that of its moons or artificial satellites. For orbits centered about planets other than Earth and Mars and for the dwarf planet Pluto, the orbit names incorporating Greek terminology are not as established and much less commonly used: Mercury orbit (Hermeocentric orbit, named after Hermes): An orbit around the planet Mercury. Venus orbit (Cytherocentric orbit, named after Cytherea, or Aphrodiocentric, after Aphrodite): An orbit around the planet Venus. Jupiter orbit (Zenocentric orbit, named after Zeus, or Latin equivalent Jovicentric): An orbit around the planet Jupiter. Saturn orbit (Kronocentric orbit, named after Cronus, or Latin equivalent Saturnicentric): An orbit around the planet Saturn. Uranus orbit (Uranocentric orbit, named after Uranus): An orbit around the planet Uranus. Altitude classifications for geocentric orbits Transatmospheric orbit (TAO): geocentric orbits with an apogee above 100 km and a perigee that intersects with the defined atmosphere. Very low Earth orbit (VLEO) is defined as altitudes between approximately 100 and 450 km above Earth's surface. Low Earth orbit (LEO): geocentric orbits with altitudes below 2,000 km. Medium Earth orbit (MEO): geocentric orbits ranging in altitude from 2,000 km to just below geosynchronous orbit at 35,786 km. Also known as an intermediate circular orbit. These are used for Global Navigation Satellite System spacecraft, such as GPS, GLONASS, Galileo, and BeiDou. GPS satellites orbit at an altitude of about 20,200 km with an orbital period of almost 12 hours. Geosynchronous orbit (GSO) and geostationary orbit (GEO) are orbits around Earth matching Earth's sidereal rotation period. Although the terms are often used interchangeably, technically a geosynchronous orbit matches the Earth's rotational period, but the definition does not require it to have zero orbital inclination to the equator, and thus is not stationary above a given point on the equator, but may oscillate north and south during the course of a day. Thus, a geostationary orbit is defined as a geosynchronous orbit at zero inclination. Geosynchronous (and geostationary) orbits have a semi-major axis of 42,164 km. This works out to an altitude of 35,786 km. Both complete one full orbit of Earth per sidereal day (relative to the stars, not the Sun). High Earth orbit: geocentric orbits above the altitude of geosynchronous orbit (35,786 km). For Earth-orbiting satellites below a height of about 800 km, atmospheric drag is the largest of the non-gravitational orbit-perturbing forces. Above 800 km, solar radiation pressure causes the largest orbital perturbations.
However, the atmospheric drag strongly depends on the density of the upper atmosphere, which is related to the solar activity, therefore the height at which the impact of the atmospheric drag is similar to solar radiation pressure varies depending on the phase of the solar cycle. Inclination classifications Inclined orbit: An orbit whose inclination in reference to the equatorial plane is not 0. Polar orbit: An orbit that passes above or nearly above both poles of the planet on each revolution. Therefore, it has an inclination of (or very close to) either 90 degrees or −90 degrees. Polar Sun-synchronous orbit (SSO): A nearly polar orbit that passes the equator at the same local solar time on every pass. Useful for image-taking satellites because shadows will be the same on every pass. Non-inclined orbit: An orbit whose inclination is equal to zero with respect to some plane of reference. Ecliptic orbit: A non-inclined orbit with respect to the ecliptic. Equatorial orbit: A non-inclined orbit with respect to the equator. Near equatorial orbit: An orbit whose inclination with respect to the equatorial plane is nearly zero. This orbit allows for rapid revisit times (for a single orbiting spacecraft) of near equatorial ground sites. Directional classifications Prograde orbit: An orbit that is in the same direction as the rotation of the primary (i.e. east on Earth). By convention, the inclination of a Prograde orbit is specified as an angle less than 90°. Retrograde orbit: An orbit counter to the direction of rotation of the primary. By convention, retrograde orbits are specified with an inclination angle of more than 90°. Apart from those in Sun-synchronous orbit, few satellites are launched into retrograde orbit on Earth because the quantity of fuel required to launch them is greater than for a prograde orbit. This is because when the rocket starts out on the ground, it already has an eastward component of velocity equal to the rotational velocity of the planet at its launch latitude. Eccentricity classifications There are two types of orbits: closed (periodic) orbits, and open (escape) orbits. Circular and elliptical orbits are closed. Parabolic and hyperbolic orbits are open. Radial orbits can be either open or closed. Circular orbit: An orbit that has an eccentricity of 0 and whose path traces a circle. Elliptic orbit: An orbit with an eccentricity greater than 0 and less than 1 whose orbit traces the path of an ellipse. Geostationary or geosynchronous transfer orbit (GTO): An elliptic orbit where the perigee is at the altitude of a low Earth orbit (LEO) and the apogee at the altitude of a geostationary orbit. Hohmann transfer orbit: An orbital maneuver that moves a spacecraft from one circular orbit to another using two engine impulses. This maneuver was named after Walter Hohmann. Ballistic capture orbit: a lower-energy orbit than a Hohmann transfer orbit, a spacecraft moving at a lower orbital velocity than the target celestial body is inserted into a similar orbit, allowing the planet or moon to move toward it and gravitationally snag it into orbit around the celestial body. Coelliptic orbit: A relative reference for two spacecraft—or more generally, satellites—in orbit in the same plane. "Coelliptic orbits can be defined as two orbits that are coplanar and confocal. A property of coelliptic orbits is that the difference in magnitude between aligned radius vectors is nearly the same, regardless of where within the orbits they are positioned. 
For this and other reasons, coelliptic orbits are useful in [spacecraft] rendezvous". Parabolic orbit: An orbit with the eccentricity equal to 1. Such an orbit also has a velocity equal to the escape velocity and therefore will escape the gravitational pull of the planet. If the speed of an object in a parabolic orbit is increased, it will become a hyperbolic orbit. Escape orbit: A parabolic orbit where the object has escape velocity and is moving away from the planet. Capture orbit: A parabolic orbit where the object has escape velocity and is moving toward the planet. Hyperbolic orbit: An orbit with the eccentricity greater than 1. Such an orbit also has a velocity in excess of the escape velocity and, as such, will escape the gravitational pull of the planet and continue to travel infinitely until it is acted upon by another body with sufficient gravitational force. Radial orbit: An orbit with zero angular momentum and eccentricity equal to 1. The two objects move directly towards or away from each other in a straight line. Radial elliptic orbit: A closed elliptic orbit where the object is moving at less than the escape velocity. This is an elliptic orbit with semi-minor axis = 0 and eccentricity = 1. Although the eccentricity is 1, this is not a parabolic orbit. Radial parabolic orbit: An open parabolic orbit where the object is moving at the escape velocity. Radial hyperbolic orbit: An open hyperbolic orbit where the object is moving at greater than the escape velocity. This is a hyperbolic orbit with semi-minor axis = 0 and eccentricity = 1. Although the eccentricity is 1, this is not a parabolic orbit. Synchronicity classifications Synchronous orbit: An orbit whose period is a rational multiple of the average rotational period of the body being orbited and in the same direction of rotation as that body. This means the track of the satellite, as seen from the central body, will repeat exactly after a fixed number of orbits. In practice, only the 1:1 ratio (geosynchronous) and 1:2 ratio (semi-synchronous) are common. Geosynchronous orbit (GSO): An orbit around the Earth with a period equal to one sidereal day, which is Earth's average rotational period of 23 hours, 56 minutes, 4.091 seconds. For a nearly circular orbit, this implies an altitude of approximately 35,786 km. The orbit's inclination and eccentricity may not necessarily be zero. If both the inclination and eccentricity are zero, then the satellite will appear stationary from the ground. If not, then each day the satellite traces out an analemma (i.e. a "figure-eight") in the sky, as seen from the ground. When the orbit is circular and has zero inclination, it is also considered to be geostationary. Also known as a Clarke orbit, after the writer Arthur C. Clarke. Geostationary orbit (GEO): A circular geosynchronous orbit with an inclination of zero. To an observer on the ground this satellite appears as a fixed point in the sky. "All geostationary orbits must be geosynchronous, but not all geosynchronous orbits are geostationary." Tundra orbit: A synchronous but highly elliptic orbit with significant inclination (typically close to 63.4°) and an orbital period of one sidereal day (23 hours, 56 minutes for the Earth). Such a satellite spends most of its time over a designated area of the planet. The particular inclination keeps the perigee shift small. Areosynchronous orbit (ASO): A synchronous orbit around the planet Mars with an orbital period equal in length to Mars' sidereal day, 24.6229 hours.
Areostationary orbit (AEO): A circular areosynchronous orbit on the equatorial plane, about 17,000 km (10,557 miles) above the surface of Mars. To an observer on Mars this satellite would appear as a fixed point in the sky. Subsynchronous orbit: A drift orbit close below GSO/GEO. Semi-synchronous orbit: An orbit with an orbital period equal to half of the average rotational period of the body being orbited and in the same direction of rotation as that body. For Earth this means a period of just under 12 hours at an altitude of approximately 20,200 km (12,544.2 miles) if the orbit is circular. Molniya orbit: A semi-synchronous variation of a Tundra orbit. For Earth this means an orbital period of just under 12 hours. Such a satellite spends most of its time over two designated areas of the planet. An inclination of 63.4° is normally used to keep the perigee shift small. Supersynchronous orbit: Any orbit in which the orbital period of a satellite or celestial body is greater than the rotational period of the body which contains the barycenter of the orbit. Orbits in galaxies or galaxy models Box orbit: An orbit in a triaxial elliptical galaxy that fills in a roughly box-shaped region. Pyramid orbit: An orbit near a massive black hole at the center of a triaxial galaxy. The orbit can be described as a Keplerian ellipse that precesses about the black hole in two orthogonal directions, due to torques from the triaxial galaxy. The eccentricity of the ellipse reaches unity at the four corners of the pyramid, allowing the star on the orbit to come very close to the black hole. Tube orbit: An orbit near a massive black hole at the center of an axisymmetric galaxy. Similar to a pyramid orbit, except that one component of the orbital angular momentum is conserved; as a result, the eccentricity never reaches unity. Special classifications Sun-synchronous orbit: An orbit which combines altitude and inclination in such a way that the satellite passes over any given point of the planet's surface at the same local solar time. Such an orbit can place a satellite in constant sunlight and is useful for imaging, spy, and weather satellites. Frozen orbit: An orbit in which natural drifting due to the central body's shape has been minimized by careful selection of the orbital parameters. Orbit of the Moon: The orbital characteristics of the Moon. Average distance of 384,403 kilometres (238,857 mi), elliptical-inclined orbit. Beyond-low Earth orbit (BLEO) and beyond Earth orbit (BEO) are broad classes of orbits that are, respectively, energetically farther out than low Earth orbit or require an insertion into a heliocentric orbit as part of a journey that may require multiple orbital insertions. Near-rectilinear halo orbit (NRHO): an orbit currently planned in cislunar space, a selenocentric orbit that will serve as a staging area for future missions. It is the planned orbit for the NASA Lunar Gateway (circa 2024), a highly elliptical seven-day near-rectilinear halo orbit around the Moon, which would bring the small space station within about 3,000 km of the lunar north pole at closest approach and as far away as about 70,000 km over the lunar south pole. Distant retrograde orbit (DRO): A stable circular retrograde orbit (usually referring to the lunar distant retrograde orbit). Stability means that satellites in a DRO do not need to use station-keeping propellant to stay in orbit. The lunar DRO is a high lunar orbit with a radius of approximately 61,500 km.
This was proposed in 2017 as a possible orbit for the Lunar Gateway space station, outside Earth-Moon L1 and L2. Decaying orbit: A low-altitude orbit whose altitude decreases over time due to atmospheric resistance. Used to dispose of dying artificial satellites or to aerobrake an interplanetary spacecraft. Earth-trailing orbit: a heliocentric orbit placed such that the satellite initially follows Earth but at a somewhat slower orbital angular speed, so that it moves further behind year by year. This orbit was used by the Spitzer Space Telescope in order to drastically reduce the heat load from the warm Earth relative to the more typical geocentric orbits used for space telescopes. Graveyard orbit (or disposal or junk orbit): An orbit that satellites are moved into at the end of their operation; for geostationary satellites, a few hundred kilometers above geosynchronous orbit. Parking orbit: a temporary orbit. Transfer orbit: an orbit used during an orbital maneuver from one orbit to another. Lunar transfer orbit (LTO), accomplished with trans-lunar injection (TLI). Mars transfer orbit (MTO), also known as a trans-Mars injection (TMI) orbit. Repeat orbit: An orbit where the ground track of the satellite repeats after a period of time. Gangale orbit: a solar orbit near Mars whose period is one Martian year, but whose eccentricity and inclination both differ from those of Mars, such that a relay satellite in a Gangale orbit is visible from Earth even during solar conjunction. Pseudo-orbit classifications Horseshoe orbit: An orbit that appears to a ground observer to be orbiting a certain planet but is actually in co-orbit with the planet. See asteroids 3753 Cruithne and 2002 AA29. Libration point orbits, such as halo orbits and Lissajous orbits: These are orbits around a Lagrangian point. Lagrange points are shown in the adjacent diagram, and orbits near these points allow a spacecraft to stay in constant relative position with very little use of fuel. Orbits around the L1 point are used by spacecraft that want a constant view of the Sun, such as the Solar and Heliospheric Observatory. Orbits around L2 are used by missions that always want both Earth and the Sun behind them. This enables a single shield to block radiation from both Earth and the Sun, allowing passive cooling of sensitive instruments. Examples include the Wilkinson Microwave Anisotropy Probe and the James Webb Space Telescope. Orbits around L1, L2, and L3 are unstable, meaning that small perturbations will cause the orbiting craft to drift out of the orbit without periodic corrections. P/2 orbit: a highly stable 2:1 lunar resonant orbit that was first used by the spacecraft TESS (Transiting Exoplanet Survey Satellite) in 2018. See also Geocentric orbits Orbital spaceflight Osculating orbit Notes References Outer space lists
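The specific altitudes quoted in the altitude and synchronicity classifications above all follow from Kepler's third law, which relates orbital period to semi-major axis. The Python sketch below reproduces the geosynchronous and semi-synchronous numbers from that relation; the gravitational parameter, Earth radius, and sidereal-day values are standard constants assumed for the example rather than figures taken from this list.

```python
import math

# Standard values assumed for this example (not quoted from the list above).
MU_EARTH = 398_600.4418      # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_378.137          # Earth's equatorial radius, km
SIDEREAL_DAY = 86_164.0905   # Earth's sidereal rotation period, s

def semi_major_axis_from_period(period_s, mu=MU_EARTH):
    """Kepler's third law solved for the semi-major axis:
    a = (mu * T^2 / (4 * pi^2)) ** (1/3)."""
    return (mu * period_s ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)

a_geo = semi_major_axis_from_period(SIDEREAL_DAY)
print(f"geosynchronous semi-major axis ~ {a_geo:,.0f} km")            # ~42,164 km
print(f"geostationary altitude         ~ {a_geo - R_EARTH:,.0f} km")  # ~35,786 km

# The same relation gives the ~12-hour semi-synchronous (GPS-like) altitude:
a_semi = semi_major_axis_from_period(SIDEREAL_DAY / 2.0)
print(f"semi-synchronous altitude      ~ {a_semi - R_EARTH:,.0f} km") # ~20,200 km
```

The roughly 20,200 km result for a half-sidereal-day period matches the semi-synchronous altitude given in the list above.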
List of orbits
[ "Astronomy" ]
3,638
[ "Outer space", "Outer space lists" ]
12,935
https://en.wikipedia.org/wiki/Gram%20stain
Gram stain (Gram staining or Gram's method), is a method of staining used to classify bacterial species into two large groups: gram-positive bacteria and gram-negative bacteria. It may also be used to diagnose a fungal infection. The name comes from the Danish bacteriologist Hans Christian Gram, who developed the technique in 1884. Gram staining differentiates bacteria by the chemical and physical properties of their cell walls. Gram-positive cells have a thick layer of peptidoglycan in the cell wall that retains the primary stain, crystal violet. Gram-negative cells have a thinner peptidoglycan layer that allows the crystal violet to wash out on addition of ethanol. They are stained pink or red by the counterstain, commonly safranin or fuchsine. Lugol's iodine solution is always added after addition of crystal violet to form a stable complex with crystal violet that strengthen the bonds of the stain with the cell wall. Gram staining is almost always the first step in the identification of a bacterial group. While Gram staining is a valuable diagnostic tool in both clinical and research settings, not all bacteria can be definitively classified by this technique. This gives rise to gram-variable and gram-indeterminate groups. History The method is named after its inventor, the Danish scientist Hans Christian Gram (1853–1938), who developed the technique while working with Carl Friedländer in the morgue of the city hospital in Berlin in 1884. Gram devised his technique not for the purpose of distinguishing one type of bacterium from another but to make bacteria more visible in stained sections of lung tissue. Gram noticed that some bacterial cells possessed noticeable resistance to decolorization. Based on these observations, Gram developed the initial gram staining procedure, initially making use of Ehrlich's aniline-gentian violet, Lugol's iodine, absolute alcohol for decolorization, and Bismarck brown for counterstain. He published his method in 1884, and included in his short report the observation that the typhus bacillus did not retain the stain. Gram did not initially make the distinction between Gram-negative and Gram-positive bacteria using his procedure. Uses Gram staining is a bacteriological laboratory technique used to differentiate bacterial species into two large groups (gram-positive and gram-negative) based on the physical properties of their cell walls. Gram staining can also be used to diagnose a fungal infection. Gram staining is not used to classify archaea, since these microorganisms yield widely varying responses that do not follow their phylogenetic groups. Some organisms are gram-variable (meaning they may stain either negative or positive); some are not stained with either dye used in the Gram technique and are not seen. Medical Gram stains are performed on body fluid or biopsy when infection is suspected. Gram stains yield results much more quickly than culturing, and are especially important when infection would make an important difference in the patient's treatment and prognosis; examples are cerebrospinal fluid for meningitis and synovial fluid for septic arthritis. Staining mechanism Gram-positive bacteria have a thick mesh-like cell wall made of peptidoglycan (50–90% of cell envelope), and as a result are stained purple by crystal violet, whereas gram-negative bacteria have a thinner layer (10% of cell envelope), so do not retain the purple stain and are counter-stained pink by safranin. 
There are four basic steps of the Gram stain: applying a primary stain (crystal violet) to a heat-fixed smear of a bacterial culture (heat fixation kills some bacteria but is mostly used to affix the bacteria to the slide so that they do not rinse out during the staining procedure); the addition of iodine, which binds to crystal violet and traps it in the cell; rapid decolorization with ethanol or acetone; and counterstaining with safranin. Carbol fuchsin is sometimes substituted for safranin since it more intensely stains anaerobic bacteria, but it is less commonly used as a counterstain. Crystal violet (CV) dissociates in aqueous solutions into CV+ and chloride (Cl−) ions. These ions penetrate the cell wall of both gram-positive and gram-negative cells. The CV+ ion interacts with negatively charged components of bacterial cells and stains the cells purple. Iodide (I− or I3−) interacts with CV+ and forms large complexes of crystal violet and iodine (CV–I) within the inner and outer layers of the cell. Iodine is often referred to as a mordant, but is in fact a trapping agent that prevents the removal of the CV–I complex and, therefore, colors the cell. When a decolorizer such as alcohol or acetone is added, it interacts with the lipids of the cell membrane. A gram-negative cell loses its outer lipopolysaccharide membrane, and the inner peptidoglycan layer is left exposed. The CV–I complexes are washed from the gram-negative cell along with the outer membrane. In contrast, a gram-positive cell becomes dehydrated from an ethanol treatment. The large CV–I complexes become trapped within the gram-positive cell due to the multilayered nature of its peptidoglycan. The decolorization step is critical and must be timed correctly; the crystal violet stain is removed from both gram-positive and gram-negative cells if the decolorizing agent is left on too long (a matter of seconds). After decolorization, the gram-positive cell remains purple and the gram-negative cell loses its purple color. The counterstain, which is usually positively charged safranin or basic fuchsine, is applied last to give decolorized gram-negative bacteria a pink or red color. Both gram-positive bacteria and gram-negative bacteria pick up the counterstain. The counterstain, however, is unseen on gram-positive bacteria because of the darker crystal violet stain. Examples Gram-positive bacteria Gram-positive bacteria generally have a single membrane (monoderm) surrounded by a thick peptidoglycan. This rule is followed by two phyla: Bacillota (except for the classes Mollicutes and Negativicutes) and the Actinomycetota. In contrast, members of the Chloroflexota (green non-sulfur bacteria) are monoderms but possess a thin or absent (class Dehalococcoidetes) peptidoglycan and can stain negative, positive or indeterminate; members of the Deinococcota stain positive but are diderms with a thick peptidoglycan. The cell wall's strength is enhanced by teichoic acids, glycopolymeric substances embedded within the peptidoglycan. Teichoic acids play multiple roles, such as generating the cell's net negative charge, contributing to cell wall rigidity and shape maintenance, and aiding in cell division and resistance to various stressors, including heat and salt. Despite the density of the peptidoglycan layer, it remains relatively porous, allowing most substances to permeate. For larger nutrients, Gram-positive bacteria utilize exoenzymes, secreted extracellularly to break down macromolecules outside the cell.
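The decolorization logic described above amounts to a simple decision rule: a thick peptidoglycan wall traps the CV–I complex, while a thin wall behind a stripped outer membrane lets it wash out, so the counterstain shows. The Python sketch below is only a toy restatement of that rule, added here for illustration; the field names are invented for the example and this is not a laboratory procedure.

```python
from dataclasses import dataclass

@dataclass
class CellEnvelope:
    # Hypothetical fields for illustration only.
    peptidoglycan_thick: bool   # thick mesh-like peptidoglycan layer
    outer_membrane: bool        # lipopolysaccharide outer membrane present

def gram_result(cell: CellEnvelope) -> str:
    """Toy summary of the mechanism: the decolorizer strips the outer membrane and
    washes the CV-I complex out of thin-walled cells, which then take the counterstain;
    thick-walled cells trap the complex and stay purple."""
    if cell.peptidoglycan_thick and not cell.outer_membrane:
        return "gram-positive (purple: CV-I complex retained)"
    if cell.outer_membrane and not cell.peptidoglycan_thick:
        return "gram-negative (pink/red: counterstained after decolorization)"
    return "gram-variable or gram-indeterminate (does not fit the two-group rule)"

print(gram_result(CellEnvelope(peptidoglycan_thick=True, outer_membrane=False)))
print(gram_result(CellEnvelope(peptidoglycan_thick=False, outer_membrane=True)))
print(gram_result(CellEnvelope(peptidoglycan_thick=True, outer_membrane=True)))   # e.g. a Deinococcus-like case
```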
Historically, the gram-positive forms made up the phylum Firmicutes, a name now used for the largest group. It includes many well-known genera such as Lactobacillus, Bacillus, Listeria, Staphylococcus, Streptococcus, Enterococcus, and Clostridium. It has also been expanded to include the Mollicutes, bacteria such as Mycoplasma and Thermoplasma that lack cell walls and so cannot be Gram-stained, but are derived from such forms. Some bacteria have cell walls which are particularly adept at retaining stains. These will appear positive by Gram stain even though they are not closely related to other gram-positive bacteria. These are called acid-fast bacteria, and can only be differentiated from other gram-positive bacteria by special staining procedures. Gram-negative bacteria Gram-negative bacteria generally possess a thin layer of peptidoglycan between two membranes (diderm). Lipopolysaccharide (LPS) is the most abundant antigen on the cell surface of most gram-negative bacteria, contributing up to 80% of the outer membrane of E. coli and Salmonella. These LPS molecules, consisting of the O-antigen or O-polysaccharide, core polysaccharide, and lipid A, serve multiple functions including contributing to the cell's negative charge and protecting against certain chemicals. LPS's role is critical in host-pathogen interactions, with the O-antigen eliciting an immune response and lipid A acting as an endotoxin. Additionally, the outer membrane acts as a selective barrier, regulated by porins, transmembrane proteins forming pores that allow specific molecules to pass. The space between the cell membrane and the outer membrane, known as the periplasm, contains periplasmic enzymes for nutrient processing. A significant structural component linking the peptidoglycan layer and the outer membrane is Braun's lipoprotein, which provides additional stability and strength to the bacterial cell wall. Most bacterial phyla are gram-negative, including the cyanobacteria, green sulfur bacteria, and most Pseudomonadota (exceptions being some members of the Rickettsiales and the insect-endosymbionts of the Enterobacteriales). Gram-variable and gram-indeterminate bacteria Some bacteria, after staining with the Gram stain, yield a gram-variable pattern: a mix of pink and purple cells are seen. In cultures of Bacillus, Butyrivibrio, and Clostridium, a decrease in peptidoglycan thickness during growth coincides with an increase in the number of cells that stain gram-negative. In addition, in all bacteria stained using the Gram stain, the age of the culture may influence the results of the stain. Gram-indeterminate bacteria do not respond predictably to Gram staining and, therefore, cannot be determined as either gram-positive or gram-negative. Examples include many species of Mycobacterium, including Mycobacterium bovis, Mycobacterium leprae and Mycobacterium tuberculosis, the latter two of which are the causative agents of leprosy and tuberculosis, respectively. Bacteria of the genus Mycoplasma lack a cell wall around their cell membranes, which means they do not stain by Gram's method and are resistant to the antibiotics that target cell wall synthesis. Orthographic note The term Gram staining is derived from the surname of Hans Christian Gram; the eponym (Gram) is therefore capitalized but not the common noun (stain) as is usual for scientific terms. 
The initial letters of gram-positive and gram-negative, which are eponymous adjectives, can be either capital G or lowercase g, depending on what style guide (if any) governs the document being written. Lowercase style is used by the US Centers for Disease Control and Prevention and other style regimens such as the AMA style. Dictionaries may use lowercase, uppercase, or both. Uppercase Gram-positive or Gram-negative usage is also common in many scientific journal articles and publications. When articles are submitted to journals, each journal may or may not apply house style to the postprint version. Preprint versions contain whichever style the author happened to use. Even style regimens that use lowercase for the adjectives gram-positive and gram-negative still typically use capital for Gram stain. See also Bacterial cell structure Ziehl–Neelsen stain References External links Gram staining technique video Bacteriology Staining Microscopy Danish inventions 1884 in biology
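Where a document follows the lowercase-adjective convention described above, the rule can be applied mechanically: gram-positive and gram-negative in lower case, but Gram kept capitalized in Gram stain. The Python sketch below is a rough illustration added here; it ignores sentence-initial capitals and other edge cases a real style checker would need.

```python
import re

def apply_lowercase_style(text: str) -> str:
    """Normalize to the lowercase-adjective convention: lowercase the eponymous
    adjectives, but keep 'Gram stain'/'Gram staining' capitalized."""
    text = re.sub(r"\bGram-(positive|negative|variable|indeterminate)\b",
                  lambda m: "gram-" + m.group(1), text)
    text = re.sub(r"\bgram stain(ing)?\b",
                  lambda m: "Gram stain" + (m.group(1) or ""), text)
    return text

print(apply_lowercase_style("Gram-positive cocci were seen on the gram stain."))
# -> "gram-positive cocci were seen on the Gram stain."
```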
Gram stain
[ "Chemistry", "Biology" ]
2,476
[ "Staining", "Microbiology techniques", "Cell imaging", "Microscopy" ]
12,936
https://en.wikipedia.org/wiki/Gram-positive%20bacteria
In bacteriology, gram-positive bacteria are bacteria that give a positive result in the Gram stain test, which is traditionally used to quickly classify bacteria into two broad categories according to their type of cell wall. The Gram stain is used by microbiologists to place bacteria into two main categories, Gram-positive (+) and Gram-negative (-). Gram-positive bacteria have a thick layer of peptidoglycan within the cell wall, and Gram-negative bacteria have a thin layer of peptidoglycan. Gram-positive bacteria take up the crystal violet stain used in the test, and then appear to be purple-coloured when seen through an optical microscope. This is because the thick layer of peptidoglycan in the bacterial cell wall retains the stain after it is washed away from the rest of the sample, in the decolorization stage of the test. Conversely, gram-negative bacteria cannot retain the violet stain after the decolorization step; alcohol used in this stage degrades the outer membrane of gram-negative cells, making the cell wall more porous and incapable of retaining the crystal violet stain. Their peptidoglycan layer is much thinner and sandwiched between an inner cell membrane and a bacterial outer membrane, causing them to take up the counterstain (safranin or fuchsine) and appear red or pink. Despite their thicker peptidoglycan layer, gram-positive bacteria are more receptive to certain cell wall–targeting antibiotics than gram-negative bacteria, due to the absence of the outer membrane. Characteristics In general, the following characteristics are present in gram-positive bacteria: Cytoplasmic lipid membrane Thick peptidoglycan layer Teichoic acids and lipoids are present, forming lipoteichoic acids, which serve as chelating agents, and also for certain types of adherence. Peptidoglycan chains are cross-linked to form rigid cell walls by a bacterial enzyme DD-transpeptidase. A much smaller volume of periplasm than that in gram-negative bacteria. Only some species have a capsule, usually consisting of polysaccharides. Also, only some species are flagellates, and when they do have flagella, have only two basal body rings to support them, whereas gram-negative have four. Both gram-positive and gram-negative bacteria commonly have a surface layer called an S-layer. In gram-positive bacteria, the S-layer is attached to the peptidoglycan layer. Gram-negative bacteria's S-layer is attached directly to the outer membrane. Specific to gram-positive bacteria is the presence of teichoic acids in the cell wall. Some of these are lipoteichoic acids, which have a lipid component in the cell membrane that can assist in anchoring the peptidoglycan. Classification Along with cell shape, Gram staining is a rapid method used to differentiate bacterial species. Such staining, together with growth requirement and antibiotic susceptibility testing, and other macroscopic and physiologic tests, forms a basis for practical classification and subdivision of the bacteria (e.g., see figure and pre-1990 versions of Bergey's Manual of Systematic Bacteriology). Historically, the kingdom Monera was divided into four divisions based primarily on Gram staining: Bacillota (positive in staining), Gracilicutes (negative in staining), Mollicutes (neutral in staining) and Mendocutes (variable in staining). 
Based on 16S ribosomal RNA phylogenetic studies of the late microbiologist Carl Woese and collaborators and colleagues at the University of Illinois, the monophyly of the gram-positive bacteria was challenged, with major implications for the therapeutic and general study of these organisms. Based on molecular studies of the 16S sequences, Woese recognised twelve bacterial phyla. Two of these were gram-positive and were divided on the proportion of the guanine and cytosine content in their DNA. The high G + C phylum was made up of the Actinobacteria, and the low G + C phylum contained the Firmicutes. The Actinomycetota include the Corynebacterium, Mycobacterium, Nocardia and Streptomyces genera. The (low G + C) Bacillota, have a 45–60% GC content, but this is lower than that of the Actinomycetota. Importance of the outer cell membrane in bacterial classification Although bacteria are traditionally divided into two main groups, gram-positive and gram-negative, based on their Gram stain retention property, this classification system is ambiguous as it refers to three distinct aspects (staining result, envelope organization, taxonomic group), which do not necessarily coalesce for some bacterial species. The gram-positive and gram-negative staining response is also not a reliable characteristic as these two kinds of bacteria do not form phylogenetic coherent groups. However, although Gram staining response is an empirical criterion, its basis lies in the marked differences in the ultrastructure and chemical composition of the bacterial cell wall, marked by the absence or presence of an outer lipid membrane. All gram-positive bacteria are bounded by a single-unit lipid membrane, and, in general, they contain a thick layer (20–80 nm) of peptidoglycan responsible for retaining the Gram stain. A number of other bacteria—that are bounded by a single membrane, but stain gram-negative due to either lack of the peptidoglycan layer, as in the mycoplasmas, or their inability to retain the Gram stain because of their cell wall composition—also show close relationship to the gram-positive bacteria. For the bacterial cells bounded by a single cell membrane, the term monoderm bacteria has been proposed. In contrast to gram-positive bacteria, all typical gram-negative bacteria are bounded by a cytoplasmic membrane and an outer cell membrane; they contain only a thin layer of peptidoglycan (2–3 nm) between these membranes. The presence of inner and outer cell membranes defines a new compartment in these cells: the periplasmic space or the periplasmic compartment. These bacteria have been designated as diderm bacteria. The distinction between the monoderm and diderm bacteria is supported by conserved signature indels in a number of important proteins (viz. DnaK, GroEL). Of these two structurally distinct groups of bacteria, monoderms are indicated to be ancestral. Based upon a number of observations including that the gram-positive bacteria are the major producers of antibiotics and that, in general, gram-negative bacteria are resistant to them, it has been proposed that the outer cell membrane in gram-negative bacteria (diderms) has evolved as a protective mechanism against antibiotic selection pressure. Some bacteria, such as Deinococcus, which stain gram-positive due to the presence of a thick peptidoglycan layer and also possess an outer cell membrane are suggested as intermediates in the transition between monoderm (gram-positive) and diderm (gram-negative) bacteria. 
The diderm bacteria can also be further differentiated between simple diderms lacking lipopolysaccharide, the archetypical diderm bacteria where the outer cell membrane contains lipopolysaccharide, and the diderm bacteria where outer cell membrane is made up of mycolic acid. Exceptions In general, gram-positive bacteria are monoderms and have a single lipid bilayer whereas gram-negative bacteria are diderms and have two bilayers. Exceptions include: Some taxa lack peptidoglycan (such as the class Mollicutes, some members of the Rickettsiales, and the insect-endosymbionts of the Enterobacteriales) and are gram-indeterminate. The Deinococcota have gram-positive stains, although they are structurally similar to gram-negative bacteria with two layers. The Chloroflexota have a single layer, yet (with some exceptions) stain negative. Two related phyla to the Chloroflexi, the TM7 clade and the Ktedonobacteria, are also monoderms. Some Bacillota species are not gram-positive. The class Negativicutes, which includes Selenomonas, are diderm and stain gram-negative. Additionally, a number of bacterial taxa (viz. Negativicutes, Fusobacteriota, Synergistota, and Elusimicrobiota) that are either part of the phylum Bacillota or branch in its proximity are found to possess a diderm cell structure. However, a conserved signature indel (CSI) in the HSP60 (GroEL) protein distinguishes all traditional phyla of gram-negative bacteria (e.g., Pseudomonadota, Aquificota, Chlamydiota, Bacteroidota, Chlorobiota, "Cyanobacteria", Fibrobacterota, Verrucomicrobiota, Planctomycetota, Spirochaetota, Acidobacteriota, etc.) from these other atypical diderm bacteria, as well as other phyla of monoderm bacteria (e.g., Actinomycetota, Bacillota, Thermotogota, Chloroflexota, etc.). The presence of this CSI in all sequenced species of conventional LPS (lipopolysaccharide)-containing gram-negative bacterial phyla provides evidence that these phyla of bacteria form a monophyletic clade and that no loss of the outer membrane from any species from this group has occurred. Pathogenicity In the classical sense, six gram-positive genera are typically pathogenic in humans. Two of these, Streptococcus and Staphylococcus, are cocci (sphere-shaped). The remaining organisms are bacilli (rod-shaped) and can be subdivided based on their ability to form spores. The non-spore formers are Corynebacterium and Listeria (a coccobacillus), whereas Bacillus and Clostridium produce spores. The spore-forming bacteria can again be divided based on their respiration: Bacillus is a facultative anaerobe, while Clostridium is an obligate anaerobe. Also, Rathybacter, Leifsonia, and Clavibacter are three gram-positive genera that cause plant disease. Gram-positive bacteria are capable of causing serious and sometimes fatal infections in newborn infants. Novel species of clinically relevant gram-positive bacteria also include Catabacter hongkongensis, which is an emerging pathogen belonging to Bacillota. Bacterial transformation Transformation is one of three processes for horizontal gene transfer, in which exogenous genetic material passes from a donor bacterium to a recipient bacterium, the other two processes being conjugation (transfer of genetic material between two bacterial cells in direct contact) and transduction (injection of donor bacterial DNA by a bacteriophage virus into a recipient host bacterium). In transformation, the genetic material passes through the intervening medium, and uptake is completely dependent on the recipient bacterium. 
As of 2014 about 80 species of bacteria were known to be capable of transformation, about evenly divided between gram-positive and gram-negative bacteria; the number might be an overestimate since several of the reports are supported by single papers. Transformation among gram-positive bacteria has been studied in medically important species such as Streptococcus pneumoniae, Streptococcus mutans, Staphylococcus aureus and Streptococcus sanguinis and in gram-positive soil bacteria Bacillus subtilis and Bacillus cereus. Orthographic note The adjectives gram-positive and gram-negative derive from the surname of Hans Christian Gram; as eponymous adjectives, their initial letter can be either capital G or lower-case g, depending on which style guide (e.g., that of the CDC), if any, governs the document being written. References External links 3D structures of proteins associated with plasma membrane of gram-positive bacteria 3D structures of proteins associated with outer membrane of gram-positive bacteria Staining Bacteriology
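The high-G+C versus low-G+C division used above to split the gram-positive phyla is, at bottom, a statistic over the genome sequence. The short Python sketch below is an illustration added here; the 55% cutoff and the toy sequences are assumptions for the example, not taxonomic criteria.

```python
def gc_content(dna: str) -> float:
    """Fraction of G and C bases in a DNA sequence (case-insensitive)."""
    dna = dna.upper()
    return sum(1 for base in dna if base in "GC") / len(dna) if dna else 0.0

def gram_positive_phylum_guess(dna: str, threshold: float = 0.55) -> str:
    """Very rough illustration of the high-G+C vs. low-G+C split; the threshold
    is an assumed value for the example, not an established rule."""
    frac = gc_content(dna)
    group = "high G+C (Actinomycetota-like)" if frac >= threshold else "low G+C (Bacillota-like)"
    return f"GC content {frac:.0%} -> {group}"

print(gram_positive_phylum_guess("GCGCGGCCATGCGCGGAT"))  # GC-rich toy sequence
print(gram_positive_phylum_guess("ATATTAGCATTAATATGC"))  # AT-rich toy sequence
```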
Gram-positive bacteria
[ "Chemistry", "Biology" ]
2,604
[ "Staining", "Microbiology techniques", "Cell imaging", "Microscopy" ]
12,937
https://en.wikipedia.org/wiki/Gram-negative%20bacteria
Gram-negative bacteria are bacteria that, unlike gram-positive bacteria, do not retain the crystal violet stain used in the Gram staining method of bacterial differentiation. Their defining characteristic is that their cell envelope consists of a thin peptidoglycan cell wall sandwiched between an inner (cytoplasmic) membrane and an outer membrane. These bacteria are found in all environments that support life on Earth. Within this category, notable species include the model organism Escherichia coli, along with various pathogenic bacteria, such as Pseudomonas aeruginosa, Chlamydia trachomatis, and Yersinia pestis. They pose significant challenges in the medical field due to their outer membrane, which acts as a protective barrier against numerous antibiotics (including penicillin), detergents that would normally damage the inner cell membrane, and the antimicrobial enzyme lysozyme produced by animals as part of their innate immune system. Furthermore, the outer leaflet of this membrane contains a complex lipopolysaccharide (LPS) whose lipid A component can trigger a toxic reaction when the bacteria are lysed by immune cells. This reaction may lead to septic shock, resulting in low blood pressure, respiratory failure, reduced oxygen delivery, and lactic acidosis. Several classes of antibiotics have been developed to target gram-negative bacteria, including aminopenicillins, ureidopenicillins, cephalosporins, beta-lactam-betalactamase inhibitor combinations (such as piperacillin-tazobactam), folate antagonists, quinolones, and carbapenems. Many of these antibiotics also cover gram-positive bacteria. The antibiotics that specifically target gram-negative organisms include aminoglycosides, monobactams (such as aztreonam), and ciprofloxacin. Characteristics Conventional gram-negative (LPS-diderm) bacteria display : An inner cell membrane is present (cytoplasmic) A thin peptidoglycan layer is present (this is much thicker in gram-positive bacteria) Has outer membrane containing lipopolysaccharides (LPS, which consists of lipid A, core polysaccharide, and O antigen) in its outer leaflet and phospholipids in the inner leaflet Porins exist in the outer membrane, which act like pores for particular molecules Between the outer membrane and the cytoplasmic membrane there is a space filled with a concentrated gel-like substance called periplasm The S-layer is directly attached to the outer membrane rather than to the peptidoglycan If present, flagella have four supporting rings instead of two Teichoic acids or lipoteichoic acids are absent Lipoproteins are attached to the polysaccharide backbone Some contain Braun's lipoprotein, which serves as a link between the outer membrane and the peptidoglycan chain by a covalent bond Most, with few exceptions, do not form spores Classification Along with cell shape, Gram staining is a rapid diagnostic tool and once was used to group species at the subdivision of Bacteria. Historically, the kingdom Monera was divided into four divisions based on Gram staining: Firmicutes (+), Gracillicutes (−), Mollicutes (0) and Mendocutes (var.). Since 1987, the monophyly of the gram-negative bacteria has been disproven with molecular studies. However some authors, such as Cavalier-Smith still treat them as a monophyletic taxon (though not a clade; his definition of monophyly requires a single common ancestor but does not require holophyly, the property that all descendants be encompassed by the taxon) and refer to the group as a subkingdom "Negibacteria". 
Taxonomy Bacteria are traditionally classified based on their Gram-staining response into the gram-positive and gram-negative bacteria. Having just one membrane, the gram-positive bacteria are also known as monoderm bacteria, while gram-negative bacteria, having two membranes, are also known as diderm bacteria. It was traditionally thought that the groups represent lineages, i.e., the extra membrane only evolved once, such that gram-negative bacteria are more closely related to one another than to any gram-positive bacteria. While this is often true, the classification system breaks down in some cases, with lineage groupings not matching the staining result. Thus, Gram staining cannot be reliably used to assess familial relationships of bacteria. Nevertheless, staining often gives reliable information about the composition of the cell membrane, distinguishing between the presence or absence of an outer lipid membrane. Of these two structurally distinct groups of prokaryotic organisms, monoderm prokaryotes are thought to be ancestral. Based upon a number of different observations, including that the gram-positive bacteria are the most sensitive to antibiotics and that the gram-negative bacteria are, in general, resistant to antibiotics, it has been proposed that the outer cell membrane in gram-negative bacteria (diderms) evolved as a protective mechanism against antibiotic selection pressure. Some bacteria such as Deinococcus, which stain gram-positive due to the presence of a thick peptidoglycan layer, but also possess an outer cell membrane are suggested as intermediates in the transition between monoderm (gram-positive) and diderm (gram-negative) bacteria. The diderm bacteria can also be further differentiated between simple diderms lacking lipopolysaccharide (LPS); the archetypical diderm bacteria, in which the outer cell membrane contains lipopolysaccharide; and the diderm bacteria in which the outer cell membrane is made up of mycolic acid (e. g. Mycobacterium). The conventional LPS-diderm group of gram-negative bacteria (e.g., Pseudomonadota, Aquificota, Chlamydiota, Bacteroidota, Chlorobiota, "Cyanobacteria", Fibrobacterota, Verrucomicrobiota, Planctomycetota, Spirochaetota, Acidobacteriota; "Hydrobacteria") are uniquely identified by a few conserved signature indel (CSI) in the HSP60 (GroEL) protein. In addition, a number of bacterial taxa (including Negativicutes, Fusobacteriota, Synergistota, and Elusimicrobiota) that are either part of the phylum Bacillota (a monoderm group) or branches in its proximity are also found to possess a diderm cell structure. They lack the GroEL signature. The presence of this CSI in all sequenced species of conventional lipopolysaccharide-containing gram-negative bacterial phyla provides evidence that these phyla of bacteria form a monophyletic clade and that no loss of the outer membrane from any species from this group has occurred. Example species The proteobacteria are a major superphylum of gram-negative bacteria, including E. coli, Salmonella, Shigella, and other Enterobacteriaceae, Pseudomonas, Moraxella, Helicobacter, Stenotrophomonas, Bdellovibrio, acetic acid bacteria, Legionella etc. Other notable groups of gram-negative bacteria include the cyanobacteria, spirochaetes, green sulfur, and green non-sulfur bacteria. 
Medically relevant gram-negative diplococci and coccobacilli include the four types that cause a sexually transmitted disease (Neisseria gonorrhoeae), a meningitis (Neisseria meningitidis), and respiratory symptoms (Moraxella catarrhalis and the coccobacillus Haemophilus influenzae). Medically relevant gram-negative bacilli include a multitude of species. Some of them cause primarily respiratory problems (Klebsiella pneumoniae, Legionella pneumophila, Pseudomonas aeruginosa), primarily urinary problems (Escherichia coli, Proteus mirabilis, Enterobacter cloacae, Serratia marcescens), and primarily gastrointestinal problems (Helicobacter pylori, Salmonella enteritidis, Salmonella typhi). Gram-negative bacteria associated with hospital-acquired infections include Acinetobacter baumannii, which causes bacteremia, secondary meningitis, and ventilator-associated pneumonia in hospital intensive-care units. Bacterial transformation Transformation is one of three processes for horizontal gene transfer, in which exogenous genetic material passes from one bacterium to another, the other two being conjugation (transfer of genetic material between two bacterial cells in direct contact) and transduction (injection of foreign DNA by a bacteriophage virus into the host bacterium). In transformation, the genetic material passes through the intervening medium, and uptake is completely dependent on the recipient bacterium. As of 2014 about 80 species of bacteria were known to be capable of transformation, about evenly divided between gram-positive and gram-negative bacteria; the number might be an overestimate since several of the reports are supported by single papers. Transformation has been studied in medically important gram-negative bacteria species such as Helicobacter pylori, Legionella pneumophila, Neisseria meningitidis, Neisseria gonorrhoeae, Haemophilus influenzae and Vibrio cholerae. It has also been studied in gram-negative species found in soil such as Pseudomonas stutzeri, Acinetobacter baylyi, and gram-negative plant pathogens such as Ralstonia solanacearum and Xylella fastidiosa. Role in disease One of the several unique characteristics of gram-negative bacteria is the structure of the bacterial outer membrane. The outer leaflet of this membrane contains lipopolysaccharide (LPS), whose lipid A portion acts as an endotoxin. If gram-negative bacteria enter the circulatory system, LPS can trigger an innate immune response, activating the immune system and producing cytokines (hormonal regulators). This leads to inflammation and can cause a toxic reaction, resulting in fever, an increased respiratory rate, and low blood pressure. That is why some infections with gram-negative bacteria can lead to life-threatening septic shock. The outer membrane protects the bacteria from several antibiotics, dyes, and detergents that would normally damage either the inner membrane or the cell wall (made of peptidoglycan). The outer membrane provides these bacteria with resistance to lysozyme and penicillin. The periplasmic space (the space between the two cell membranes) also contains enzymes which break down or modify antibiotics. Drugs commonly used to treat gram-negative infections include amino, carboxy and ureido penicillins (ampicillin, amoxicillin, piperacillin, ticarcillin). These drugs may be combined with beta-lactamase inhibitors to combat the presence of enzymes that can digest these drugs (known as beta-lactamases) in the periplasmic space.
Other classes of drugs that have a gram-negative spectrum include cephalosporins, monobactams (aztreonam), aminoglycosides, quinolones, macrolides, chloramphenicol, folate antagonists, and carbapenems. Orthographic note The adjectives gram-positive and gram-negative derive from the surname of Hans Christian Gram, a Danish bacteriologist; as eponymous adjectives, their initial letter can be either capital G or lower-case g, depending on which style guide (e.g., that of the CDC), if any, governs the document being written. This is further explained at Gram staining § Orthographic note. See also Autochaperone Gram-variable and gram-indeterminate bacteria OMPdb (2011) Outer membrane receptor References Notes External links 3D structures of proteins from inner membranes of gram-negative bacteria Staining Bacteriology
Gram-negative bacteria
[ "Chemistry", "Biology" ]
2,608
[ "Staining", "Microbiology techniques", "Cell imaging", "Microscopy" ]
12,946
https://en.wikipedia.org/wiki/Grigory%20Barenblatt
Grigory Isaakovich Barenblatt (; 10 July 1927 – 22 June 2018) was a Russian mathematician. Education Barenblatt graduated in 1950 from Moscow State University, Department of Mechanics and Mathematics. He received his Ph.D. in 1953 from Moscow State University under the supervision of A. N. Kolmogorov. Career and research Barenblatt also received a D.Sc. from Moscow State University in 1957. He was an emeritus Professor in Residence at the Department of Mathematics of the University of California, Berkeley and Mathematician at Department of Mathematics, Lawrence Berkeley National Laboratory. He was G. I. Taylor Professor of Fluid Mechanics at the University of Cambridge from 1992 to 1994 and he was Emeritus G. I. Taylor Professor of Fluid Mechanics. His areas of research were: Fracture mechanics The theory of fluid and gas flows in porous media The mechanics of a non-classical deformable solids Turbulence Self-similarities, nonlinear waves and intermediate asymptotics. Awards and honors 1975 – Foreign Honorary Member, American Academy of Arts and Sciences 1984 – Foreign Member, Danish Center of Applied Mathematics & Mechanics 1988 – Foreign Member, Polish Society of Theoretical & Applied Mechanics 1989 – Doctor of Technology Honoris Causa at the Royal Institute of Technology, Stockholm, Sweden 1992 – Foreign Associate, U.S. National Academy of Engineering 1993 – Fellow, Cambridge Philosophical Society 1993 – Member, Academia Europaea 1994 – Fellow, Gonville and Caius College, Cambridge; (since 1999, Honorary Fellow) 1995 – Lagrange Medal, Accademia Nazionale dei Lincei 1995 – Modesto Panetti Prize and Medal 1996 - Visiting Miller Professorship - University of California Berkeley 1997 – Foreign Associate, U.S. National Academy of Sciences 1999 – G. I. Taylor Medal, U.S. Society of Engineering Science 1999 – J. C. Maxwell Medal and Prize, International Congress for Industrial and Applied Mathematics 2000 – Foreign Member, Royal Society of London 2005 – Timoshenko Medal, American Society of Mechanical Engineers, "for seminal contributions to nearly every area of solid and fluid mechanics, including fracture mechanics, turbulence, stratified flows, flames, flow in porous media, and the theory and application of intermediate asymptotics." References External links Applied mechanics: an age old science perpetually in rebirth (pdf). The Timoshenko Medal acceptance speech by Grigory Barenblatt (to be published by ASME in summer 2006). 1927 births 2018 deaths 20th-century Russian mathematicians 21st-century Russian mathematicians Fellows of the American Academy of Arts and Sciences Fellows of Gonville and Caius College, Cambridge Fluid dynamicists Foreign members of the Royal Society Jewish Russian scientists Members of Academia Europaea Foreign associates of the National Academy of Sciences Moscow State University alumni Mathematicians from Moscow Russian Jews University of California, Berkeley College of Letters and Science faculty Foreign associates of the National Academy of Engineering G. I. Taylor Professors of Fluid Mechanics
Grigory Barenblatt
[ "Chemistry" ]
602
[ "Fluid dynamicists", "Fluid dynamics" ]
12,947
https://en.wikipedia.org/wiki/Grammatical%20tense
In grammar, tense is a category that expresses time reference. Tenses are usually manifested by the use of specific forms of verbs, particularly in their conjugation patterns. The main tenses found in many languages include the past, present, and future. Some languages have only two distinct tenses, such as past and nonpast, or future and nonfuture. There are also tenseless languages, like most of the Chinese languages, though they can possess a future and nonfuture system typical of Sino-Tibetan languages. In recent work Maria Bittner and Judith Tonhauser have described the different ways in which tenseless languages nonetheless mark time. On the other hand, some languages make finer tense distinctions, such as remote vs recent past, or near vs remote future. Tenses generally express time relative to the moment of speaking. In some contexts, however, their meaning may be relativized to a point in the past or future which is established in the discourse (the moment being spoken about). This is called relative (as opposed to absolute) tense. Some languages have different verb forms or constructions which manifest relative tense, such as pluperfect ("past-in-the-past") and "future-in-the-past". Expressions of tense are often closely connected with expressions of the category of aspect; sometimes what are traditionally called tenses (in languages such as Latin) may in modern analysis be regarded as combinations of tense with aspect. Verbs are also often conjugated for mood, and since in many cases the three categories are not manifested separately, some languages may be described in terms of a combined tense–aspect–mood (TAM) system. Etymology The English noun tense comes from Old French "time" (spelled in modern French through deliberate archaization), from Latin , "time". It is not related to the adjective tense, which comes from Latin , the perfect passive participle of , "stretch". Uses of the term In modern linguistic theory, tense is understood as a category that expresses (grammaticalizes) time reference; namely one which, using grammatical means, places a state or action in time. Nonetheless, in many descriptions of languages, particularly in traditional European grammar, the term "tense" is applied to verb forms or constructions that express not merely position in time, but also additional properties of the state or action – particularly aspectual or modal properties. The category of aspect expresses how a state or action relates to time – whether it is seen as a complete event, an ongoing or repeated situation, etc. Many languages make a distinction between perfective aspect (denoting complete events) and imperfective aspect (denoting ongoing or repeated situations); some also have other aspects, such as a perfect aspect, denoting a state following a prior event. Some of the traditional "tenses" express time reference together with aspectual information. In Latin and French, for example, the imperfect denotes past time in combination with imperfective aspect, while other verb forms (the Latin perfect, and the French or ) are used for past time reference with perfective aspect. The category of mood is used to express modality, which includes such properties as uncertainty, evidentiality, and obligation. Commonly encountered moods include the indicative, subjunctive, and conditional. Mood can be bound up with tense, aspect, or both, in particular verb forms. 
Hence, certain languages are sometimes analysed as having a single tense–aspect–mood (TAM) system, without separate manifestation of the three categories. The term tense, then, particularly in less formal contexts, is sometimes used to denote any combination of tense proper, aspect, and mood. As regards English, there are many verb forms and constructions which combine time reference with continuous and/or perfect aspect, and with indicative, subjunctive or conditional mood. Particularly in some English language teaching materials, some or all of these forms can be referred to simply as tenses (see below). Particular tense forms need not always carry their basic time-referential meaning in every case. For instance, the historical present is a use of the present tense to refer to past events. The phenomenon of fake tense is common crosslinguistically as a means of marking counterfactuality in conditionals and wishes. Possible tenses Not all languages have tense: tenseless languages include Chinese and Dyirbal. Some languages have all three basic tenses (the past, present, and future), while others have only two: some have past and nonpast tenses, the latter covering both present and future times (as in Arabic, Japanese, and, in some analyses, English), whereas others such as Greenlandic, Quechua, and Nivkh have future and nonfuture. Some languages have four or more tenses, making finer distinctions either in the past (e.g. remote vs. recent past) or in the future (e.g. near vs. remote future). The six-tense language Kalaw Lagaw Ya of Australia has the remote past, the recent past, the today past, the present, the today/near future and the remote future. Some languages, like the Amazonian Cubeo language, have a historical past tense, used for events perceived as historical. Tenses that refer specifically to "today" are called hodiernal tenses; these can be either past or future. Apart from Kalaw Lagaw Ya, another language which features such tenses is Mwera, a Bantu language of Tanzania. It is also suggested that in 17th-century French, the passé composé served as a hodiernal past. Tenses that contrast with hodiernals, by referring to the past before today or the future after today, are called pre-hodiernal and post-hodiernal respectively. Some languages also have a crastinal tense, a future tense referring specifically to tomorrow (found in some Bantu languages); or a hesternal tense, a past tense referring specifically to yesterday (although this name is also sometimes used to mean pre-hodiernal). A tense for after tomorrow is thus called post-crastinal, and one for before yesterday is called pre-hesternal. Another tense found in some languages, including Luganda, is the persistive tense, used to indicate that a state or ongoing action is still the case (or, in the negative, is no longer the case). Luganda also has tenses meaning "so far" and "not yet". Some languages have special tense forms that are used to express relative tense. Tenses that refer to the past relative to the time under consideration are called anterior; these include the pluperfect (for the past relative to a past time) and the future perfect (for the past relative to a future time). Similarly, posterior tenses refer to the future relative to the time under consideration, as with the English "future-in-the-past": (he said that) he would go. Relative tense forms are also sometimes analysed as combinations of tense with aspect: the perfect aspect in the anterior case, or the prospective aspect in the posterior case. 
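The distinction drawn above between absolute tense and relative (anterior/posterior) tense is often modelled with three points on a timeline: speech time S, reference time R, and event time E, with R versus S giving the absolute tense and E versus R the relative component. The Python sketch below is a simplified illustration added here; the integer timeline and the labels are assumptions of the example, and aspect and mood are ignored entirely.

```python
def tense_label(event: int, reference: int, speech: int = 0) -> str:
    """Toy Reichenbach-style labelling: compare event time E and reference time R
    with speech time S (integers on a timeline, S = 0 by default).
    Absolute tense comes from R vs. S; relative (anterior/posterior) from E vs. R."""
    absolute = "past" if reference < speech else "future" if reference > speech else "present"
    if event < reference:
        relative = "anterior"
    elif event > reference:
        relative = "posterior"
    else:
        relative = "simple"
    return f"{relative} {absolute}"

print(tense_label(event=-2, reference=-1))   # anterior past   (pluperfect: 'had gone')
print(tense_label(event=-1, reference=-1))   # simple past     ('went')
print(tense_label(event=1, reference=2))     # anterior future (future perfect: 'will have gone')
print(tense_label(event=1, reference=-1))    # posterior past  ('would go', future-in-the-past)
```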
Some languages, such as Nez perce or Cavineña also have periodic tense markers that encode that the action occurs in a recurrent temporal period of the day ("in the morning", "during the day", "at night", "until dawn" etc) or of the year ("in winter"). Some languages have cyclic tense systems. This is a form of temporal marking where tense is given relative to a reference point or reference span. In Burarra, for example, events that occurred earlier on the day of speaking are marked with the same verb forms as events that happened in the far past, while events that happened yesterday (compared to the moment of speech) are marked with the same forms as events in the present. This can be thought of as a system where events are marked as prior or contemporaneous to points of reference on a timeline. Tense marking Morphology of tense Tense is normally indicated by the use of a particular verb form – either an inflected form of the main verb, or a multi-word construction, or both in combination. Inflection may involve the use of affixes, such as the -ed ending that marks the past tense of English regular verbs, but can also entail stem modifications, such as ablaut, as found as in the strong verbs in English and other Germanic languages, or reduplication. Multi-word tense constructions often involve auxiliary verbs or clitics. Examples which combine both types of tense marking include the French passé composé, which has an auxiliary verb together with the inflected past participle form of the main verb; and the Irish past tense, where the proclitic do (in various surface forms) appears in conjunction with the affixed or ablaut-modified past tense form of the main verb. As has already been mentioned, indications of tense are often bound up with indications of other verbal categories, such as aspect and mood. The conjugation patterns of verbs often also reflect agreement with categories pertaining to the subject, such as person, number and gender. It is consequently not always possible to identify elements that mark any specific category, such as tense, separately from the others. Languages that do not have grammatical tense, such as most Sinitic languages, express time reference chiefly by lexical means – through adverbials, time phrases, and so on. (The same is done in tensed languages, to supplement or reinforce the time information conveyed by the choice of tense.) Time information is also sometimes conveyed as a secondary feature by markers of other categories, as with the aspect markers le and guò, which in most cases place an action in past time. However, much time information is conveyed implicitly by context – it is therefore not always necessary, when translating from a tensed to a tenseless language, say, to express explicitly in the target language all of the information conveyed by the tenses in the source. Nominal Tense A few languages have been shown to mark tense information (as well as aspect and mood) on nouns. This may be called nominal tense, or more broadly nominal TAM which includes nominal marking of aspect and mood as well. Syntax of tense The syntactic properties of tense have figured prominently in formal analyses of how tense-marking interacts with word order. Some languages (such as French) allow an adverb (Adv) to intervene between a tense-marked verb (V) and its direct object (O); in other words, they permit [Verb-Adverb-Object] ordering. 
In contrast, other languages (such as English) do not allow the adverb to intervene between the verb and its direct object, and require [Adverb-Verb-Object] ordering. Tense in syntax is represented by the category label T, which is the head of a TP (tense phrase). Tenseless language In linguistics, a tenseless language is a language that does not have a grammatical category of tense. Tenseless languages can and do refer to time, but they do so using lexical items such as adverbs or verbs, or by using combinations of aspect, mood, and words that establish time reference. Examples of tenseless languages are Burmese, Dyirbal, most varieties of Chinese, Malay (including Indonesian), Thai, Maya (linguistic nomenclature: "Yukatek Maya"), Vietnamese and in some analyses Greenlandic (Kalaallisut) and Guaraní. In particular languages The study of modern languages has been greatly influenced by the grammar of the Classical languages, since early grammarians, often monks, had no other reference point to describe their language. Latin terminology is often used to describe modern languages, sometimes with a change of meaning, as with the application of "perfect" to forms in English that do not necessarily have perfective meaning, or the words Imperfekt and Perfekt to German past tense forms that mostly lack any relationship to the aspects implied by those terms. Latin Latin is traditionally described as having six verb paradigms for tense (the Latin for "tense" being tempus, plural tempora): Present (praesēns) Future (futūrum) Imperfect (praeteritum imperfectum) Perfect (praesēns perfectum) Future perfect (futūrum perfectum) Pluperfect (plūs quam perfectum, praeteritum perfectum) Imperfect tense verbs represent a past process combined with so called imperfective aspect, that is, they often stand for an ongoing past action or state at a past point in time (see secondary present) or represent habitual actions (see Latin tenses with modality) (e.g. 'he was eating', 'he used to eat'). The perfect tense combines the meanings of a simple past ('he ate') with that of an English perfect tense ('he has eaten'), which in ancient Greek are two different tenses (aorist and perfect). The pluperfect, the perfect and the future perfect may also realise relative tenses, standing for events that are past at the time of another event (see secondary past): for instance, , , may stand for respectively '', '' and ''. Latin verbs are inflected for tense and aspect together with mood (indicative, subjunctive, infinitive, and imperative) and voice (active or passive). Most verbs can be built by selecting a verb stem and adapting them to endings. Endings may vary according to the speech role, the number and the gender of the subject or an object. Sometimes, verb groups function as a unit and supplement inflection for tense (see Latin periphrases). For details on verb structure, see Latin tenses and Latin conjugation. Ancient Greek The paradigms for tenses in Ancient Greek are similar to the ones in Latin, but with a three-way aspect contrast in the past: the aorist, the perfect and the imperfect. Both aorist and imperfect verbs can represent a past event: through contrast, the imperfect verb often implies a longer duration (e.g. 'they urged him' vs. 'they persuaded him'). The aorist participle represents the first event of a two-event sequence and the present participle represents an ongoing event at the time of another event. Perfect verbs stood for past actions if the result is still present (e.g. 
'I have found it') or for present states resulting from a past event (e.g. 'I remember'). English English has only two morphological tenses: the present (or non-past), as in he goes, and the past (or preterite), as in he went. The non-past usually references the present, but sometimes references the future (as in the bus leaves tomorrow). In special uses such as the historical present it can talk about the past as well. These morphological tenses are marked either with a suffix (walk(s) ~ walked) or with ablaut (sing(s) ~ sang). In some contexts, particularly in English language teaching, various tense–aspect combinations are referred to loosely as tenses. Similarly, the term "future tense" is sometimes loosely applied to cases where modals such as will are used to talk about future points in time. Other Indo-European languages Proto-Indo-European verbs had present, perfect (stative), imperfect and aorist forms – these can be considered as representing two tenses (present and past) with different aspects. Most languages in the Indo-European family have developed systems either with two morphological tenses (present or "non-past", and past) or with three (present, past and future). The tenses often form part of entangled tense–aspect–mood conjugation systems. Additional tenses, tense–aspect combinations, etc. can be provided by compound constructions containing auxiliary verbs. The Germanic languages (which include English) have present (non-past) and past tenses formed morphologically, with future and other additional forms made using auxiliaries. In standard German, the compound past (Perfekt) has replaced the simple morphological past in most contexts. The Romance languages (descendants of Latin) have past, present and future morphological tenses, with additional aspectual distinction in the past. French is an example of a language where, as in German, the simple morphological perfective past (passé simple) has mostly given way to a compound form (passé composé). Irish, a Celtic language, has past, present and future tenses (see Irish conjugation). The past contrasts perfective and imperfective aspect, and some verbs retain such a contrast in the present. Classical Irish had a three-way aspectual contrast of simple–perfective–imperfective in the past and present tenses. Modern Scottish Gaelic on the other hand only has past, non-past and 'indefinite', and, in the case of the verb 'be' (including its use as an auxiliary), also present tense. Persian, an Indo-Iranian language, has past and non-past forms, with additional aspectual distinctions. Future can be expressed using an auxiliary, but almost never in non-formal context. Colloquially the perfect suffix -e can be added to past tenses to indicate that an action is speculative or reported (e.g. "it seems that he was doing", "they say that he was doing"). A similar feature is found in Turkish. (For details, see Persian verbs.) Hindustani (Hindi and Urdu), an Indo-Aryan language, has indicative perfect past and indicative future forms, while the indicative present and indicative imperfect past conjugations exist only for the verb honā (to be). The indicative future is constructed using the future subjunctive conjugations (which used to be the indicative present conjugations in older forms of Hind-Urdu) by adding a future future suffix -gā that declines for gender and the number of the noun that the pronoun refers to. The forms of gā are derived from the perfective participle forms of the verb "to go," jāna. 
The conjugations of the indicative perfect past and the indicative imperfect past are derived from participles (just like the past tense formation in Slavic languages) and hence they agree with the grammatical number and the gender of noun which the pronoun refers to and not the pronoun itself. The perfect past doubles as the perfective aspect participle and the imperfect past conjugations act as the copula to mark imperfect past when used with the aspectual participles. Hindi-Urdu has an overtly marked tense-aspect-mood system. Periphrastic Hindi-Urdu verb forms (aspectual verb forms) consist of two elements, the first of these two elements is the aspect marker and the second element (the copula) is the common tense-mood marker. Hindi-Urdu has 3 grammatical aspectsː Habitual, Perfective, and Progressive; and 5 grammatical moodsː Indicative, Presumptive, Subjunctive, Contrafactual, and Imperative. (Seeː Hindi verbs) In the Slavic languages, verbs are intrinsically perfective or imperfective. In Russian and some other languages in the group, perfective verbs have past and "future tenses", while imperfective verbs have past, present and "future", the imperfective "future" being a compound tense in most cases. The "future tense" of perfective verbs is formed in the same way as the present tense of imperfective verbs. However, in South Slavic languages, there may be a greater variety of forms – Bulgarian, for example, has present, past (both "imperfect" and "aorist") and "future tenses", for both perfective and imperfective verbs, as well as perfect forms made with an auxiliary (see Bulgarian verbs). However it doesn't have real future tense, because the future tense is formed by the shortened version of the present of the verb hteti (ще) and it just adds present tense forms of person suffixes: -m (I), -š (you), -ø (he,she,it), -me (we), -te (you, plural), -t (they). Other languages Finnish and Hungarian, both members of the Uralic language family, have morphological present (non-past) and past tenses. The Hungarian verb van ("to be") also has a future form. Turkish verbs conjugate for past, present and future, with a variety of aspects and moods. Arabic verbs have past and non-past; future can be indicated by a prefix. Korean verbs have a variety of affixed forms which can be described as representing present, past and future tenses, although they can alternatively be considered to be aspectual. Similarly, Japanese verbs are described as having present and past tenses, although they may be analysed as aspects. Some Wu Chinese languages, such as Shanghainese, use grammatical particles to mark some tenses. Other Chinese languages and many other East Asian languages generally lack inflection and are considered to be tenseless languages, although they often have aspect markers which convey certain information about time reference. For examples of languages with a greater variety of tenses, see the section on possible tenses, above. Fuller information on tense formation and usage in particular languages can be found in the articles on those languages and their grammars. Austronesian languages Rapa Rapa is the French Polynesian language of the island of Rapa Iti. Verbs in the indigenous Old Rapa occur with a marker known as TAM which stands for tense, aspect, or mood which can be followed by directional particles or deictic particles. Of the markers there are three tense markers called: Imperfective, Progressive, and Perfective. Which simply mean, Before, Currently, and After. 
However, specific TAM markers and the type of deictic or directional particle that follows determine and denote different types of meanings in terms of tenses. Imperfective: denotes actions that have not occurred yet but will occur and expressed by TAM e. Progressive: Also expressed by TAM e and denotes actions that are currently happening when used with deictic na, and denotes actions that was just witnessed but still currently happening when used with deictic ra. Perfective: denotes actions that have already occurred or have finished and is marked by TAM ka. In Old Rapa there are also other types of tense markers known as Past, Imperative, and Subjunctive. Past TAM i marks past action. It is rarely used as a matrix TAM and is more frequently observed in past embedded clauses Imperative The imperative is marked in Old Rapa by TAM a. A second person subject is implied by the direct command of the imperative. For a more polite form rather than a straightforward command imperative TAM a is used with adverbial kānei. Kānei is only shown to be used in imperative structures and was translated by the French as "please". It is also used in a more impersonal form. For example, how you would speak toward a pesky neighbor. Subjunctive The subjunctive in Old Rapa is marked by kia and can also be used in expressions of desire Tokelau The Tokelauan language is a tenseless language. The language uses the same words for all three tenses; the phrase E liliu mai au i te Aho Tōnai literally translates to Come back / me / on Saturday, but the translation becomes 'I am coming back on Saturday'. Wuvulu-Aua Wuvulu-Aua does not have an explicit tense, but rather tense is conveyed by mood, aspect markers, and time phrases. Wuvulu speakers use a realis mood to convey past tense as speakers can be certain about events that have occurred. In some cases, realis mood is used to convey present tense — often to indicate a state of being. Wuvulu speakers use an irrealis mood to convey future tense. Tense in Wuvulu-Aua may also be implied by using time adverbials and aspectual markings. Wuvulu contains three verbal markers to indicate sequence of events. The preverbal adverbial loʔo 'first' indicates the verb occurs before any other. The postverbal morpheme liai and linia are the respective intransitive and transitive suffixes indicating a repeated action. The postverbal morpheme li and liria are the respective intransitive and transitive suffixes indicating a completed action. Mortlockese Mortlockese uses tense markers such as mii and to denote the present tense state of a subject, aa to denote a present tense state that an object has changed to from a different, past state, kɞ to describe something that has already been completed, pɞ and lɛ to denote future tense, pʷapʷ to denote a possible action or state in future tense, and sæn/mwo for something that has not happened yet. Each of these markers is used in conjunction with the subject proclitics except for the markers aa and mii. Additionally, the marker mii can be used with any type of intransitive verb. See also Sequence of tenses Spatial tense References Further reading External links Combinations of Tense, Aspect, and Mood in Greek Grammatical Features Inventory DEIC:deictic DIR:directional English grammar Time in linguistics
Grammatical tense
[ "Physics" ]
5,277
[ "Spacetime", "Time in linguistics", "Physical quantities", "Time" ]
12,950
https://en.wikipedia.org/wiki/Glucose
Glucose is a sugar with the molecular formula C6H12O6. It is overall the most abundant monosaccharide, a subcategory of carbohydrates. It is mainly made by plants and most algae during photosynthesis from water and carbon dioxide, using energy from sunlight. It is used by plants to make cellulose, the most abundant carbohydrate in the world, for use in cell walls, and by all living organisms to make adenosine triphosphate (ATP), which is used by the cell as energy. In energy metabolism, glucose is the most important source of energy in all organisms. Glucose for metabolism is stored as a polymer, in plants mainly as amylose and amylopectin, and in animals as glycogen. Glucose circulates in the blood of animals as blood sugar. The naturally occurring form is D-glucose, while its stereoisomer L-glucose is produced synthetically in comparatively small amounts and is less biologically active. Glucose is a monosaccharide containing six carbon atoms and an aldehyde group, and is therefore an aldohexose. The glucose molecule can exist in an open-chain (acyclic) as well as ring (cyclic) form. Glucose is naturally occurring and is found in its free state in fruits and other parts of plants. In animals, it is released from the breakdown of glycogen in a process known as glycogenolysis. Glucose, as intravenous sugar solution, is on the World Health Organization's List of Essential Medicines. It is also on the list in combination with sodium chloride (table salt). The name glucose is derived from Ancient Greek γλεῦκος (gleûkos) 'wine, must', from γλυκύς (glykýs) 'sweet'. The suffix -ose is a chemical classifier denoting a sugar. History Glucose was first isolated from raisins in 1747 by the German chemist Andreas Marggraf. Glucose was discovered in grapes by another German chemist, Johann Tobias Lowitz, in 1792, and distinguished as being different from cane sugar (sucrose). The term glucose was coined by Jean Baptiste Dumas in 1838 and has prevailed in the chemical literature. Friedrich August Kekulé proposed the term dextrose (from the Latin dexter, meaning "right"), because in an aqueous solution of glucose, the plane of linearly polarized light is turned to the right. In contrast, l-fructose (usually referred to as D-fructose) (a ketohexose) and l-glucose (L-glucose) turn linearly polarized light to the left. The earlier notation according to the rotation of the plane of linearly polarized light (d and l nomenclature) was later abandoned in favor of the D- and L-notation, which refers to the absolute configuration of the asymmetric center farthest from the carbonyl group, in concordance with the configuration of D- or L-glyceraldehyde. Since glucose is a basic necessity of many organisms, a correct understanding of its chemical makeup and structure contributed greatly to a general advancement in organic chemistry. This understanding occurred largely as a result of the investigations of Emil Fischer, a German chemist who received the 1902 Nobel Prize in Chemistry for his findings. The synthesis of glucose established the structure of organic material and consequently formed the first definitive validation of Jacobus Henricus van 't Hoff's theories of chemical kinetics and the arrangements of chemical bonds in carbon-bearing molecules. Between 1891 and 1894, Fischer established the stereochemical configuration of all the known sugars and correctly predicted the possible isomers, applying Van 't Hoff's theory of the asymmetric carbon atom. The names initially referred to the natural substances. 
Their enantiomers were given the same name with the introduction of systematic nomenclatures, taking into account absolute stereochemistry (e.g. Fischer nomenclature, D/L nomenclature). For the discovery of the metabolism of glucose, Otto Meyerhof received the Nobel Prize in Physiology or Medicine in 1922. Hans von Euler-Chelpin was awarded the Nobel Prize in Chemistry along with Arthur Harden in 1929 for their "research on the fermentation of sugar and their share of enzymes in this process". In 1947, Bernardo Houssay (for his discovery of the role of the pituitary gland in the metabolism of glucose and the derived carbohydrates) as well as Carl and Gerty Cori (for their discovery of the conversion of glycogen from glucose) received the Nobel Prize in Physiology or Medicine. In 1970, Luis Leloir was awarded the Nobel Prize in Chemistry for the discovery of glucose-derived sugar nucleotides in the biosynthesis of carbohydrates. Chemical and physical properties Glucose forms white or colorless solids that are highly soluble in water and acetic acid but poorly soluble in methanol and ethanol. They melt at 146 °C (α) and 150 °C (β), and decompose on stronger heating with release of various volatile products, ultimately leaving a residue of carbon. Glucose has a pKa value of 12.16 in water. With six carbon atoms, it is classed as a hexose, a subcategory of the monosaccharides. D-Glucose is one of the sixteen aldohexose stereoisomers. The D-isomer, D-glucose, also known as dextrose, occurs widely in nature, but the L-isomer, L-glucose, does not. Glucose can be obtained by hydrolysis of carbohydrates such as milk sugar (lactose), cane sugar (sucrose), maltose, cellulose, glycogen, etc. Dextrose is commonly commercially manufactured from starches, such as corn starch in the US and Japan, from potato and wheat starch in Europe, and from tapioca starch in tropical areas. The manufacturing process uses hydrolysis via pressurized steaming at controlled pH in a jet followed by further enzymatic depolymerization. Unbonded glucose is one of the main ingredients of honey. The term dextrose is often used in a clinical (related to patient's health status) or nutritional context (related to dietary intake, such as food labels or dietary guidelines), while "glucose" is used in a biological or physiological context (chemical processes and molecular interactions), but both terms refer to the same molecule, specifically D-glucose. Dextrose monohydrate is the hydrated form of D-glucose, meaning that it is a glucose molecule with an additional water molecule attached. Its chemical formula is C6H12O6 · H2O. Dextrose monohydrate is also called hydrated D-glucose, and is commonly manufactured from plant starches. Dextrose monohydrate is utilized as the predominant type of dextrose in food applications, such as beverage mixes—it is a common form of glucose widely used as a nutrition supplement in the production of foodstuffs. Dextrose monohydrate is primarily consumed in North America as corn syrup or high-fructose corn syrup. Anhydrous dextrose, on the other hand, is glucose that does not have any water molecules attached to it. Anhydrous chemical substances are commonly produced by eliminating water from a hydrated substance through methods such as heating or drying (desiccation). Dextrose monohydrate can be dehydrated to anhydrous dextrose in an industrial setting. Dextrose monohydrate is composed of approximately 9% water by mass; through the process of dehydration, this water content is eliminated to yield anhydrous (dry) dextrose. 
Anhydrous dextrose has the chemical formula C6H12O6, without any water molecule attached, which is the same as glucose. Anhydrous dextrose exposed to open air tends to absorb moisture and transform into the monohydrate, and it is more expensive to produce. Anhydrous dextrose (anhydrous D-glucose) has increased stability and a longer shelf life, and has medical applications, such as in the oral glucose tolerance test. Whereas the molecular weight (molar mass) of D-glucose monohydrate is 198.17 g/mol, that of anhydrous D-glucose is 180.16 g/mol. The density of these two forms of glucose also differs. In terms of chemical structure, glucose is a monosaccharide, that is, a simple sugar. Glucose contains six carbon atoms and an aldehyde group, and is therefore an aldohexose. The glucose molecule can exist in an open-chain (acyclic) as well as ring (cyclic) form—due to the presence of alcohol and aldehyde or ketone functional groups, the form having the straight chain can easily convert into a chair-like hemiacetal ring structure commonly found in carbohydrates. Structure and nomenclature Glucose is present in solid form as a monohydrate with a closed pyran ring (α-D-glucopyranose monohydrate, sometimes known, less precisely, as dextrose hydrate). In aqueous solution, on the other hand, it exists to a small extent as the open-chain form and is present predominantly as α- or β-pyranose, which interconvert. From aqueous solutions, the three known forms can be crystallized: α-glucopyranose, β-glucopyranose and α-glucopyranose monohydrate. Glucose is a building block of the disaccharides lactose and sucrose (cane or beet sugar), of oligosaccharides such as raffinose and of polysaccharides such as starch, amylopectin, glycogen, and cellulose. Glucose has a characteristic glass transition temperature, and its Gordon–Taylor constant (an experimentally determined constant for the prediction of the glass transition temperature for different mass fractions of a mixture of two substances) is 4.5. Open-chain form The open-chain form of glucose makes up less than 0.02% of the glucose molecules in an aqueous solution at equilibrium. The rest is one of two cyclic hemiacetal forms. In its open-chain form, the glucose molecule has an open (as opposed to cyclic) unbranched backbone of six carbon atoms, where C-1 is part of an aldehyde group (−CHO). Therefore, glucose is also classified as an aldose, or an aldohexose. The aldehyde group makes glucose a reducing sugar, giving a positive reaction in the Fehling test. Cyclic forms In solutions, the open-chain form of glucose (either "D-" or "L-") exists in equilibrium with several cyclic isomers, each containing a ring of carbons closed by one oxygen atom. In aqueous solution, however, more than 99% of glucose molecules exist as pyranose forms. The open-chain form is limited to about 0.25%, and furanose forms exist in negligible amounts. The terms "glucose" and "D-glucose" are generally used for these cyclic forms as well. The ring arises from the open-chain form by an intramolecular nucleophilic addition reaction between the aldehyde group (at C-1) and either the C-4 or C-5 hydroxyl group, forming a hemiacetal linkage, −C(OH)H−O−. The reaction between C-1 and C-5 yields a six-membered heterocyclic system called a pyranose, which is a monosaccharide sugar (hence "-ose") containing a derivatised pyran skeleton. The (much rarer) reaction between C-1 and C-4 yields a five-membered furanose ring, named after the cyclic ether furan. 
In either case, each carbon in the ring has one hydrogen and one hydroxyl attached, except for the last carbon (C-4 or C-5) where the hydroxyl is replaced by the remainder of the open molecule (−CH(OH)−CH2OH or −CH2OH, respectively). The ring-closing reaction can give two products, denoted "α-" and "β-". When a glucopyranose molecule is drawn in the Haworth projection, the designation "α-" means that the hydroxyl group attached to C-1 and the −CH2OH group at C-5 lie on opposite sides of the ring's plane (a trans arrangement), while "β-" means that they are on the same side of the plane (a cis arrangement). Therefore, the open-chain isomer D-glucose gives rise to four distinct cyclic isomers: α-D-glucopyranose, β-D-glucopyranose, α-D-glucofuranose, and β-D-glucofuranose. These five structures exist in equilibrium and interconvert, and the interconversion is much more rapid with acid catalysis. The other open-chain isomer, L-glucose, similarly gives rise to four distinct cyclic forms of L-glucose, each the mirror image of the corresponding form of D-glucose. The glucopyranose ring (α or β) can assume several non-planar shapes, analogous to the "chair" and "boat" conformations of cyclohexane. Similarly, the glucofuranose ring may assume several shapes, analogous to the "envelope" conformations of cyclopentane. In the solid state, only the glucopyranose forms are observed. Some derivatives of glucofuranose, such as 1,2-O-isopropylidene-D-glucofuranose, are stable and can be obtained pure as crystalline solids. For example, reaction of α-D-glucose with para-tolylboronic acid reforms the normal pyranose ring to yield the 4-fold ester α-D-glucofuranose-1,2:3,5-bis(p-tolylboronate). Mutarotation Mutarotation consists of a temporary reversal of the ring-forming reaction, resulting in the open-chain form, followed by a reforming of the ring. The ring closure step may use a different group than the one recreated by the opening step (thus switching between pyranose and furanose forms), or the new hemiacetal group created on C-1 may have the same or opposite handedness as the original one (thus switching between the α and β forms). Thus, though the open-chain form is barely detectable in solution, it is an essential component of the equilibrium. The open-chain form is thermodynamically unstable, and it spontaneously isomerizes to the cyclic forms. (Although the ring closure reaction could in theory create four- or three-atom rings, these would be highly strained, and are not observed in practice.) In solutions at room temperature, the four cyclic isomers interconvert over a time scale of hours, in a process called mutarotation. Starting from any proportions, the mixture converges to a stable ratio of α:β 36:64. The ratio would be α:β 11:89 if it were not for the influence of the anomeric effect. Mutarotation is considerably slower at low temperatures. Optical activity Whether in water or the solid form, D-(+)-glucose is dextrorotatory, meaning it will rotate the direction of polarized light clockwise as seen looking toward the light source. The effect is due to the chirality of the molecules, and indeed the mirror-image isomer, L-(−)-glucose, is levorotatory (rotates polarized light counterclockwise) by the same amount. The strength of the effect is different for each of the five tautomers. The D- prefix does not refer directly to the optical properties of the compound. It indicates that the C-5 chiral centre has the same handedness as that of D-glyceraldehyde (which was so labelled because it is dextrorotatory). 
The fact that D-glucose is dextrorotatory is a combined effect of its four chiral centres, not just of C-5; some of the other D-aldohexoses are levorotatory. The conversion between the two anomers can be observed in a polarimeter since pure α-D-glucose has a specific rotation angle of +112.2° mL/(dm·g), and pure β-D-glucose of +17.5° mL/(dm·g). When equilibrium has been reached after a certain time due to mutarotation, the angle of rotation is +52.7° mL/(dm·g). Adding acid or base greatly accelerates this transformation. The equilibration takes place via the open-chain aldehyde form. Isomerisation In dilute sodium hydroxide or other dilute bases, the monosaccharides mannose, glucose and fructose interconvert (via a Lobry de Bruyn–Alberda–Van Ekenstein transformation), so that an equilibrium between these isomers is formed. This reaction proceeds via an enediol intermediate. Biochemical properties Glucose is the most abundant monosaccharide. Glucose is also the most widely used aldohexose in most living organisms. One possible explanation for this is that glucose has a lower tendency than other aldohexoses to react nonspecifically with the amine groups of proteins. This reaction—glycation—impairs or destroys the function of many proteins, e.g. in glycated hemoglobin. Glucose's low rate of glycation can be attributed to its having a more stable cyclic form compared to other aldohexoses, which means it spends less time than they do in its reactive open-chain form. The reason for glucose having the most stable cyclic form of all the aldohexoses is that its hydroxy groups (with the exception of the hydroxy group on the anomeric carbon of α-D-glucose) are in the equatorial position. Presumably, glucose is the most abundant natural monosaccharide because it is less glycated with proteins than other monosaccharides. Another hypothesis is that glucose, being the only D-aldohexose that has all five hydroxy substituents in the equatorial position in the form of β-D-glucose, is more readily accessible to chemical reactions, for example, for esterification or acetal formation. For this reason, D-glucose is also a highly preferred building block in natural polysaccharides (glycans). Polysaccharides that are composed solely of glucose are termed glucans. Glucose is produced by plants through photosynthesis using sunlight, water and carbon dioxide and can be used by all living organisms as an energy and carbon source. However, most glucose does not occur in its free form, but in the form of its polymers, i.e. lactose, sucrose, starch and others which are energy reserve substances, and cellulose and chitin, which are components of the cell wall in plants or fungi and arthropods, respectively. These polymers, when consumed by animals, fungi and bacteria, are degraded to glucose using enzymes. All animals are also able to produce glucose themselves from certain precursors as the need arises. Neurons, cells of the renal medulla and erythrocytes depend on glucose for their energy production. In adult humans, the body maintains only a small pool of free glucose, part of which is present in the blood. The liver of an adult produces a substantial quantity of glucose over the course of 24 hours. Many of the long-term complications of diabetes (e.g., blindness, kidney failure, and peripheral neuropathy) are probably due to the glycation of proteins or lipids. In contrast, enzyme-regulated addition of sugars to protein is called glycosylation and is essential for the function of many proteins. 
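As a quick arithmetic cross-check of the mutarotation figures quoted above, a weighted average of the two anomers' specific rotations at the stated 36:64 equilibrium ratio should land close to the observed equilibrium value. The short sketch below assumes the furanose and open-chain contributions are negligible.

```python
# Cross-check: weighted average of the anomer specific rotations at the 36:64
# equilibrium ratio, ignoring the trace furanose and open-chain forms.

alpha_rot = 112.2   # pure alpha-D-glucopyranose, deg mL/(dm*g)
beta_rot = 17.5     # pure beta-D-glucopyranose, deg mL/(dm*g)
frac_alpha, frac_beta = 0.36, 0.64

predicted = frac_alpha * alpha_rot + frac_beta * beta_rot
print(f"predicted equilibrium rotation = {predicted:.1f} deg")  # about 51.6
print("observed equilibrium rotation  = +52.7 deg")
```

The small gap between the two numbers is consistent with the rounding of the 36:64 ratio and the minor forms neglected here.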
Uptake Ingested glucose initially binds to the receptor for sweet taste on the tongue in humans. This complex of the proteins T1R2 and T1R3 makes it possible to identify glucose-containing food sources. Glucose mainly comes from food—about per day is produced by conversion of food, but it is also synthesized from other metabolites in the body's cells. In humans, the breakdown of glucose-containing polysaccharides happens in part already during chewing by means of amylase, which is contained in saliva, as well as by maltase, lactase, and sucrase on the brush border of the small intestine. Glucose is a building block of many carbohydrates and can be split off from them using certain enzymes. Glucosidases, a subgroup of the glycosidases, first catalyze the hydrolysis of long-chain glucose-containing polysaccharides, removing terminal glucose. In turn, disaccharides are mostly degraded by specific glycosidases to glucose. The names of the degrading enzymes are often derived from the particular poly- and disaccharide; inter alia, for the degradation of polysaccharide chains there are amylases (named after amylose, a component of starch), cellulases (named after cellulose), chitinases (named after chitin), and more. Furthermore, for the cleavage of disaccharides, there are maltase, lactase, sucrase, trehalase, and others. In humans, about 70 genes are known that code for glycosidases. They have functions in the digestion and degradation of glycogen, sphingolipids, mucopolysaccharides, and poly(ADP-ribose). Humans do not produce cellulases, chitinases, or trehalases, but the bacteria in the gut microbiota do. In order to get into or out of cell membranes of cells and membranes of cell compartments, glucose requires special transport proteins from the major facilitator superfamily. In the small intestine (more precisely, in the jejunum), glucose is taken up into the intestinal epithelium with the help of glucose transporters via a secondary active transport mechanism called sodium ion-glucose symport via sodium/glucose cotransporter 1 (SGLT1). Further transfer occurs on the basolateral side of the intestinal epithelial cells via the glucose transporter GLUT2, as well uptake into liver cells, kidney cells, cells of the islets of Langerhans, neurons, astrocytes, and tanycytes. Glucose enters the liver via the portal vein and is stored there as a cellular glycogen. In the liver cell, it is phosphorylated by glucokinase at position 6 to form glucose 6-phosphate, which cannot leave the cell. Glucose 6-phosphatase can convert glucose 6-phosphate back into glucose exclusively in the liver, so the body can maintain a sufficient blood glucose concentration. In other cells, uptake happens by passive transport through one of the 14 GLUT proteins. In the other cell types, phosphorylation occurs through a hexokinase, whereupon glucose can no longer diffuse out of the cell. The glucose transporter GLUT1 is produced by most cell types and is of particular importance for nerve cells and pancreatic β-cells. GLUT3 is highly expressed in nerve cells. Glucose from the bloodstream is taken up by GLUT4 from muscle cells (of the skeletal muscle and heart muscle) and fat cells. GLUT14 is expressed exclusively in testicles. Excess glucose is broken down and converted into fatty acids, which are stored as triglycerides. In the kidneys, glucose in the urine is absorbed via SGLT1 and SGLT2 in the apical cell membranes and transmitted via GLUT2 in the basolateral cell membranes. 
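The transporter roles described in this section condense naturally into a small lookup table. The dictionary below merely summarizes the statements above and is not an exhaustive or authoritative mapping.

```python
# Summary of the glucose transporter roles described above (not exhaustive).
GLUCOSE_TRANSPORTERS = {
    "SGLT1": "apical sodium-glucose symport in the small intestine (jejunum) and kidney",
    "SGLT2": "apical glucose reabsorption in the kidney",
    "GLUT1": "most cell types; especially important for nerve cells and pancreatic beta cells",
    "GLUT2": "basolateral transfer in intestinal epithelium and kidney; uptake into liver, islet cells, neurons, astrocytes, tanycytes",
    "GLUT3": "highly expressed in nerve cells",
    "GLUT4": "insulin-responsive uptake into skeletal and heart muscle and fat cells",
    "GLUT14": "expressed exclusively in testicles",
}

for name, role in GLUCOSE_TRANSPORTERS.items():
    print(f"{name}: {role}")
```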
About 90% of kidney glucose reabsorption is via SGLT2 and about 3% via SGLT1. Biosynthesis In plants and some prokaryotes, glucose is a product of photosynthesis. Glucose is also formed by the breakdown of polymeric forms of glucose like glycogen (in animals and mushrooms) or starch (in plants). The cleavage of glycogen is termed glycogenolysis, the cleavage of starch is called starch degradation. The metabolic pathway that begins with molecules containing two to four carbon atoms (C) and ends in the glucose molecule containing six carbon atoms is called gluconeogenesis and occurs in all living organisms. The smaller starting materials are the result of other metabolic pathways. Ultimately almost all biomolecules come from the assimilation of carbon dioxide in plants and microbes during photosynthesis. The free energy of formation of α--glucose is 917.2 kilojoules per mole. In humans, gluconeogenesis occurs in the liver and kidney, but also in other cell types. In the liver about of glycogen are stored, in skeletal muscle about . However, the glucose released in muscle cells upon cleavage of the glycogen can not be delivered to the circulation because glucose is phosphorylated by the hexokinase, and a glucose-6-phosphatase is not expressed to remove the phosphate group. Unlike for glucose, there is no transport protein for glucose-6-phosphate. Gluconeogenesis allows the organism to build up glucose from other metabolites, including lactate or certain amino acids, while consuming energy. The renal tubular cells can also produce glucose. Glucose also can be found outside of living organisms in the ambient environment. Glucose concentrations in the atmosphere are detected via collection of samples by aircraft and are known to vary from location to location. For example, glucose concentrations in atmospheric air from inland China range from 0.8 to 20.1 pg/L, whereas east coastal China glucose concentrations range from 10.3 to 142 pg/L. Glucose degradation In humans, glucose is metabolized by glycolysis and the pentose phosphate pathway. Glycolysis is used by all living organisms, with small variations, and all organisms generate energy from the breakdown of monosaccharides. In the further course of the metabolism, it can be completely degraded via oxidative decarboxylation, the citric acid cycle (synonym Krebs cycle) and the respiratory chain to water and carbon dioxide. If there is not enough oxygen available for this, the glucose degradation in animals occurs anaerobic to lactate via lactic acid fermentation and releases much less energy. Muscular lactate enters the liver through the bloodstream in mammals, where gluconeogenesis occurs (Cori cycle). With a high supply of glucose, the metabolite acetyl-CoA from the Krebs cycle can also be used for fatty acid synthesis. Glucose is also used to replenish the body's glycogen stores, which are mainly found in liver and skeletal muscle. These processes are hormonally regulated. In other living organisms, other forms of fermentation can occur. The bacterium Escherichia coli can grow on nutrient media containing glucose as the sole carbon source. In some bacteria and, in modified form, also in archaea, glucose is degraded via the Entner-Doudoroff pathway. With Glucose, a mechanism for gene regulation was discovered in E. coli, the catabolite repression (formerly known as glucose effect). Use of glucose as an energy source in cells is by either aerobic respiration, anaerobic respiration, or fermentation. 
The first step of glycolysis is the phosphorylation of glucose by a hexokinase to form glucose 6-phosphate. The main reason for the immediate phosphorylation of glucose is to prevent its diffusion out of the cell as the charged phosphate group prevents glucose 6-phosphate from easily crossing the cell membrane. Furthermore, addition of the high-energy phosphate group activates glucose for subsequent breakdown in later steps of glycolysis. In anaerobic respiration, one glucose molecule produces a net gain of two ATP molecules (four ATP molecules are produced during glycolysis through substrate-level phosphorylation, but two are required by enzymes used during the process). In aerobic respiration, a molecule of glucose is much more profitable in that a maximum net production of 30 or 32 ATP molecules (depending on the organism) is generated. Tumor cells often grow comparatively quickly and consume an above-average amount of glucose by glycolysis, which leads to the formation of lactate, the end product of fermentation in mammals, even in the presence of oxygen. This is called the Warburg effect. For the increased uptake of glucose in tumors various SGLT and GLUT are overly produced. In yeast, ethanol is fermented at high glucose concentrations, even in the presence of oxygen (which normally leads to respiration rather than fermentation). This is called the Crabtree effect. Glucose can also degrade to form carbon dioxide through abiotic means. This has been demonstrated to occur experimentally via oxidation and hydrolysis at 22 °C and a pH of 2.5. Energy source Glucose is a ubiquitous fuel in biology. It is used as an energy source in organisms, from bacteria to humans, through either aerobic respiration, anaerobic respiration (in bacteria), or fermentation. Glucose is the human body's key source of energy, through aerobic respiration, providing about 3.75 kilocalories (16 kilojoules) of food energy per gram. Breakdown of carbohydrates (e.g., starch) yields mono- and disaccharides, most of which is glucose. Through glycolysis and later in the reactions of the citric acid cycle and oxidative phosphorylation, glucose is oxidized to eventually form carbon dioxide and water, yielding energy mostly in the form of adenosine triphosphate (ATP). The insulin reaction, and other mechanisms, regulate the concentration of glucose in the blood. The physiological caloric value of glucose, depending on the source, is 16.2 kilojoules per gram or 15.7 kJ/g (3.74 kcal/g). The high availability of carbohydrates from plant biomass has led to a variety of methods during evolution, especially in microorganisms, to utilize glucose for energy and carbon storage. Differences exist in which end product can no longer be used for energy production. The presence of individual genes, and their gene products, the enzymes, determine which reactions are possible. The metabolic pathway of glycolysis is used by almost all living beings. An essential difference in the use of glycolysis is the recovery of NADPH as a reductant for anabolism that would otherwise have to be generated indirectly. Glucose and oxygen supply almost all the energy for the brain, so its availability influences psychological processes. When glucose is low, psychological processes requiring mental effort (e.g., self-control, effortful decision-making) are impaired. 
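The ATP bookkeeping and caloric figures quoted above reduce to simple arithmetic; the sketch below restates those numbers, assuming the standard conversion of 4.184 kJ per kcal.

```python
# Arithmetic behind the figures above: net ATP yield and kcal <-> kJ conversion.

atp_made, atp_invested = 4, 2          # substrate-level phosphorylation vs. ATP invested
print("net ATP per glucose (anaerobic):", atp_made - atp_invested)   # 2
print("net ATP per glucose (aerobic, maximum): 30-32")               # organism-dependent

kcal_per_g = 3.75                      # food energy of glucose per gram
print(f"{kcal_per_g} kcal/g = {kcal_per_g * 4.184:.1f} kJ/g")        # about 15.7 kJ/g
```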
In the brain, which is dependent on glucose and oxygen as the major source of energy, the glucose concentration is usually 4 to 6 mM (5 mM equals 90 mg/dL), but decreases to 2 to 3 mM when fasting. Confusion occurs below 1 mM and coma at lower levels. The glucose in the blood is called blood sugar. Blood sugar levels are regulated by glucose-binding nerve cells in the hypothalamus. In addition, glucose in the brain binds to glucose receptors of the reward system in the nucleus accumbens. The binding of glucose to the sweet receptor on the tongue induces a release of various hormones of energy metabolism, either through glucose or through other sugars, leading to an increased cellular uptake and lower blood sugar levels. Artificial sweeteners do not lower blood sugar levels. The blood sugar content of a healthy person in the short-time fasting state, e.g. after overnight fasting, is about 70 to 100 mg/dL of blood (4 to 5.5 mM). In blood plasma, the measured values are about 10–15% higher. In addition, the values in the arterial blood are higher than the concentrations in the venous blood since glucose is absorbed into the tissue during the passage of the capillary bed. Also in the capillary blood, which is often used for blood sugar determination, the values are sometimes higher than in the venous blood. The glucose content of the blood is regulated by the hormones insulin, incretin and glucagon. Insulin lowers the glucose level, glucagon increases it. Furthermore, the hormones adrenaline, thyroxine, glucocorticoids, somatotropin and adrenocorticotropin lead to an increase in the glucose level. There is also a hormone-independent regulation, which is referred to as glucose autoregulation. After food intake the blood sugar concentration increases. Values over 180 mg/dL in venous whole blood are pathological and are termed hyperglycemia, values below 40 mg/dL are termed hypoglycaemia. When needed, glucose is released into the bloodstream by glucose-6-phosphatase from glucose-6-phosphate originating from liver and kidney glycogen, thereby regulating the homeostasis of blood glucose concentration. In ruminants, the blood glucose concentration is lower (60 mg/dL in cattle and 40 mg/dL in sheep), because the carbohydrates are converted more by their gut microbiota into short-chain fatty acids. Some glucose is converted to lactic acid by astrocytes, which is then utilized as an energy source by brain cells; some glucose is used by intestinal cells and red blood cells, while the rest reaches the liver, adipose tissue and muscle cells, where it is absorbed and stored as glycogen (under the influence of insulin). Liver cell glycogen can be converted to glucose and returned to the blood when insulin is low or absent; muscle cell glycogen is not returned to the blood because of a lack of enzymes. In fat cells, glucose is used to power reactions that synthesize some fat types and have other purposes. Glycogen is the body's "glucose energy storage" mechanism, because it is much more "space efficient" and less reactive than glucose itself. As a result of its importance in human health, glucose is an analyte in glucose tests that are common medical blood tests. Eating or fasting prior to taking a blood sample has an effect on analyses for glucose in the blood; a high fasting glucose blood sugar level may be a sign of prediabetes or diabetes mellitus. 
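Blood glucose is reported both in mg/dL and in mM in the passage above; the two scales are related through the molar mass of glucose (180.16 g/mol), as the short conversion sketch below shows.

```python
# Convert blood glucose between mg/dL and mmol/L using M(glucose) = 180.16 g/mol.
MOLAR_MASS = 180.16  # g/mol

def mgdl_to_mmol(mg_dl: float) -> float:
    # mg/dL -> g/L (x 0.01) -> mol/L (/ molar mass) -> mmol/L (x 1000)
    return mg_dl * 0.01 / MOLAR_MASS * 1000

def mmol_to_mgdl(mmol_l: float) -> float:
    return mmol_l / 1000 * MOLAR_MASS / 0.01

print(f"90 mg/dL = {mgdl_to_mmol(90):.1f} mM")                              # about 5.0 mM
print(f"70-100 mg/dL = {mgdl_to_mmol(70):.1f}-{mgdl_to_mmol(100):.1f} mM")  # about 3.9-5.6 mM
```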
The glycemic index is an indicator of the speed of resorption and conversion to blood glucose levels from ingested carbohydrates, measured as the area under the curve of blood glucose levels after consumption in comparison to glucose (glucose is defined as 100). The clinical importance of the glycemic index is controversial, as foods with high fat contents slow the resorption of carbohydrates and lower the glycemic index, e.g. ice cream. An alternative indicator is the insulin index, measured as the impact of carbohydrate consumption on the blood insulin levels. The glycemic load is an indicator for the amount of glucose added to blood glucose levels after consumption, based on the glycemic index and the amount of consumed food. Precursor Organisms use glucose as a precursor for the synthesis of several important substances. Starch, cellulose, and glycogen ("animal starch") are common glucose polymers (polysaccharides). Some of these polymers (starch or glycogen) serve as energy stores, while others (cellulose and chitin, which is made from a derivative of glucose) have structural roles. Oligosaccharides of glucose combined with other sugars serve as important energy stores. These include lactose, the predominant sugar in milk, which is a glucose-galactose disaccharide, and sucrose, another disaccharide which is composed of glucose and fructose. Glucose is also added onto certain proteins and lipids in a process called glycosylation. This is often critical for their functioning. The enzymes that join glucose to other molecules usually use phosphorylated glucose to power the formation of the new bond by coupling it with the breaking of the glucose-phosphate bond. Other than its direct use as a monomer, glucose can be broken down to synthesize a wide variety of other biomolecules. This is important, as glucose serves both as a primary store of energy and as a source of organic carbon. Glucose can be broken down and converted into lipids. It is also a precursor for the synthesis of other important molecules such as vitamin C (ascorbic acid). In living organisms, glucose is converted to several other chemical compounds that are the starting material for various metabolic pathways. Among them, all other monosaccharides such as fructose (via the polyol pathway), mannose (the epimer of glucose at position 2), galactose (the epimer at position 4), fucose, various uronic acids and the amino sugars are produced from glucose. In addition to the phosphorylation to glucose-6-phosphate, which is part of the glycolysis, glucose can be oxidized during its degradation to glucono-1,5-lactone. Glucose is used in some bacteria as a building block in the trehalose or the dextran biosynthesis and in animals as a building block of glycogen. Glucose can also be converted from bacterial xylose isomerase to fructose. In addition, glucose metabolites produce all nonessential amino acids, sugar alcohols such as mannitol and sorbitol, fatty acids, cholesterol and nucleic acids. Finally, glucose is used as a building block in the glycosylation of proteins to glycoproteins, glycolipids, peptidoglycans, glycosides and other substances (catalyzed by glycosyltransferases) and can be cleaved from them by glycosidases. Pathology Diabetes Diabetes is a metabolic disorder where the body is unable to regulate levels of glucose in the blood either because of a lack of insulin in the body or the failure, by cells in the body, to respond properly to insulin. 
Each of these situations can be caused by persistently elevated blood glucose levels, through pancreatic burnout and insulin resistance. The pancreas is the organ responsible for the secretion of the hormones insulin and glucagon. Insulin is a hormone that regulates glucose levels, allowing the body's cells to absorb and use glucose. Without it, glucose cannot enter the cell and therefore cannot be used as fuel for the body's functions. If the pancreas is exposed to persistently elevated blood glucose levels, the insulin-producing cells in the pancreas could be damaged, causing a lack of insulin in the body. Insulin resistance occurs when the pancreas tries to produce more and more insulin in response to persistently elevated blood glucose levels. Eventually, the rest of the body becomes resistant to the insulin that the pancreas is producing, thereby requiring more insulin to achieve the same blood glucose-lowering effect, and forcing the pancreas to produce even more insulin to compete with the resistance. This negative spiral contributes to pancreatic burnout, and the disease progression of diabetes. To monitor the body's response to blood glucose-lowering therapy, glucose levels can be measured. Blood glucose monitoring can be performed by multiple methods, such as the fasting glucose test, which measures the level of glucose in the blood after 8 hours of fasting. Another test is the 2-hour glucose tolerance test (GTT); for this test, the person has a fasting glucose test done, then drinks a 75-gram glucose drink and is retested. This test measures the ability of the person's body to process glucose. Over time, the blood glucose levels should decrease as insulin allows glucose to be taken up by cells and exit the bloodstream. Hypoglycemia management Individuals with diabetes or other conditions that result in low blood sugar often carry small amounts of sugar in various forms. One sugar commonly used is glucose, often in the form of glucose tablets (glucose pressed into a tablet shape, sometimes with one or more other ingredients as a binder), hard candy, or sugar packets. Sources Most dietary carbohydrates contain glucose, either as their only building block (as in the polysaccharides starch and glycogen), or together with another monosaccharide (as in the disaccharides sucrose and lactose). Unbound glucose is one of the main ingredients of honey. Glucose is extremely abundant and has been isolated from a variety of natural sources across the world, including male cones of the coniferous tree Wollemia nobilis in Rome, the roots of Ilex asprella plants in China, and rice straw in California. Commercial production Glucose is produced industrially from starch by enzymatic hydrolysis using glucoamylase or by the use of acids. Enzymatic hydrolysis has largely displaced acid-catalyzed hydrolysis reactions. The result is glucose syrup (enzymatically with more than 90% glucose in the dry matter) with an annual worldwide production volume of 20 million tonnes (as of 2011). This is the reason for the former common name "starch sugar". The amylases most often come from Bacillus licheniformis or Bacillus subtilis (strain MN-385), which are more thermostable than the originally used enzymes. Starting in 1982, pullulanases from Aspergillus niger were used in the production of glucose syrup to convert amylopectin to starch (amylose), thereby increasing the yield of glucose. The reaction is carried out at a pH of 4.6–5.2 and a temperature of 55–60 °C. 
Corn syrup has between 20% and 95% glucose in the dry matter. The Japanese form of the glucose syrup, Mizuame, is made from sweet potato or rice starch. Maltodextrin contains about 20% glucose. Many crops can be used as the source of starch. Maize, rice, wheat, cassava, potato, barley, sweet potato, corn husk and sago are all used in various parts of the world. In the United States, corn starch (from maize) is used almost exclusively. Some commercial glucose occurs as a component of invert sugar, a roughly 1:1 mixture of glucose and fructose that is produced from sucrose. In principle, cellulose could be hydrolyzed to glucose, but this process is not yet commercially practical. Conversion to fructose In the US, almost exclusively corn (more precisely, corn syrup) is used as the glucose source for the production of isoglucose, which is a mixture of glucose and fructose, since fructose has a higher sweetening power with the same physiological calorific value of 374 kilocalories per 100 g. The annual world production of isoglucose is 8 million tonnes (as of 2011). When made from corn syrup, the final product is high-fructose corn syrup (HFCS). Commercial usage Glucose is mainly used for the production of fructose and of glucose-containing foods. In foods, it is used as a sweetener and humectant, to increase volume, and to create a softer mouthfeel. Various sources of glucose, such as grape juice (for wine) or malt (for beer), are used for fermentation to ethanol during the production of alcoholic beverages. Most soft drinks in the US use HFCS-55 (with a fructose content of 55% in the dry mass), while most other HFCS-sweetened foods in the US use HFCS-42 (with a fructose content of 42% in the dry mass). In Mexico, on the other hand, soft drinks are sweetened by cane sugar, which has a higher sweetening power. In addition, glucose syrup is used, inter alia, in the production of confectionery such as candies, toffee and fondant. Typical chemical reactions of glucose when heated under water-free conditions are caramelization and, in the presence of amino acids, the Maillard reaction. In addition, various organic acids can be biotechnologically produced from glucose, for example by fermentation with Clostridium thermoaceticum to produce acetic acid, with Penicillium notatum for the production of araboascorbic acid, with Rhizopus delemar for the production of fumaric acid, with Aspergillus niger for the production of gluconic acid, with Candida brumptii to produce isocitric acid, with Aspergillus terreus for the production of itaconic acid, with Pseudomonas fluorescens for the production of 2-ketogluconic acid, with Gluconobacter suboxydans for the production of 5-ketogluconic acid, with Aspergillus oryzae for the production of kojic acid, with Lactobacillus delbrueckii for the production of lactic acid, with Lactobacillus brevis for the production of malic acid, with Propionibacter shermanii for the production of propionic acid, with Pseudomonas aeruginosa for the production of pyruvic acid and with Gluconobacter suboxydans for the production of tartaric acid. Potent bioactive natural products like triptolide, which inhibits mammalian transcription via inhibition of the XPB subunit of the general transcription factor TFIIH, have recently been reported as glucose conjugates for targeting hypoxic cancer cells with increased glucose transporter expression. 
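The long fermentation list above maps naturally onto a small lookup table; the dictionary below simply restates those organism-to-product pairs for quick reference.

```python
# Organism -> organic acid produced from glucose, restating the list above.
FERMENTATION_PRODUCTS = {
    "Clostridium thermoaceticum": "acetic acid",
    "Penicillium notatum": "araboascorbic acid",
    "Rhizopus delemar": "fumaric acid",
    "Aspergillus niger": "gluconic acid",
    "Candida brumptii": "isocitric acid",
    "Aspergillus terreus": "itaconic acid",
    "Pseudomonas fluorescens": "2-ketogluconic acid",
    "Gluconobacter suboxydans": "5-ketogluconic acid and tartaric acid",
    "Aspergillus oryzae": "kojic acid",
    "Lactobacillus delbrueckii": "lactic acid",
    "Lactobacillus brevis": "malic acid",
    "Propionibacter shermanii": "propionic acid",
    "Pseudomonas aeruginosa": "pyruvic acid",
}

for organism, acid in sorted(FERMENTATION_PRODUCTS.items()):
    print(f"{organism}: {acid}")
```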
Recently, glucose has been gaining commercial use as a key component of "kits" containing lactic acid and insulin intended to induce hypoglycemia and hyperlactatemia to combat different cancers and infections. Analysis When a glucose molecule is to be detected at a certain position in a larger molecule, nuclear magnetic resonance spectroscopy, X-ray crystallography analysis or lectin immunostaining is performed with concanavalin A reporter enzyme conjugate, which binds only glucose or mannose. Classical qualitative detection reactions These reactions have only historical significance: Fehling test The Fehling test is a classic method for the detection of aldoses. Due to mutarotation, glucose is always present to a small extent as an open-chain aldehyde. By adding the Fehling reagents (Fehling (I) solution and Fehling (II) solution), the aldehyde group is oxidized to a carboxylic acid, while the Cu2+ tartrate complex is reduced to Cu+ and forms a brick red precipitate (Cu2O). Tollens test In the Tollens test, after addition of ammoniacal AgNO3 to the sample solution, glucose reduces Ag+ to elemental silver. Barfoed test In Barfoed's test, a solution of dissolved copper acetate, sodium acetate and acetic acid is added to the solution of the sugar to be tested and subsequently heated in a water bath for a few minutes. Glucose and other monosaccharides rapidly produce a reddish color and reddish brown copper(I) oxide (Cu2O). Nylander's test As a reducing sugar, glucose reacts in the Nylander's test. Other tests Upon heating a dilute potassium hydroxide solution with glucose to 100 °C, a strong reddish browning and a caramel-like odor develops. Concentrated sulfuric acid dissolves dry glucose without blackening at room temperature forming sugar sulfuric acid. In a yeast solution, alcoholic fermentation produces carbon dioxide in the ratio of 2.0454 molecules of glucose to one molecule of CO2. Glucose forms a black mass with stannous chloride. In an ammoniacal silver solution, glucose (as well as lactose and dextrin) leads to the deposition of silver. In an ammoniacal lead acetate solution, white lead glycoside is formed in the presence of glucose, which becomes less soluble on cooking and turns brown. In an ammoniacal copper solution, yellow copper oxide hydrate is formed with glucose at room temperature, while red copper oxide is formed during boiling (same with dextrin, except for with an ammoniacal copper acetate solution). With Hager's reagent, glucose forms mercury oxide during boiling. An alkaline bismuth solution is used to precipitate elemental, black-brown bismuth with glucose. Glucose boiled in an ammonium molybdate solution turns the solution blue. A solution with indigo carmine and sodium carbonate destains when boiled with glucose. Instrumental quantification Refractometry and polarimetry In concentrated solutions of glucose with a low proportion of other carbohydrates, its concentration can be determined with a polarimeter. For sugar mixtures, the concentration can be determined with a refractometer, for example in the Oechsle determination in the course of the production of wine. Photometric enzymatic methods in solution The enzyme glucose oxidase (GOx) converts glucose into gluconic acid and hydrogen peroxide while consuming oxygen. Another enzyme, peroxidase, catalyzes a chromogenic reaction (Trinder reaction) of phenol with 4-aminoantipyrine to a purple dye. 
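In practice, the purple product of the Trinder reaction is quantified photometrically against a calibrator. The sketch below assumes a simple single-point calibration with a linear (Beer–Lambert) response; it is an idealization, not a protocol taken from this article.

```python
# Idealized single-point calibration for an endpoint glucose assay (Trinder-type),
# assuming absorbance is proportional to glucose concentration.

def glucose_concentration(a_sample: float, a_standard: float, c_standard: float) -> float:
    """Estimate glucose concentration from sample and standard absorbances."""
    if a_standard <= 0:
        raise ValueError("standard absorbance must be positive")
    return a_sample / a_standard * c_standard

# Hypothetical readings: a 100 mg/dL calibrator and a patient sample.
print(glucose_concentration(a_sample=0.45, a_standard=0.50, c_standard=100.0))  # 90.0 mg/dL
```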
Photometric test-strip method The test-strip method employs the above-mentioned enzymatic conversion of glucose to gluconic acid to form hydrogen peroxide. The reagents are immobilised on a polymer matrix, the so-called test strip, which assumes a more or less intense color. This can be measured reflectometrically at 510 nm with the aid of an LED-based handheld photometer. This allows routine blood sugar determination by nonscientists. In addition to the reaction of phenol with 4-aminoantipyrine, new chromogenic reactions have been developed that allow photometry at higher wavelengths (550 nm, 750 nm). Amperometric glucose sensor The electroanalysis of glucose is also based on the enzymatic reaction mentioned above. The produced hydrogen peroxide can be amperometrically quantified by anodic oxidation at a potential of 600 mV. The GOx is immobilized on the electrode surface or in a membrane placed close to the electrode. Precious metals such as platinum or gold are used in electrodes, as well as carbon nanotube electrodes, which e.g. are doped with boron. Cu–CuO nanowires are also used as enzyme-free amperometric electrodes, reaching a detection limit of 50 μmol/L. A particularly promising method is the so-called "enzyme wiring", where the electron flowing during the oxidation is transferred via a molecular wire directly from the enzyme to the electrode. Other sensory methods There are a variety of other chemical sensors for measuring glucose. Given the importance of glucose analysis in the life sciences, numerous optical probes have also been developed for saccharides based on the use of boronic acids, which are particularly useful for intracellular sensory applications where other (optical) methods are not or only conditionally usable. In addition to the organic boronic acid derivatives, which often bind highly specifically to the 1,2-diol groups of sugars, there are also other probe concepts classified by functional mechanisms which use selective glucose-binding proteins (e.g. concanavalin A) as a receptor. Furthermore, methods were developed which indirectly detect the glucose concentration via the concentration of metabolized products, e.g. by the consumption of oxygen using fluorescence-optical sensors. Finally, there are enzyme-based concepts that use the intrinsic absorbance or fluorescence of (fluorescence-labeled) enzymes as reporters. Copper iodometry Glucose can be quantified by copper iodometry. Chromatographic methods In particular, for the analysis of complex mixtures containing glucose, e.g. in honey, chromatographic methods such as high performance liquid chromatography and gas chromatography are often used in combination with mass spectrometry. Taking into account the isotope ratios, it is also possible to reliably detect honey adulteration by added sugars with these methods. Derivatization using silylation reagents is commonly used. Also, the proportions of di- and trisaccharides can be quantified. In vivo analysis Glucose uptake in cells of organisms is measured with 2-deoxy-D-glucose or fluorodeoxyglucose. (18F)fluorodeoxyglucose is used as a tracer in positron emission tomography in oncology and neurology, where it is by far the most commonly used diagnostic agent. References External links Chemical pathology Furanoses Glycolysis Nutrition Pyranoses World Health Organization essential medicines
Glucose
[ "Chemistry", "Biology" ]
11,261
[ "Biochemistry", "Carbohydrate metabolism", "Glycolysis", "Chemical pathology" ]
12,955
https://en.wikipedia.org/wiki/George%20P%C3%B3lya
George Pólya (; , ; December 13, 1887 – September 7, 1985) was a Hungarian-American mathematician. He was a professor of mathematics from 1914 to 1940 at ETH Zürich and from 1940 to 1953 at Stanford University. He made fundamental contributions to combinatorics, number theory, numerical analysis and probability theory. He is also noted for his work in heuristics and mathematics education. He has been described as one of The Martians, an informal category which included one of his most famous students at ETH Zurich, John von Neumann. Life and works Pólya was born in Budapest, Austria-Hungary, to Anna Deutsch and Jakab Pólya, Hungarian Jews who had converted to Christianity in 1886. Although his parents were religious and he was baptized into the Catholic Church upon birth, George eventually grew up to be an agnostic. He received a PhD under Lipót Fejér in 1912, at Eötvös Loránd University. He was a professor of mathematics from 1914 to 1940 at ETH Zürich in Switzerland and from 1940 to 1953 at Stanford University. He remained a professor emeritus at Stanford for the rest of his career, working on a range of mathematical topics, including series, number theory, mathematical analysis, geometry, algebra, combinatorics, and probability. He was invited to speak at the ICM at Bologna in 1928, at Oslo in 1936 and at Cambridge, Massachusetts, in 1950. On September 7, 1985, Pólya died in Palo Alto, California, United States due to complications of a stroke he suffered during that summer. Heuristics Early in his career, Pólya wrote with Gábor Szegő two influential problem books, Problems and Theorems in Analysis (I: Series, Integral Calculus, Theory of Functions and II: Theory of Functions. Zeros. Polynomials. Determinants. Number Theory. Geometry). Later in his career, he spent considerable effort to identify systematic methods of problem-solving to further discovery and invention in mathematics for students, teachers, and researchers. He wrote five books on the subject: How to Solve It, Mathematics and Plausible Reasoning (Volume I: Induction and Analogy in Mathematics, and Volume II: Patterns of Plausible Inference), and Mathematical Discovery: On Understanding, Learning, and Teaching Problem Solving (volumes 1 and 2). In How to Solve It, Pólya provides general heuristics for solving a gamut of problems, including both mathematical and non-mathematical problems. The book includes advice for teaching students of mathematics and a mini-encyclopedia of heuristic terms. It was translated into several languages and has sold over a million copies. The book is still used in mathematical education. Douglas Lenat's Automated Mathematician and Eurisko artificial intelligence programs were inspired by Pólya's work. In addition to his works directly addressing problem solving, Pólya wrote another short book called Mathematical Methods in Science, based on a 1963 work supported by the National Science Foundation edited by Leon Bowden and published by the Mathematical Association of America (MAA) in 1977. As Pólya notes in the preface, Bowden carefully followed a tape recording of a course Pólya gave several times at Stanford in order to put the book together. Pólya notes in the preface "that the following pages will be useful, yet they should not be regarded as a finished expression." Legacy There are three prizes named after Pólya, causing occasional confusion of one for another. 
In 1969 the Society for Industrial and Applied Mathematics (SIAM) established the George Pólya Prize, given alternately in two categories for "a notable application of combinatorial theory" and for "a notable contribution in another area of interest to George Pólya." In 1976 the Mathematical Association of America (MAA) established the George Pólya Award "for articles of expository excellence" published in the College Mathematics Journal. In 1987 the London Mathematical Society (LMS) established the Pólya Prize for "outstanding creativity in, imaginative exposition of, or distinguished contribution to, mathematics within the United Kingdom." In 1991, the MAA established the George Pólya Lectureship series. Stanford University has a Polya Hall named in his honor. Selected publications Books Aufgaben und Lehrsätze aus der Analysis, 1st edn. 1925. ("Problems and theorems in analysis"). Springer, Berlin 1975 (with Gábor Szegő). Reihen. 1975, 4th edn., . Funktionentheorie, Nullstellen, Polynome, Determinanten, Zahlentheorie. 1975, 4th edn., . Mathematik und plausibles Schliessen. Birkhäuser, Basel 1988, Induktion und Analogie in der Mathematik, 3rd edn., (Wissenschaft und Kultur; 14). Typen und Strukturen plausibler Folgerung, 2nd edn., (Wissenschaft und Kultur; 15). – English translation: Mathematics and Plausible Reasoning, Princeton University Press 1954, 2 volumes (Vol. 1: Induction and Analogy in Mathematics, Vol. 2: Patterns of Plausible Inference) Schule des Denkens. Vom Lösen mathematischer Probleme ("How to solve it"). 4th edn. Francke Verlag, Tübingen 1995, (Sammlung Dalp). – English translation: How to Solve It, Princeton University Press 2004 (with foreword by John Horton Conway and added exercises) Vom Lösen mathematischer Aufgaben. 2nd edn. Birkhäuser, Basel 1983, (Wissenschaft und Kultur; 21). – English translation: Mathematical Discovery: On Understanding, Learning and Teaching Problem Solving, 2 volumes, Wiley 1962 (published in one vol. 1981) Collected Papers, 4 volumes, MIT Press 1974 (ed. Ralph P. Boas). Vol. 1: Singularities of Analytic Functions, Vol. 2: Location of Zeros, Vol. 3: Analysis, Vol. 4: Probability, Combinatorics with R. C. Read: Combinatorial enumeration of groups, graphs, and chemical compounds, Springer Verlag 1987 (English translation of Kombinatorische Anzahlbestimmungen für Gruppen, Graphen und chemische Verbindungen, Acta Mathematica, vol. 68, 1937, pp. 145–254) with Godfrey Harold Hardy and John Edensor Littlewood: Inequalities, Cambridge University Press 1934 Mathematical Methods in Science, MAA, Washington D. C. 1977 (ed. Leon Bowden) with Gordon Latta: Complex Variables, Wiley 1974 with Robert E. Tarjan, Donald R. Woods: Notes on introductory combinatorics, Birkhäuser 1983 with Jeremy Kilpatrick: The Stanford mathematics problem book: with hints and solutions, New York: Teachers College Press 1974 with several co-authors: Applied combinatorial mathematics, Wiley 1964 (ed. Edwin F. Beckenbach) with Gábor Szegő: Isoperimetric inequalities in mathematical physics, Princeton, Annals of Mathematical Studies 27, 1951 Articles with Ralph P. 
See also Integer-valued polynomial Laguerre–Pólya class Landau–Kolmogorov inequality Multivariate Pólya distribution Pólya's characterization theorem Pólya class Pólya conjecture Pólya distribution Pólya enumeration theorem Pólya–Vinogradov inequality Pólya inequality Pólya urn model Pólya's theorem Pólya's proof that there is no "horse of a different color" Wallpaper group The Martians (scientists) References External links The George Pólya Award George Pólya, Gábor Szegö, Problems and theorems in analysis (1998) George Pólya on UIUC's WikEd Memorial Resolution 1887 births 1985 deaths 20th-century Hungarian mathematicians Mathematics popularizers American agnostics American people of Hungarian-Jewish descent Hungarian Jews American statisticians Hungarian emigrants to Switzerland Combinatorialists Academic staff of ETH Zurich Hungarian agnostics Hungarian statisticians Complex analysts Mathematical analysts Members of the United States National Academy of Sciences Mathematicians from Budapest Swiss emigrants to the United States Stanford University Department of Mathematics faculty
George Pólya
[ "Mathematics" ]
1,698
[ "Mathematical analysis", "Combinatorialists", "Mathematical analysts", "Combinatorics" ]
12,962
https://en.wikipedia.org/wiki/%CE%93-Hydroxybutyric%20acid
γ-Hydroxybutyric acid, also known as gamma-hydroxybutyric acid, GHB, or 4-hydroxybutanoic acid, is a naturally occurring neurotransmitter and a depressant drug. It is a precursor to GABA, glutamate, and glycine in certain brain areas. It acts on the GHB receptor and is a weak agonist at the GABAB receptor. GHB has been used in the medical setting as a general anesthetic and as treatment for cataplexy, narcolepsy, and alcoholism. The substance is also used illicitly for various reasons, including as a performance-enhancing drug, date rape drug, and as a recreational drug. It is commonly used in the form of a salt, such as sodium γ-hydroxybutyrate (NaGHB, sodium oxybate, or Xyrem) or potassium γ-hydroxybutyrate (KGHB, potassium oxybate). GHB is also produced as a result of fermentation, and is found in small quantities in some beers and wines, beef, and small citrus fruits. Succinic semialdehyde dehydrogenase deficiency is a disease that causes GHB to accumulate in the blood. Medical use GHB is used for medical purposes in the treatment of narcolepsy and, more rarely, alcohol dependence, although there remains uncertainty about its efficacy relative to other pharmacotherapies for alcohol dependence. The authors of a 2010 Cochrane review concluded that "GHB appears better than NTX and disulfiram in maintaining abstinence and preventing craving in the medium term (3 to 12 months)". It is sometimes used off-label for the treatment of fibromyalgia. GHB is the active ingredient of the prescription medication sodium oxybate (Xyrem). Sodium oxybate is approved by the U.S. Food and Drug Administration for the treatment of cataplexy associated with narcolepsy and excessive daytime sleepiness (EDS) associated with narcolepsy. GHB has been shown to reliably increase slow-wave sleep and decrease the tendency for REM sleep in modified multiple sleep latency tests. The FDA-approved labeling for sodium oxybate suggests no evidence GHB has teratogenic, carcinogenic or hepatotoxic properties. Its favorable safety profile relative to ethanol may explain why GHB continues to be investigated as a candidate for alcohol substitution. Recreational use GHB is a central nervous system depressant used as an intoxicant. It has many street names. Its effects have been described as comparable with ethanol (alcohol) and MDMA use, such as euphoria, disinhibition, enhanced libido and empathogenic states. A review comparing ethanol to GHB concluded that the dangers of the two drugs were similar. At higher doses, GHB may induce nausea, dizziness, drowsiness, agitation, visual disturbances, depressed breathing, amnesia, unconsciousness, and death. One potential cause of death from GHB consumption is polydrug toxicity. Co-administration with other CNS depressants such as alcohol or benzodiazepines can result in an additive effect (potentiation), as they all bind to gamma-aminobutyric acid (or "GABA") receptor sites. The effects of GHB can last from 1.5 to 4 hours, or longer if large doses have been consumed. Consuming GHB with alcohol can cause respiratory arrest and vomiting in combination with unarousable sleep, which can lead to death. Recreational doses of 1–2 g generally provide a feeling of euphoria, and larger doses create deleterious effects such as reduced motor function and drowsiness. The sodium salt of GHB has a salty taste. Other salt forms such as calcium GHB and magnesium GHB have also been reported, but the sodium salt is by far the most common. 
Some prodrugs, such as γ-butyrolactone (GBL), convert to GHB in the stomach and bloodstream. Other prodrugs exist, such as 1,4-butanediol (1,4-B). GBL and 1,4-B are normally found as pure liquids, but they can be mixed with other more harmful solvents when intended for industrial use (e.g. as paint stripper or varnish thinner). GHB can be manufactured with little knowledge of chemistry, as it involves the mixing of its two precursors, GBL and an alkali hydroxide such as sodium hydroxide, to form the GHB salt. Due to the ease of manufacture and the availability of its precursors, it is not usually produced in illicit laboratories like other synthetic drugs, but in private homes by low-level producers. GHB is colourless and odourless. Party use GHB has been used as a club drug, apparently starting in the 1990s, as small doses of GHB can act as a euphoriant and are believed to act as an aphrodisiac. Slang terms for GHB include liquid ecstasy, lollipops, liquid X or liquid E, due to its tendency to produce euphoria and sociability and its use in the dance party scene. Sports and athletics Some athletes have used GHB or its analogs because they have been marketed as anabolic agents, although there is no evidence that GHB builds muscle or improves performance. Usage as a date-rape drug GHB became known to the general public as a date-rape drug by the late 1990s. GHB is colourless and odourless and has been described as "very easy to add to drinks". When consumed, the victim will quickly feel groggy and sleepy and may become unconscious. Upon recovery, they may have an impaired ability to recall events that occurred during the period of intoxication. In these situations, gathering evidence and identifying the perpetrator of the rape is often difficult. It is also difficult to establish how often GHB is used to facilitate rape, as it is difficult to detect in a urine sample after a day, and many victims may only recall the rape some time after its occurrence; however, a 2006 study suggested that there was "no evidence to suggest widespread date rape drug use" in the UK, and that less than 2% of cases involved GHB, while 17% involved cocaine, and a survey in the Netherlands published in 2010 found that the proportion of drug-related rapes where GHB was used appeared to be greatly overestimated by the media. More recently, a study in Western Australia reviewed the pre-hospital context recorded in medical records around emergency department presentations with analytical confirmation of GHB exposure. This study found that most cases reported daily dosing and subsequent accidental overdose rather than their presentation being associated with date rape. There have been several high-profile cases of GHB as a date-rape drug that received national attention in the United States. In early 1999, a 15-year-old girl, Samantha Reid of Rockwood, Michigan, died from GHB poisoning. Reid's death inspired the legislation titled the "Hillory J. Farias and Samantha Reid Date-Rape Drug Prohibition Act of 2000". This is the law that made GHB a Schedule I controlled substance. In the United Kingdom, the serial killer Stephen Port added GHB to drinks he gave his victims, raped them, and murdered four of them in his flat in Barking, East London. GHB can be detected in hair. Hair testing can be a useful tool in court cases or for the victim's own information. Most over-the-counter urine test kits test only for date-rape drugs that are benzodiazepines, which GHB is not.
To detect GHB in urine, the sample must be taken within four hours of GHB ingestion, and cannot be tested at home. Adverse effects Combination with alcohol In humans, GHB has been shown to reduce the elimination rate (thus increasing the elimination time) of alcohol. This may explain the respiratory arrest that has been reported after ingestion of both drugs. A review of the details of 194 deaths attributed to or related to GHB over a ten-year period found that most were from respiratory depression caused by interaction with alcohol or other drugs. Deaths One publication has investigated 226 deaths attributed to GHB. Of the 226 deaths included, 213 had a cardiorespiratory arrest and 13 had fatal accidents. Seventy-one of these deaths (34%) had no co-intoxicants. Postmortem blood GHB was 18–4400 mg/L (median=347) in deaths negative for co-intoxicants. One report has suggested that sodium oxybate overdose might be fatal, based on deaths of three patients who had been prescribed the drug. However, for two of the three cases, post-mortem GHB concentrations were 141 and 110 mg/L, which is within the expected range of concentrations for GHB after death, and the third case was a patient with a history of intentional drug overdose. The toxicity of GHB has been an issue in criminal trials, as in the death of Felicia Tang, where the defense argued that death was due to GHB, not murder. GHB is produced in the body in very small amounts, and blood levels may climb after death to levels in the range of 30–50 mg/L. Levels higher than this are found in GHB deaths. Levels lower than this may be due to GHB or to postmortem endogenous elevations. Neurotoxicity In multiple studies, GHB has been found to impair spatial memory, working memory, learning and memory in rats with chronic administration. These effects are associated with decreased NMDA receptor expression in the cerebral cortex and possibly other areas as well. In addition, the neurotoxicity appears to be caused by oxidative stress. Addiction Addiction occurs when repeated drug use disrupts the normal balance of brain circuits that control rewards, memory and cognition, ultimately leading to compulsive drug taking. Rats forced to consume massive doses of GHB will intermittently prefer GHB solution to water. Withdrawal GHB has also been associated with a withdrawal syndrome of insomnia, anxiety, and tremor that usually resolves within three to twenty-one days. The withdrawal syndrome can be severe, producing acute delirium, and may require hospitalization in an intensive care unit for management. Management of GHB dependence involves considering the person's age, comorbidity and the pharmacological pathways of GHB. The mainstay of treatment for severe withdrawal is supportive care and benzodiazepines for control of acute delirium, but larger doses are often required compared to acute delirium of other causes (e.g. > 100 mg/d of diazepam). Baclofen has been suggested as an alternative or adjunct to benzodiazepines based on anecdotal evidence and some animal data. However, there is less experience with the use of baclofen for GHB withdrawal, and additional research in humans is needed. Baclofen was first suggested as an adjunct because benzodiazepines do not affect GABAB receptors and therefore have no cross-tolerance with GHB, while baclofen, which works via GABAB receptors, is cross-tolerant with GHB and may be more effective in alleviating withdrawal effects of GHB.
GHB withdrawal is not widely discussed in textbooks, and some psychiatrists, general practitioners, and even hospital emergency physicians may not be familiar with this withdrawal syndrome. Overdose Overdose of GHB can sometimes be difficult to treat because of its multiple effects on the body. GHB tends to cause rapid unconsciousness at doses above 3500 mg, with single doses over 7000 mg often causing life-threatening respiratory depression, and higher doses still inducing bradycardia and cardiac arrest. Other side-effects include convulsions (especially when combined with stimulants), and nausea/vomiting (especially when combined with alcohol). The greatest life threat due to GHB overdose (with or without other substances) is respiratory arrest. Other relatively common causes of death due to GHB ingestion include aspiration of vomitus, positional asphyxia, and trauma sustained while intoxicated (e.g., motor vehicle accidents while driving under the influence of GHB). The risk of aspiration pneumonia and positional asphyxia can be reduced by laying the patient down in the recovery position. People are most likely to vomit as they become unconscious, and as they wake up. It is important to keep the victim awake and moving; the victim must not be left alone due to the risk of death through vomiting. Frequently, the victim will be in a good mood, but this does not mean the victim is not in danger. GHB overdose is a medical emergency and immediate assessment in an emergency department is needed. Convulsions from GHB can be treated with the benzodiazepines diazepam or lorazepam. Even though these benzodiazepines are also CNS depressants, they primarily modulate GABAA receptors whereas GHB is primarily a GABAB receptor agonist, and so do not worsen CNS depression as much as might be expected. Because of the faster and more complete absorption of GBL relative to GHB, its dose-response curve is steeper, and overdoses of GBL tend to be more dangerous and problematic than overdoses involving only GHB or 1,4-B. Any GHB/GBL overdose is a medical emergency and should be cared for by appropriately trained personnel. A newer synthetic drug, SCH-50911, which acts as a selective GABAB antagonist, quickly reverses GHB overdose in mice. However, this treatment has yet to be tried in humans, and it is unlikely that it will be researched for this purpose in humans due to the illegal nature of clinical trials of GHB and the lack of medical indemnity coverage inherent in using an untested treatment for a life-threatening overdose. Detection of use GHB may be quantitated in blood or plasma to confirm a diagnosis of poisoning in hospitalized patients, to provide evidence in an impaired driving arrest, or to assist in a medicolegal death investigation. Blood or plasma GHB concentrations are usually in a range of 50–250 mg/L in persons receiving the drug therapeutically (during general anesthesia), 30–100 mg/L in those arrested for impaired driving, 50–500 mg/L in acutely intoxicated patients and 100–1000 mg/L in victims of fatal overdosage. Urine is often the preferred specimen for routine drug abuse monitoring purposes. Both γ-butyrolactone (GBL) and 1,4-butanediol are converted to GHB in the body. In January 2016, it was announced that scientists had developed a way to detect GHB, among other things, in saliva. Endogenous production Cells produce GHB by reduction of succinic semialdehyde via succinic semialdehyde reductase (SSR).
This enzyme appears to be induced by cAMP levels, meaning substances that elevate cAMP, such as forskolin and vinpocetine, may increase GHB synthesis and release. Conversely, endogenous GHB production is inhibited in those taking valproic acid, which blocks the conversion of succinic semialdehyde to GHB. People with the disorder known as succinic semialdehyde dehydrogenase deficiency, also known as γ-hydroxybutyric aciduria, have elevated levels of GHB in their urine, blood plasma and cerebrospinal fluid. The precise function of GHB in the body is not clear. It is known, however, that the brain expresses a large number of receptors that are activated by GHB. These receptors are excitatory, however, and therefore not responsible for the sedative effects of GHB; they have been shown to elevate the principal excitatory neurotransmitter, glutamate. The benzamide antipsychotics—amisulpride, nemonapride, etc.—have been shown to bind to these GHB-activated receptors in vivo. Other antipsychotics were tested and were not found to have an affinity for this receptor. GHB is a precursor to GABA, glutamate, and glycine in certain brain areas. In spite of its demonstrated neurotoxicity (see relevant section, above), GHB has neuroprotective properties, and has been found to protect cells from hypoxia. Natural fermentation by-product GHB is also produced as a result of fermentation and so is found in small quantities in some beers and wines, in particular fruit wines. The amount found in wine is pharmacologically insignificant and not sufficient to produce psychoactive effects. Pharmacology GHB has at least two distinct binding sites in the central nervous system. GHB acts as an agonist at the excitatory GHB receptor and as a weak agonist at the inhibitory GABAB receptor. GHB is a naturally occurring substance that acts in a similar fashion to some neurotransmitters in the mammalian brain. GHB is probably synthesized from GABA in GABAergic neurons, and released when the neurons fire. GHB has been found to activate oxytocinergic neurons in the supraoptic nucleus. If taken orally, GABA itself does not effectively cross the blood–brain barrier. GHB induces the accumulation of either a derivative of tryptophan or tryptophan itself in the extracellular space, possibly by increasing tryptophan transport across the blood–brain barrier. The blood content of certain neutral amino acids, including tryptophan, is also increased by peripheral GHB administration. GHB-induced stimulation of tissue serotonin turnover may be due to an increase in tryptophan transport to the brain and in its uptake by serotonergic cells. As the serotonergic system may be involved in the regulation of sleep, mood, and anxiety, the stimulation of this system by high doses of GHB may be involved in certain neuropharmacological events induced by GHB administration. However, at therapeutic doses, GHB reaches much higher concentrations in the brain and activates GABAB receptors, which are primarily responsible for its sedative effects. GHB's sedative effects are blocked by GABAB antagonists. The role of the GHB receptor in the behavioural effects induced by GHB is more complex. GHB receptors are densely expressed in many areas of the brain, including the cortex and hippocampus, and these are the receptors that GHB displays the highest affinity for.
There has been somewhat limited research into the GHB receptor; however, there is evidence that activation of the GHB receptor in some brain areas results in the release of glutamate, the principal excitatory neurotransmitter. Drugs that selectively activate the GHB receptor cause absence seizures in high doses, as do GHB and GABAB agonists. Activation of both the GHB receptor and GABAB is responsible for the addictive profile of GHB. GHB's effect on dopamine release is biphasic. Low concentrations stimulate dopamine release via the GHB receptor. Higher concentrations inhibit dopamine release via GABAB receptors as do other GABAB agonists such as baclofen and phenibut. After an initial phase of inhibition, dopamine release is then increased via the GHB receptor. Both the inhibition and increase of dopamine release by GHB are inhibited by opioid antagonists such as naloxone and naltrexone. Dynorphin may play a role in the inhibition of dopamine release via kappa opioid receptors. This explains the paradoxical mix of sedative and stimulatory properties of GHB, as well as the so-called "rebound" effect, experienced by individuals using GHB as a sleeping agent, wherein they awake suddenly after several hours of GHB-induced deep sleep. That is to say that, over time, the concentration of GHB in the system decreases below the threshold for significant GABAB receptor activation and activates predominantly the GHB receptor, leading to wakefulness. Recently, analogs of GHB, such as 4-hydroxy-4-methylpentanoic acid (UMB68) have been synthesised and tested on animals, in order to gain a better understanding of GHB's mode of action. Analogues of GHB such as 3-methyl-GHB, 4-methyl-GHB, and 4-phenyl-GHB have been shown to produce similar effects to GHB in some animal studies, but these compounds are even less well researched than GHB itself. Of these analogues, only 4-methyl-GHB (γ-hydroxyvaleric acid, GHV) and a prodrug form γ-valerolactone (GVL) have been reported as drugs of abuse in humans, and on the available evidence seem to be less potent but more toxic than GHB, with a particular tendency to cause nausea and vomiting. Other prodrug ester forms of GHB have also rarely been encountered by law enforcement, including 1,4-butanediol diacetate (BDDA/DABD), methyl-4-acetoxybutanoate (MAB), and ethyl-4-acetoxybutanoate (EAB), but these are, in general, covered by analogue laws in jurisdictions where GHB is illegal, and little is known about them beyond their delayed onset and longer duration of action. The intermediate compound γ-hydroxybutyraldehyde (GHBAL) is also a prodrug for GHB; however, as with all aliphatic aldehydes this compound is caustic and is strong-smelling and foul-tasting; actual use of this compound as an intoxicant is likely to be unpleasant and result in severe nausea and vomiting. Only a minor portion (1–5%) of the administered GHB dose is excreted unchanged in the urine. Studies have shown that the maximum concentration of GHB in urine appears within 1 hour and rapidly declines thereafter. The vast majority (95–98%) undergoes extensive metabolism in the liver. GHB is broken down through a series of enzymatic pathways. The primary route involves conversion to succinic semialdehyde (SSA) by either GHB dehydrogenase (ADH) or GHB transhydrogenase. SSA is further oxidized by succinic semialdehyde dehydrogenase (SSADH) to succinic acid, which enters the Krebs cycle and is ultimately converted into carbon dioxide and water. 
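The primary catabolic route just described can be summarized schematically. The following is a minimal, purely illustrative Python sketch of those steps (the substrates, enzymes, and products are taken from the text above; it is a schematic listing only, not a kinetic or pharmacokinetic model):

```python
# Schematic of the primary GHB breakdown route described above.
# Illustrative only: the names come from the surrounding text; this is not
# a kinetic or quantitative model of GHB metabolism.

PRIMARY_ROUTE = [
    # (substrate, catalyst(s) named in the text, product)
    ("GHB",
     "GHB dehydrogenase (ADH) or GHB transhydrogenase",
     "succinic semialdehyde (SSA)"),
    ("succinic semialdehyde (SSA)",
     "succinic semialdehyde dehydrogenase (SSADH)",
     "succinic acid"),
    ("succinic acid",
     "Krebs cycle",
     "carbon dioxide and water"),
]

def print_route(route):
    """Print each step of the route in order."""
    for substrate, catalyst, product in route:
        print(f"{substrate} --[{catalyst}]--> {product}")

if __name__ == "__main__":
    print_route(PRIMARY_ROUTE)
```

As the next paragraph notes, these enzymatic steps can also run in reverse, so the same chain read from product back to substrate describes endogenous GHB synthesis from succinic semialdehyde.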
Both of the metabolic breakdown pathways described for GHB can run in either direction, depending on the concentrations of the substances involved, so the body can make its own GHB either from GABA or from succinic semialdehyde. Under normal physiological conditions, the concentration of GHB in the body is rather low, and the pathways would run in the reverse direction to what is described above to produce endogenous GHB. However, when GHB is consumed for recreational or health promotion purposes, its concentration in the body is much higher than normal, which changes the enzyme kinetics so that these pathways operate to metabolise GHB rather than producing it. History Alexander Zaytsev worked on this chemical family and published work on it in 1874. The first extended research into GHB and its use in humans was conducted in the early 1960s by Henri Laborit, for use in studying the neurotransmitter GABA. It was studied for a range of uses, including in obstetric surgery and childbirth and as an anxiolytic; there were anecdotal reports of it having antidepressant and aphrodisiac effects as well. It was also studied as an intravenous anesthetic agent and was marketed for that purpose starting in 1964 in Europe, but it was not widely adopted because it caused seizures; as of 2006 that use was still authorized in France and Italy but not widely used. It was also studied as a treatment for alcohol addiction; although the evidence for this use is weak, sodium oxybate is marketed for it in Italy. GHB and sodium oxybate were also studied for use in narcolepsy from the 1960s onwards. In May 1990 GHB was introduced as a dietary supplement and was marketed to bodybuilders, for help with weight control and as a sleep aid, and as a "replacement" for l-tryptophan, which was removed from the market in November 1989 when batches contaminated with trace impurities were found to cause eosinophilia–myalgia syndrome, although eosinophilia–myalgia syndrome is also tied to tryptophan overload. In 2001 tryptophan supplement sales were allowed to resume, and in 2005 the FDA ban on tryptophan supplement importation was lifted. By November 1989, 57 cases of illness caused by the GHB supplements had been reported to the Centers for Disease Control and Prevention, with people having taken up to three teaspoons of GHB; there were no deaths, but nine people needed care in an intensive care unit. The FDA issued a warning in November 1990 that sale of GHB was illegal. GHB continued to be manufactured and sold illegally, and it and its analogs were adopted as a club drug and came to be used as a date-rape drug; the DEA made seizures and the FDA reissued warnings several times throughout the 1990s. At the same time, research on the use of GHB in the form of sodium oxybate had formalized, as a company called Orphan Medical had filed an investigational new drug application and was running clinical trials with the intention of gaining regulatory approval for use to treat narcolepsy. A popular children's toy, Bindeez (also known as Aqua Dots in the United States), produced by Melbourne company Moose, was banned in Australia in early November 2007 when it was discovered that 1,4-butanediol (1,4-B), which is metabolized into GHB, had been substituted for the non-toxic plasticiser 1,5-pentanediol in the bead manufacturing process. Three young children were hospitalized as a result of ingesting a large number of the beads, and the toy was recalled.
Legal status In the United States, GHB was placed on Schedule I of the Controlled Substances Act in March 2000. However, used in sodium oxybate under an IND or NDA from the US FDA, it is considered a Schedule III substance but with Schedule I trafficking penalties, one of several drugs that are listed in multiple schedules. On 20 March 2001, the UN Commission on Narcotic Drugs placed GHB in Schedule IV of the 1971 Convention on Psychotropic Substances. In the UK GHB was made a class C drug in June 2003. In October 2013 the ACMD recommended upgrading it from schedule IV to schedule II in line with UN recommendations. Their report concluded that the minimal use of Xyrem in the UK meant that prescribers would be minimally inconvenienced by the rescheduling. This advice was followed and GHB was moved to schedule 2 on 7 January 2015. In April 2022 GHB was changed from class C to class B. In Hong Kong, GHB is regulated under Schedule 1 of Hong Kong's Chapter 134 Dangerous Drugs Ordinance. It can only be used legally by health professionals and for university research purposes. The substance can be given by pharmacists under a prescription. Anyone who supplies the substance without prescription can be fined HK$10,000. The penalty for trafficking or manufacturing the substance is a HK$150,000 fine and life imprisonment. Possession of the substance for consumption without license from the Department of Health is illegal with a HK$100,000 fine or five years of jail time. In Canada, GHB has been a Schedule I controlled substance since 6 November 2012 (the same schedule that contains heroin and cocaine). Prior to that date, it was a Schedule III controlled substance (the same schedule that contains amphetamines and LSD). In New Zealand and Australia, GHB, 1,4-B, and GBL are all Class B illegal drugs, along with any possible esters, ethers, and aldehydes. GABA itself is also listed as an illegal drug in these jurisdictions, which seems unusual given its failure to cross the blood–brain barrier, but there was a perception among legislators that all known analogues should be covered as far as this was possible. Attempts to circumvent the illegal status of GHB have led to the sale of derivatives such as 4-methyl-GHB (γ-hydroxyvaleric acid, GHV) and its prodrug form γ-valerolactone (GVL), but these are also covered under the law by virtue of their being "substantially similar" to GHB or GBL, so importation, sale, possession and use of these compounds is also considered to be illegal. In Chile, GHB is a controlled drug under the law (psychotropic substances and narcotics). In Norway and in Switzerland, GHB is considered a narcotic and is only available by prescription under the trade name Xyrem (Union Chimique Belge S.A.). Sodium oxybate is also used therapeutically in Italy under the brand name Alcover for treatment of alcohol withdrawal and dependence. 
See also Beta-Hydroxybutyric acid γ-Hydroxyvaleric acid (GHV) γ-Valerolactone (GVL) β-Hydroxy β-methylbutyric acid (HMB) References External links Gamma-hydroxybutyrate MS Spectrum EMCDDA Report on the risk assessment of GHB in the framework of the joint action on new synthetic drugs Erowid GHB Vault (also contains information about addiction and dangers) InfoFacts – Rohypnol and GHB (National Institute on Drug Abuse) Sedatives GABAB receptor agonists GHB receptor agonists GABA analogues General anesthetics Neurotransmitters Drug culture Euphoriants Gamma hydroxy acids GABA Glutamate (neurotransmitter) Drug-facilitated sexual assault
Γ-Hydroxybutyric acid
[ "Chemistry" ]
6,591
[ "Neurochemistry", "Neurotransmitters" ]
12,963
https://en.wikipedia.org/wiki/Giordano%20Bruno
Giordano Bruno (born Filippo Bruno; January or February 1548 – 17 February 1600) was an Italian philosopher, poet, alchemist, astrologer, cosmological theorist, and esotericist. He is known for his cosmological theories, which conceptually extended the then-novel Copernican model. He practiced Hermeticism and brought a mystical stance to exploring the universe. He proposed that the stars were distant suns surrounded by their own planets (exoplanets), and he raised the possibility that these planets might foster life of their own, a cosmological position known as cosmic pluralism. He also insisted that the universe is infinite and could have no center. Bruno was tried for heresy by the Roman Inquisition on charges of denial of several core Catholic doctrines, including eternal damnation, the Trinity, the divinity of Christ, the virginity of Mary, and transubstantiation. Bruno's pantheism was not taken lightly by the church, nor was his teaching of metempsychosis regarding the reincarnation of the soul. The Inquisition found him guilty, and he was burned at the stake in Rome's Campo de' Fiori in 1600. After his death, he gained considerable fame, being particularly celebrated by 19th- and early 20th-century commentators who regarded him as a martyr for science. Some historians are of the opinion that his heresy trial was not a response to his cosmological views but rather a response to his religious and afterlife views, while others find the main reason for Bruno's death was indeed his cosmological views. Bruno's case is still considered a landmark in the history of free thought and the emerging sciences. In addition to cosmology, Bruno also wrote extensively on the art of memory, a loosely organized group of mnemonic techniques and principles. Historian Frances Yates argues that Bruno was deeply influenced by the presocratic Empedocles, Neoplatonism, Renaissance Hermeticism, and Book of Genesis-like legends surrounding the Hellenistic conception of Hermes Trismegistus. Other studies of Bruno have focused on his qualitative approach to mathematics and his application of the spatial concepts of geometry to language. Life Early years, 1548–1576 Born Filippo Bruno in Nola (a comune in the modern-day province of Naples, in the Southern Italian region of Campania, then part of the Kingdom of Naples) in 1548, he was the son of Giovanni Bruno (1517 – c. 1592), a soldier, and Fraulissa Savolino (1520–?). In his youth he was sent to Naples to be educated. He was tutored privately at the Augustinian monastery there, and attended public lectures at the Studium Generale. At the age of 17, he entered the Dominican Order at the monastery of San Domenico Maggiore in Naples, taking the name Giordano, after Giordano Crispo, his metaphysics tutor. He continued his studies there, completing his novitiate, and was ordained a priest in 1572 at age 24. During his time in Naples, he became known for his skill with the art of memory and on one occasion traveled to Rome to demonstrate his mnemonic system before Pope Pius V and Cardinal Rebiba. In his later years, Bruno claimed that the Pope accepted the dedication of his lost work On The Ark of Noah at this time. While Bruno was distinguished for outstanding ability, his taste for free thinking and forbidden books soon caused him difficulties. Given the controversy he caused in later life, it is surprising that he was able to remain within the monastic system for eleven years.
In his testimony to Venetian inquisitors during his trial many years later, he says that proceedings were twice taken against him for having cast away images of the saints, retaining only a crucifix, and for having recommended controversial texts to a novice. Such behavior could perhaps be overlooked, but Bruno's situation became much more serious when he was reported to have defended the Arian heresy, and when a copy of the banned writings of Erasmus, annotated by him, was discovered hidden in the monastery latrine. When he learned that an indictment was being prepared against him in Naples, he fled, shedding his religious habit, at least for a time. First years of wandering, 1576–1583 Bruno first went to the Genoese port of Noli, then to Savona, Turin and finally to Venice, where he published his now-lost work On the Signs of the Times with the permission (so he claimed at his trial) of the Dominican Remigio Nannini Fiorentino. From Venice he went to Padua, where he met fellow Dominicans who convinced him to wear his religious habit again. From Padua he went to Bergamo and then across the Alps to Chambéry and Lyon. His movements after this time are obscure. In 1579, Bruno arrived in Geneva. During his Venetian trial, he told inquisitors that while in Geneva he told the Marchese de Vico of Naples, who was notable for helping Italian refugees in Geneva, "I did not intend to adopt the religion of the city. I desired to stay there only that I might live at liberty and in security." Bruno had a pair of breeches made for himself, and the Marchese and others apparently made Bruno a gift of a sword, hat, cape and other necessities for dressing himself; in such clothing Bruno could no longer be recognized as a priest. Things apparently went well for Bruno for a time, as he entered his name in the Rector's Book of the University of Geneva in May 1579. But, in keeping with his personality, he could not long remain silent. In August he published an attack on the work of a distinguished professor. Bruno and the printer, Jean Bergeon, were promptly arrested. Rather than apologizing, Bruno insisted on continuing to defend his publication. He was refused the right to take the sacrament. Though this right was soon restored, he left Geneva. He went to France, arriving first in Lyon, and thereafter settling for a time (1580–1581) in Toulouse, where he took his doctorate in theology and was elected by students to lecture in philosophy. He also attempted at this time to return to Catholicism, but was denied absolution by the Jesuit priest he approached. When religious strife broke out in the summer of 1581, he moved to Paris. There he held a cycle of thirty lectures on theological topics and also began to gain fame for his prodigious memory. His talents attracted the benevolent attention of King Henry III; Bruno subsequently reported: "I got me such a name that King Henry III summoned me one day to discover from me if the memory which I possessed was natural or acquired by magic art. I satisfied him that it did not come from sorcery but from organized knowledge; and, following this, I got a book on memory printed, entitled The Shadows of Ideas, which I dedicated to His Majesty. Forthwith he gave me an Extraordinary Lectureship with a salary." In Paris, Bruno enjoyed the protection of his powerful French patrons.
During this period, he published several works on mnemonics, including De umbris idearum (On the Shadows of Ideas, 1582), Ars memoriae (The Art of Memory, 1582), and Cantus circaeus (Circe's Song, 1582). All of these were based on his mnemonic models of organized knowledge and experience, as opposed to the simplistic logic-based mnemonic techniques of Petrus Ramus then becoming popular. Bruno also published a comedy summarizing some of his philosophical positions, titled Il Candelaio (The Candlemaker, 1582). In the 16th century, dedications were, as a rule, approved beforehand, and hence were a way of placing a work under the protection of an individual. Given that Bruno dedicated various works to the likes of King Henry III, Sir Philip Sidney, Michel de Castelnau (French Ambassador to England), and possibly Pope Pius V, it is apparent that this wanderer had risen sharply in status and moved in powerful circles. England, 1583–1585 In April 1583, Bruno went to England with letters of recommendation from Henry III as a guest of the French ambassador, Michel de Castelnau. Bruno lived at the French embassy with the lexicographer John Florio. There he became acquainted with the poet Philip Sidney (to whom he dedicated two books) and other members of the Hermetic circle around John Dee, though there is no evidence that Bruno ever met Dee himself. He also lectured at Oxford, and unsuccessfully sought a teaching position there. His views were controversial, notably with John Underhill, Rector of Lincoln College and subsequently bishop of Oxford, and George Abbot, who later became Archbishop of Canterbury. Abbot mocked Bruno for supporting "the opinion of Copernicus that the earth did go round, and the heavens did stand still; whereas in truth it was his own head which rather did run round, and his brains did not stand still", and found Bruno had both plagiarized and misrepresented Ficino's work, leading Bruno to return to the continent. Nevertheless, his stay in England was fruitful. During that time Bruno completed and published some of his most important works, the six "Italian Dialogues", including the cosmological tracts La cena de le ceneri (The Ash Wednesday Supper, 1584), De la causa, principio et uno (On Cause, Principle and Unity, 1584), De l'infinito, universo et mondi (On the Infinite, Universe and Worlds, 1584) as well as Lo spaccio de la bestia trionfante (The Expulsion of the Triumphant Beast, 1584) and De gli eroici furori (On the Heroic Frenzies, 1585). Some of these were printed by John Charlewood. Some of the works that Bruno published in London, notably The Ash Wednesday Supper, appear to have given offense. Once again, Bruno's controversial views and tactless language lost him the support of his friends. John Bossy has advanced the theory that, while staying in the French Embassy in London, Bruno was also spying on Catholic conspirators, under the pseudonym "Henry Fagot", for Sir Francis Walsingham, Queen Elizabeth's Secretary of State. Bruno is sometimes cited as being the first to propose that the universe is infinite, which he did during his time in England, but an English scientist, Thomas Digges, put forth this idea in a published work in 1576, some eight years earlier than Bruno. An infinite universe and the possibility of alien life had also been suggested earlier by the German Catholic Cardinal Nicholas of Cusa in "On Learned Ignorance", published in 1440, and Bruno attributed his understanding of multiple worlds to this earlier scholar, whom he called "the divine Cusanus".
Last years of wandering, 1585–1592 In October 1585, Castelnau was recalled to France, and Bruno went with him. In Paris, Bruno found a tense political situation. Moreover, his 120 theses against Aristotelian natural science soon put him in ill favor. In 1586, following a violent quarrel over these theses, he left France for Germany. In Germany he failed to obtain a teaching position at Marburg, but was granted permission to teach at Wittenberg, where he lectured on Aristotle for two years. However, with a change of intellectual climate there, he was no longer welcome, and went in 1588 to Prague, where he obtained 300 taler from Rudolf II, but no teaching position. He went on to serve briefly as a professor in Helmstedt, but had to flee again in 1590 when he was excommunicated by the Lutherans. During this period he produced several Latin works, dictated to his friend and secretary Girolamo Besler, including De Magia (On Magic), Theses De Magia (Theses on Magic) and De Vinculis in Genere (A General Account of Bonding). All these were apparently transcribed or recorded by Besler (or Bisler) between 1589 and 1590. He also published De Imaginum, Signorum, Et Idearum Compositione (On the Composition of Images, Signs and Ideas, 1591). In 1591 he was in Frankfurt, where he received an invitation from the Venetian patrician Giovanni Mocenigo, who wished to be instructed in the art of memory, and also heard of a vacant chair in mathematics at the University of Padua. At the time the Inquisition seemed to be losing some of its strictness, and because the Republic of Venice was the most liberal state in the Italian Peninsula, Bruno was lulled into making the fatal mistake of returning to Italy. He went first to Padua, where he taught briefly, and applied unsuccessfully for the chair of mathematics, which was given instead to Galileo Galilei one year later. Bruno accepted Mocenigo's invitation and moved to Venice in March 1592. For about two months he served as an in-house tutor to Mocenigo, to whom he let slip some of his heterodox ideas. Mocenigo denounced him to the Venetian Inquisition, which had Bruno arrested on 22 May 1592. Among the numerous charges of blasphemy and heresy brought against him in Venice, based on Mocenigo's denunciation, was his belief in the plurality of worlds, as well as accusations of personal misconduct. Bruno defended himself skillfully, stressing the philosophical character of some of his positions, denying others and admitting that he had had doubts on some matters of dogma. The Roman Inquisition, however, asked for his transfer to Rome. After several months of argument, the Venetian authorities reluctantly consented and Bruno was sent to Rome in January 1593. Imprisonment, trial and execution, 1593–1600 During the seven years of his trial in Rome, Bruno was held in confinement, lastly in the Tower of Nona. Some important documents about the trial are lost, but others have been preserved, among them a summary of the proceedings that was rediscovered in 1940. The numerous charges against Bruno, based on some of his books as well as on witness accounts, included blasphemy, immoral conduct, and heresy in matters of dogmatic theology, and involved some of the basic doctrines of his philosophy and cosmology. 
Luigi Firpo speculates the charges made against Bruno by the Roman Inquisition were: holding opinions contrary to the Catholic faith and speaking against it and its ministers; holding opinions contrary to the Catholic faith about the Trinity, the divinity of Christ, and the Incarnation; holding opinions contrary to the Catholic faith pertaining to Jesus as the Christ; holding opinions contrary to the Catholic faith regarding the virginity of Mary, mother of Jesus; holding opinions contrary to the Catholic faith about both Transubstantiation and the Mass; claiming the existence of a plurality of worlds and their eternity; believing in metempsychosis and in the transmigration of the human soul into brutes; dealing in magics and divination. Bruno defended himself as he had in Venice, insisting that he accepted the Church's dogmatic teachings, but trying to preserve the basis of his cosmological views. In particular, he held firm to his belief in the plurality of worlds, although he was admonished to abandon it. His trial was overseen by the Inquisitor Cardinal Bellarmine, who demanded a full recantation, which Bruno eventually refused. On 20 January 1600, Pope Clement VIII declared Bruno a heretic, and the Inquisition issued a sentence of death. According to the correspondence of Gaspar Schopp of Breslau, he is said to have made a threatening gesture towards his judges and to have replied: Maiori forsan cum timore sententiam in me fertis quam ego accipiam ("Perhaps you pronounce this sentence against me with greater fear than I receive it"). He was turned over to the secular authorities. On 17 February 1600, in the Campo de' Fiori (a central Roman market square), naked, with his "tongue imprisoned because of his wicked words", he was burned alive at the stake. His ashes were thrown into the Tiber river. All of Bruno's works were placed on the Index Librorum Prohibitorum in 1603. The inquisition cardinals who judged Giordano Bruno were Cardinal Bellarmino (Bellarmine), Cardinal Madruzzo (Madruzzi), Camillo Cardinal Borghese (later Pope Paul V), Domenico Cardinal Pinelli, Pompeio Cardinal Arrigoni, Cardinal Sfondrati, Pedro Cardinal De Deza Manuel and Cardinal Santorio (Archbishop of Santa Severina, Cardinal-Bishop of Palestrina). The measures taken to prevent Bruno continuing to speak have resulted in his becoming a symbol for free thought and free speech in present-day Rome, where an annual memorial service takes place close to the spot where he was executed. Physical appearance The earliest likeness of Bruno is an engraving published in 1715 and cited by Salvestrini as "the only known portrait of Bruno". Salvestrini suggests that it is a re-engraving made from a now lost original. This engraving has provided the source for later images. The records of Bruno's imprisonment by the Venetian inquisition in May 1592 describe him as a man "of average height, with a hazel-coloured beard and the appearance of being about forty years of age". Alternately, a passage in a work by George Abbot indicates that Bruno was of diminutive stature: "When that Italian Didapper, who intituled himself Philotheus Iordanus Brunus Nolanus, magis elaboratae Theologiae Doctor, &c. with a name longer than his body...". 
Cosmology Contemporary cosmological beliefs In the first half of the 15th century, Nicholas of Cusa challenged the then widely accepted philosophies of Aristotelianism, envisioning instead an infinite universe whose center was everywhere and circumference nowhere, and moreover teeming with countless stars. He also predicted that neither were the rotational orbits circular nor were their movements uniform. In the second half of the 16th century, the theories of Copernicus (1473–1543) began diffusing through Europe. Copernicus conserved the idea of planets fixed to solid spheres, but considered the apparent motion of the stars to be an illusion caused by the rotation of the Earth on its axis; he also preserved the notion of an immobile center, but it was the Sun rather than the Earth. Copernicus also argued the Earth was a planet orbiting the Sun once every year. However he maintained the Ptolemaic hypothesis that the orbits of the planets were composed of perfect circles—deferents and epicycles—and that the stars were fixed on a stationary outer sphere. Despite the widespread publication of Copernicus' work De revolutionibus orbium coelestium, during Bruno's time most educated Catholics subscribed to the Aristotelian geocentric view that the Earth was the center of the universe, and that all heavenly bodies revolved around it. The ultimate limit of the universe was the primum mobile, whose diurnal rotation was conferred upon it by a transcendental God, not part of the universe (although, as the kingdom of heaven, adjacent to it), a motionless prime mover and first cause. The fixed stars were part of this celestial sphere, all at the same fixed distance from the immobile Earth at the center of the sphere. Ptolemy had numbered these at 1,022, grouped into 48 constellations. The planets were each fixed to a transparent sphere. Few astronomers of Bruno's time accepted Copernicus's heliocentric model. Among those who did were the Germans Michael Maestlin (1550–1631), Christoph Rothmann, Johannes Kepler (1571–1630); the Englishman Thomas Digges (c. 1546–1595), author of A Perfit Description of the Caelestial Orbes; and the Italian Galileo Galilei (1564–1642). Cosmological claims In 1584, Bruno published two important philosophical dialogues (La Cena de le Ceneri and De l'infinito universo et mondi) in which he argued against the planetary spheres (Christoph Rothmann did the same in 1586 as did Tycho Brahe in 1587) and affirmed the Copernican principle. In particular, to support the Copernican view and oppose the objection according to which the motion of the Earth would be perceived by means of the motion of winds, clouds etc., in La Cena de le Ceneri Bruno anticipates some of the arguments of Galilei on the relativity principle. Note that he also uses the example now known as Galileo's ship. Theophilus – [...] air through which the clouds and winds move are parts of the Earth, [...] to mean under the name of Earth the whole machinery and the entire animated part, which consists of dissimilar parts; so that the rivers, the rocks, the seas, the whole vaporous and turbulent air, which is enclosed within the highest mountains, should belong to the Earth as its members, just as the air [does] in the lungs and in other cavities of animals by which they breathe, widen their arteries, and other similar effects necessary for life are performed. The clouds, too, move through accidents in the body of the Earth and are in its bowels as are the waters. [...] With the Earth move [...] 
all things that are on the Earth. If, therefore, from a point outside the Earth something were thrown upon the Earth, it would lose, because of the latter's motion, its straightness as would be seen on the ship [...] moving along a river, if someone on point C of the riverbank were to throw a stone along a straight line, and would see the stone miss its target by the amount of the velocity of the ship's motion. But if someone were placed high on the mast of that ship, move as it may however fast, he would not miss his target at all, so that the stone or some other heavy thing thrown downward would not come along a straight line from the point E which is at the top of the mast, or cage, to the point D which is at the bottom of the mast, or at some point in the bowels and body of the ship. Thus, if from the point D to the point E someone who is inside the ship would throw a stone straight up, it would return to the bottom along the same line however far the ship moved, provided it was not subject to any pitch and roll." Bruno's infinite universe was filled with a substance—a "pure air", aether, or spiritus—that offered no resistance to the heavenly bodies which, in Bruno's view, rather than being fixed, moved under their own impetus (momentum). Most dramatically, he completely abandoned the idea of a hierarchical universe. The universe is then one, infinite, immobile... It is not capable of comprehension and therefore is endless and limitless, and to that extent infinite and indeterminable, and consequently immobile. Bruno's cosmology distinguishes between "suns" which produce their own light and heat, and have other bodies moving around them; and "earths" which move around suns and receive light and heat from them. Bruno suggested that some, if not all, of the objects classically known as fixed stars are in fact suns. According to astrophysicist Steven Soter, he was the first person to grasp that "stars are other suns with their own planets." Bruno wrote that other worlds "have no less virtue nor a nature different from that of our Earth" and, like Earth, "contain animals and inhabitants". During the late 16th century, and throughout the 17th century, Bruno's ideas were held up for ridicule, debate, or inspiration. Margaret Cavendish, for example, wrote an entire series of poems against "atoms" and "infinite worlds" in Poems and Fancies in 1664. Bruno's true, if partial, vindication would have to wait for the implications and impact of Newtonian cosmology. Bruno's overall contribution to the birth of modern science is still controversial. Some scholars follow Frances Yates in stressing the importance of Bruno's ideas about the universe being infinite and lacking geocentric structure as a crucial crossing point between the old and the new. Others see in Bruno's idea of multiple worlds instantiating the infinite possibilities of a pristine, indivisible One, a forerunner of Everett's many-worlds interpretation of quantum mechanics. While many academics note Bruno's theological position as pantheism, several have described it as pandeism, and some also as panentheism. 
Physicist and philosopher Max Bernhard Weinstein in his Welt- und Lebensanschauungen, Hervorgegangen aus Religion, Philosophie und Naturerkenntnis ("World and Life Views, Emerging From Religion, Philosophy and Nature"), wrote that the theological model of pandeism was strongly expressed in the teachings of Bruno, especially with respect to the vision of a deity for which "the concept of God is not separated from that of the universe." However, Otto Kern takes exception to what he considers Weinstein's overbroad assertions that Bruno, as well as other historical philosophers such as John Scotus Eriugena, Nicholas of Cusa, Mendelssohn, and Lessing, were pandeists or leaned towards pandeism. Discover editor Corey S. Powell also described Bruno's cosmology as pandeistic, writing that it was "a tool for advancing an animist or Pandeist theology", and this assessment of Bruno as a pandeist was agreed with by science writer Michael Newton Keas, and The Daily Beast writer David Sessions. Retrospective views Late Vatican position The Vatican has published few official statements about Bruno's trial and execution. In 1942, Cardinal Giovanni Mercati, who discovered a number of lost documents relating to Bruno's trial, stated that the Church was perfectly justified in condemning him. On the 400th anniversary of Bruno's death, in 2000, Cardinal Angelo Sodano declared Bruno's death to be a "sad episode" but, despite his regret, he defended Bruno's prosecutors, maintaining that the Inquisitors "had the desire to serve freedom and promote the common good and did everything possible to save his life". In the same year, Pope John Paul II made a general apology for "the use of violence that some have committed in the service of truth". A martyr of science Some authors have characterized Bruno as a "martyr of science", suggesting parallels with the Galileo affair which began around 1610. "It should not be supposed," writes A. M. Paterson of Bruno and his "heliocentric solar system", that he "reached his conclusions via some mystical revelation ... His work is an essential part of the scientific and philosophical developments that he initiated." Paterson echoes Hegel in writing that Bruno "ushers in a modern theory of knowledge that understands all natural things in the universe to be known by the human mind through the mind's dialectical structure". Ingegno writes that Bruno embraced the philosophy of Lucretius, "aimed at liberating man from the fear of death and the gods." Characters in Bruno's Cause, Principle and Unity desire "to improve speculative science and knowledge of natural things," and to achieve a philosophy "which brings about the perfection of the human intellect most easily and eminently, and most closely corresponds to the truth of nature." Other scholars oppose such views, and claim Bruno's martyrdom to science to be exaggerated, or outright false. For Yates, while "nineteenth century liberals" were thrown "into ecstasies" over Bruno's Copernicanism, "Bruno pushes Copernicus' scientific work back into a prescientific stage, back into Hermeticism, interpreting the Copernican diagram as a hieroglyph of divine mysteries." According to historian Mordechai Feingold, "Both admirers and critics of Giordano Bruno basically agree that he was pompous and arrogant, highly valuing his opinions and showing little patience with anyone who even mildly disagreed with him." 
Discussing Bruno's experience of rejection when he visited Oxford University, Feingold suggests that "it might have been Bruno's manner, his language and his self-assertiveness, rather than his ideas" that caused offence. Theological heresy In his Lectures on the History of Philosophy, Hegel writes that Bruno's life represented "a bold rejection of all Catholic beliefs resting on mere authority." Alfonso Ingegno states that Bruno's philosophy "challenges the developments of the Reformation, calls into question the truth-value of the whole of Christianity, and claims that Christ perpetrated a deceit on mankind ... Bruno suggests that we can now recognize the universal law which controls the perpetual becoming of all things in an infinite universe." A. M. Paterson says that, while we no longer have a copy of the official papal condemnation of Bruno, his heresies included "the doctrine of the infinite universe and the innumerable worlds" and his beliefs "on the movement of the earth". Michael White notes that the Inquisition may have pursued Bruno early in his life on the basis of his opposition to Aristotle, interest in Arianism, reading of Erasmus, and possession of banned texts. White considers that Bruno's later heresy was "multifaceted" and may have rested on his conception of infinite worlds. "This was perhaps the most dangerous notion of all ... If other worlds existed with intelligent beings living there, did they too have their visitations? The idea was quite unthinkable." Frances Yates rejects what she describes as the "legend that Bruno was prosecuted as a philosophical thinker, was burned for his daring views on innumerable worlds or on the movement of the earth." Yates however writes that "the Church was ... perfectly within its rights if it included philosophical points in its condemnation of Bruno's heresies" because "the philosophical points were quite inseparable from the heresies." According to the Stanford Encyclopedia of Philosophy, "in 1600 there was no official Catholic position on the Copernican system, and it was certainly not a heresy. When [...] Bruno [...] was burned at the stake as a heretic, it had nothing to do with his writings in support of Copernican cosmology." The website of the Vatican Apostolic Archive, discussing a summary of legal proceedings against Bruno in Rome, states: In the same rooms where Giordano Bruno was questioned, for the same important reasons of the relationship between science and faith, at the dawning of the new astronomy and at the decline of Aristotle's philosophy, sixteen years later, Cardinal Bellarmino, who then contested Bruno's heretical theses, summoned Galileo Galilei, who also faced a famous inquisitorial trial, which, luckily for him, ended with a simple abjuration. Cultural legacy In art Following the 1870 Capture of Rome by the newly created Kingdom of Italy and the end of the Church's temporal power over the city, the erection of a monument to Bruno on the site of his execution became feasible. The monument was sharply opposed by the clerical party, but was finally erected by the Rome Municipality and inaugurated in 1889. A statue of a stretched human figure standing on its head, designed by Alexander Polzin and depicting Bruno's death at the stake, was placed in Potsdamer Platz station in Berlin on 2 March 2008. Retrospective iconography of Bruno shows him with a Dominican cowl but not tonsured. 
Edward Gosselin has suggested that it is likely Bruno kept his tonsure at least until 1579, and it is possible that he wore it again thereafter. An idealized animated version of Bruno appears in the first episode of the 2014 television series Cosmos: A Spacetime Odyssey. In this depiction, Bruno is shown with a more modern look, without tonsure and wearing clerical robes and without his hood. Cosmos presents Bruno as an impoverished philosopher who was ultimately executed due to his refusal to recant his belief in other worlds, a portrayal that was criticized by some as simplistic or historically inaccurate. Corey S. Powell, of Discover magazine, says of Bruno, "A major reason he moved around so much is that he was argumentative, sarcastic, and drawn to controversy ... He was a brilliant, complicated, difficult man. In poetry Poems that refer to Bruno include: "The Monument of Giordano Bruno" (1889) by Algernon Charles Swinburne, written when the statue of Bruno was constructed in Rome. "Campo Dei Fiori" (1943) by Czesław Miłosz, which draws parallels between indifference to the fate of Bruno and indifference to the victims of the then-ongoing Occupation of Poland. "The Emancipators" (1958) by Randall Jarrell, which addresses Bruno, along with Galileo and Newton, as an originator of the modern scientific-industrial world. "What He Thought" (1994) by Heather McHugh, a (possibly autobiographical) poem about a group of American poets who visit Italy and are lectured about Bruno and the nature of poetry by a local arts administrator. The poem was published in the collection Hinge & Sign, a nominee for the National Book Award. In fiction Bruno and his theory of "the coincidence of contraries" (coincidentia oppositorum) play an important role in James Joyce's 1939 novel Finnegans Wake. Joyce wrote in a letter to his patroness, Harriet Shaw Weaver, "His philosophy is a kind of dualism – every power in nature must evolve an opposite in order to realise itself and opposition brings reunion". Amongst his numerous allusions to Bruno in his novel, including his trial and torture, Joyce plays upon Bruno's notion of coincidentia oppositorum through applying his name to word puns such as "Browne and Nolan" (the name of Dublin printers) and '"brownesberrow in nolandsland". In 1973 the biographical drama Giordano Bruno was released, an Italian/French movie directed by Giuliano Montaldo, starring Gian Maria Volonté as Bruno. Bruno is a major character in the four-novel Aegypt sequence (1987–2007) by John Crowley. Historical episodes from Bruno's life are fictionalized in the novels, and his philosophical ideas are key to the novels’ themes. The Last Confession (2000) by Morris West is an unfinished, posthumously published fictional autobiography of Bruno, ostensibly written shortly before Bruno's execution. In the 2008 novel Children of God by Mary Doria Russell, several characters travel on an interstellar spaceship named Giordano Bruno. Bruno features as the hero of the Giordano Bruno series (2010–2023) of historical crime novels by S. J. Parris (a pseudonym of Stephanie Merritt). In music Hans Werner Henze set his large scale cantata for orchestra, choir and four soloists, Novae de infinito laudes to Italian texts by Bruno, recorded in 1972 at the Salzburg Festival reissued on CD Orfeo C609 031B. The Italian composer Francesco Filidei wrote an opera, based on a libretto by Stefano Busellato, titled Giordano Bruno. The premiere took place on 12 September 2015 at the Casa da Música in Porto, Portugal. 
The 2016 song "Roman Sky" by heavy metal band Avenged Sevenfold focuses on the death of Bruno. Bruno is the central character in Roger Doyle’s Heresy – an electronic opera (2017). Legacy Giordano Bruno Foundation The Giordano Bruno Foundation (German: Giordano-Bruno-Stiftung) is a non-profit foundation based in Germany that pursues the "Support of Evolutionary Humanism". It was founded by entrepreneur Herbert Steffen in 2004. The Giordano Bruno Foundation is critical of religious fundamentalism and nationalism. Giordano Bruno Memorial Award The SETI League makes an annual award honoring the memory of Giordano Bruno to a deserving person or persons who have made a significant contribution to the practice of SETI (the search for extraterrestrial intelligence). The award was proposed by sociologist Donald Tarter in 1995 on the 395th anniversary of Bruno's death. The trophy presented is called a Bruno. Astronomical objects named after Bruno The 22 km impact crater Giordano Bruno on the far side of the Moon is named in his honor, as are the main belt Asteroids 5148 Giordano and 13223 Cenaceneri; the latter is named after his philosophical dialogue La Cena de le Ceneri ("The Ash Wednesday Supper") (see above). Works De umbris idearum (On the Shadows of Ideas, Paris, 1582) Cantus circaeus (The Incantation of Circe or Circe's Song, Paris, 1582) (The Art of Memory, Paris, 1582) De compendiosa architectura et complento artis Lulli (A Compendium of Architecture and Lulli's Art, 1582) Candelaio (The Torchbearer or The Candle Bearer, 1582; play) Ars reminiscendi (The Art of Memory, 1583) Explicatio triginta sigillorum (Explanation of Thirty Seals, 1583) Sigillus sigillorum (The Seal of Seals, 1583) La cena de le ceneri (The Ash Wednesday Supper, 1584) De la causa, principio, et uno (Concerning Cause, Principle, and Unity, 1584) (De l'infinito universo et mondi, 1584) Spaccio de la bestia trionfante (The Expulsion of the Triumphant Beast, London, 1584) Cabala del cavallo Pegaseo (Cabal of the Horse Pegasus, 1585) De gli eroici furori (The Heroic Frenzies, 1585) Figuratio Aristotelici Physici auditus (Figures From Aristotle's Physics, 1585) Dialogi duo de Fabricii Mordentis Salernitani (Two Dialogues of Fabricii Mordentis Salernitani, 1586) Idiota triumphans (The Triumphant Idiot, 1586) De somni interpretatione (Dream Interpretation, 1586) Animadversiones circa lampadem lullianam (Amendments regarding Lull's Lantern, 1586) Lampas triginta statuarum (The Lantern of Thirty Statues, 1586) Centum et viginti articuli de natura et mundo adversus peripateticos (One Hundred and Twenty Articles on Nature and the World Against the Peripatetics, 1586) De Lampade combinatoria Lulliana (The Lamp of Combinations according to Lull, 1587) De progressu et lampade venatoria logicorum (Progress and the Hunter's Lamp of Logical Methods, 1587) Oratio valedictoria (Valedictory Oration, 1588) Camoeracensis Acrotismus (The Pleasure of Dispute, 1588) De specierum scrutinio (1588) Articuli centum et sexaginta adversus huius tempestatis mathematicos atque Philosophos (One Hundred and Sixty Theses Against Mathematicians and Philosophers, 1588) Oratio consolatoria (Consolation Oration, 1589) De vinculis in genere (Of Bonds in General, 1591) De triplici minimo et mensura (On the Threefold Minimum and Measure, 1591) De monade numero et figura (On the Monad, Number, and Figure, Frankfurt, 1591) De innumerabilibus, immenso, et infigurabili (Of Innumerable Things, Vastness and the Unrepresentable, 1591) De imaginum, signorum et idearum 
compositione (On the Composition of Images, Signs and Ideas, 1591) Summa terminorum metaphysicorum (Handbook of Metaphysical Terms, 1595) Artificium perorandi (The Art of Communicating, 1612) Collections Jordani Bruni Nolani opera latine conscripta (Giordano Bruno the Nolan's Works Written in Latin), Dritter Band (1962) / curantibus F. Tocco et H. Vitelli See also Fermi paradox List of Roman Catholic scientist-clerics References Citations Works cited Further reading External links Bruno's works: text, concordances and frequency list Writings of Giordano Bruno Giordano Bruno Library of the World's Best Literature Ancient and Modern Charles Dudley Warner Editor Bruno's Latin and Italian works online: Biblioteca Ideale di Giordano Bruno Complete works of Bruno as well as main biographies and studies available for free download in PDF format from the Warburg Institute and the Centro Internazionale di Studi Bruniani Giovanni Aquilecchia Online Galleries, History of Science Collections, University of Oklahoma Libraries High resolution images of works by and/or portraits of Giordano Bruno in .jpg and .tiff format. 1548 births 1600 deaths 16th-century executions by Italian states 16th-century Italian Christian monks 16th-century Italian philosophers 16th-century Italian male writers 16th-century Italian scientists 16th-century Italian poets Architectural theoreticians Atomists Commentators on Aristotle Communication theorists Cosmologists Date of birth unknown Epistemologists Executed Italian people Executed philosophers Executed scientists Executed writers Former Dominicans Galileo affair Hermeticists Italian architecture writers Italian astrologers Italian esotericists Italian essayists Italian-language poets Italian logicians Italian male non-fiction writers Italian male writers Italian occult writers Italian scientists Italian semioticians Mystics Natural philosophers Nontrinitarian Christians Ontologists Pantheists People excommunicated by the Catholic Church People executed by the Papal States by burning People executed by the Roman Inquisition People executed for heresy People from Nola Philosophers of art Philosophers of culture Philosophers of logic Philosophers of mathematics Philosophers of religion Philosophers of science Philosophers of social science Philosophy writers Social philosophers Academic staff of the University of Helmstedt Writers about religion and science
Giordano Bruno
[ "Astronomy", "Mathematics" ]
8,837
[ "Astronomical controversies", "Galileo affair" ]
12,970
https://en.wikipedia.org/wiki/Gambler%27s%20fallacy
The gambler's fallacy, also known as the Monte Carlo fallacy or the fallacy of the maturity of chances, is the belief that, if an event (whose occurrences are independent and identically distributed) has occurred less frequently than expected, it is more likely to happen again in the future (or vice versa). The fallacy is commonly associated with gambling, where it may be believed, for example, that the next dice roll is more likely to be six than is usually the case because there have recently been fewer than the expected number of sixes. The term "Monte Carlo fallacy" originates from an example of the phenomenon, in which the roulette wheel spun black 26 times in succession at the Monte Carlo Casino in 1913. Examples Coin toss The gambler's fallacy can be illustrated by considering the repeated toss of a fair coin. The outcomes in different tosses are statistically independent and the probability of getting heads on a single toss is 1/2 (one in two). The probability of getting two heads in two tosses is 1/4 (one in four) and the probability of getting three heads in three tosses is 1/8 (one in eight). In general, if Ai is the event where toss i of a fair coin comes up heads, then: P(A1 ∩ A2 ∩ ... ∩ An) = 1/2^n. If after tossing four heads in a row, the next coin toss also came up heads, it would complete a run of five successive heads. Since the probability of a run of five successive heads is 1/32 (one in thirty-two), a person might believe that the next flip would be more likely to come up tails rather than heads again. This is incorrect and is an example of the gambler's fallacy. The event "5 heads in a row" and the event "first 4 heads, then a tails" are equally likely, each having probability 1/32. Since the first four tosses turn up heads, the probability that the next toss is a head is: P(A5 | A1 ∩ A2 ∩ A3 ∩ A4) = P(A5) = 1/2. While a run of five heads has a probability of 1/32 = 0.03125 (a little over 3%), the misunderstanding lies in not realizing that this is the case only before the first coin is tossed. After the first four tosses in this example, the results are no longer unknown, so their probabilities are at that point equal to 1 (100%). The probability of a run of coin tosses of any length continuing for one more toss is always 0.5. The reasoning that a fifth toss is more likely to be tails because the previous four tosses were heads, with a run of luck in the past influencing the odds in the future, forms the basis of the fallacy. Why the probability is 1/2 for a fair coin If a fair coin is flipped 21 times, the probability of 21 heads is 1 in 2,097,152. The probability of flipping a head after having already flipped 20 heads in a row is 1/2. Assuming a fair coin: The probability of 20 heads, then 1 tail is 0.5^20 × 0.5 = 0.5^21 The probability of 20 heads, then 1 head is 0.5^20 × 0.5 = 0.5^21 The probability of getting 20 heads then 1 tail, and the probability of getting 20 heads then another head are both 1 in 2,097,152. When flipping a fair coin 21 times, the outcome is equally likely to be 21 heads as 20 heads and then 1 tail. These two outcomes are equally as likely as any of the other combinations that can be obtained from 21 flips of a coin. All of the 21-flip combinations will have probabilities equal to 0.5^21, or 1 in 2,097,152. Assuming that a change in the probability will occur as a result of the outcome of prior flips is incorrect because every outcome of a 21-flip sequence is as likely as the other outcomes. In accordance with Bayes' theorem, the likely outcome of each flip is the probability of the fair coin, which is 1/2. 
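To make the equal-likelihood argument above concrete, the short Python sketch below (an illustrative addition rather than part of the cited material; the function name and structure are arbitrary choices) enumerates all 32 equally likely sequences of five fair-coin tosses. It confirms that "five heads" and "four heads, then a tail" each have probability 1/32, and that the probability of heads on the fifth toss, given four heads, is still 1/2.

```python
from itertools import product
from fractions import Fraction

def prob(event, n=5):
    """Probability of `event` (a predicate on a toss sequence) over all
    2**n equally likely sequences of n fair-coin tosses ('H' or 'T')."""
    outcomes = list(product("HT", repeat=n))
    hits = sum(1 for seq in outcomes if event(seq))
    return Fraction(hits, len(outcomes))

# "Five heads in a row" and "four heads, then a tail" are equally likely: 1/32 each.
print(prob(lambda s: s == tuple("HHHHH")))   # 1/32
print(prob(lambda s: s == tuple("HHHHT")))   # 1/32

# Conditional probability of heads on toss 5, given heads on tosses 1-4, is 1/2.
p_first_four_heads = prob(lambda s: s[:4] == tuple("HHHH"))
p_all_five_heads = prob(lambda s: s == tuple("HHHHH"))
print(p_all_five_heads / p_first_four_heads)  # 1/2
```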
Other examples The fallacy leads to the incorrect notion that previous failures will create an increased probability of success on subsequent attempts. For a fair 16-sided die, the probability of each outcome occurring is 1/16 (6.25%). If a win is defined as rolling a 1, the probability of a 1 occurring at least once in 16 rolls is: 1 − (15/16)^16 ≈ 64.4%. The probability of a loss on the first roll is 15/16 (93.75%). According to the fallacy, the player should have a higher chance of winning after one loss has occurred. The probability of at least one win is now: 1 − (15/16)^15 ≈ 62.0%. By losing one toss, the player's probability of winning drops by two percentage points. With 5 losses and 11 rolls remaining, the probability of winning drops to around 0.5 (50%). The probability of at least one win does not increase after a series of losses; indeed, the probability of success actually decreases, because there are fewer trials left in which to win. The probability of winning will eventually be equal to the probability of winning a single toss, which is 1/16 (6.25%) and occurs when only one toss is left. Reverse position After a consistent tendency towards tails, a gambler may also decide that tails has become a more likely outcome. This is a rational and Bayesian conclusion, bearing in mind the possibility that the coin may not be fair; it is not a fallacy. Believing the odds to favor tails, the gambler sees no reason to change to heads. However it is a fallacy that a sequence of trials carries a memory of past results which tend to favor or disfavor future outcomes. The inverse gambler's fallacy described by Ian Hacking is a situation where a gambler entering a room and seeing a person rolling a double six on a pair of dice may erroneously conclude that the person must have been rolling the dice for quite a while, as they would be unlikely to get a double six on their first attempt. Retrospective gambler's fallacy Researchers have examined whether a similar bias exists for inferences about unknown past events based upon known subsequent events, calling this the "retrospective gambler's fallacy". An example of a retrospective gambler's fallacy would be to observe multiple successive "heads" on a coin toss and conclude from this that the previously unknown flip was "tails". Real world examples of retrospective gambler's fallacy have been argued to exist in events such as the origin of the Universe. In his book Universes, John Leslie argues that "the presence of vastly many universes very different in their characters might be our best explanation for why at least one universe has a life-permitting character". Daniel M. Oppenheimer and Benoît Monin argue that "In other words, the 'best explanation' for a low-probability event is that it is only one in a multiple of trials, which is the core intuition of the reverse gambler's fallacy." Philosophical arguments are ongoing about whether such arguments are or are not a fallacy, arguing that the occurrence of our universe says nothing about the existence of other universes or trials of universes. Three studies involving Stanford University students tested the existence of a retrospective gamblers' fallacy. All three studies concluded that people have a gamblers' fallacy retrospectively as well as to future events. 
The authors of all three studies concluded their findings have significant "methodological implications" but may also have "important theoretical implications" that need investigation and research, saying "[a] thorough understanding of such reasoning processes requires that we not only examine how they influence our predictions of the future, but also our perceptions of the past." Childbirth In 1796, Pierre-Simon Laplace described in A Philosophical Essay on Probabilities the ways in which men calculated their probability of having sons: "I have seen men, ardently desirous of having a son, who could learn only with anxiety of the births of boys in the month when they expected to become fathers. Imagining that the ratio of these births to those of girls ought to be the same at the end of each month, they judged that the boys already born would render more probable the births next of girls." The expectant fathers feared that if more sons were born in the surrounding community, then they themselves would be more likely to have a daughter. This essay by Laplace is regarded as one of the earliest descriptions of the fallacy. Likewise, after having multiple children of the same sex, some parents may erroneously believe that they are due to have a child of the opposite sex. Monte Carlo Casino An example of the gambler's fallacy occurred in a game of roulette at the Monte Carlo Casino on August 18, 1913, when the ball fell in black 26 times in a row. This was an extremely unlikely occurrence: the probability of a sequence of either red or black occurring 26 times in a row is (18/37)^25, or around 1 in 66.6 million, assuming the mechanism is unbiased. Gamblers lost millions of francs betting against black, reasoning incorrectly that the streak was causing an imbalance in the randomness of the wheel, and that it had to be followed by a long streak of red. Non-examples Non-independent events The gambler's fallacy does not apply when the probability of different events is not independent. In such cases, the probability of future events can change based on the outcome of past events, such as the statistical permutation of events. An example is when cards are drawn from a deck without replacement. If an ace is drawn from a deck and not reinserted, the next card drawn is less likely to be an ace and more likely to be of another rank. The probability of drawing another ace, assuming that it was the first card drawn and that there are no jokers, has decreased from 4/52 (7.69%) to 3/51 (5.88%), while the probability for each other rank has increased from 4/52 (7.69%) to 4/51 (7.84%). This effect allows card counting systems to work in games such as blackjack. Bias In most illustrations of the gambler's fallacy and the reverse gambler's fallacy, the trial (e.g. flipping a coin) is assumed to be fair. In practice, this assumption may not hold. For example, if a coin is flipped 21 times, the probability of 21 heads with a fair coin is 1 in 2,097,152. Since this probability is so small, if it happens, it may well be that the coin is somehow biased towards landing on heads, or that it is being controlled by hidden magnets, or similar. In this case, the smart bet is "heads" because Bayesian inference from the empirical evidence — 21 heads in a row — suggests that the coin is likely to be biased toward heads. 
Bayesian inference can be used to show that when the long-run proportion of different outcomes is unknown but exchangeable (meaning that the random process from which the outcomes are generated may be biased but is equally likely to be biased in any direction) and that previous observations demonstrate the likely direction of the bias, the outcome which has occurred the most in the observed data is the most likely to occur again. For example, if the a priori probability of a biased coin is say 1%, and assuming that such a biased coin would come down heads say 60% of the time, then after 21 heads the probability of a biased coin has increased to about 32%. The opening scene of the play Rosencrantz and Guildenstern Are Dead by Tom Stoppard discusses these issues as one man continually flips heads and the other considers various possible explanations. Changing probabilities If external factors are allowed to change the probability of the events, the gambler's fallacy may not hold. For example, a change in the game rules might favour one player over the other, improving his or her win percentage. Similarly, an inexperienced player's success may decrease after opposing teams learn about and play against their weaknesses. This is another example of bias. Psychology Origins The gambler's fallacy arises out of a belief in a law of small numbers, leading to the erroneous belief that small samples must be representative of the larger population. According to the fallacy, streaks must eventually even out in order to be representative. Amos Tversky and Daniel Kahneman first proposed that the gambler's fallacy is a cognitive bias produced by a psychological heuristic called the representativeness heuristic, which states that people evaluate the probability of a certain event by assessing how similar it is to events they have experienced before, and how similar the events surrounding those two processes are. According to this view, "after observing a long run of red on the roulette wheel, for example, most people erroneously believe that black will result in a more representative sequence than the occurrence of an additional red", so people expect that a short run of random outcomes should share properties of a longer run, specifically in that deviations from average should balance out. When people are asked to make up a random-looking sequence of coin tosses, they tend to make sequences where the proportion of heads to tails stays closer to 0.5 in any short segment than would be predicted by chance, a phenomenon known as insensitivity to sample size. Kahneman and Tversky interpret this to mean that people believe short sequences of random events should be representative of longer ones. The representativeness heuristic is also cited behind the related phenomenon of the clustering illusion, according to which people see streaks of random events as being non-random when such streaks are actually much more likely to occur in small samples than people expect. The gambler's fallacy can also be attributed to the mistaken belief that gambling, or even chance itself, is a fair process that can correct itself in the event of streaks, known as the just-world hypothesis. Other researchers believe that belief in the fallacy may be the result of a mistaken belief in an internal locus of control. When a person believes that gambling outcomes are the result of their own skill, they may be more susceptible to the gambler's fallacy because they reject the idea that chance could overcome skill or talent. 
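Returning to the biased-coin example in the Bias section above, the following Python snippet is a rough numerical sketch of that Bayesian update, using the figures quoted there (a 1% prior probability that the coin is biased, and a biased coin assumed to land heads 60% of the time). The two-hypothesis model and the function name are simplifying assumptions added for illustration; they are not taken from the cited studies.

```python
def posterior_biased(n_heads, prior_biased=0.01, p_heads_biased=0.60, p_heads_fair=0.50):
    """Posterior probability that the coin is biased, by Bayes' theorem with two
    hypotheses (fair vs. biased), after observing n_heads consecutive heads."""
    evidence_biased = prior_biased * p_heads_biased ** n_heads    # P(biased) * P(data | biased)
    evidence_fair = (1 - prior_biased) * p_heads_fair ** n_heads  # P(fair) * P(data | fair)
    return evidence_biased / (evidence_biased + evidence_fair)

print(round(posterior_biased(21), 2))  # ~0.32, in line with the figure quoted above
```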
Variations Some researchers believe that it is possible to define two types of gambler's fallacy: type one and type two. Type one is the classic gambler's fallacy, where individuals believe that a particular outcome is due after a long streak of another outcome. Type two gambler's fallacy, as defined by Gideon Keren and Charles Lewis, occurs when a gambler underestimates how many observations are needed to detect a favorable outcome, such as watching a roulette wheel for a length of time and then betting on the numbers that appear most often. For events with a high degree of randomness, detecting a bias that will lead to a favorable outcome takes an impractically large amount of time and is very difficult, if not impossible, to do. The two types differ in that type one wrongly assumes that gambling conditions are fair and perfect, while type two assumes that the conditions are biased, and that this bias can be detected after a certain amount of time. Another variety, known as the retrospective gambler's fallacy, occurs when individuals judge that a seemingly rare event must come from a longer sequence than a more common event does. For example, people judge an imaginary sequence of die rolls to be more than three times as long when a set of three sixes is observed than when only two sixes are observed. This effect can be observed in isolated instances, or even sequentially. Another example would involve hearing that a teenager has unprotected sex and becomes pregnant on a given night, and concluding that she has been engaging in unprotected sex for longer than if we hear she had unprotected sex but did not become pregnant, when the probability of becoming pregnant as a result of each intercourse is independent of the amount of prior intercourse. Relationship to hot-hand fallacy Another psychological perspective states that gambler's fallacy can be seen as the counterpart to basketball's hot-hand fallacy, in which people tend to predict the same outcome as the previous event - known as positive recency - resulting in a belief that a high scorer will continue to score. In the gambler's fallacy, people predict the opposite outcome of the previous event - negative recency - believing that since the roulette wheel has landed on black on the previous six occasions, it is due to land on red the next. Ayton and Fischer have theorized that people display positive recency for the hot-hand fallacy because the fallacy deals with human performance, and that people do not believe that an inanimate object can become "hot." Human performance is not perceived as random, and people are more likely to continue streaks when they believe that the process generating the results is nonrandom. When a person exhibits the gambler's fallacy, they are more likely to exhibit the hot-hand fallacy as well, suggesting that one construct is responsible for the two fallacies. The difference between the two fallacies is also found in economic decision-making. A study by Huber, Kirchler, and Stockl in 2010 examined how the hot hand and the gambler's fallacy are exhibited in the financial market. The researchers gave their participants a choice: they could either bet on the outcome of a series of coin tosses, use an expert opinion to sway their decision, or choose a risk-free alternative instead for a smaller financial reward. Participants turned to the expert opinion to make their decision 24% of the time based on their past experience of success, which exemplifies the hot-hand fallacy. 
If the expert was correct, 78% of the participants chose the expert's opinion again, as opposed to 57% doing so when the expert was wrong. The participants also exhibited the gambler's fallacy, with their selection of either heads or tails decreasing after noticing a streak of either outcome. This experiment helped bolster Ayton and Fischer's theory that people put more faith in human performance than they do in seemingly random processes. Neurophysiology While the representativeness heuristic and other cognitive biases are the most commonly cited cause of the gambler's fallacy, research suggests that there may also be a neurological component. Functional magnetic resonance imaging has shown that after losing a bet or gamble, known as riskloss, the frontoparietal network of the brain is activated, resulting in more risk-taking behavior. In contrast, there is decreased activity in the amygdala, caudate, and ventral striatum after a riskloss. Activation in the amygdala is negatively correlated with gambler's fallacy, so that the more activity exhibited in the amygdala, the less likely an individual is to fall prey to the gambler's fallacy. These results suggest that gambler's fallacy relies more on the prefrontal cortex, which is responsible for executive, goal-directed processes, and less on the brain areas that control affective decision-making. The desire to continue gambling or betting is controlled by the striatum, which supports a choice-outcome contingency learning method. The striatum processes the errors in prediction and the behavior changes accordingly. After a win, the positive behavior is reinforced and after a loss, the behavior is conditioned to be avoided. In individuals exhibiting the gambler's fallacy, this choice-outcome contingency method is impaired, and they continue to make risks after a series of losses. Possible solutions The gambler's fallacy is a deep-seated cognitive bias and can be very hard to overcome. Educating individuals about the nature of randomness has not always proven effective in reducing or eliminating any manifestation of the fallacy. Participants in a study by Beach and Swensson in 1967 were shown a shuffled deck of index cards with shapes on them, and were instructed to guess which shape would come next in a sequence. The experimental group of participants was informed about the nature and existence of the gambler's fallacy, and were explicitly instructed not to rely on run dependency to make their guesses. The control group was not given this information. The response styles of the two groups were similar, indicating that the experimental group still based their choices on the length of the run sequence. This led to the conclusion that instructing individuals about randomness is not sufficient in lessening the gambler's fallacy. An individual's susceptibility to the gambler's fallacy may decrease with age. A study by Fischbein and Schnarch in 1997 administered a questionnaire to five groups: students in grades 5, 7, 9, 11, and college students specializing in teaching mathematics. None of the participants had received any prior education regarding probability. The question asked was: "Ronni flipped a coin three times and in all cases heads came up. Ronni intends to flip the coin again. What is the chance of getting heads the fourth time?" The results indicated that as the students got older, the less likely they were to answer with "smaller than the chance of getting tails", which would indicate a negative recency effect. 
35% of the 5th graders, 35% of the 7th graders, and 20% of the 9th graders exhibited the negative recency effect. Only 10% of the 11th graders answered this way, and none of the college students did. Fischbein and Schnarch theorized that an individual's tendency to rely on the representativeness heuristic and other cognitive biases can be overcome with age. Another possible solution comes from Roney and Trick, Gestalt psychologists who suggest that the fallacy may be eliminated as a result of grouping. When a future event such as a coin toss is described as part of a sequence, no matter how arbitrarily, a person will automatically consider the event as it relates to the past events, resulting in the gambler's fallacy. When a person considers every event as independent, the fallacy can be greatly reduced. Roney and Trick told participants in their experiment that they were betting on either two blocks of six coin tosses, or on two blocks of seven coin tosses. The fourth, fifth, and sixth tosses all had the same outcome, either three heads or three tails. The seventh toss was grouped with either the end of one block, or the beginning of the next block. Participants exhibited the strongest gambler's fallacy when the seventh trial was part of the first block, directly after the sequence of three heads or tails. The researchers pointed out that the participants that did not show the gambler's fallacy showed less confidence in their bets and bet fewer times than the participants who picked with the gambler's fallacy. When the seventh trial was grouped with the second block, and was perceived as not being part of a streak, the gambler's fallacy did not occur. Roney and Trick argued that instead of teaching individuals about the nature of randomness, the fallacy could be avoided by training people to treat each event as if it is a beginning and not a continuation of previous events. They suggested that this would prevent people from gambling when they are losing, in the mistaken hope that their chances of winning are due to increase based on an interaction with previous events. Users Types of users Within a real-world setting, numerous studies have uncovered that for various decision makers placed in high stakes scenarios, it is likely they will reflect some degree of strong negative autocorrelation in their judgement. Asylum judges In a study aimed at discovering if the negative autocorrelation that exists with the gambler's fallacy existed in the decision made by U.S. asylum judges, results showed that after two successive asylum grants, a judge would be 5.5% less likely to approve a third grant. Baseball umpires In the game of baseball, decisions are made every minute. One particular decision made by umpires which is often subject to scrutiny is the 'strike zone' decision. Whenever a batter does not swing, the umpire must decide if the ball was within a fair region for the batter, known as the strike zone. If outside of this zone, the ball does not count towards outing the batter. In a study of over 12,000 games, results showed that umpires are 1.3% less likely to call a strike if the previous two balls were also strikes. Loan officers In the decision making of loan officers, it can be argued that monetary incentives are a key factor in biased decision making, rendering it harder to examine the gambler's fallacy effect. However, research shows that loan officers who are not incentivised by monetary gain are 8% less likely to approve a loan if they approved one for the previous client. 
Lottery players Lottery play and jackpots entice gamblers around the globe, with the biggest decision for hopeful winners being what numbers to pick. While most people will have their own strategy, evidence shows that after a number is selected as a winner in the current draw, the same number will experience a significant drop in selections in the following lottery. A popular study by Charles Clotfelter and Philip Cook investigated this effect in 1991, where they concluded that bettors would cease to select numbers immediately after they were selected, ultimately recovering selection popularity within three months. Soon after, a 1994 study was conducted by Dek Terrell to test the findings of Clotfelter and Cook. The key change in Terrell's study was the examination of a pari-mutuel lottery, in which a number with lower total wagers placed on it results in a higher pay-out. While this examination did conclude that players in both types of lotteries exhibited behaviour in-line with the gambler's fallacy theory, those who took part in pari-mutuel betting seemed to be less influenced. The effect of the gambler's fallacy can be observed as numbers are chosen far less frequently soon after they are selected as winners, recovering slowly over a two-month period. For example, on the 11th of April 1988, 41 players selected 244 as the winning combination. Three days later only 24 individuals selected 244, a 41.5% decrease. This is the gambler's fallacy in motion, as lottery players believe that the occurrence of a winning combination in previous days will decrease its likelihood of occurring today. Video game players Several video games feature the use of loot boxes, a collection of in-game items awarded on opening with random contents set by rarity metrics, as a monetization scheme. Since around 2018, loot boxes have come under scrutiny from governments and advocates on the basis they are akin to gambling, particularly for games aimed at youth. Some games use a special "pity-timer" mechanism, whereby if the player has opened several loot boxes in a row without obtaining a high-rarity item, subsequent loot boxes will improve the odds of a high-rarity item drop. This is considered to feed into the gambler's fallacy since it reinforces the idea that a player will eventually obtain a high-rarity item (a win) after only receiving common items from a string of previous loot boxes. See also Availability heuristic Gambler's conceit Gambler's ruin Inverse gambler's fallacy Hot hand fallacy Law of averages Martingale (betting system) Mean reversion (finance) Memorylessness Oscar's grind Regression toward the mean Statistical regularity Problem gambling References Behavioral finance Causal fallacies Gambling terminology Statistical paradoxes Cognitive inertia Gambling mathematics Relevance fallacies Cognitive biases
Gambler's fallacy
[ "Mathematics", "Biology" ]
5,685
[ "Behavior", "Statistical paradoxes", "Mathematical paradoxes", "Behavioral finance", "Mathematical problems", "Human behavior" ]
12,984
https://en.wikipedia.org/wiki/Geiger%20counter
A Geiger counter (, ; also known as a Geiger–Müller counter or G-M counter) is an electronic instrument used for detecting and measuring ionizing radiation. It is widely used in applications such as radiation dosimetry, radiological protection, experimental physics and the nuclear industry. "Geiger counter" is often used generically to refer to any form of dosimeter (or, radiation-measuring device), but scientifically, a Geiger counter is only one specific type of dosimeter. It detects ionizing radiation such as alpha particles, beta particles, and gamma rays using the ionization effect produced in a Geiger–Müller tube, which gives its name to the instrument. In wide and prominent use as a hand-held radiation survey instrument, it is perhaps one of the world's best-known radiation detection instruments. The original detection principle was realized in 1908 at the University of Manchester, but it was not until the development of the Geiger–Müller tube in 1928 that the Geiger counter could be produced as a practical instrument. Since then, it has been very popular due to its robust sensing element and relatively low cost. However, there are limitations in measuring high radiation rates and the energy of incident radiation. The Geiger counter is one of the first examples of data sonification. Principle of operation A Geiger counter consists of a Geiger–Müller tube (the sensing element which detects the radiation) and the processing electronics, which display the result. The Geiger–Müller tube is filled with an inert gas such as helium, neon, or argon at low pressure, to which a high voltage is applied. The tube briefly conducts electrical charge when high energy particles or gamma radiation make the gas conductive by ionization. The ionization is considerably amplified within the tube by the Townsend discharge effect to produce an easily measured detection pulse, which is fed to the processing and display electronics. This large pulse from the tube makes the Geiger counter relatively cheap to manufacture, as the subsequent electronics are greatly simplified. The electronics also generate the high voltage, typically 400–900 volts, that has to be applied to the Geiger–Müller tube to enable its operation. This voltage must be carefully selected, as too high a voltage will allow for continuous discharge, damaging the instrument and invalidating the results. Conversely, too low a voltage will result in an electric field that is too weak to generate a current pulse. The correct voltage is usually specified by the manufacturer. To help quickly terminate each discharge in the tube a small amount of halogen gas or organic material known as a quenching mixture is added to the fill gas. Readout There are two types of detected radiation readout: counts and radiation dose. The counts display is the simplest, and shows the number of ionizing events detected, displayed either as a count rate, such as "counts per minute" or "counts per second", or as a total number of counts over a set time period (an integrated total). The counts readout is normally used when alpha or beta particles are being detected. More complex to achieve is a display of radiation dose rate, displayed in units such as the sievert, which is normally used for measuring gamma or X-ray dose rates. A Geiger–Müller tube can detect the presence of radiation, but not its energy, which influences the radiation's ionizing effect. 
Consequently, instruments measuring dose rate require the use of an energy compensated Geiger–Müller tube, so that the dose displayed relates to the counts detected. The electronics will apply known factors to make this conversion, which is specific to each instrument and is determined by design and calibration. The readout can be analog or digital, and modern instruments offer serial communications with a host computer or network. There is usually an option to produce audible clicks representing the number of ionization events detected. This is the distinctive sound associated with handheld or portable Geiger counters. The purpose of this is to allow the user to concentrate on manipulation of the instrument while retaining auditory feedback on the radiation rate. Limitations There are two main limitations of the Geiger counter: Because the output pulse from a Geiger–Müller tube is always of the same magnitude (regardless of the energy of the incident radiation), the tube cannot differentiate between radiation types or measure radiation energy, which prevents it from correctly measuring dose rate. The tube is less accurate at high radiation rates, because each ionization event is followed by a "dead time", an insensitive period during which any further incident radiation does not result in a count. Typically, the dead time will reduce indicated count rates above about 10^4 to 10^5 counts per second, depending on the characteristic of the tube being used. While some counters have circuitry which can compensate for this (a simple dead-time correction is sketched below), ion chamber instruments are preferred for measuring very high radiation rates. Types and applications The intended detection application of a Geiger counter dictates the tube design used. Consequently, there are a great many designs, but they can be generally categorized as "end-window", windowless "thin-walled", "thick-walled", and sometimes hybrids of these types. Particle detection The first historical uses of the Geiger principle were to detect α- and β-particles, and the instrument is still used for this purpose today. For α-particles and low energy β-particles, the "end-window" type of a Geiger–Müller tube has to be used, as these particles have a limited range and are easily stopped by a solid material. Therefore, the tube requires a window which is thin enough to allow as many as possible of these particles through to the fill gas. The window is usually made of mica with a density of about 1.5–2.0 mg/cm². α-particles have the shortest range, and to detect these the window should ideally be within 10 mm of the radiation source due to α-particle attenuation. However, the Geiger–Müller tube produces a pulse output which is the same magnitude for all detected radiation, so a Geiger counter with an end window tube cannot distinguish between α- and β-particles. A skilled operator can use varying distance from a radiation source to differentiate between α- and high energy β-particles. The "pancake" Geiger–Müller tube is a variant of the end-window probe, but designed with a larger detection area to make checking quicker. However, the pressure of the atmosphere against the low pressure of the fill gas limits the window size due to the limited strength of the window membrane. Some β-particles can also be detected by a thin-walled "windowless" Geiger–Müller tube, which has no end-window, but allows high energy β-particles to pass through the tube walls. 
Although the tube walls have a greater stopping power than a thin end-window, they still allow these more energetic particles to reach the fill gas. End-window Geiger counters are still used as a general purpose, portable, radioactive contamination measurement and detection instrument, owing to their relatively low cost, robustness and relatively high detection efficiency; particularly with high energy β-particles. However, for discrimination between α- and β-particles or provision of particle energy information, scintillation counters or proportional counters should be used. Those instrument types are manufactured with much larger detector areas, which means that checking for surface contamination is quicker than with a Geiger counter. Gamma and X-ray detection Geiger counters are widely used to detect gamma radiation and X-rays, collectively known as photons, and for this the windowless tube is used. However, detection efficiency is low compared to alpha and beta particles. The article on the Geiger–Müller tube carries a more detailed account of the techniques used to detect photon radiation. For high energy photons, the tube relies on the interaction of the radiation with the tube wall, usually a material with a high atomic number such as stainless steel of 1–2 mm thickness, to produce free electrons within the tube wall, due to the photoelectric effect. If these migrate out of the tube wall, they enter and ionize the fill gas. This effect increases the detection efficiency because the low-pressure gas in the tube has poorer interaction with higher energy photons than a steel tube. However, as photon energies decrease to low levels, there is greater gas interaction, and the contribution of direct gas interaction increases. At very low energies (less than 25 keV), direct gas ionisation dominates, and a steel tube attenuates the incident photons. Consequently, at these energies, a typical tube design is a long tube with a thin wall which has a larger gas volume, to give an increased chance of direct interaction of a particle with the fill gas. Above these low energy levels, there is a considerable variance in response to different photon energies of the same intensity, and a steel-walled tube employs what is known as "energy compensation" in the form of filter rings around the naked tube, which attempts to compensate for these variations over a large energy range. A steel-walled Geiger–Müller tube is about 1% efficient over a wide range of energies. Neutron detection A variation of the Geiger tube known as a Bonner sphere can be used to exclusively measure radiation dosage from neutrons rather than from gammas by the process of neutron capture. The tube, which can contain the fill gas boron trifluoride or helium-3, is surrounded by a plastic moderator that reduces neutron energies prior to capture. When a capture occurs in the fill gas, the energy released is registered in the detector. Gamma measurement—personnel protection and process control While "Geiger counter" is practically synonymous with the hand-held variety, the Geiger principle is in wide use in installed "area gamma" alarms for personnel protection, as well as in process measurement and interlock applications. The processing electronics of such installations have a higher degree of sophistication and reliability than those of hand-held meters. 
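As an illustration of the dead-time limitation described under Limitations above, the Python sketch below applies the standard non-paralyzable dead-time correction n = m / (1 − m·τ), where m is the measured count rate and τ is the tube's dead time. The formula is a textbook relation and the example dead time of 100 microseconds is an assumed, typical order of magnitude; neither is taken from this article or from any particular instrument.

```python
def true_count_rate(measured_cps, dead_time_s=100e-6):
    """Non-paralyzable dead-time correction: estimate the true interaction rate
    from the measured count rate, given the tube dead time in seconds."""
    dead_fraction = measured_cps * dead_time_s  # fraction of each second the tube is unresponsive
    if dead_fraction >= 1.0:
        raise ValueError("measured rate is at or beyond saturation for this dead time")
    return measured_cps / (1.0 - dead_fraction)

# Example: a tube with a 100 microsecond dead time that registers 5,000 counts per
# second is actually being struck roughly 10,000 times per second.
print(round(true_count_rate(5000)))  # ~10000
```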
Physical design For hand-held units there are two fundamental physical configurations: the "integral" unit with both detector and electronics in the same unit, and the "two-piece" design which has a separate detector probe and an electronics module connected by a short cable. In the 1930s a mica window was added to the cylindrical design allowing low-penetration radiation to pass through with ease. The integral unit allows single-handed operation, so the operator can use the other hand for personal security in challenging monitoring positions, but the two-piece design allows easier manipulation of the detector, and is commonly used for alpha and beta surface contamination monitoring where careful manipulation of the probe is required or the weight of the electronics module would make operation unwieldy. A number of different sized detectors are available to suit particular situations, such as placing the probe in small apertures or confined spaces. Gamma and X-ray detectors generally use an "integral" design so the Geiger–Müller tube is conveniently within the electronics enclosure. This can easily be achieved because the casing usually has little attenuation, and is employed in ambient gamma measurements where distance from the source of radiation is not a significant factor. However, to facilitate more localised measurements such as "surface dose", the position of the tube in the enclosure is sometimes indicated by targets on the enclosure so an accurate measurement can be made with the tube at the correct orientation and a known distance from the surface. There is a particular type of gamma instrument known as a "hot spot" detector which has the detector tube on the end of a long pole or flexible conduit. These are used to measure high radiation gamma locations whilst protecting the operator by means of distance shielding. Particle detection of alpha and beta can be used in both integral and two-piece designs. A pancake probe (for alpha/beta) is generally used to increase the area of detection in two-piece instruments whilst being relatively lightweight. In integral instruments using an end window tube there is a window in the body of the casing to prevent shielding of particles. There are also hybrid instruments which have a separate probe for particle detection and a gamma detection tube within the electronics module. The detectors are switchable by the operator, depending on the radiation type that is being measured. Guidance on application use In the United Kingdom the National Radiological Protection Board issued a user guidance note on selecting the best portable instrument type for the radiation measurement application concerned. This covers all radiation protection instrument technologies and includes a guide to the use of G-M detectors. History In 1908 Hans Geiger, under the supervision of Ernest Rutherford at the Victoria University of Manchester (now the University of Manchester), developed an experimental technique for detecting alpha particles that would later be used to develop the Geiger–Müller tube in 1928. This early counter was only capable of detecting alpha particles and was part of a larger experimental apparatus. The fundamental ionization mechanism used was discovered by John Sealy Townsend between 1897 and 1901, and is known as the Townsend discharge, which is the ionization of molecules by ion impact. 
It was not until 1928 that Geiger and Walther Müller (a PhD student of Geiger) developed the sealed Geiger–Müller tube which used basic ionization principles previously used experimentally. Small and rugged, not only could it detect alpha and beta radiation as prior models had done, but also gamma radiation. Now a practical radiation instrument could be produced relatively cheaply, and so the Geiger counter was born. As the tube output required little electronic processing, a distinct advantage in the thermionic valve era due to minimal valve count and low power consumption, the instrument achieved great popularity as a portable radiation detector. Modern versions of the Geiger counter use halogen quench gases, a technique invented in 1947 by Sidney H. Liebson. Halogen compounds have superseded the organic quench gases because of their much longer life and lower operating voltages; typically 400-900 volts. Gallery See also Becquerel, the SI unit of the radioactive decay rate of a quantity of radioactive material Civil defense Geiger counters, handheld radiation monitors, both G-M and ion chambers Counting efficiency the ratio of radiation events reaching a detector and the number it counts Data sonification, the interpretation or processing of data by sound Dosimeter, a device used by personnel to measure what radiation dose they have received Ionization chamber, the simplest ionising radiation detector Gaseous ionization detector, an overview of the main gaseous detector types Geiger–Müller tube, provides a more detailed description of Geiger–Müller tube operation and types Geiger plateau, the correct operating voltage range for a Geiger–Müller tube Photon counting Radioactive decay, the process by which unstable atoms emit radiation Safecast (organization), use of Geiger–Müller counter technology in citizen science Scintillation counter, a gasless radiation detector Sievert, the SI unit of stochastic effects of radiation on the human body References External links How a Geiger counter works. Particle detectors Laboratory equipment Counting instruments Ionising radiation detectors 1908 introductions 1928 introductions English inventions German inventions Radiation protection
Geiger counter
[ "Mathematics", "Technology", "Engineering" ]
3,068
[ "Radioactive contamination", "Counting instruments", "Particle detectors", "Measuring instruments", "Ionising radiation detectors", "Numeral systems" ]
12,988
https://en.wikipedia.org/wiki/Gin
Gin is a distilled alcoholic drink flavoured with juniper berries and other botanical ingredients. Gin originated as a medicinal liquor made by monks and alchemists across Europe. Modern gin was developed in Flanders and the Netherlands to provide aqua vitae from distillates of grapes and grains, and it became an object of commerce in the spirits industry. Gin became popular in England after the introduction of jenever, a Dutch and Belgian liquor. Although this development had been taking place since the early 17th century, gin became widespread after the 1688 Glorious Revolution led by William of Orange and subsequent import restrictions on French brandy. Gin emerged as the national alcoholic drink of England during the so-called Gin Craze of 1695–1735. Gin is produced from a wide range of herbal ingredients in a number of distinct styles and brands. After juniper, gin tends to be flavoured with herbs, spices, floral or fruit flavours, or often a combination. It is commonly mixed with tonic water in a gin and tonic. Gin is also used as a base spirit to produce flavoured, gin-based liqueurs, for example sloe gin, traditionally produced by the addition of fruit, flavourings and sugar. Etymology The name gin is a shortened form of the older English word genever, related to the French word genièvre and the Dutch word jenever. All ultimately derive from juniperus, the Latin for juniper. History Origin: 13th-century mentions The earliest known written reference to jenever appears in a 13th-century encyclopaedic work written in Bruges, with the earliest printed recipe for jenever dating from a 16th-century work published in Antwerp. The monks used it to distill sharp, fiery, alcoholic tonics, one of which was distilled from wine infused with juniper berries. They were making medicines, hence the juniper. As a medicinal herb, juniper had been an essential part of doctors' kits for centuries; plague doctors stuffed the beaks of their plague masks with juniper to supposedly protect them from the Black Death. Across Europe, apothecaries handed out juniper tonic wines for coughs, colds, pains, strains, ruptures and cramps. These were a popular cure-all, though some thought these tonic wines to be a little too popular, and consumed for enjoyment rather than medicinal purposes. 17th century The physician Franciscus Sylvius has been falsely credited with the invention of gin in the mid-17th century, as the existence of jenever is confirmed in Philip Massinger's play The Duke of Milan (1623), when Sylvius would have been about nine years old. It is further claimed that English soldiers who provided support in Antwerp against the Spanish in 1585, during the Eighty Years' War, were already drinking jenever for its calming effects before battle, from which the term Dutch courage is believed to have originated. By the mid-17th century, numerous small Dutch and Flemish distillers had popularized the re-distillation of malted barley spirit or malt wine with juniper, along with anise, caraway, coriander, etc., which were sold in pharmacies and used to treat such medical problems as kidney ailments, lumbago, stomach ailments, gallstones, and gout. Gin emerged in England in varying forms by the early 17th century, and at the time of the Stuart Restoration, enjoyed a brief resurgence. Gin became vastly more popular as an alternative to brandy, when William III and Mary II became co-sovereigns of England, Scotland and Ireland after leading the Glorious Revolution. Particularly in crude, inferior forms, it was more likely to be flavoured with turpentine. 
Historian Angela McShane has described it as a "Protestant drink" as its rise was brought about by a Protestant king, fuelling his armies fighting the Catholic Irish and French. 18th century Gin drinking in England rose significantly after the government allowed unlicensed gin production, and at the same time imposed a heavy duty on all imported spirits such as French brandy. This created a larger market for poor-quality barley that was unfit for brewing beer, and in 1695–1735 thousands of gin-shops sprang up throughout England, a period known as the Gin Craze. Because of the low price of gin compared with other drinks available at the time and in the same location, gin began to be consumed regularly by the poor. Of the 15,000 drinking establishments in London, not including coffee shops and drinking chocolate shops, over half were gin shops. Beer maintained a healthy reputation as it was often safer to drink the brewed ale than unclean plain water. Gin, though, was blamed for various social problems, and it may have been a factor in the higher death rates which stabilized London's previously growing population. The reputation of the two drinks was illustrated by William Hogarth in his engravings Beer Street and Gin Lane (1751), described by the BBC as "arguably the most potent anti-drug poster ever conceived". The negative reputation of gin survives in the English language in terms like gin mills or the American phrase gin joints to describe disreputable bars, or gin-soaked to refer to drunks. The epithet mother's ruin is a common British name for gin, the origin of which is debated. The Gin Act 1736 imposed high taxes on retailers and led to riots in the streets. The prohibitive duty was gradually reduced and finally abolished in 1742. The Gin Act 1751 was more successful, but it forced distillers to sell only to licensed retailers and brought gin shops under the jurisdiction of local magistrates. Gin in the 18th century was produced in pot stills, and thus had a maltier profile than modern London gin. In London in the early 18th century, much gin was distilled legally in residential houses (there were estimated to be 1,500 residential stills in 1726) and was often flavoured with turpentine to generate resinous woody notes in addition to the juniper. As late as 1913, Webster's Dictionary states without further comment, "'common gin' is usually flavoured with turpentine". Another common variation was to distill in the presence of sulfuric acid. Although the acid itself does not distil, it imparts the additional aroma of diethyl ether to the resulting gin. Sulfuric acid subtracts one water molecule from two ethanol molecules to create diethyl ether, which also forms an azeotrope with ethanol, and therefore distils with it. The result is a sweeter spirit, and one that may have possessed additional analgesic or even intoxicating effects – see Paracelsus. Dutch or Belgian gin, also known as jenever or genever, evolved from malt wine spirits, and is a distinctly different drink from later styles of gin. Schiedam, a city in the province of South Holland, is famous for its jenever-producing history. The same is true of Hasselt in the Belgian province of Limburg. The oude (old) style of jenever remained very popular throughout the 19th century, when it was referred to as Holland or Geneva gin in popular, American, pre-Prohibition bartender guides. The 18th century gave rise to a style of gin referred to as Old Tom gin, which is a softer, sweeter style of gin, often containing sugar. 
Old Tom gin faded in popularity by the early 20th century. 19th–20th centuries The invention and development of the column still (1826 and 1831) made the distillation of neutral spirits practical, thus enabling the creation of the "London dry" style that evolved later in the 19th century. In tropical British colonies gin was used to mask the bitter flavour of quinine, which was the only effective anti-malarial compound. Quinine was dissolved in carbonated water to form tonic water; the resulting cocktail is the gin and tonic, although modern tonic water contains only a trace of quinine as a flavouring. Gin is a common base spirit for many mixed drinks, including the martini. Secretly produced "bathtub gin" was available in the speakeasies and "blind pigs" of Prohibition-era America as a result of its relatively simple production. Sloe gin is traditionally described as a liqueur made by infusing sloes (the fruit of the blackthorn) in gin, although modern versions are almost always compounded from neutral spirits and flavourings. Similar infusions are possible with other fruits, such as damsons. Another popular gin-based liqueur with a longstanding history is Pimm's No.1 Cup (25% alcohol by volume (ABV)), which is a fruit cup flavoured with citrus and spices. The National Jenever Museums are located in Hasselt in Belgium and Schiedam in the Netherlands. 21st century Since 2013, gin has been in a period of ascendancy worldwide, with many new brands and producers entering the category, leading to strong growth, innovation and change. More recently, gin-based liqueurs have been popularised, reaching a market outside that of traditional gin drinkers, including fruit-flavoured and usually coloured "Pink gin", rhubarb gin, spiced gin, violet gin, blood orange gin and sloe gin. Surging popularity and unchecked competition have led consumers to conflate gin with gin liqueurs, and many products straddle, push or break the boundaries of established definitions in what is a formative period for the industry. Legal definition Geographical indication Some legal classifications (protected denomination of origin) define gin as only originating from specific geographical areas without any further restrictions (e.g. Plymouth gin (PGI now lapsed), Ostfriesischer Korngenever, Slovenská borovička, Kraški Brinjevec, etc.), while other common descriptors refer to classic styles that are culturally recognised but not legally defined (e.g. Old Tom gin). Sloe gin is also worth mentioning: although technically a gin-based liqueur, it is unique in that the EU spirit drink regulations stipulate that the colloquial term "sloe gin" can legally be used without the "liqueur" suffix when certain production criteria are met. Canada According to the Canadian Food and Drug Regulations, gin is produced through redistillation of alcohol from juniper berries or a mixture of more than one such redistilled food product. The Canadian Food and Drug Regulations recognise three different definitions of gin (Genever, Gin, and London or Dry gin) that loosely approximate the US definitions. Whereas a more detailed regulation is provided for Holland gin or genever, no distinction is made between compounded gin and distilled gin. Either compounded or distilled gin can be labelled as Dry Gin or London Dry Gin if it does not contain any sweetening agents. Genever and Gin may contain no more than two percent sweetening agents.
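As a rough illustration of the Canadian thresholds just described (no sweetening for Dry Gin or London Dry Gin; at most two percent sweetening agents for Genever and Gin), the following minimal Python sketch encodes the rule. The function and argument names are invented for this example, and it is not a substitute for the actual regulatory text.

def canadian_gin_label_check(style, sweetening_percent):
    # Toy encoding of the thresholds described above, not the regulation itself.
    if sweetening_percent == 0:
        return "may be labelled Dry Gin or London Dry Gin"
    if style in ("Genever", "Gin") and sweetening_percent <= 2.0:
        return "within the sweetening limit for " + style
    return "outside the thresholds described in this article"

print(canadian_gin_label_check("Gin", 1.5))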
European Union Although many different styles of gin have evolved, it is legally differentiated into four categories in the European Union, as follows. Juniper-flavoured spirit drink Juniper-flavoured spirit drinks include the earliest class of gin, which is produced by pot distilling a fermented grain mash to moderate strength, e.g., 68% ABV, and then redistilling it with botanicals to extract the aromatic compounds. It must be bottled at a minimum of 30% ABV. Juniper-flavoured spirit-drinks may also be sold under the names Wacholder or Ginebra. Gin Gin is a juniper-flavoured spirit made not via the redistillation of botanicals, but by simply adding approved natural flavouring substances to a neutral spirit of agricultural origin. The predominant flavour must be juniper. Minimum bottled strength is 37.5% ABV. Distilled gin Distilled gin is produced exclusively by redistilling ethanol of agricultural origin with an initial strength of 96% ABV (the azeotrope of water and ethanol), in the presence of juniper berries and of other natural botanicals, provided that the juniper taste is predominant. Gin obtained simply by adding essences or flavourings to ethanol of agricultural origin is not distilled gin. Minimum bottled strength is 37.5% ABV. London gin London gin is obtained exclusively from ethanol of agricultural origin with a maximum methanol content of per hectolitre of 100% ABV equivalent, whose flavour is introduced exclusively through the re-distillation in traditional stills of ethanol in the presence of all the natural plant materials used, the resultant distillate of which is at least 70% ABV. London gin may not contain added sweetening exceeding of sugars per litre of the final product, nor colourants, nor any added ingredients other than water. The predominant flavour must be juniper. The term London gin may be supplemented by the term dry. Minimum bottled strength is 37.5% ABV. Although London gin is the strictest of distilled gin categories, it is not a geographical designation. United States In the United States of America, "gin" is defined as an alcoholic beverage of no less than 40% ABV (80 proof) that possesses the characteristic flavour of juniper berries. Gin produced only through the redistillation of botanicals can be further distinguished and marketed as "distilled gin". Production Methods Gin can be broadly differentiated into three basic styles reflecting modernization in its distillation and flavouring techniques: Pot distilled gin represents the earliest style of gin, and is traditionally produced by pot distilling a fermented grain mash (malt wine) from barley or other grains, then redistilling it with flavouring botanicals to extract the aromatic compounds. A double gin can be produced by redistilling the first gin again with more botanicals. Due to the use of pot stills, the alcohol content of the distillate is relatively low; around 68% ABV for a single distilled gin or 76% ABV for a double gin. This type of gin is often aged in tanks or wooden casks, and retains a heavier, malty flavour that gives it a marked resemblance to whisky. Korenwijn (grain wine) and the oude (old) style of Geneva gin or Holland gin represent the most prominent gins of this class. Column distilled gin evolved following the invention of the Coffey still, and is produced by first distilling high proof (e.g. 96% ABV) neutral spirits from a fermented mash or wash using a refluxing still such as a column still. 
The fermentable base for this spirit may be derived from grain, sugar beets, grapes, potatoes, sugar cane, plain sugar, or any other material of agricultural origin. The highly concentrated spirit is then redistilled with juniper berries and other botanicals in a pot still. Most often, the botanicals are suspended in a "gin basket" positioned within the head of the still, which allows the hot alcoholic vapours to extract flavouring components from the botanical charge. This method yields a gin lighter in flavour than the older pot still method, and results in either a distilled gin or London dry gin, depending largely upon how the spirit is finished. Compound gin is made by compounding (blending) neutral spirits with essences, other natural flavourings, or ingredients left to infuse in neutral spirit without redistillation. Flavouring Popular botanicals or flavouring agents for gin, besides the required juniper, often include citrus elements, such as lemon and bitter orange peel, as well as a combination of other spices, which may include any of anise, angelica root and seed, orris root, cardamom, pine needles and cone, licorice root, cinnamon, almond, cubeb, savory, lime peel, grapefruit peel, dragon eye (longan), saffron, baobab, frankincense, coriander, grains of paradise, nutmeg, cassia bark or others. The different combinations and concentrations of these botanicals in the distillation process cause the variations in taste among gin products. Chemical research has begun to identify the various chemicals that are extracted in the distillation process and contribute to gin's flavouring. For example, juniper monoterpenes come from juniper berries. Citric and berry flavours come from chemicals such as limonene and gamma-terpinene linalool found in limes, blueberries and hops amongst others. Floral notes come from compounds such as geraniol and eugenol. Spice-like flavours come from chemicals such as sabinene, delta-3-carene, and para-cymene. In 2018, more than half the growth in the UK Gin category was contributed by flavoured gin. Similar spirits A similar drink, also made with juniper berries and called Borovička, is produced in the Slovak Republic. Consumption Classic gin cocktails A well known gin cocktail is the martini, traditionally made with gin and dry vermouth. Several other notable gin-based drinks include: 20th Century Aviation Bee's Knees Bloody Margaret Fallen Angel French 75 Gibson Gimlet Gin and tonic Gin Fizz Gin Rickey Lonkero Moon River Negroni Old Etonian Pink Gin Ramos Gin Fizz Singapore Sling The Last Word Tom Collins Vesper White Lady Notable brands Archie Rose Distilling Co. 
– Sydney microdistillery Aviation American Gin – Oregon, US, one of the early New Western style gins Beefeater – England, first produced in 1820 BOLS Damrak – Netherlands, jenever The Botanist – Hebridean island of Islay, Scotland, made with 31 botanicals, 22 being native to the island Blackwood's – Scotland Bombay Sapphire – England, distilled with ten botanicals Boodles British Gin – England Booth's Gin – England Broker's Gin – England Catoctin Creek – organic gin from Virginia, US Citadelle – France Cork Dry Gin – Ireland Gilbey's – England Gilpin's Westmorland Extra Dry Gin – England Ginebra San Miguel – Philippines Gordon's – England, first distilled in 1763 Greenall's – England Hendrick's Gin – Scotland, infused with flavours of cucumber and rose petal Konig's Westphalian Gin – Germany Leopolds Gin – Colorado, US Masons Gin – North Yorkshire, England Nicholson's – England, made in London from 1730 Plymouth – England, first distilled in 1793 Pickering's Gin – Scotland, from Edinburgh's first gin distillery in 150 years Sacred Microdistillery – England, from one of London's new micro-distilleries Seagram's – Quebec, Canada Sipsmith – England Smeets – Belgium, jenever Steinhäger – Germany St. George – California, US Taaka – Louisiana, US Tanqueray – England, first distilled in 1830 Uganda Waragi – Uganda, triple distilled Waragi Vickers – South Australia Whitley Neill Gin – England See also References Further reading External links EU definition original source – scroll down to paras: 20 nand 21 of Annex II – Spirit Drinks Gin news page – Alcohol and Drugs History Society Gin Palaces at The Dictionary of Victorian London New Western Style Gins at .drinkspirits.com Map of Scottish Gin Producers at ginspiredscotland.com History of Gin at Difford's Guide Distilled drinks Dutch inventions
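Tying together the four EU categories defined earlier in the article, the sketch below maps a simplified description of base spirit, flavouring method and bottling strength to the category names used above. It is an illustrative toy that ignores the methanol and sugar limits and other conditions; all names are invented for the example.

def eu_gin_category(base, flavouring, bottled_abv):
    # base: "pot-distilled grain mash" or "neutral spirit"
    # flavouring: "redistilled with botanicals" or "added flavourings"
    # Rough reading of the EU categories described above; juniper must predominate in all cases.
    if base == "pot-distilled grain mash" and flavouring == "redistilled with botanicals":
        return "juniper-flavoured spirit drink" if bottled_abv >= 30.0 else None
    if bottled_abv < 37.5:
        return None
    if flavouring == "added flavourings":
        return "gin"
    if flavouring == "redistilled with botanicals":
        return "distilled gin (or London gin if the stricter conditions are met)"
    return None

print(eu_gin_category("neutral spirit", "redistilled with botanicals", 40.0))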
Gin
[ "Chemistry" ]
4,015
[ "Distillation", "Distilled drinks" ]
13,017
https://en.wikipedia.org/wiki/Gilbert%20N.%20Lewis
Gilbert Newton Lewis (October 23 or October 25, 1875 – March 23, 1946) was an American physical chemist and a dean of the college of chemistry at University of California, Berkeley. Lewis was best known for his discovery of the covalent bond and his concept of electron pairs; his Lewis dot structures and other contributions to valence bond theory have shaped modern theories of chemical bonding. Lewis successfully contributed to chemical thermodynamics, photochemistry, and isotope separation, and is also known for his concept of acids and bases. Lewis also researched on relativity and quantum physics, and in 1926 he coined the term "photon" for the smallest unit of radiant energy. G. N. Lewis was born in 1875 in Weymouth, Massachusetts. After receiving his PhD in chemistry from Harvard University and studying abroad in Germany and the Philippines, Lewis moved to California in 1912 to teach chemistry at the University of California, Berkeley, where he became the dean of the college of chemistry and spent the rest of his life. As a professor, he incorporated thermodynamic principles into the chemistry curriculum and reformed chemical thermodynamics in a mathematically rigorous manner accessible to ordinary chemists. He began measuring the free energy values related to several chemical processes, both organic and inorganic. In 1916, he also proposed his theory of bonding and added information about electrons in the periodic table of the chemical elements. In 1933, he started his research on isotope separation. Lewis worked with hydrogen and managed to purify a sample of heavy water. He then came up with his theory of acids and bases, and did work in photochemistry during the last years of his life. Though he was nominated 41 times, G. N. Lewis never won the Nobel Prize in Chemistry, resulting in a major Nobel Prize controversy. On the other hand, Lewis mentored and influenced numerous Nobel laureates at Berkeley including Harold Urey (1934 Nobel Prize), William F. Giauque (1949 Nobel Prize), Glenn T. Seaborg (1951 Nobel Prize), Willard Libby (1960 Nobel Prize), Melvin Calvin (1961 Nobel Prize) and so on, turning Berkeley into one of the world's most prestigious centers for chemistry. On March 23, 1946, Lewis was found dead in his Berkeley laboratory where he had been working with hydrogen cyanide; many postulated that the cause of his death was suicide. After Lewis' death, his children followed their father's career in chemistry, and the Lewis Hall on the Berkeley campus is named after him. Biography Early life Lewis was born in 1875 and raised in Weymouth, Massachusetts, where there exists a street named for him, G.N. Lewis Way, off Summer Street. Additionally, the wing of the new Weymouth High School Chemistry department has been named in his honor. Lewis received his primary education at home from his parents, Frank Wesley Lewis, a lawyer of independent character, and Mary Burr White Lewis. He read at age three and was intellectually precocious. In 1884 his family moved to Lincoln, Nebraska, and in 1889 he received his first formal education at the university preparatory school. In 1893, after two years at the University of Nebraska, Lewis transferred to Harvard University, where he obtained his B.S. in 1896. After a year of teaching at Phillips Academy in Andover, Lewis returned to Harvard to study with the physical chemist T. W. Richards and obtained his Ph.D. in 1899 with a dissertation on electrochemical potentials. 
After a year of teaching at Harvard, Lewis took a traveling fellowship to Germany, the center of physical chemistry, and studied with Walther Nernst at Göttingen and with Wilhelm Ostwald at Leipzig. While working in Nernst's lab, Lewis apparently developed a lifelong enmity with Nernst. In the following years, Lewis started to criticize and denounce his former teacher on many occasions, calling Nernst's work on his heat theorem "a regrettable episode in the history of chemistry". A Swedish friend of Nernst's, Wilhelm Palmær, was a member of the Nobel Chemistry Committee. There is evidence that he used the Nobel nominating and reporting procedures to block a Nobel Prize for Lewis in thermodynamics by nominating Lewis for the prize three times, and then using his position as a committee member to write negative reports. Harvard, Manila, and MIT After his stay in Nernst's lab, Lewis returned to Harvard in 1901 as an instructor for three more years. He was appointed instructor in thermodynamics and electrochemistry. In 1904 Lewis was granted a leave of absence and became Superintendent of Weights and Measures for the Bureau of Science in Manila, Philippines. The next year he returned to Cambridge, Massachusetts when the Massachusetts Institute of Technology (MIT) appointed him to a faculty position, in which he had a chance to join a group of outstanding physical chemists under the direction of Arthur Amos Noyes. He became an assistant professor in 1907, associate professor in 1908, and full professor in 1911. University of California, Berkeley G. N. Lewis left MIT in 1912 to become a professor of physical chemistry and dean of the College of Chemistry at the University of California, Berkeley. On June 21, 1912, he married Mary Hinckley Sheldon, daughter of a Harvard professor of Romance languages. They had two sons, both of whom became chemistry professors, and a daughter. In 1913, he joined the Alpha Chi Sigma at Berkeley, the professional chemistry fraternity. Lewis' graduate advisees at Berkeley went on to be exceptionally successful with the Nobel Committee. 14 Nobel prizes were eventually awarded to the men he took as students. The best-known of these include Harold Urey (1934 Nobel Prize), William F. Giauque (1949 Nobel Prize), Glenn T. Seaborg (1951 Nobel Prize), Willard Libby (1960 Nobel Prize), Melvin Calvin (1961 Nobel Prize). Due to his efforts, the college of chemistry at Berkeley became one of the top chemistry centers in the world. While at Berkeley he also refused entry to women, including preventing Margaret Melhase from conducting graduate studies. Melhase had previously co-discovered Cesium-137 with Seaborg as an undergraduate. In 1913, he was elected to the National Academy of Sciences. He was elected to the American Philosophical Society in 1918. He resigned in 1934, refusing to state the cause for his resignation; it has been speculated that it was due to a dispute over the internal politics of that institution or to the failure of those he had nominated to be elected. His decision to resign may also have been sparked by his resentment over the award of the 1934 Nobel Prize for chemistry to his student, Harold Urey, for his 1931 isolation of deuterium and the confirmation of its spectrum. This was a prize Lewis almost certainly felt he should have shared for his efforts to purify and characterize heavy water. Death On 23 March 1946, a graduate student found Lewis's lifeless body under a laboratory workbench at Berkeley. 
Lewis had been working on an experiment with liquid hydrogen cyanide, and deadly fumes from a broken line had leaked into the laboratory. The coroner ruled that the cause of death was coronary artery disease, because of a lack of any signs of cyanosis, but some believe that it may have been a suicide. Berkeley Emeritus Professor William Jolly, who reported the various views on Lewis's death in his 1987 history of UC Berkeley's College of Chemistry, From Retorts to Lasers, wrote that a higher-up in the department believed that Lewis had committed suicide. If Lewis's death was indeed a suicide, a possible explanation was depression brought on by a lunch with Irving Langmuir. Langmuir and Lewis had a long rivalry, dating back to Langmuir's extensions of Lewis's theory of the chemical bond. Langmuir had been awarded the 1932 Nobel Prize in chemistry for his work on surface chemistry, while Lewis had not received the Prize despite having been nominated 41 times. On the day of Lewis's death, Langmuir and Lewis had met for lunch at Berkeley, a meeting that Michael Kasha recalled only years later. Associates reported that Lewis came back from lunch in a dark mood, played a morose game of bridge with some colleagues, then went back to work in his lab. An hour later, he was found dead. Langmuir's papers at the Library of Congress confirm that he had been on the Berkeley campus that day to receive an honorary degree. Lewis Hall at Berkeley, built in 1948, is named in his honor. Scientific achievements Thermodynamics Most of Lewis’ lasting interests originated during his Harvard years. The most important was thermodynamics, a subject in which Richards was very active at that time. Although most of the important thermodynamic relations were known by 1895, they were seen as isolated equations, and had not yet been rationalized as a logical system, from which, given one relation, the rest could be derived. Moreover, these relations were inexact, applying only to ideal chemical systems. These were two outstanding problems of theoretical thermodynamics. In two long and ambitious theoretical papers in 1900 and 1901, Lewis tried to provide a solution. Lewis introduced the thermodynamic concept of activity and coined the term "fugacity". His new idea of fugacity, or "escaping tendency", was a function with the dimensions of pressure which expressed the tendency of a substance to pass from one chemical phase to another. Lewis believed that fugacity was the fundamental principle from which a system of real thermodynamic relations could be derived. This hope was not realized, though fugacity did find a lasting place in the description of real gases. Lewis’ early papers also reveal an unusually advanced awareness of J. W. Gibbs's and P. Duhem's ideas of free energy and thermodynamic potential. These ideas were well known to physicists and mathematicians, but not to most practical chemists, who regarded them as abstruse and inapplicable to chemical systems. Most chemists relied on the familiar thermodynamics of heat (enthalpy) of Berthelot, Ostwald, and Van ’t Hoff, and the calorimetric school. Heat of reaction is not, of course, a measure of the tendency of chemical changes to occur, and Lewis realized that only free energy and entropy could provide an exact chemical thermodynamics. He derived free energy from fugacity; he tried, without success, to obtain an exact expression for the entropy function, which in 1901 had not been defined at low temperatures. 
Richards too tried and failed, and not until Nernst succeeded in 1907 was it possible to calculate entropies unambiguously. Although Lewis’ fugacity-based system did not last, his early interest in free energy and entropy proved most fruitful, and much of his career was devoted to making these useful concepts accessible to practical chemists. At Harvard, Lewis also wrote a theoretical paper on the thermodynamics of blackbody radiation in which he postulated that light has a pressure. He later revealed that he had been discouraged from pursuing this idea by his older, more conservative colleagues, who were unaware that Wilhelm Wien and others were successfully pursuing the same line of thought. Lewis’ paper remained unpublished; but his interest in radiation and quantum theory, and (later) in relativity, sprang from this early, aborted effort. From the start of his career, Lewis regarded himself as both chemist and physicist. Valence theory About 1902 Lewis started to use unpublished drawings of cubical atoms in his lecture notes, in which the corners of the cube represented possible electron positions. Lewis later cited these notes in his classic 1916 paper on chemical bonding, as being the first expression of his ideas. A third major interest that originated during Lewis’ Harvard years was his valence theory. In 1902, while trying to explain the laws of valence to his students, Lewis conceived the idea that atoms were built up of a concentric series of cubes with electrons at each corner. This “cubic atom” explained the cycle of eight elements in the periodic table and was in accord with the widely accepted belief that chemical bonds were formed by transfer of electrons to give each atom a complete set of eight. This electrochemical theory of valence found its most elaborate expression in the work of Richard Abegg in 1904, but Lewis’ version of this theory was the only one to be embodied in a concrete atomic model. Again Lewis’ theory did not interest his Harvard mentors, who, like most American chemists of that time, had no taste for such speculation. Lewis did not publish his theory of the cubic atom, but in 1916 it became an important part of his theory of the shared electron pair bond. In 1916, he published his classic paper on chemical bonding "The Atom and the Molecule" in which he formulated the idea of what would become known as the covalent bond, consisting of a shared pair of electrons, and he defined the term odd molecule (the modern term is free radical) when an electron is not shared. He included what became known as Lewis dot structures as well as the cubical atom model. These ideas on chemical bonding were expanded upon by Irving Langmuir and became the inspiration for the studies on the nature of the chemical bond by Linus Pauling. Acids and bases In 1923, he formulated the electron-pair theory of acid–base reactions. In this theory of acids and bases, a "Lewis acid" is an electron-pair acceptor and a "Lewis base" is an electron-pair donor. This year he also published a monograph on his theories of the chemical bond. Based on work by J. Willard Gibbs, it was known that chemical reactions proceeded to an equilibrium determined by the free energy of the substances taking part. Lewis spent 25 years determining free energies of various substances. In 1923 he and Merle Randall published the results of this study, which helped formalize modern chemical thermodynamics. 
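For reference, the fugacity concept discussed above can be stated in modern notation (a standard textbook form rather than Lewis's original presentation): for a gas at temperature T, the fugacity f is defined through the chemical potential by

\mu = \mu^{\circ} + RT \ln \frac{f}{p^{\circ}}, \qquad \text{with } \frac{f}{p} \to 1 \text{ as } p \to 0,

so that fugacity acts as an "effective pressure" that makes the ideal-gas expressions exact for real gases, which is the sense in which Lewis used it as an escaping tendency.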
Heavy water Lewis was the first to produce a pure sample of deuterium oxide (heavy water) in 1933 and the first to study survival and growth of life forms in heavy water. By accelerating deuterons (deuterium nuclei) in Ernest O. Lawrence's cyclotron, he was able to study many of the properties of atomic nuclei. During the 1930s, he was mentor to Glenn T. Seaborg, who was retained for post-doctoral work as Lewis' personal research assistant. Seaborg went on to win the 1951 Nobel Prize in Chemistry and have the element seaborgium named in his honor while he was still alive. O4 Tetraoxygen In 1924, by studying the magnetic properties of solutions of oxygen in liquid nitrogen, Lewis found that O4 molecules were formed. This was the first evidence for tetratomic oxygen. Relativity and quantum physics In 1908 he published the first of several papers on relativity, in which he derived the mass-energy relationship in a different way from Albert Einstein's derivation. In 1909, he and Richard C. Tolman combined his methods with special relativity. In 1912 Lewis and Edwin Bidwell Wilson presented a major work in mathematical physics that not only applied synthetic geometry to the study of spacetime, but also noted the identity of a spacetime squeeze mapping and a Lorentz transformation. In 1926, he coined the term "photon" for the smallest unit of radiant energy (light). Actually, the outcome of his letter to Nature was not what he had intended. In the letter, he proposed a photon being a structural element, not energy. He insisted on the need for a new variable, the number of photons. Although his theory differed from the quantum theory of light introduced by Albert Einstein in 1905, his name was adopted for what Einstein had called a light quantum (Lichtquant in German). Other achievements In 1921, Lewis was the first to propose an empirical equation describing the failure of strong electrolytes to obey the law of mass action, a problem that had perplexed physical chemists for twenty years. His empirical equations for what he called ionic strength were later confirmed to be in accord with the Debye–Hückel equation for strong electrolytes, published in 1923. Over the course of his career, Lewis published on many other subjects besides those mentioned in this entry, ranging from the nature of light quanta to the economics of price stabilization. In the last years of his life, Lewis and graduate student Michael Kasha, his last research associate, established that phosphorescence of organic molecules involves emission of light from one electron in an excited triplet state (a state in which two electrons have their spin vectors oriented in the same direction, but in different orbitals) and measured the paramagnetism of this triplet state. See also History of molecular theory References Further reading Coffey, Patrick (2008) Cathedrals of Science: The Personalities and Rivalries That Made Modern Chemistry. Oxford University Press. External links Key Participants: G. N. 
Lewis - Linus Pauling and the Nature of the Chemical Bond: A Documentary History Eric Scerri, The Periodic Table, Its Story and Its Significance, Oxford University Press, 2007, see chapter 8 especially National Academy of Sciences Biographical Memoir Members of the United States National Academy of Sciences American physical chemists Thermodynamicists American relativity theorists Harvard University alumni Massachusetts Institute of Technology alumni UC Berkeley College of Chemistry faculty Foreign members of the Royal Society Honorary members of the USSR Academy of Sciences 1875 births 1946 deaths 1946 suicides Suicides by cyanide poisoning People from Weymouth, Massachusetts University of Nebraska alumni Members of the American Philosophical Society
Gilbert N. Lewis
[ "Physics", "Chemistry" ]
3,631
[ "Thermodynamics", "Thermodynamicists" ]
13,034
https://en.wikipedia.org/wiki/Geyser
A geyser (, ) is a spring with an intermittent discharge of water ejected turbulently and accompanied by steam. The formation of geysers is fairly rare, and is caused by particular hydrogeological conditions that exist only in a few places on Earth. Generally, geyser field sites are located near active volcanic areas, and the geyser effect is due to the proximity of magma. Surface water works its way down to an average depth of around where it contacts hot rocks. The pressurized water boils, and this causes the geyser effect of hot water and steam spraying out of the geyser's surface vent. A geyser's eruptive activity may change or cease due to ongoing mineral deposition within the geyser plumbing, exchange of functions with nearby hot springs, earthquake influences, and human intervention. Like many other natural phenomena, geysers are not unique to Earth. Jet-like eruptions, often referred to as cryogeysers, have been observed on several of the moons of the outer Solar System. Due to the low ambient pressures, these eruptions consist of vapour without liquid; they are made more easily visible by particles of dust and ice carried aloft by the gas. Water vapour jets have been observed near the south pole of Saturn's moon Enceladus, while nitrogen eruptions have been observed on Neptune's moon Triton. There are also signs of carbon dioxide eruptions from the southern polar ice cap of Mars. In the case of Enceladus, the plumes are believed to be driven by internal energy. In the cases of the venting on Mars and Triton, the activity may be a result of solar heating via a solid-state greenhouse effect. In all three cases, there is no evidence of the subsurface hydrological system which differentiates terrestrial geysers from other sorts of venting, such as fumaroles. Etymology The term 'geyser' in English dates back to the late 18th century and comes from Geysir, which is a geyser in Iceland. Its name means "one who gushes". Geology Form and function Geysers are nonpermanent geological features. Geysers are generally associated with areas of recent magmatism. As the water boils, the resulting pressure forces a superheated column of steam and water to the surface through the geyser's internal plumbing. The formation of geysers specifically requires the combination of three geologic conditions that are usually found in volcanic terrain: heat, water, and a subsurface hydraulic system with the right geometry. The heat needed for geyser formation comes from magma that needs to be close to the surface of the Earth. For the heated water to form a geyser, a plumbing system (made of fractures, fissures, porous spaces, and sometimes cavities) is required. This includes a reservoir to hold the water while it is being heated. Geysers tend to be coated with geyserite, or siliceous sinter. The water in geysers comes in contact with hot silica-containing rocks, such as rhyolite. The heated water dissolves the silica. As it gets closer to the surface, the water cools and the silica drops out of solution, leaving a deposit of amorphous opal. Gradually the opal anneals into quartz, forming geyserite. Geyserite often covers the microbial mats that grow in geysers. As the mats grow and the silica is deposited, the mats can form up to 50% of the volume of the geyserite. Eruptions Geyser activity, like all hot spring activity, is caused by surface water gradually seeping down through the ground until it meets geothermally heated rock. 
In non-eruptive hot springs, the heated water then rises back toward the surface by convection through porous and fractured rocks, while in geysers, the water instead is explosively forced upwards by the high steam pressure created when water boils below. Geysers also differ from non-eruptive hot springs in their subterranean structure: geysers have constrictions in their plumbing that creates pressure build-up. As the geyser fills, the water at the top of the column cools off, but because of the narrowness of the channel, convective cooling of the water in the reservoir is impossible. The cooler water above presses down on the hotter water beneath, not unlike the lid of a pressure cooker, allowing the water in the reservoir to become superheated, i.e. to remain liquid at temperatures well above the standard-pressure boiling point. Ultimately, the temperatures near the bottom of the geyser rise to a point where boiling begins, forcing steam bubbles to rise to the top of the column. As they burst through the geyser's vent, some water overflows or splashes out, reducing the weight of the column and thus the pressure on the water below. With this release of pressure, the superheated water flashes into steam, boiling violently throughout the column. The resulting froth of expanding steam and hot water then sprays out of the geyser vent. Eventually the water remaining in the geyser cools back to below the boiling point and the eruption ends; heated groundwater begins seeping back into the reservoir, and the whole cycle begins again. The duration of eruptions and the time between successive eruptions vary greatly from geyser to geyser; Strokkur in Iceland erupts for a few seconds every few minutes, while Grand Geyser in the United States erupts for up to 10 minutes every 8–12 hours. General categorization There are two types of geysers: fountain geysers which erupt from pools of water, typically in a series of intense, even violent, bursts; and cone geysers which erupt from cones or mounds of siliceous sinter (including geyserite), usually in steady jets that last anywhere from a few seconds to several minutes. Old Faithful, perhaps the best-known geyser at Yellowstone National Park, is an example of a cone geyser. Grand Geyser, the tallest predictable geyser on Earth (although Geysir in Iceland is taller, it is not predictable), also at Yellowstone National Park, is an example of a fountain geyser. There are many volcanic areas in the world that have hot springs, mud pots and fumaroles, but very few have erupting geysers. The main reason for their rarity is that multiple intense transient forces must occur simultaneously for a geyser to exist. For example, even when other necessary conditions exist, if the rock structure is loose, eruptions will erode the channels and rapidly destroy any nascent geysers. Geysers are fragile, and if conditions change, they may go dormant or extinct. Many have been destroyed simply by people throwing debris into them, while others have ceased to erupt due to dewatering by geothermal power plants. However, the Geysir in Iceland has had periods of activity and dormancy. During its long dormant periods, eruptions were sometimes artificially induced—often on special occasions—by the addition of surfactant soaps to the water. Biology Some geysers have specific colours, because despite the harsh conditions, life is often found in them (and also in other hot habitats) in the form of thermophilic prokaryotes. No known eukaryote can survive over . 
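To make the superheating argument above concrete, the short sketch below estimates how the boiling point of water rises with depth in a water-filled column, combining hydrostatic pressure with the integrated Clausius–Clapeyron relation. It is a simplified illustration with rounded constants, not a model of any particular geyser.

import math

def boiling_point_at_depth(depth_m, rho=1000.0, g=9.81, p0=101325.0,
                           t0=373.15, l_vap=40660.0, r=8.314):
    # Approximate boiling point (K) under a column of liquid water:
    # pressure from hydrostatics, saturation temperature from the
    # Clausius-Clapeyron equation with a constant latent heat of vaporization.
    p = p0 + rho * g * depth_m
    inv_t = 1.0 / t0 - (r / l_vap) * math.log(p / p0)
    return 1.0 / inv_t

for depth in (0, 50, 100, 200):
    t_c = boiling_point_at_depth(depth) - 273.15
    print(f"{depth:>3} m: boiling point ~{t_c:.0f} degC")

The point is only that water some tens of metres down a geyser column can remain liquid far above the surface boiling point, which is the pressure-cooker behaviour described above.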
In the 1960s, when research on the biology of geysers first appeared, scientists were generally convinced that no life can survive above the upper temperature limit for the survival of cyanobacteria, as the structure of key cellular proteins and deoxyribonucleic acid (DNA) would be destroyed. The optimal temperature for thermophilic bacteria was placed even lower. However, observations proved that life can exist at high temperatures and that some bacteria even prefer temperatures higher than the boiling point of water. Dozens of such bacteria are known. Thermophiles prefer elevated temperatures, while hyperthermophiles grow better at temperatures higher still. Because they have heat-stable enzymes that retain their activity even at high temperatures, they have been used as a source of thermostable tools that are important in medicine and biotechnology, for example in manufacturing antibiotics, plastics, detergents (by the use of heat-stable enzymes such as lipases, pullulanases and proteases), and fermentation products such as ethanol. Among these, the first discovered and the most important for biotechnology is Thermus aquaticus. Major geyser fields and their distribution Geysers are quite rare, requiring a combination of water, heat, and fortuitous plumbing. The combination exists in few places on Earth. Yellowstone National Park Yellowstone is the largest geyser locale, containing thousands of hot springs and approximately 300 to 500 geysers. It is home to half of the world's total number of geysers in its nine geyser basins. It is located mostly in Wyoming, USA, with small portions in Montana and Idaho. Yellowstone includes the world's tallest active geyser (Steamboat Geyser in Norris Geyser Basin). Valley of Geysers, Russia The Valley of Geysers, located in the Kamchatka Peninsula of Russia, is the second-largest concentration of geysers in the world. The area was discovered and explored by Tatyana Ustinova in 1941. There are about 200 geysers in the area, along with many hot-water springs and perpetual spouters. The area was formed by vigorous volcanic activity. The peculiar manner of the eruptions is an important feature of these geysers: most of the geysers erupt at angles, and only very few have the geyser cones that exist at many of the world's other geyser fields. On 3 June 2007, a massive mudflow inundated two-thirds of the valley. It was then reported that a thermal lake was forming above the valley. Four of the eight thermal areas in the valley were covered by the landslide or by the lake. Velikan Geyser, one of the field's largest, was not buried in the slide, but the slide shortened its period of eruption from 379 minutes before the slide to 339 minutes after (through 2010). El Tatio, Chile The name "El Tatio" comes from the Quechua word for oven. El Tatio is located in the high valleys of the Andes in Chile, surrounded by many active volcanoes, high above mean sea level. The valley is home to approximately 80 geysers at present. It became the largest geyser field in the Southern Hemisphere after the destruction of many of the New Zealand geysers, and is the third-largest geyser field in the world. The salient feature of these geysers is that the height of their eruptions is very low, although their steam columns rise considerably higher. Taupō Volcanic Zone, New Zealand The Taupō Volcanic Zone is located on New Zealand's North Island.
It is long by and lies over a subduction zone in the Earth's crust. Mount Ruapehu marks its southwestern end, while the submarine Whakatāne seamount ( beyond Whakaari / White Island) is considered its northeastern limit. Many geysers in this zone were destroyed due to geothermal developments and a hydroelectric reservoir: only one geyser basin at Whakarewarewa remains. In the beginning of the 20th century, the largest geyser ever known, the Waimangu Geyser, existed in this zone. It began erupting in 1900 and erupted periodically for four years until a landslide changed the local water table. Eruptions of Waimangu would typically reach and some superbursts are known to have reached . Recent scientific work indicates that the Earth's crust below the zone may be as little as thick. Beneath this lies a film of magma wide and long. Iceland Due to the high rate of volcanic activity in Iceland, it is home to some of the most famous geysers in the world. There are around 20–29 active geysers in the country, as well as numerous formerly active geysers. Icelandic geysers are distributed in the zone stretching from south-west to north-east, along the boundary between the Eurasian Plate and the North American Plate. Most of the Icelandic geysers are comparatively short-lived. It is also characteristic that many geysers here are reactivated or newly created after earthquakes, becoming dormant or extinct after some years or some decades. Two most prominent geysers of Iceland are located in Haukadalur. The Great Geysir, which first erupted in the 14th century, gave rise to the word geyser. By 1896, Geysir was almost dormant before an earthquake that year caused eruptions to begin again, occurring several times a day; but in 1916, eruptions all but ceased. Throughout much of the 20th century, eruptions did happen from time to time, usually following earthquakes. Some man-made improvements were made to the spring and eruptions were forced with soap on special occasions. Earthquakes in June 2000 subsequently reawakened the giant for a time, but it is not currently erupting regularly. The nearby Strokkur geyser erupts every 5–8 minutes to a height of some . Extinct and dormant geyser fields There used to be two large geyser fields in Nevada—Beowawe and Steamboat Springs—but they were destroyed by the installation of nearby geothermal power plants. At the plants, geothermal drilling reduced the available heat and lowered the local water table to the point that geyser activity could no longer be sustained. Many of New Zealand's geysers have been destroyed by humans in the last century. Several New Zealand geysers have also become dormant or extinct by natural means. The main remaining field is Whakarewarewa at Rotorua. Two-thirds of the geysers at Orakei Korako were flooded by the construction of the hydroelectric Ohakuri dam in 1961. The Wairakei field was lost to a geothermal power plant in 1958. The Rotomahana field was destroyed by the 1886 eruption of Mount Tarawera. Misnamed geysers There are various other types of geysers which are different in nature compared to the normal steam-driven geysers. These geysers differ not only in their style of eruption but also in the cause that makes them erupt. Artificial geysers In a number of places where there is geothermal activity, wells have been drilled and fitted with impermeable casements that allow them to erupt like geysers. The vents of such geysers are artificial, but are tapped into natural hydrothermal systems. 
These so-called artificial geysers, technically known as erupting geothermal wells, are not true geysers. Little Old Faithful Geyser, in Calistoga, California, is an example. The geyser erupts from the casing of a well drilled in the late 19th century, which opened up a dead geyser. In the case of the Big Mine Run Geyser in Ashland, Pennsylvania, the heat powering the geyser (which erupts from an abandoned mine vent) comes not from geothermal power, but from the long-simmering Centralia mine fire. Perpetual spouter This is a natural hot spring that spouts water constantly without stopping for recharge. Some of these are incorrectly called geysers, but because they are not periodic in nature they are not considered true geysers. Commercialization Geysers are used for various activities such as electricity generation, heating and geotourism. Many geothermal reserves are found all around the world. The geyser fields in Iceland are some of the most commercially viable geyser locations in the world. Since the 1920s hot water directed from the geysers has been used to heat greenhouses and to grow food that otherwise could not have been cultivated in Iceland's inhospitable climate. Steam and hot water from the geysers has also been used for heating homes since 1943 in Iceland. In 1979 the U.S. Department of Energy (DOE) actively promoted development of geothermal energy in the "Geysers-Calistoga Known Geothermal Resource Area" (KGRA) near Calistoga, California through a variety of research programs and the Geothermal Loan Guarantee Program. The department is obligated by law to assess the potential environmental impacts of geothermal development. Extraterrestrial geyser-like features There are many bodies in the Solar System where eruptions which superficially resemble terrestrial geysers have been observed or are believed to occur. Despite being commonly referred to as geysers, they are driven by fundamentally different processes, consist of a wide range of volatiles, and can occur on vastly disparate scales; from the modestly sized Martian carbon dioxide jets to the immense plumes of Enceladus. Generally, there are two broad categories of feature commonly referred to as geysers: sublimation plumes, and cryovolcanic plumes (also referred to as cryogeysers). Sublimation plumes are jets of sublimated volatiles and dust from shallow sources under icy surfaces. Known examples include the CO2 jets on Mars, and the nitrogen eruptions on Neptune's moon Triton. On Mars carbon dioxide jets are believed to occur in the southern polar region of Mars during spring, as a layer of dry ice accumulated over winter is warmed by the sun. Although these jets have not yet been directly observed, they leave evidence visible from orbit in the form of dark spots and lighter fans atop the dry ice. These features consist primarily of sand and dust blown out by the outbursts, as well as spider-like patterns of channels created below the ice by the rapid flow of CO2 gas. There are a plethora of theories to explain the eruptions, including heating from sunlight, chemical reactions, or even biological activity. Triton was found to have active eruptions of nitrogen and dust by Voyager 2 when it flew past the moon in 1989. These plumes were up to 8km high, where winds would blow them up to 150km downwind, creating long, dark streaks across the otherwise bright south polar ice cap. 
There are various theories as to what drives the activity on Triton, such as solar heating through transparent ice, cryovolcanism, or basal heating of nitrogen ice sheets. Cryovolcanic plumes or cryogeysers generally refer to large-scale eruptions of predominantly water vapour from active cryovolcanic features on certain icy moons. Such plumes occur on Saturn's moon Enceladus and Jupiter's moon Europa. Plumes of water vapour, together with ice particles and smaller amounts of other components (such as carbon dioxide, nitrogen, ammonia, hydrocarbons and silicates), have been observed erupting from vents associated with the "tiger stripes" in the south polar region of Enceladus by the Cassini orbiter. These plumes are the source of the material in Saturn's E ring. The mechanism that generates these eruptions remains uncertain, as does the extent to which they are physically linked to Enceladus's subsurface ocean, but the plumes are believed to be powered at least in part by tidal heating. Cassini flew through these plumes several times, allowing direct analysis of water from inside another Solar System body for the first time. In December 2013, the Hubble Space Telescope detected water vapour plumes potentially 200 km high above the south polar region of Europa. Re-examination of Galileo data also suggested that the spacecraft may have flown through a plume during a flyby in 1997. Water was also detected by the Keck Observatory in 2016, a finding announced in a 2019 Nature article that speculated the cause to be a cryovolcanic eruption. It is thought that Europa's lineae might be venting this water vapour into space in a similar manner to the "tiger stripes" of Enceladus. See also References Further reading External links Geysers and How They Work by Yellowstone National Park Geyser Observation and Study Association (GOSA) GeyserTimes.org Geysers of Yellowstone: Online Videos and Descriptions About Geysers by Alan Glennon Geysers, The UnMuseum Johnston's Archive Geyser Resources The Geology of the Icelandic geysers by Dr. Helgi Torfason, geologist Geysers and the Earth's Plumbing Systems by Meg Streepey National Geographic Articles containing video clips Volcanic landforms Springs (hydrology) Bodies of water
Geyser
[ "Environmental_science" ]
4,338
[ "Hydrology", "Springs (hydrology)" ]
13,040
https://en.wikipedia.org/wiki/Gypsum
Gypsum is a soft sulfate mineral composed of calcium sulfate dihydrate, with the chemical formula . It is widely mined and is used as a fertilizer and as the main constituent in many forms of plaster, drywall and blackboard or sidewalk chalk. Gypsum also crystallizes as translucent crystals of selenite. It forms as an evaporite mineral and as a hydration product of anhydrite. The Mohs scale of mineral hardness defines gypsum as hardness value 2 based on scratch hardness comparison. Fine-grained white or lightly tinted forms of gypsum known as alabaster have been used for sculpture by many cultures including Ancient Egypt, Mesopotamia, Ancient Rome, the Byzantine Empire, and the Nottingham alabasters of Medieval England. Etymology and history The word gypsum is derived from the Greek word (), "plaster". Because the quarries of the Montmartre district of Paris have long furnished burnt gypsum (calcined gypsum) used for various purposes, this dehydrated gypsum became known as plaster of Paris. Upon adding water, after a few dozen minutes, plaster of Paris becomes regular gypsum (dihydrate) again, causing the material to harden or "set" in ways that are useful for casting and construction. Gypsum was known in Old English as , "spear stone", referring to its crystalline projections. Thus, the word spar in mineralogy, by comparison to gypsum, refers to any non-ore mineral or crystal that forms in spearlike projections. In the mid-18th century, the German clergyman and agriculturalist Johann Friderich Mayer investigated and publicized gypsum's use as a fertilizer. Gypsum may act as a source of sulfur for plant growth, and in the early 19th century, it was regarded as an almost miraculous fertilizer. American farmers were so anxious to acquire it that a lively smuggling trade with Nova Scotia evolved, resulting in the so-called "Plaster War" of 1820. Physical properties Gypsum is moderately water-soluble (~2.0–2.5 g/L at 25 °C) and, in contrast to most other salts, it exhibits retrograde solubility, becoming less soluble at higher temperatures. When gypsum is heated in air it loses water and converts first to calcium sulfate hemihydrate (bassanite, often simply called "plaster") and, if heated further, to anhydrous calcium sulfate (anhydrite). As with anhydrite, the solubility of gypsum in saline solutions and in brines is also strongly dependent on sodium chloride (common table salt) concentration. The structure of gypsum consists of layers of calcium (Ca2+) and sulfate () ions tightly bound together. These layers are bonded by sheets of anion water molecules via weaker hydrogen bonding, which gives the crystal perfect cleavage along the sheets (in the {010} plane). Crystal varieties Gypsum occurs in nature as flattened and often twinned crystals, and transparent, cleavable masses called selenite. Selenite contains no significant selenium; rather, both substances were named for the ancient Greek word for the Moon. Selenite may also occur in a silky, fibrous form, in which case it is commonly called "satin spar". Finally, it may also be granular or quite compact. In hand-sized samples, it can be anywhere from transparent to opaque. A very fine-grained white or lightly tinted variety of gypsum, called alabaster, is prized for ornamental work of various sorts. In arid areas, gypsum can occur in a flower-like form, typically opaque, with embedded sand grains called desert rose. It also forms some of the largest crystals found in nature, up to long, in the form of selenite. 
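The heating and setting behaviour described above can be summarised by the standard reactions (a textbook summary; the calcination temperature is approximate):

\mathrm{CaSO_4 \cdot 2H_2O} \;\xrightarrow{\;\text{heat, roughly 120–180 °C}\;}\; \mathrm{CaSO_4 \cdot \tfrac{1}{2}H_2O} + \tfrac{3}{2}\,\mathrm{H_2O} \quad \text{(calcination to plaster of Paris)}

\mathrm{CaSO_4 \cdot \tfrac{1}{2}H_2O} + \tfrac{3}{2}\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{CaSO_4 \cdot 2H_2O} \quad \text{(setting on addition of water)}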
Occurrence Gypsum is a common mineral, with thick and extensive evaporite beds in association with sedimentary rocks. Deposits are known to occur in strata from as far back as the Archaean eon. Gypsum is deposited from lake and sea water, as well as in hot springs, from volcanic vapors, and from sulfate solutions in veins. Hydrothermal anhydrite in veins is commonly hydrated to gypsum by groundwater in near-surface exposures. It is often associated with the minerals halite and sulfur. Gypsum is the most common sulfate mineral. Pure gypsum is white, but other substances found as impurities may give a wide range of colors to local deposits. Because gypsum dissolves over time in water, gypsum is rarely found in the form of sand. However, the unique conditions of the White Sands National Park in the US state of New Mexico have created an expanse of white gypsum sand, enough to supply the US construction industry with drywall for 1,000 years. Commercial exploitation of the area, strongly opposed by area residents, was permanently prevented in 1933 when President Herbert Hoover declared the gypsum dunes a protected national monument. Gypsum is also formed as a by-product of sulfide oxidation (for example, pyrite oxidation), when the sulfuric acid generated reacts with calcium carbonate. Its presence indicates oxidizing conditions. Under reducing conditions, the sulfates it contains can be reduced back to sulfide by sulfate-reducing bacteria. This can lead to the accumulation of elemental sulfur in oil-bearing formations, such as salt domes, where it can be mined using the Frasch process. Electric power stations burning coal with flue gas desulfurization produce large quantities of gypsum as a byproduct from the scrubbers. Orbital pictures from the Mars Reconnaissance Orbiter (MRO) have indicated the existence of gypsum dunes in the northern polar region of Mars, which were later confirmed at ground level by the Mars Exploration Rover (MER) Opportunity. Mining Commercial quantities of gypsum are found in the cities of Araripina and Grajaú in Brazil; in Pakistan, Jamaica, Iran (the world's second-largest producer), Thailand, Spain (the main producer in Europe), Germany, Italy, England, Ireland, Canada and the United States. Large open-pit quarries are located in many places, including Fort Dodge, Iowa, which sits on one of the largest deposits of gypsum in the world; Plaster City, California, United States; and East Kutai, Kalimantan, Indonesia. Several small mines also exist in places such as Kalannie in Western Australia, where gypsum is sold to private buyers for additions of calcium and sulfur as well as reduction of aluminum toxicities in soil for agricultural purposes. Exceptionally large crystals of gypsum have been found in the caves of the Naica Mine of Chihuahua, Mexico. The crystals thrived in the cave's extremely rare and stable natural environment: temperatures stayed nearly constant, and the cave was filled with mineral-rich water that drove the crystals' growth. The largest of those crystals is around 500,000 years old. Synthesis Synthetic gypsum is produced as a waste product or by-product in a range of industrial processes. Desulfurization Flue gas desulfurization gypsum (FGDG) is recovered at some coal-fired power plants. The main contaminants are Mg, K, Cl, F, B, Al, Fe, Si, and Se. They come both from the limestone used in desulfurization and from the coal burned.
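For the flue-gas desulfurization route just mentioned, the overall reaction in the common limestone forced-oxidation process can be written as a single simplified equation (actual plant chemistry proceeds in several steps and varies by installation):

\mathrm{CaCO_3} + \mathrm{SO_2} + \tfrac{1}{2}\,\mathrm{O_2} + 2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{CaSO_4 \cdot 2H_2O} + \mathrm{CO_2}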
This product is pure enough to replace natural gypsum in a wide variety of fields including drywalls, water treatment, and cement set retarder. Improvements in flue gas desulfurization have greatly reduced the amount of toxic elements present. Desalination Gypsum precipitates onto brackish water membranes, a phenomenon known as mineral salt scaling, such as during brackish water desalination of water with high concentrations of calcium and sulfate. Scaling decreases membrane life and productivity. This is one of the main obstacles in brackish water membrane desalination processes, such as reverse osmosis or nanofiltration. Other forms of scaling, such as calcite scaling, depending on the water source, can also be important considerations in distillation, as well as in heat exchangers, where either the salt solubility or concentration can change rapidly. A new study has suggested that the formation of gypsum starts as tiny crystals of a mineral called bassanite (2CaSO4·H2O). This process occurs via a three-stage pathway: homogeneous nucleation of nanocrystalline bassanite; self-assembly of bassanite into aggregates, and transformation of bassanite into gypsum. Refinery waste The production of phosphate fertilizers requires breaking down calcium-containing phosphate rock with acid, producing calcium sulfate waste known as phosphogypsum (PG). This form of gypsum is contaminated by impurities found in the rock, namely fluoride, silica, radioactive elements such as radium, and heavy metal elements such as cadmium. Similarly, production of titanium dioxide produces titanium gypsum (TG) due to neutralization of excess acid with lime. The product is contaminated with silica, fluorides, organic matters, and alkalis. Impurities in refinery gypsum waste have, in many cases, prevented them from being used as normal gypsum in fields such as construction. As a result, waste gypsum is stored in stacks indefinitely, with significant risk of leaching their contaminants into water and soil. To reduce the accumulation and ultimately clear out these stacks, research is underway to find more applications for such waste products. Occupational safety People can be exposed to gypsum in the workplace by breathing it in, skin contact, and eye contact. Calcium sulfate per se is nontoxic and is even approved as a food additive, but as powdered gypsum, it can irritate skin and mucous membranes. United States The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for gypsum exposure in the workplace as TWA 15 mg/m3 for total exposure and TWA 5 mg/m3 for respiratory exposure over an eight-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of TWA 10 mg/m3 for total exposure and TWA 5 mg/m3 for respiratory exposure over an eight-hour workday. Uses Gypsum is used in a wide variety of applications: Construction industry Gypsum board is primarily used as a finish for walls and ceilings, and is known in construction as plasterboard, "sheetrock", or drywall. Gypsum provides a degree of fire-resistance to these materials, and glass fibers are added to their composition to accentuate this effect. Gypsum has negligible heat conductivity, giving its plaster some insulative properties. Gypsum blocks are used like concrete blocks in construction. Gypsum mortar is an ancient mortar used in construction. A component of Portland cement used to prevent flash setting (too rapid hardening) of concrete. 
A wood substitute in the ancient world: For example, when wood became scarce due to deforestation on Bronze Age Crete, gypsum was employed in building construction at locations where wood was previously used. Agriculture Fertilizer: In the late 18th and early 19th centuries, Nova Scotia gypsum, often referred to as plaster, was a highly sought fertilizer for wheat fields in the United States. Gypsum provides two of the secondary plant macronutrients, calcium and sulfur. Unlike limestone, it generally does not affect soil pH. Reclamation of saline soils, regardless of pH. When gypsum is added to sodic (saline) and acidic soil, the highly soluble form of boron (sodium metaborate) is converted to the less soluble calcium metaborate. The exchangeable sodium percentage is also reduced by gypsum application. The Zuiderzee Works uses gypsum for the recovered land. Other soil conditioner uses: Gypsum reduces aluminium and boron toxicity in acidic soils. It also improves soil structure, water absorption, and aeration. Soil water potential monitoring: a gypsum block can be inserted into the soil, and its electrical resistance can be measured to derive soil moisture. Modeling, sculpture and art Plaster for casting moulds and modeling. As alabaster, a material for sculpture, it was used especially in the ancient world before steel was developed, when its relative softness made it much easier to carve. During the Middle Ages and Renaissance, it was preferred even to marble. In the medieval period, scribes and illuminators used it as an ingredient in gesso, which was applied to illuminated letters and gilded with gold in illuminated manuscripts. Food and drink A tofu (soy bean curd) coagulant, making it ultimately a significant source of dietary calcium. Adding hardness to water used for brewing. Used in baking as a dough conditioner, reducing stickiness, and as a baked goods source of dietary calcium. The primary component of mineral yeast food. Used in mushroom cultivation to stop grains from clumping together. Medicine and cosmetics Plaster for surgical splints. Impression plasters in dentistry. Other An alternative to iron oxide in some thermite mixes. Tests have shown that gypsum can be used to remove pollutants such as lead or arsenic from contaminated waters. Gallery See also Gypcrust Gypsum flora of Nova Scotia Gypsum recycling Phosphogypsum References External links WebMineral data Mineral galleries – gypsum CDC – NIOSH Pocket Guide to Chemical Hazards Calcium minerals Sulfate minerals Sedimentary rocks Evaporite Alchemical substances Monoclinic minerals Minerals in space group 15 Alabaster Luminescent minerals Industrial minerals
Gypsum
[ "Chemistry" ]
2,883
[ "Luminescence", "Alchemical substances", "Luminescent minerals" ]
13,041
https://en.wikipedia.org/wiki/Growth%20factor
A growth factor is a naturally occurring substance capable of stimulating cell proliferation, wound healing, and occasionally cellular differentiation. Usually it is a secreted protein or a steroid hormone. Growth factors are important for regulating a variety of cellular processes. Growth factors typically act as signaling molecules between cells. Examples are cytokines and hormones that bind to specific receptors on the surface of their target cells. They often promote cell differentiation and maturation, which varies between growth factors. For example, epidermal growth factor (EGF) enhances osteogenic differentiation (osteogenesis or bone formation), while fibroblast growth factors and vascular endothelial growth factors stimulate blood vessel differentiation (angiogenesis). Comparison to cytokines Growth factor is sometimes used interchangeably among scientists with the term cytokine. Historically, cytokines were associated with hematopoietic (blood and lymph forming) cells and immune system cells (e.g., lymphocytes and tissue cells from spleen, thymus, and lymph nodes). For the circulatory system and bone marrow in which cells can occur in a liquid suspension and not bound up in solid tissue, it makes sense for them to communicate by soluble, circulating protein molecules. However, as different lines of research converged, it became clear that some of the same signaling proteins which the hematopoietic and immune systems use were also being used by all sorts of other cells and tissues, during development and in the mature organism. While growth factor implies a positive effect on cell proliferation, cytokine is a neutral term with respect to whether a molecule affects proliferation. While some cytokines can be growth factors, such as G-CSF and GM-CSF, others have an inhibitory effect on cell growth or cell proliferation. Some cytokines, such as Fas ligand, are used as "death" signals; they cause target cells to undergo programmed cell death or apoptosis. The nerve growth factor (NGF) was first discovered by Rita Levi-Montalcini, which won her a Nobel Prize in Physiology or Medicine. List of classes Individual growth factor proteins tend to occur as members of larger families of structurally and evolutionarily related proteins. 
There are many families, some of which are listed below: Adrenomedullin (AM) Angiopoietin (Ang) Autocrine motility factor Bone morphogenetic proteins (BMPs) Ciliary neurotrophic factor family Ciliary neurotrophic factor (CNTF) Leukemia inhibitory factor (LIF) Interleukin-6 (IL-6) Colony-stimulating factors Macrophage colony-stimulating factor (M-CSF) Granulocyte colony-stimulating factor (G-CSF) Granulocyte macrophage colony-stimulating factor (GM-CSF) Epidermal growth factor (EGF) Ephrins Ephrin A1 Ephrin A2 Ephrin A3 Ephrin A4 Ephrin A5 Ephrin B1 Ephrin B2 Ephrin B3 Erythropoietin (EPO) Fibroblast growth factor (FGF) Fibroblast growth factor 1(FGF1) Fibroblast growth factor 2(FGF2) Fibroblast growth factor 3(FGF3) Fibroblast growth factor 4(FGF4) Fibroblast growth factor 5(FGF5) Fibroblast growth factor 6(FGF6) Fibroblast growth factor 7(FGF7) Fibroblast growth factor 8(FGF8) Fibroblast growth factor 9(FGF9) Fibroblast growth factor 10(FGF10) Fibroblast growth factor 11(FGF11) Fibroblast growth factor 12(FGF12) Fibroblast growth factor 13(FGF13) Fibroblast growth factor 14(FGF14) Fibroblast growth factor 15(FGF15) Fibroblast growth factor 16(FGF16) Fibroblast growth factor 17(FGF17) Fibroblast growth factor 18(FGF18) Fibroblast growth factor 19(FGF19) Fibroblast growth factor 20(FGF20) Fibroblast growth factor 21(FGF21) Fibroblast growth factor 22(FGF22) Fibroblast growth factor 23(FGF23) Foetal Bovine Somatotrophin (FBS) GDNF family of ligands Glial cell line-derived neurotrophic factor (GDNF) Neurturin Persephin Artemin Growth differentiation factor-9 (GDF9) Hepatocyte growth factor (HGF) Hepatoma-derived growth factor (HDGF) Insulin Insulin-like growth factors Insulin-like growth factor-1 (IGF-1) Insulin-like growth factor-2 (IGF-2) Interleukins IL-1- Cofactor for IL-3 and IL-6. Activates T cells. IL-2 – T-cell growth factor. Stimulates IL-1 synthesis. Activates B-cells and NK cells. IL-3 – Stimulates production of all non-lymphoid cells. IL-4 – Growth factor for activated B cells, resting T cells, and mast cells. IL-5 – Induces differentiation of activated B cells and eosinophils. IL-6 – Stimulates Ig synthesis. Growth factor for plasma cells. IL-7 – Growth factor for pre-B cells. Keratinocyte growth factor (KGF) Migration-stimulating factor (MSF) Macrophage-stimulating protein (MSP), also known as hepatocyte growth factor-like protein (HGFLP) Myostatin (GDF-8) Neuregulins Neuregulin 1 (NRG1) Neuregulin 2 (NRG2) Neuregulin 3 (NRG3) Neuregulin 4 (NRG4) Neurotrophins Brain-derived neurotrophic factor (BDNF) Nerve growth factor (NGF) Neurotrophin-3 (NT-3) Neurotrophin-4 (NT-4) Placental growth factor (PGF) Platelet-derived growth factor (PDGF) Renalase (RNLS) – Anti-apoptotic survival factor T-cell growth factor (TCGF) Thrombopoietin (TPO) Transforming growth factors Transforming growth factor alpha (TGF-α) Transforming growth factor beta (TGF-β) Tumor necrosis factor-alpha (TNF-α) Vascular endothelial growth factor (VEGF) In platelets The alpha granules in blood platelets contain growth factors PDGF, IGF-1, EGF, and TGF-β which begin healing of wounds by attracting and activating macrophages, fibroblasts, and endothelial cells. 
Uses in medicine For the last two decades, growth factors have been increasingly used in the treatment of hematologic and oncologic diseases and cardiovascular diseases such as: skin wound healing and regeneration of other tissues such as bone (PDGF-BB) neutropenia myelodysplastic syndrome (MDS) leukemias aplastic anaemia bone marrow transplantation angiogenesis for cardiovascular diseases See also Angiogenesis Bone growth factor Cytokine Growth factor receptor Human Genome Organisation Mitogen Neurotrophic factor Receptor (biochemistry) Signal transduction References External links FGF5 in Hair Tonic Products FGF1 in Cosmetic Products Immune system
Growth factor
[ "Chemistry", "Biology" ]
1,620
[ "Immune system", "Organ systems", "Growth factors", "Signal transduction" ]
13,046
https://en.wikipedia.org/wiki/Geometric%20mean
In mathematics, the geometric mean is a mean or average which indicates a central tendency of a finite collection of positive real numbers by using the product of their values (as opposed to the arithmetic mean which uses their sum). The geometric mean of numbers is the th root of their product, i.e., for a collection of numbers , the geometric mean is defined as When the collection of numbers and their geometric mean are plotted in logarithmic scale, the geometric mean is transformed into an arithmetic mean, so the geometric mean can equivalently be calculated by taking the natural logarithm of each number, finding the arithmetic mean of the logarithms, and then returning the result to linear scale using the exponential function , The geometric mean of two numbers is the square root of their product, for example with numbers and the geometric mean is The geometric mean of the three numbers is the cube root of their product, for example with numbers , , and , the geometric mean is The geometric mean is useful whenever the quantities to be averaged combine multiplicatively, such as population growth rates or interest rates of a financial investment. Suppose for example a person invests $1000 and achieves annual returns of +10%, −12%, +90%, −30% and +25%, giving a final value of $1609. The average percentage growth is the geometric mean of the annual growth ratios (1.10, 0.88, 1.90, 0.70, 1.25), namely 1.0998, an annual average growth of 9.98%. The arithmetic mean of these annual returns – 16.6% per annum – is not a meaningful average because growth rates do not combine additively. The geometric mean can be understood in terms of geometry. The geometric mean of two numbers, and , is the length of one side of a square whose area is equal to the area of a rectangle with sides of lengths and . Similarly, the geometric mean of three numbers, , , and , is the length of one edge of a cube whose volume is the same as that of a cuboid with sides whose lengths are equal to the three given numbers. The geometric mean is one of the three classical Pythagorean means, together with the arithmetic mean and the harmonic mean. For all positive data sets containing at least one pair of unequal values, the harmonic mean is always the least of the three means, while the arithmetic mean is always the greatest of the three and the geometric mean is always in between (see Inequality of arithmetic and geometric means.) Formulation The geometric mean of a data set is given by: That is, the nth root of the product of the elements. For example, for , the product is , and the geometric mean is the fourth root of 24, approximately 2.213. Formulation using logarithms The geometric mean can also be expressed as the exponential of the arithmetic mean of logarithms. By using logarithmic identities to transform the formula, the multiplications can be expressed as a sum and the power as a multiplication: When since This is sometimes called the log-average (not to be confused with the logarithmic average). It is simply the arithmetic mean of the logarithm-transformed values of (i.e., the arithmetic mean on the log scale), using the exponentiation to return to the original scale, i.e., it is the generalised f-mean with . A logarithm of any base can be used in place of the natural logarithm. 
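As a quick numerical check of the investment example above, the short sketch below (plain Python; the figures are the ones already quoted in the text) computes the same geometric mean both directly from the product and through the logarithmic form; the base-2 worked example that follows makes the same point by hand.

import math

# Annual growth ratios quoted above: +10%, -12%, +90%, -30%, +25%
ratios = [1.10, 0.88, 1.90, 0.70, 1.25]
n = len(ratios)

# Direct definition: the nth root of the product
product = math.prod(ratios)
gm_direct = product ** (1 / n)

# Equivalent log-space form: exp of the arithmetic mean of the logarithms
gm_logs = math.exp(sum(math.log(x) for x in ratios) / n)

print(round(gm_direct, 4), round(gm_logs, 4))  # both give 1.0998, i.e. 9.98% per year
print(round(1000 * product))                   # final value of the $1000 investment, 1609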
For example, the geometric mean of , , , and can be calculated using logarithms base 2: Related to the above, it can be seen that for a given sample of points , the geometric mean is the minimizer of , whereas the arithmetic mean is the minimizer of . Thus, the geometric mean provides a summary of the samples whose exponent best matches the exponents of the samples (in the least squares sense). In computer implementations, naïvely multiplying many numbers together can cause arithmetic overflow or underflow. Calculating the geometric mean using logarithms is one way to avoid this problem. Related concepts Iterative means The geometric mean of a data set is less than the data set's arithmetic mean unless all members of the data set are equal, in which case the geometric and arithmetic means are equal. This allows the definition of the arithmetic-geometric mean, an intersection of the two which always lies in between. The geometric mean is also the arithmetic-harmonic mean in the sense that if two sequences () and () are defined: and where is the harmonic mean of the previous values of the two sequences, then and will converge to the geometric mean of and . The sequences converge to a common limit, and the geometric mean is preserved: Replacing the arithmetic and harmonic mean by a pair of generalized means of opposite, finite exponents yields the same result. Comparison to arithmetic mean The geometric mean of a non-empty data set of positive numbers is always at most their arithmetic mean. Equality is only obtained when all numbers in the data set are equal; otherwise, the geometric mean is smaller. For example, the geometric mean of 2 and 3 is 2.45, while their arithmetic mean is 2.5. In particular, this means that when a set of non-identical numbers is subjected to a mean-preserving spread — that is, the elements of the set are "spread apart" more from each other while leaving the arithmetic mean unchanged — their geometric mean decreases. Geometric mean of a continuous function If is a positive continuous real-valued function, its geometric mean over this interval is For instance, taking the identity function over the unit interval shows that the geometric mean of the positive numbers between 0 and 1 is equal to . Applications Average proportional growth rate The geometric mean is more appropriate than the arithmetic mean for describing proportional growth, both exponential growth (constant proportional growth) and varying growth; in business the geometric mean of growth rates is known as the compound annual growth rate (CAGR). The geometric mean of growth over periods yields the equivalent constant growth rate that would yield the same final amount. As an example, suppose an orange tree yields 100 oranges one year and then 180, 210 and 300 the following years, for growth rates of 80%, 16.7% and 42.9% respectively. Using the arithmetic mean calculates a (linear) average growth of 46.5% (calculated by ). However, when applied to the 100 orange starting yield, 46.5% annual growth results in 314 oranges after three years of growth, rather than the observed 300. The linear average overstates the rate of growth. Instead, using the geometric mean, the average yearly growth is approximately 44.2% (calculated by ). Starting from a 100 orange yield with annual growth of 44.2% gives the expected 300 orange yield after three years. In order to determine the average growth rate, it is not necessary to take the product of the measured growth rates at every step. 
Let the quantity be given as the sequence , where is the number of steps from the initial to final state. The growth rate between successive measurements and is . The geometric mean of these growth rates is then just: Normalized values The fundamental property of the geometric mean, which does not hold for any other mean, is that for two sequences and of equal length, . This makes the geometric mean the only correct mean when averaging normalized results; that is, results that are presented as ratios to reference values. This is the case when presenting computer performance with respect to a reference computer, or when computing a single average index from several heterogeneous sources (for example, life expectancy, education years, and infant mortality). In this scenario, using the arithmetic or harmonic mean would change the ranking of the results depending on what is used as a reference. For example, take the following comparison of execution time of computer programs: Table 1 The arithmetic and geometric means "agree" that computer C is the fastest. However, by presenting appropriately normalized values and using the arithmetic mean, we can show either of the other two computers to be the fastest. Normalizing by A's result gives A as the fastest computer according to the arithmetic mean: Table 2 while normalizing by B's result gives B as the fastest computer according to the arithmetic mean but A as the fastest according to the harmonic mean: Table 3 and normalizing by C's result gives C as the fastest computer according to the arithmetic mean but A as the fastest according to the harmonic mean: Table 4 In all cases, the ranking given by the geometric mean stays the same as the one obtained with unnormalized values. However, this reasoning has been questioned. Giving consistent results is not always equal to giving the correct results. In general, it is more rigorous to assign weights to each of the programs, calculate the average weighted execution time (using the arithmetic mean), and then normalize that result to one of the computers. The three tables above just give a different weight to each of the programs, explaining the inconsistent results of the arithmetic and harmonic means (Table 4 gives equal weight to both programs, the Table 2 gives a weight of 1/1000 to the second program, and the Table 3 gives a weight of 1/100 to the second program and 1/10 to the first one). The use of the geometric mean for aggregating performance numbers should be avoided if possible, because multiplying execution times has no physical meaning, in contrast to adding times as in the arithmetic mean. Metrics that are inversely proportional to time (speedup, IPC) should be averaged using the harmonic mean. The geometric mean can be derived from the generalized mean as its limit as goes to zero. Similarly, this is possible for the weighted geometric mean. Financial The geometric mean has from time to time been used to calculate financial indices (the averaging is over the components of the index). For example, in the past the FT 30 index used a geometric mean. It is also used in the CPI calculation and recently introduced "RPIJ" measure of inflation in the United Kingdom and in the European Union. This has the effect of understating movements in the index compared to using the arithmetic mean. 
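The normalization effect described above can be reproduced with a small sketch. The timings for two programs on three computers below are invented for illustration and are not the values from the tables referred to in the text; the point is only that the arithmetic-mean ranking depends on the reference machine while the geometric-mean ranking does not.

# Assumed execution times (seconds) for two programs on three computers.
times = {"A": [1.0, 1000.0], "B": [10.0, 100.0], "C": [20.0, 20.0]}

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    prod = 1.0
    for x in xs:
        prod *= x
    return prod ** (1.0 / len(xs))

for reference in times:  # normalize by each computer's results in turn
    ref = times[reference]
    normalized = {c: [t / r for t, r in zip(ts, ref)] for c, ts in times.items()}
    # rank computers from fastest (smallest mean normalized time) to slowest
    rank_am = sorted(normalized, key=lambda c: arithmetic_mean(normalized[c]))
    rank_gm = sorted(normalized, key=lambda c: geometric_mean(normalized[c]))
    print("reference", reference, "arithmetic", rank_am, "geometric", rank_gm)

# The arithmetic-mean ranking changes with the choice of reference machine,
# while the geometric-mean ranking is the same for every reference.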
Applications in the social sciences Although the geometric mean has been relatively rare in computing social statistics, starting from 2010 the United Nations Human Development Index did switch to this mode of calculation, on the grounds that it better reflected the non-substitutable nature of the statistics being compiled and compared: The geometric mean decreases the level of substitutability between dimensions [being compared] and at the same time ensures that a 1 percent decline in say life expectancy at birth has the same impact on the HDI as a 1 percent decline in education or income. Thus, as a basis for comparisons of achievements, this method is also more respectful of the intrinsic differences across the dimensions than a simple average. Not all values used to compute the HDI (Human Development Index) are normalized; some of them instead have the form . This makes the choice of the geometric mean less obvious than one would expect from the "Properties" section above. The equally distributed welfare equivalent income associated with an Atkinson Index with an inequality aversion parameter of 1.0 is simply the geometric mean of incomes. For values other than one, the equivalent value is an Lp norm divided by the number of elements, with p equal to one minus the inequality aversion parameter. Geometry In the case of a right triangle, its altitude is the length of a line extending perpendicularly from the hypotenuse to its 90° vertex. Imagining that this line splits the hypotenuse into two segments, the geometric mean of these segment lengths is the length of the altitude. This property is known as the geometric mean theorem. In an ellipse, the semi-minor axis is the geometric mean of the maximum and minimum distances of the ellipse from a focus; it is also the geometric mean of the semi-major axis and the semi-latus rectum. The semi-major axis of an ellipse is the geometric mean of the distance from the center to either focus and the distance from the center to either directrix. Another way to think about it is as follows: Consider a circle with radius . Now take two diametrically opposite points on the circle and apply pressure from both ends to deform it into an ellipse with semi-major and semi-minor axes of lengths and . Since the area of the circle and the ellipse stays the same, we have: The radius of the circle is the geometric mean of the semi-major and the semi-minor axes of the ellipse formed by deforming the circle. Distance to the horizon of a sphere (ignoring the effect of atmospheric refraction when atmosphere is present) is equal to the geometric mean of the distance to the closest point of the sphere and the distance to the farthest point of the sphere. The geometric mean is used both in the approximation of squaring the circle by S. A. Ramanujan and in the construction of the heptadecagon with "mean proportionals". Aspect ratios The geometric mean has been used in choosing a compromise aspect ratio in film and video: given two aspect ratios, the geometric mean of them provides a compromise between them, distorting or cropping both in some sense equally. Concretely, two equal area rectangles (with the same center and parallel sides) of different aspect ratios intersect in a rectangle whose aspect ratio is the geometric mean, and their hull (smallest rectangle which contains both of them) likewise has the aspect ratio of their geometric mean. In the choice of 16:9 aspect ratio by the SMPTE, balancing 2.35 and 4:3, the geometric mean is , and thus ... was chosen.
This was discovered empirically by Kerns Powers, who cut out rectangles with equal areas and shaped them to match each of the popular aspect ratios. When overlapped with their center points aligned, he found that all of those aspect ratio rectangles fit within an outer rectangle with an aspect ratio of 1.77:1 and all of them also covered a smaller common inner rectangle with the same aspect ratio 1.77:1. The value found by Powers is exactly the geometric mean of the extreme aspect ratios, 4:3 (1.33:1) and CinemaScope (2.35:1), which is coincidentally close to (). The intermediate ratios have no effect on the result, only the two extreme ratios. Applying the same geometric mean technique to 16:9 and 4:3 approximately yields the 14:9 (...) aspect ratio, which is likewise used as a compromise between these ratios. In this case 14:9 is exactly the arithmetic mean of and , since 14 is the average of 16 and 12, while the precise geometric mean is but the two different means, arithmetic and geometric, are approximately equal because both numbers are sufficiently close to each other (a difference of less than 2%). Paper formats The geometric mean is also used to calculate B and C series paper formats. The format has an area which is the geometric mean of the areas of and . For example, the area of a B1 paper is , because it is the geometric mean of the areas of an A0 () and an A1 () paper. The same principle applies with the C series, whose area is the geometric mean of the A and B series. For example, the C4 format has an area which is the geometric mean of the areas of A4 and B4. An advantage that comes from this relationship is that an A4 paper fits inside a C4 envelope, and both fit inside a B4 envelope. Other applications Spectral flatness: in signal processing, spectral flatness, a measure of how flat or spiky a spectrum is, is defined as the ratio of the geometric mean of the power spectrum to its arithmetic mean. Anti-reflective coatings: In optical coatings, where reflection needs to be minimised between two media of refractive indices n0 and n2, the optimum refractive index n1 of the anti-reflective coating is given by the geometric mean: . Subtractive color mixing: The spectral reflectance curve for paint mixtures (of equal tinting strength, opacity and dilution) is approximately the geometric mean of the paints' individual reflectance curves computed at each wavelength of their spectra. Image processing: The geometric mean filter is used as a noise filter in image processing. Labor compensation: The geometric mean of a subsistence wage and market value of the labor using capital of employer was suggested as the natural wage by Johann von Thünen in 1875.
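A few of the one-line calculations behind the applications above can be reproduced directly in the same way; the refractive indices used below (air and a typical glass) are illustrative assumptions rather than values from the text.

import math

# Compromise aspect ratio between 4:3 and CinemaScope 2.35:1 (the SMPTE 16:9 case)
print(round(math.sqrt((4 / 3) * 2.35), 2))   # 1.77

# Anti-reflective coating: optimum index n1 = sqrt(n0 * n2); n0 = 1.0 (air), n2 = 1.5 (glass, assumed)
print(round(math.sqrt(1.0 * 1.5), 2))        # 1.22

# Paper sizes: a B1 sheet's area is the geometric mean of the A0 (1 m²) and A1 (0.5 m²) areas
print(round(math.sqrt(1.0 * 0.5), 3))        # 0.707 m²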
See also Arithmetic-geometric mean Generalized mean Geometric mean theorem Geometric standard deviation Harmonic mean Heronian mean Heteroscedasticity Log-normal distribution Muirhead's inequality Product Pythagorean means Quadratic mean Quadrature (mathematics) Quasi-arithmetic mean (generalized f-mean) Rate of return Weighted geometric mean Notes References External links Calculation of the geometric mean of two numbers in comparison to the arithmetic solution Arithmetic and geometric means When to use the geometric mean Practical solutions for calculating geometric mean with different kinds of data Geometric Mean on MathWorld Geometric Meaning of the Geometric Mean Geometric Mean Calculator for larger data sets Computing Congressional apportionment using Geometric Mean Non-Newtonian calculus website Geometric Mean Definition and Formula The Distribution of the Geometric Mean The geometric mean? Means Non-Newtonian calculus
Geometric mean
[ "Physics", "Mathematics" ]
3,608
[ "Means", "Mathematical analysis", "Point (geometry)", "Calculus", "Geometric centers", "Non-Newtonian calculus", "Symmetry" ]
13,050
https://en.wikipedia.org/wiki/Guru%20Meditation
Guru Meditation is an error notice originally displayed by the Amiga computer when it crashes. It is now also used by Varnish, a software component used by many content-heavy websites. This has led to many internet users seeing a "Guru Meditation" message (or the variant "Guru Mediation") when these websites suffer crashes or other issues. It is analogous to the "Blue Screen of Death" in Microsoft Windows operating systems, or a kernel panic in Unix. It has also been used as a message for unrecoverable errors in software packages such as VirtualBox and other operating systems (see Legacy section below). Origins The term "Guru Meditation Error" originated as an in-house joke in Amiga's early days. The company had a product called the Joyboard for the Atari 2600 home video game console, a game controller much like a joystick but operated by the feet, similar to the Wii Balance Board. Early in the development of the Amiga computer operating system, the company's developers became so frustrated with the system's frequent crashes that, as a relaxation technique, a game was developed where a person would sit cross-legged on the Joyboard, resembling an Indian guru. The player tried to remain extremely still; the winner of the game stayed still the longest. If the player moved too much, a "guru meditation" error occurred. Description of "Guru Meditation" errors on the Amiga The alert occurred when there was a fatal problem with the system. If the system had no means of recovery, it could display the alert, even in systems with numerous critical flaws. In extreme cases, the alert could even be displayed if the system's memory was completely exhausted. The text of the alert messages was completely baffling to most users. Only highly technically adept Amiga users would know, for example, that exception 3 was an address error, and meant the program was accessing a word on an unaligned boundary. Users without this specialized knowledge would have no recourse but to look for a "Guru" or to simply reboot the machine and hope for the best. Technical description (Amiga) When a Guru Meditation is displayed, the options are to reboot by pressing the left mouse button, to invoke ROMWack by pressing the right mouse button, or to reboot manually. ROMWack is a minimalist debugger built into the operating system which is accessible by connecting a 9600 bit/s terminal to the serial port. The alert itself appears as a black rectangular box located in the upper portion of the screen. Its border and text are red for a normal Guru Meditation, or green/yellow for a Recoverable Alert, another kind of Guru Meditation. The screen may go black, but the power LEDs always alternate between full and half-brightness for a few seconds before the alert appears. In AmigaOS 1.x, programmed in ROMs known as Kickstart 1.1, 1.2 and 1.3, the errors are always red. In AmigaOS 2.x and 3.x, recoverable alerts are yellow, except for some very early versions of 2.x where they were green. Dead-end alerts are always red and terminal in all OS versions except in a rare series of events, as in when a deprecated Kickstart (example: 1.1) program conditionally boots from disk on a more advanced Kickstart 3.x ROM Amiga running in compatibility mode (therefore eschewing the on-disk OS) and crashes with a red Guru Meditation but subsequently restores itself by pressing the left mouse button, the newer Kickstart recognizing an inadvised low level chipset call for the older ROM directly poking the hardware, and addressing it.
The error is displayed as two fields, separated by a period. The format is #0000000x.yyyyyyyy in case of a CPU error, or #aabbcccc.dddddddd in case of a system software error. The first field is either the Motorola 68000 exception number that occurred (if a CPU error occurs) or an internal error identifier (such as an "Out of Memory" code), in case of a system software error. The second can be the address of a Task structure, or the address of a memory block whose allocation or deallocation failed. It is never the address of the code that caused the error. If the cause of the crash is uncertain, this number is rendered as 48454C50, which stands for "HELP" in hexadecimal ASCII characters (48=H, 45=E, 4C=L, 50=P). Guru Meditation handler There was a commercially available error handler for AmigaOS, before version 2.04, called GOMF (Get Outta My Face) made by Hypertek/Silicon Springs Development Corp. It was able to deal with many kinds of errors and gave the user a choice to either remove the offending process and associated screen, or allow the machine to show the Guru Meditation. In many cases, removal of the offending process gave one the choice to save one's data and exit running programs before rebooting the system. When the damage was not extensive, one was able to continue using the machine. However, it did not save the user from all errors, as one may have still seen this error occasionally. Recoverable Alerts Recoverable Alerts are non-critical crashes in the computer system. In most cases, it is possible to resume work and save files after a Recoverable Alert, while a normal, red Guru Meditation always results in an immediate reboot. It is, however, still recommended to reboot as soon as possible after encountering a Recoverable Alert, because the system may be in an unpredictable state that can cause data corruption. System software error codes The first byte specifies the area of the system affected. The top bit will be set if the error is a dead end alert. Legacy AmigaOS versions 4.0 and onwards replaced "Guru Meditation" with "Grim Reaper", but briefly mention the Guru Meditation number in the prompt box. MorphOS displays an "Application Is Meditating" error message. Attempting to close the application may revive the operating system, but restarting is still recommended. Varnish references Guru Meditation for severe errors. The ESP8266 and ESP32 microcontrollers will display "Guru Meditation Error: Core X panic'ed" (where X is 0 or 1 depending on which core crashed) along with a core dump and stack trace. VirtualBox uses the term "Guru Meditation" for severe errors in the virtual machine monitor, for example caused by a triple fault in the virtual machine. E23 displays a "Guru Meditation" and restarts when severe errors occur. Some Nintendo DS homebrew titles display a "Guru Meditation" error when an issue occurs, likely when the title crashes. OpenMediaVault has Guru Meditation-like errors (such as with session timeouts) as well as emulating the Amiga's pointer and wait pointer (when executing a task) in the web interface. See also Screen of death References AmigaOS Amiga Screens of death Computer errors
Guru Meditation
[ "Technology" ]
1,475
[ "AmigaOS", "Computer errors", "Screens of death", "Computing platforms" ]
13,064
https://en.wikipedia.org/wiki/Granville%20rail%20disaster
The Granville rail disaster occurred on Tuesday 18 January 1977 at Granville, a western suburb of Sydney, New South Wales, Australia, when a crowded commuter train derailed, running into the supports of a road bridge that collapsed onto two of the train's passenger carriages. The official inquiry found the primary cause of the crash to be poor fastening of the track. It remains the worst rail disaster in Australian history; 83 people died and 213 were injured. An 84th victim, an unborn child, was added to the fatality list in 2017. Disaster The train involved in the disaster consisted of eight passenger carriages hauled by 46 class electric locomotive 4620, and had commenced its journey towards Sydney from Mount Victoria in the Blue Mountains at 6:09 a.m. At approximately 8:10 a.m. it was approaching Granville railway station when the locomotive derailed and struck one of the steel-and-concrete pillars supporting the bridge carrying Bold Street over the railway cutting. The derailed engine and first two carriages passed the bridge. The first carriage, which broke free from the other carriages, was torn open when it collided with a severed mast beside the track, killing eight passengers. The remaining carriages came to a halt with the second carriage clear of the bridge. The rear half of the third carriage, and forward half of the fourth carriage, came to rest under the weakened bridge, whose weight was estimated at . Within seconds, with all its supports demolished, the bridge and several motor cars on top of it collapsed on top of the carriages, crushing them and the passengers inside. Of the total number of passengers travelling in the third and fourth carriages, half were killed instantly when the bridge fell on them, crushing them in their seats. Several injured passengers were trapped in the train for hours after the accident, with part of the bridge crushing a limb or torso. Some had been conscious and lucid, talking to rescuers, but died of crush syndrome soon after the weight was removed from their bodies. This resulted in changes to rescue procedures for these kinds of accidents. Rescuers also faced greater difficulties as the weight of the bridge was still crushing the affected carriages, reducing the space in which they had to work to get survivors out, until it was declared that no one was allowed to attempt further entry until the bridge had been lifted. Soon after, the bridge settled a further onto the train, trapping two rescuers and crushing a portable generator "like butter". Another danger came from gas; LPG cylinders were kept year-round on board the train to be used in winter for heating. Several people were overcome by gas leaking from ruptured cylinders. The leaking gas also prevented the immediate use of powered rescue tools. The NSW Fire Brigade provided ventilation equipment to dispel the gas and a constant film of water was sprayed over the accident site to prevent the possibility of the gas igniting. The train driver, the assistant crewman, the "second man", and the motorists including one motorcyclist driving on the fallen bridge all survived. The operation lasted from 8:12 a.m. Tuesday until 6:00 a.m. Thursday. Ultimately, 84 people were killed in the accident, including an unborn child. Aftermath The bridge was rebuilt as a single span without any intermediate support piers. Other bridges similar to the destroyed bridge had their piers reinforced. 
The original inquiry into the accident found that the primary cause of the crash was "the very unsatisfactory condition of the permanent way", being the poor fastening of the track, causing the track to spread and allowing the left front wheel of the locomotive to come off the rail. Other contributing factors included the structure of the bridge itself. When built, the base of its deck was found to be one metre lower than the road. Concrete was added on top to build the surface up level with the road. This additional weight significantly added to the destruction of the wooden train carriages. The disaster prompted substantial increases in rail-maintenance expenditure. The train driver, Edward Olencewicz, was exonerated by the inquiry. On 4 May 2017, New South Wales Premier Gladys Berejiklian apologised to the victims of the disaster, in Parliament House. Memorial Families and friends of the victims and survivors gather with surviving members of the rescue crews annually. The ceremony ends with the throwing of 84 roses on to the tracks to mark the number of passengers killed. In 2007, a plaque was placed on the bridge to mark the efforts of railway workers who assisted in rescuing survivors from the train. The original group, known as 'the trust', made submissions on rail safety issues, including recommending that fines for safety breaches be dedicated to rail safety improvements, and campaigning for the establishment of an independent railway safety ombudsman. Media A television docudrama, The Day of the Roses, was produced in 1998 about the accident. The Granville Train Disaster: 35 Years of Memories – a 2011 book by B. J. Gobbe, an emergency worker who attended the incident. A television documentary, The Train, produced by Graham McNeice from Shadow Productions was aired in 2012 on The History Channel Australia about the accident, and narrated by Brian Henderson. Revisiting the Granville Train Disaster of 1977 – a 2017 book by B. J. Gobbe. ABC's You Can't Ask That, series 4 episode 8 ("Disaster Survivors"), featured a victim from the accident who spoke about what happened and the long-term impacts on her life. See also Lewisham rail crash (United Kingdom) Eschede train disaster (Germany) Railway accidents in New South Wales Lists of rail accidents References External links Danger Ahead! Granville, Sydney, Australia Documentary on the Granville Train Disaster featuring Rescuer Gary Raymond & Survivor Paul Touzell (9 minute video) Granville Train Disaster Historians Web page Derailments in Australia Railway accidents and incidents in New South Wales Railway accidents in 1977 Bridge disasters in Australia Bridge disasters caused by collision Disasters in Sydney January 1977 events in Australia 1970s in Sydney Granville, New South Wales 1977 disasters in Australia
Granville rail disaster
[ "Technology" ]
1,230
[ "Railway accidents and incidents", "Bridge disasters caused by collision" ]
13,073
https://en.wikipedia.org/wiki/GNU%20Lesser%20General%20Public%20License
The GNU Lesser General Public License (LGPL) is a free-software license published by the Free Software Foundation (FSF). The license allows developers and companies to use and integrate a software component released under the LGPL into their own (even proprietary) software without being required by the terms of a strong copyleft license to release the source code of their own components. However, any developer who modifies an LGPL-covered component is required to make their modified version available under the same LGPL license. For proprietary software, code under the LGPL is usually used in the form of a shared library, so that there is a clear separation between the proprietary and LGPL components. The LGPL is primarily used for software libraries, although it is also used by some stand-alone applications. The LGPL was developed as a compromise between the strong copyleft of the GNU General Public License (GPL) and more permissive licenses such as the BSD licenses and the MIT License. The word "Lesser" in the title shows that the LGPL does not guarantee the end user's complete freedom in the use of software; it only guarantees the freedom of modification for components licensed under the LGPL, but not for any proprietary components. History The license was originally called the GNU Library General Public License and was first published in 1991, and adopted the version number 2 for parity with GPL version 2. The LGPL was revised in minor ways in the 2.1 point release, published in 1999, when it was renamed the GNU Lesser General Public License to reflect the FSF's position that not all libraries should use it. Version 3 of the LGPL was published in 2007 as a list of additional permissions applied to GPL version 3. In addition to the term "work based on the Program" of GPL, LGPL version 2 introduced two additional clarification terms "work based on the library" and "work that uses the library". LGPL version 3 partially dropped these terms. Differences from the GPL The main difference between the GPL and the LGPL is that the latter allows the work to be linked with (in the case of a library, "used by") a non-(L)GPLed program, regardless of whether it is licensed under a license of GPL family or other licenses. In LGPL 2.1, the non-(L)GPLed program can then be distributed under any terms if it is not a derivative work. If it is a derivative work, then the program's terms must allow for "modification of the work for the customer's own use and reverse engineering for debugging such modifications". Whether a work that uses an LGPL program is a derivative work or not is a legal issue. A standalone executable that dynamically links to a library through a .so, .dll, or similar medium is generally accepted as not being a derivative work as defined by the LGPL. It would fall under the definition of a "work that uses the Library". Paragraph 5 of the LGPL version 2.1 states: A program that contains no derivative of any portion of the Library, but is designed to work with the Library by being compiled or linked with it, is called a "work that uses the Library". Such a work, in isolation, is not a derivative work of the Library, and therefore falls outside the scope of this License. Essentially, if it is a "work that uses the library", then it must be possible for the software to be linked with a newer version of the LGPL-covered program. The most commonly used method for doing so is to use "a suitable shared library mechanism for linking". 
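As a concrete illustration of that shared-library pattern, the sketch below shows a program loading a hypothetical LGPL-covered library at run time through the ordinary dynamic-linking mechanism; libexample.so.1 and example_version are invented names, and the comments paraphrase the description above rather than offering legal advice.

import ctypes

# Load a hypothetical LGPL-licensed shared library at run time.
try:
    lib = ctypes.CDLL("libexample.so.1")
except OSError:
    raise SystemExit("libexample is not installed on this system")

lib.example_version.restype = ctypes.c_char_p
print(lib.example_version().decode())

# Because the library is reached only through the shared-object interface,
# the calling program is the kind of "work that uses the Library" described
# above, and a user can substitute a newer or locally modified build of the
# library without relinking or changing this program.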
Alternatively, a statically linked library is allowed if either source code or linkable object files are provided. Compatibility One feature of the LGPL is the permission to sublicense under the GPL any piece of software which is received under the LGPL (see section 3 of the LGPL version 2.1, and section 2 option b of the LGPL version 3). This feature allows for direct reuse of LGPLed code in GPLed libraries and applications. Version 3 of the LGPL is not inherently compatible with version 2 of the GPL. However, works using the latter that have given permission to use a later version of the GPL are compatible: a work released under the GPLv2 "or any later version" may be combined with code from an LGPL version 3 library, with the combined work as a whole falling under the terms of the GPLv3. FSF recommendations on library licensing The former name GNU Library General Public License gave some the impression that the FSF recommended that all software libraries should use the LGPL and programs should use the GPL. In the 1999 essay Why you shouldn't use the Lesser GPL for your next library, Richard Stallman explained that while the LGPL had not been deprecated, one should not necessarily use the LGPL for all libraries, as using the GPL can give an advantage to free-software developers. Programming language specifications The license uses terminology which is mainly intended for applications written in the C programming language or its family. Franz Inc., the developers of Allegro Common Lisp, published their own preamble to the license to clarify terminology in the Lisp context. The LGPL with this preamble is sometimes referred to as the LLGPL. In addition, Ada has a special feature, generics, which may prompt the use of the GNAT Modified General Public License (GMGPL): it allows code to link against or instantiate GMGPL-covered units without the code itself becoming covered by the GPL. C++ templates and header-only libraries have the same problem as Ada generics. Version 3 of the LGPL addresses such cases in section 3. Class inheritance Some concern has arisen about the suitability of object-oriented classes in LGPL-licensed code being inherited by non-(L)GPL code. Clarification is given on the official GNU website: The LGPL does not contain special provisions for inheritance, because none are needed. Inheritance creates derivative works in the same way as traditional linking, and the LGPL permits this type of derivative work in the same way as it permits ordinary function calls. See also GNU Affero General Public License GNU Free Documentation License GNAT Modified General Public License GPL linking exception Software using the GNU Lesser General Public License (category) References External links GNU Lesser General Public License v3.0 GNU Lesser General Public License v2.1—This version is deprecated by the FSF. GNU Lesser General Public License v2.0—This version is deprecated by the FSF. Derivative Works Lisping Copyleft: A Close Reading of the Lisp LGPL, 5 International Free and Open Source Software Law Review 15 (2013) Computer law Copyleft Free and open-source software licenses Copyleft software licenses GNU Project
GNU Lesser General Public License
[ "Technology" ]
1,465
[ "Computer law", "Computing and society" ]
13,088
https://en.wikipedia.org/wiki/Granite
Granite ( ) is a coarse-grained (phaneritic) intrusive igneous rock composed mostly of quartz, alkali feldspar, and plagioclase. It forms from magma with a high content of silica and alkali metal oxides that slowly cools and solidifies underground. It is common in the continental crust of Earth, where it is found in igneous intrusions. These range in size from dikes only a few centimeters across to batholiths exposed over hundreds of square kilometers. Granite is typical of a larger family of granitic rocks, or granitoids, that are composed mostly of coarse-grained quartz and feldspars in varying proportions. These rocks are classified by the relative percentages of quartz, alkali feldspar, and plagioclase (the QAPF classification), with true granite representing granitic rocks rich in quartz and alkali feldspar. Most granitic rocks also contain mica or amphibole minerals, though a few (known as leucogranites) contain almost no dark minerals. Granite is nearly always massive (lacking any internal structures), hard (falling between 6 and 7 on the Mohs hardness scale), and tough. These properties have made granite a widespread construction stone throughout human history. Description The word "granite" comes from the Latin granum, a grain, in reference to the coarse-grained structure of such a completely crystalline rock. Granites can be predominantly white, pink, or gray in color, depending on their mineralogy. Granitic rocks mainly consist of feldspar, quartz, mica, and amphibole minerals, which form an interlocking, somewhat equigranular matrix of feldspar and quartz with scattered darker biotite mica and amphibole (often hornblende) peppering the lighter color minerals. Occasionally some individual crystals (phenocrysts) are larger than the groundmass, in which case the texture is known as porphyritic. A granitic rock with a porphyritic texture is known as a granite porphyry. Granitoid is a general, descriptive field term for lighter-colored, coarse-grained igneous rocks. Petrographic examination is required for identification of specific types of granitoids. The alkali feldspar in granites is typically orthoclase or microcline and is often perthitic. The plagioclase is typically sodium-rich oligoclase. Phenocrysts are usually alkali feldspar. Granitic rocks are classified according to the QAPF diagram for coarse grained plutonic rocks and are named according to the percentage of quartz, alkali feldspar (orthoclase, sanidine, or microcline) and plagioclase feldspar on the A-Q-P half of the diagram. True granite (according to modern petrologic convention) contains between 20% and 60% quartz by volume, with 35% to 90% of the total feldspar consisting of alkali feldspar. Granitic rocks poorer in quartz are classified as syenites or monzonites, while granitic rocks dominated by plagioclase are classified as granodiorites or tonalites. Granitic rocks with over 90% alkali feldspar are classified as alkali feldspar granites. Granitic rock with more than 60% quartz, which is uncommon, is classified simply as quartz-rich granitoid or, if composed almost entirely of quartz, as quartzolite. True granites are further classified by the percentage of their total feldspar that is alkali feldspar. Granites whose feldspar is 65% to 90% alkali feldspar are syenogranites, while the feldspar in monzogranite is 35% to 65% alkali feldspar. A granite containing both muscovite and biotite micas is called a binary or two-mica granite. 
Two-mica granites are typically high in potassium and low in plagioclase, and are usually S-type granites or A-type granites, as described below. Another aspect of granite classification is the ratios of metals that potentially form feldspars. Most granites have a composition such that almost all their aluminum and alkali metals (sodium and potassium) are combined as feldspar. This is the case when K2O + Na2O + CaO > Al2O3 > K2O + Na2O. Such granites are described as normal or metaluminous. Granites in which there is not enough aluminum to combine with all the alkali oxides as feldspar (Al2O3 < K2O + Na2O) are described as peralkaline, and they contain unusual sodium amphiboles such as riebeckite. Granites in which there is an excess of aluminum beyond what can be taken up in feldspars (Al2O3 > CaO + K2O + Na2O) are described as peraluminous, and they contain aluminum-rich minerals such as muscovite. Physical properties The average density of granite is between , its compressive strength usually lies above 200 MPa (29,000 psi), and its viscosity near STP is 3–6·10²⁰ Pa·s. The melting temperature of dry granite at ambient pressure is ; it is strongly reduced in the presence of water, down to 650 °C at a few hundred megapascals of pressure. Granite has poor primary permeability overall, but strong secondary permeability through cracks and fractures if they are present. Chemical composition A worldwide average of the chemical composition of granite, by mass percent, based on 2485 analyses: The medium-grained equivalent of granite is microgranite. The extrusive igneous rock equivalent of granite is rhyolite. Occurrence Granitic rock is widely distributed throughout the continental crust. Much of it was intruded during the Precambrian age; it is the most abundant basement rock that underlies the relatively thin sedimentary veneer of the continents. Outcrops of granite tend to form tors, domes or bornhardts, and rounded massifs. Granites sometimes occur in circular depressions surrounded by a range of hills, formed by the metamorphic aureole or hornfels. Granite often occurs as relatively small, less than 100 km2 stock masses (stocks) and in batholiths that are often associated with orogenic mountain ranges. Small dikes of granitic composition called aplites are often associated with the margins of granitic intrusions. In some locations, very coarse-grained pegmatite masses occur with granite. Origin Granite forms from silica-rich (felsic) magmas. Felsic magmas are thought to form by addition of heat or water vapor to rock of the lower crust, rather than by decompression of mantle rock, as is the case with basaltic magmas. It has also been suggested that some granites found at convergent boundaries between tectonic plates, where oceanic crust subducts below continental crust, were formed from sediments subducted with the oceanic plate. The melted sediments would have produced magma intermediate in its silica content, which became further enriched in silica as it rose through the overlying crust. Early fractional crystallisation serves to reduce a melt in magnesium and chromium, and enrich the melt in iron, sodium, potassium, aluminum, and silicon. Further fractionation reduces the content of iron, calcium, and titanium. This is reflected in the high content of alkali feldspar and quartz in granite. The presence of granitic rock in island arcs shows that fractional crystallization alone can convert a basaltic magma to a granitic magma, but the quantities produced are small.
For example, granitic rock makes up just 4% of the exposures in the South Sandwich Islands. In continental arc settings, granitic rocks are the most common plutonic rocks, and batholiths composed of these rock types extend the entire length of the arc. There is no indication of magma chambers where basaltic magmas differentiate into granites, or of cumulates produced by mafic crystals settling out of the magma. Other processes must produce these great volumes of felsic magma. One such process is injection of basaltic magma into the lower crust, followed by differentiation, which leaves any cumulates in the mantle. Another is heating of the lower crust by underplating basaltic magma, which produces felsic magma directly from crustal rock. The two processes produce different kinds of granites, which may be reflected in the division between S-type (produced by underplating) and I-type (produced by injection and differentiation) granites, discussed below. Alphabet classification system The composition and origin of any magma that differentiates into granite leave certain petrological evidence as to what the granite's parental rock was. The final texture and composition of a granite are generally distinctive as to its parental rock. For instance, a granite that is derived from partial melting of metasedimentary rocks may have more alkali feldspar, whereas a granite derived from partial melting of metaigneous rocks may be richer in plagioclase. It is on this basis that the modern "alphabet" classification schemes are based. The letter-based Chappell & White classification system was proposed initially to divide granites into I-type (igneous source) granite and S-type (sedimentary sources). Both types are produced by partial melting of crustal rocks, either metaigneous rocks or metasedimentary rocks. I-type granites are characterized by a high content of sodium and calcium, and by a strontium isotope ratio, 87Sr/86Sr, of less than 0.708. 87Sr is produced by radioactive decay of 87Rb, and since rubidium is concentrated in the crust relative to the mantle, a low ratio suggests origin in the mantle. The elevated sodium and calcium favor crystallization of hornblende rather than biotite. I-type granites are known for their porphyry copper deposits. I-type granites are orogenic (associated with mountain building) and usually metaluminous. S-type granites are sodium-poor and aluminum-rich. As a result, they contain micas such as biotite and muscovite instead of hornblende. Their strontium isotope ratio is typically greater than 0.708, suggesting a crustal origin. They also commonly contain xenoliths of metamorphosed sedimentary rock, and host tin ores. Their magmas are water-rich, and they readily solidify as the water outgasses from the magma at lower pressure, so they less commonly make it to the surface than magmas of I-type granites, which are thus more common as volcanic rock (rhyolite). They are also orogenic but range from metaluminous to strongly peraluminous. Although both I- and S-type granites are orogenic, I-type granites are more common close to the convergent boundary than S-type. This is attributed to thicker crust further from the boundary, which results in more crustal melting.
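The feldspar-balance definitions given earlier (metaluminous, peraluminous, peralkaline) are normally evaluated on a molar basis, conventionally written as the A/CNK and A/NK ratios. The sketch below applies them to an invented oxide analysis; the weight-percent values are illustrative assumptions, not data from the text.

# Classify a granite analysis from oxide weight percent using molar proportions.
MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def classify(wt_percent):
    mol = {ox: wt_percent[ox] / MOLAR_MASS[ox] for ox in MOLAR_MASS}
    a_cnk = mol["Al2O3"] / (mol["CaO"] + mol["Na2O"] + mol["K2O"])
    a_nk = mol["Al2O3"] / (mol["Na2O"] + mol["K2O"])
    if a_nk < 1:
        return "peralkaline"   # Al2O3 < Na2O + K2O (molar)
    if a_cnk > 1:
        return "peraluminous"  # Al2O3 > CaO + Na2O + K2O (molar)
    return "metaluminous"

# Illustrative (assumed) analysis in wt%; prints "peraluminous".
print(classify({"Al2O3": 14.4, "CaO": 1.8, "Na2O": 3.5, "K2O": 4.1}))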
A-type granites show a peculiar mineralogy and geochemistry, with particularly high silicon and potassium at the expense of calcium and magnesium and a high content of high field strength cations (cations with a small radius and high electrical charge, such as zirconium, niobium, tantalum, and rare earth elements). They are not orogenic, forming instead over hot spots and continental rifting, and are metaluminous to mildly peralkaline and iron-rich. These granites are produced by partial melting of refractory lithologies such as granulites in the lower continental crust at high thermal gradients. This leads to significant extraction of hydrous felsic melts from granulite-facies restites. A-type granites occur in the Koettlitz Glacier Alkaline Province in the Royal Society Range, Antarctica. The rhyolites of the Yellowstone Caldera are examples of volcanic equivalents of A-type granite. M-type granite was later proposed to cover those granites that were clearly sourced from crystallized mafic magmas, generally sourced from the mantle. Although the fractional crystallisation of basaltic melts can yield small amounts of granites, which are sometimes found in island arcs, such granites must occur together with large amounts of basaltic rocks. H-type granites were suggested for hybrid granites, which were hypothesized to form by mixing between mafic and felsic magmas from different sources, such as M-type and S-type. However, the large difference in rheology between mafic and felsic magmas makes this process problematic in nature. Granitization Granitization is an old, and largely discounted, hypothesis that granite is formed in place through extreme metasomatism. The idea behind granitization was that fluids would supposedly bring in elements such as potassium, and remove others, such as calcium, to transform a metamorphic rock into granite. This was supposed to occur across a migrating front. However, experimental work had established by the 1960s that granites were of igneous origin. The mineralogical and chemical features of granite can be explained only by crystal-liquid phase relations, showing that there must have been at least enough melting to mobilize the magma. However, at sufficiently deep crustal levels, the distinction between metamorphism and crustal melting itself becomes vague. Conditions for crystallization of liquid magma are close enough to those of high-grade metamorphism that the rocks often bear a close resemblance. Under these conditions, granitic melts can be produced in place through the partial melting of metamorphic rocks by extracting melt-mobile elements such as potassium and silicon into the melts but leaving others such as calcium and iron in granulite residues. This may be the origin of migmatites. A migmatite consists of dark, refractory rock (the melanosome) that is permeated by sheets and channels of light granitic rock (the leucosome). The leucosome is interpreted as partial melt of a parent rock that has begun to separate from the remaining solid residue (the melanosome). If enough partial melt is produced, it will separate from the source rock, become more highly evolved through fractional crystallization during its ascent toward the surface, and become the magmatic parent of granitic rock. The residue of the source rock becomes a granulite. The partial melting of solid rocks requires high temperatures and the addition of water or other volatiles which lower the solidus temperature (temperature at which partial melting commences) of these rocks.
It was long debated whether crustal thickening in orogens (mountain belts along convergent boundaries) was sufficient to produce granite melts by radiogenic heating, but recent work suggests that this is not a viable mechanism. In-situ granitization requires heating by the asthenospheric mantle or by underplating with mantle-derived magmas. Ascent and emplacement Granite magmas have a density of 2.4 Mg/m3, much less than the 2.8 Mg/m3 of high-grade metamorphic rock. This gives them tremendous buoyancy, so that ascent of the magma is inevitable once enough magma has accumulated. However, the question of precisely how such large quantities of magma are able to shove aside country rock to make room for themselves (the room problem) is still a matter of research. Two main mechanisms are thought to be important: Stokes diapirism and fracture propagation. Of these two mechanisms, Stokes diapirism has been favoured for many years in the absence of a reasonable alternative. The basic idea is that magma will rise through the crust as a single mass through buoyancy. As it rises, it heats the wall rocks, causing them to behave as a power-law fluid and thus flow around the intrusion, allowing it to pass without major heat loss. This is entirely feasible in the warm, ductile lower crust where rocks are easily deformed, but runs into problems in the upper crust, which is far colder and more brittle. Rocks there do not deform so easily: for magma to rise as a diapir it would expend far too much energy in heating wall rocks, thus cooling and solidifying before reaching higher levels within the crust. Fracture propagation is the mechanism preferred by many geologists as it largely eliminates the major problems of moving a huge mass of magma through cold brittle crust. Magma rises instead in small channels along self-propagating dykes which form along new or pre-existing fracture or fault systems and networks of active shear zones. As these narrow conduits open, the first magma to enter solidifies and provides a form of insulation for later magma. These mechanisms can operate in tandem. For example, diapirs may continue to rise through the brittle upper crust through stoping, where the granite cracks the roof rocks, removing blocks of the overlying crust which then sink to the bottom of the diapir while the magma rises to take their place. This can occur as piecemeal stoping (stoping of small blocks of chamber roof), as cauldron subsidence (collapse of large blocks of chamber roof), or as roof foundering (complete collapse of the roof of a shallow magma chamber accompanied by a caldera eruption). There is evidence for cauldron subsidence at the Mt. Ascutney intrusion in eastern Vermont. Evidence for piecemeal stoping is found in intrusions that are rimmed with igneous breccia containing fragments of country rock. Assimilation is another mechanism of ascent, where the granite melts its way up into the crust and removes overlying material in this way. This is limited by the amount of thermal energy available, which must be replenished by crystallization of higher-melting minerals in the magma. Thus, the magma is melting crustal rock at its roof while simultaneously crystallizing at its base. This results in steady contamination with crustal material as the magma rises. This may not be evident in the major and minor element chemistry, since the minerals most likely to crystallize at the base of the chamber are the same ones that would crystallize anyway, but crustal assimilation is detectable in isotope ratios.
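To give a sense of scale to the density contrast quoted at the start of the preceding passage (2.4 Mg/m3 for granite magma against 2.8 Mg/m3 for high-grade metamorphic country rock), the back-of-envelope calculation below estimates the resulting buoyant pressure. The 10 km column height is an arbitrary illustrative figure, not a value from the source.

# Back-of-envelope buoyancy estimate from the density contrast quoted above.
rho_magma = 2400.0   # kg/m3 (2.4 Mg/m3)
rho_crust = 2800.0   # kg/m3 (2.8 Mg/m3)
g = 9.81             # m/s2

delta_rho = rho_crust - rho_magma          # 400 kg/m3 density contrast
gradient = delta_rho * g                   # buoyant pressure gradient, Pa per metre

column_height = 10_000.0                   # m; illustrative assumption, not from the source
overpressure = gradient * column_height    # Pa

print(f"buoyant pressure gradient: {gradient / 1000:.1f} kPa per metre")
print(f"over a {column_height / 1000:.0f} km magma column: {overpressure / 1e6:.0f} MPa")

Even on these rough numbers the driving pressure over a crustal-scale magma column is of the order of tens of megapascals, which is why the passage describes ascent as inevitable once enough magma has accumulated.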
Heat loss to the country rock means that ascent by assimilation is limited to a distance similar to the height of the magma chamber. Weathering Physical weathering occurs on a large scale in the form of exfoliation joints, which are the result of granite's expanding and fracturing as pressure is relieved when overlying material is removed by erosion or other processes. Chemical weathering of granite occurs when dilute carbonic acid, and other acids present in rain and soil waters, alter feldspar in a process called hydrolysis. As demonstrated in the following reaction, this causes potassium feldspar to form kaolinite, with potassium ions, bicarbonate, and silica in solution as byproducts: 2 KAlSi3O8 + 2 H2CO3 + 9 H2O → Al2Si2O5(OH)4 + 4 H4SiO4 + 2 K+ + 2 HCO3−. An end product of granite weathering is grus, which is often made up of coarse-grained fragments of disintegrated granite. Climatic variations also influence the weathering rate of granites. For about two thousand years, the relief engravings on Cleopatra's Needle obelisk had survived the arid conditions of its origin before its transfer to London. Within two hundred years, the red granite has drastically deteriorated in the damp and polluted air there. Soil development on granite reflects the rock's high quartz content and dearth of available bases, with the base-poor status predisposing the soil to acidification and podzolization in cool humid climates, as the weather-resistant quartz yields much sand. Feldspars also weather slowly in cool climes, allowing sand to dominate the fine-earth fraction. In warm humid regions, the weathering of feldspar as described above is accelerated so as to allow a much higher proportion of clay, with the Cecil soil series being a prime example of the consequent Ultisol great soil group. Natural radiation Granite is a natural source of radiation, like most natural stones. Potassium-40 is a radioactive isotope of weak emission, and a constituent of alkali feldspar, which in turn is a common component of granitic rocks, more abundant in alkali feldspar granite and syenites. Some granites contain around 10 to 20 parts per million (ppm) of uranium. By contrast, more mafic rocks, such as tonalite, gabbro and diorite, have 1 to 5 ppm uranium, and limestones and sedimentary rocks usually have equally low amounts. Many large granite plutons are sources for palaeochannel-hosted or roll front uranium ore deposits, where the uranium washes into the sediments from the granite uplands and associated, often highly radioactive pegmatites. Cellars and basements built into soils over granite can become a trap for radon gas, which is formed by the decay of uranium. Radon gas poses significant health concerns and is the second-leading cause of lung cancer in the US, behind smoking. Thorium occurs in all granites. Conway granite has been noted for its relatively high thorium concentration of 56±6 ppm. There is some concern that some granite sold as countertops or building material may be hazardous to health. Dan Steck of St. Johns University has stated that approximately 5% of all granite is of concern, with the caveat that only a tiny percentage of the tens of thousands of granite slab types have been tested. Resources from national geological survey organizations are accessible online to assist in assessing the risk factors in granite country and design rules relating, in particular, to preventing accumulation of radon gas in enclosed basements and dwellings.
In November 2008, National Health and Engineering Inc. of USA carried out a study of granite countertops (initiated and paid for by the Marble Institute of America). In this test, all of the 39 full-size granite slabs that were measured for the study showed radiation levels well below the European Union safety standards (section 4.1.1.1 of the National Health and Engineering study) and radon emission levels well below the average outdoor radon concentrations in the US. Industry and uses Granite and related marble industries are considered one of the oldest industries in the world, existing as far back as Ancient Egypt. Major modern exporters of granite include China, India, Italy, Brazil, Canada, Germany, Sweden, Spain and the United States. Antiquity The Red Pyramid of Egypt, named for the light crimson hue of its exposed limestone surfaces, is the third largest of Egyptian pyramids. The Pyramid of Menkaure, likely dating to 2510 BC, was constructed of limestone and granite blocks. The Great Pyramid of Giza (c. 2580 BC) contains a granite sarcophagus fashioned of "Red Aswan Granite". The mostly ruined Black Pyramid dating from the reign of Amenemhat III once had a polished granite pyramidion or capstone, which is now on display in the main hall of the Egyptian Museum in Cairo (see Dahshur). Other uses in Ancient Egypt include columns, door lintels, sills, jambs, and wall and floor veneer. How the Egyptians worked the solid granite is still a matter of debate. Tool marks described by the Egyptologist Anna Serotta indicate the use of flint tools on finer work with harder stones, e.g. when producing the hieroglyphic inscriptions. Patrick Hunt has postulated that the Egyptians used emery, which has greater hardness. The Seokguram Grotto in Korea is a Buddhist shrine and part of the Bulguksa temple complex. Completed in 774 AD, it is an artificial grotto constructed entirely of granite. The main Buddha of the grotto is a highly regarded piece of Buddhist art, and along with the temple complex to which it belongs, Seokguram was added to the UNESCO World Heritage List in 1995. Rajaraja Chola I of the Chola Dynasty in South India built the world's first temple entirely of granite in the 11th century AD in Tanjore, India. The Brihadeeswarar Temple dedicated to Lord Shiva was built in 1010. The massive Gopuram (ornate, upper section of shrine) is believed to have a mass of around 81 tonnes. It was the tallest temple in south India. Imperial Roman granite was quarried mainly in Egypt, and also in Turkey, and on the islands of Elba and Giglio. Granite became "an integral part of the Roman language of monumental architecture". The quarrying ceased around the third century AD. Beginning in Late Antiquity the granite was reused, a practice which since at least the early 16th century has been known as spolia. Through the process of case-hardening, granite becomes harder with age. The technology required to make tempered metal chisels was largely forgotten during the Middle Ages. As a result, Medieval stoneworkers were forced to use saws or emery to shorten ancient columns or hack them into discs. Giorgio Vasari noted in the 16th century that granite in quarries was "far softer and easier to work than after it has lain exposed" while ancient columns, because of their "hardness and solidity have nothing to fear from fire or sword, and time itself, that drives everything to ruin, not only has not destroyed them but has not even altered their colour."
Modern Sculpture and memorials In some areas, granite is used for gravestones and memorials. Granite is a hard stone and requires skill to carve by hand. Until the early 18th century, in the Western world, granite could be carved only by hand tools with generally poor results. A key breakthrough was the invention of steam-powered cutting and dressing tools by Alexander MacDonald of Aberdeen, inspired by seeing ancient Egyptian granite carvings. In 1832, the first polished tombstone of Aberdeen granite to be erected in an English cemetery was installed at Kensal Green Cemetery. It caused a sensation in the London monumental trade and for some years all polished granite ordered came from MacDonald's. As a result of the work of sculptor William Leslie, and later Sidney Field, granite memorials became a major status symbol in Victorian Britain. The royal sarcophagus at Frogmore was probably the pinnacle of its work, and at 30 tons one of the largest. It was not until the 1880s that rival machinery and works could compete with the MacDonald works. Modern methods of carving include using computer-controlled rotary bits and sandblasting over a rubber stencil. Leaving the letters, numbers, and emblems exposed and the remainder of the stone covered with rubber, the blaster can create virtually any kind of artwork or epitaph. The stone known as "black granite" is usually gabbro, which has a completely different chemical composition. Buildings Granite has been extensively used as a dimension stone and as flooring tiles in public and commercial buildings and monuments. Aberdeen in Scotland, which is constructed principally from local granite, is known as "The Granite City". Because of its abundance in New England, granite was commonly used to build foundations for homes there. The Granite Railway, America's first railroad, was built to haul granite from the quarries in Quincy, Massachusetts, to the Neponset River in the 1820s. Engineering Engineers have traditionally used polished granite surface plates to establish a plane of reference, since they are relatively impervious, inflexible, and maintain good dimensional stability. Sandblasted concrete with a heavy aggregate content has an appearance similar to rough granite, and is often used as a substitute when use of real granite is impractical. Granite tables are used extensively as bases or even as the entire structural body of optical instruments, CMMs, and very high precision CNC machines because of granite's rigidity, high dimensional stability, and excellent vibration characteristics. A most unusual use of granite was as the material of the tracks of the Haytor Granite Tramway, Devon, England, in 1820. Granite block is usually processed into slabs, which can be cut and shaped by a cutting center. In military engineering, Finland planted granite boulders along its Mannerheim Line to block invasion by Russian tanks in the Winter War of 1939–40. Paving Granite is used as a pavement material. This is because it is extremely durable, permeable and requires little maintenance. For example, in Sydney, Australia black granite stone is used for the paving and kerbs throughout the Central Business District. Curling stones Curling stones are traditionally fashioned of Ailsa Craig granite. The first stones were made in the 1750s, the original source being Ailsa Craig in Scotland. Because of the rarity of this granite, the best stones can cost as much as US$1,500. Between 60 and 70 percent of the stones used today are made from Ailsa Craig granite. 
Although the island is now a wildlife reserve, it is still quarried under license for Ailsa granite by Kays of Scotland for curling stones. Rock climbing Granite is one of the rocks most prized by climbers, for its steepness, soundness, crack systems, and friction. Well-known venues for granite climbing include the Yosemite Valley, the Bugaboos, the Mont Blanc massif (and peaks such as the Aiguille du Dru, the Mourne Mountains, the Adamello-Presanella Alps, the Aiguille du Midi and the Grandes Jorasses), the Bregaglia, Corsica, parts of the Karakoram (especially the Trango Towers), the Fitzroy Massif and the Paine Massif in Patagonia, Baffin Island, Ogawayama, the Cornish coast, the Cairngorms, Sugarloaf Mountain in Rio de Janeiro, Brazil, and the Stawamus Chief, British Columbia, Canada. Gallery See also References Citations Further reading External links Felsic rocks National symbols of Finland Plutonic rocks Sculpture materials Symbols of Wisconsin Industrial minerals
Granite
[ "Chemistry" ]
6,215
[ "Felsic rocks", "Igneous rocks by composition" ]
13,115
https://en.wikipedia.org/wiki/Gametophyte
A gametophyte is one of the two alternating multicellular phases in the life cycles of plants and algae. It is a haploid multicellular organism that develops from a haploid spore that has one set of chromosomes. The gametophyte is the sexual phase in the life cycle of plants and algae. It develops sex organs that produce gametes, haploid sex cells that participate in fertilization to form a diploid zygote which has a double set of chromosomes. Cell division of the zygote results in a new diploid multicellular organism, the second stage in the life cycle known as the sporophyte. The sporophyte can produce haploid spores by meiosis that on germination produce a new generation of gametophytes. Algae In some multicellular green algae (Ulva lactuca is one example), red algae and brown algae, sporophytes and gametophytes may be externally indistinguishable (isomorphic). In Ulva, the gametes are isogamous, all of one size, shape and general morphology. Land plants In land plants, anisogamy is universal. As in animals, female and male gametes are called, respectively, eggs and sperm. In extant land plants, either the sporophyte or the gametophyte may be reduced (heteromorphic). No extant gametophytes have stomata, but they have been found on fossil species like the early Devonian Aglaophyton from the Rhynie chert. Other fossil gametophytes found in the Rhynie chert show that they were much more developed than present forms, resembling the sporophyte in having a well-developed conducting strand, a cortex, an epidermis and a cuticle with stomata, but were much smaller. Bryophytes In bryophytes (mosses, liverworts, and hornworts), the gametophyte is the most visible stage of the life cycle. The bryophyte gametophyte is longer lived and nutritionally independent, and the sporophytes are attached to the gametophytes and dependent on them. When a moss spore germinates it grows to produce a filament of cells (called the protonema). The mature gametophyte of mosses develops into leafy shoots that produce sex organs (gametangia) that produce gametes. Eggs develop in archegonia and sperm in antheridia. In some bryophyte groups such as many liverworts of the order Marchantiales, the gametes are produced on specialized structures called gametophores (or gametangiophores). Vascular plants All vascular plants are sporophyte dominant, and a trend toward smaller and more sporophyte-dependent female gametophytes is evident as land plants evolved reproduction by seeds. Those vascular plants, such as clubmosses and many ferns, that produce only one type of spore are said to be homosporous. They have exosporic gametophytes — that is, the gametophyte is free-living and develops outside of the spore wall. Exosporic gametophytes can either be bisexual, capable of producing both sperm and eggs in the same thallus (monoicous), or specialized into separate male and female organisms (dioicous). In heterosporous vascular plants (plants that produce both microspores and megaspores), the gametophytes develop endosporically (within the spore wall). These gametophytes are dioicous, producing either sperm or eggs but not both. Ferns In most ferns, for example, in the leptosporangiate fern Dryopteris, the gametophyte is a photosynthetic free-living autotrophic organism called a prothallus that produces gametes and maintains the sporophyte during its early multicellular development.
However, in some groups, notably the clade that includes Ophioglossaceae and Psilotaceae, the gametophytes are subterranean and subsist by forming mycotrophic relationships with fungi. Homosporous ferns secrete a chemical called antheridiogen. Lycophytes Extant lycophytes produce two different types of gametophytes. In the homosporous families Lycopodiaceae and Huperziaceae, spores germinate into bisexual free-living, subterranean and mycotrophic gametophytes that derive nutrients from symbiosis with fungi. In Isoetes and Selaginella, which are heterosporous, microspores and megaspores are dispersed from sporangia either passively or by active ejection. Microspores produce microgametophytes which produce sperm. Megaspores produce reduced megagametophytes inside the spore wall. At maturity, the megaspore cracks open at the trilete suture to allow the male gametes to access the egg cells in the archegonia inside. The gametophytes of Isoetes appear to be similar in this respect to those of the extinct Carboniferous arborescent lycophytes Lepidodendron and Lepidostrobus. Seed plants The seed plant gametophyte life cycle is even more reduced than in basal taxa (ferns and lycophytes). Seed plant gametophytes are not independent organisms and depend upon the dominant sporophyte tissue for nutrients and water. With the exception of mature pollen, if the gametophyte tissue is separated from the sporophyte tissue it will not survive. Due to this complex relationship and the small size of the gametophyte tissue—in some situations single celled—differentiating with the human eye or even a microscope between seed plant gametophyte tissue and sporophyte tissue can be a challenge. While seed plant gametophyte tissue is typically composed of mononucleate haploid cells (1 x n), specific circumstances can occur in which the ploidy does vary widely despite still being considered part of the gametophyte. In gymnosperms, the male gametophytes are produced inside microspores within the microsporangia located inside male cones or microstrobili. In each microspore, a single gametophyte is produced, consisting of four haploid cells produced by meiotic division of a diploid microspore mother cell. At maturity, each microspore-derived gametophyte becomes a pollen grain. During its development, the water and nutrients that the male gametophyte requires are provided by the sporophyte tissue until they are released for pollination. The cell number of each mature pollen grain varies between the gymnosperm orders. Cycadophyta have 3 celled pollen grains while Ginkgophyta have 4 celled pollen grains. Gnetophyta may have 2 or 3 celled pollen grains depending on the species, and Coniferophyta pollen grains vary greatly ranging from single celled to 40 celled. One of these cells is typically a germ cell and other cells may consist of a single tube cell which grows to form the pollen tube, sterile cells, and/or prothallial cells which are both vegetative cells without an essential reproductive function. After pollination is successful, the male gametophyte continues to develop. If a tube cell was not developed in the microstrobilus, one is created after pollination via mitosis. The tube cell grows into the diploid tissue of the female cone and may branch out into the megastrobilus tissue or grow straight towards the egg cell. The megastrobilus sporophytic tissue provides nutrients for the male gametophyte at this stage. 
In some gymnosperms, the tube cell will create a direct channel from the site of pollination to the egg cell; in other gymnosperms, the tube cell will rupture in the middle of the megastrobilus sporophyte tissue. This occurs because in some gymnosperm orders the germ cell is nonmobile and a direct pathway is needed; however, in Cycadophyta and Ginkgophyta, the germ cell is mobile due to flagella being present and a direct tube cell path from the pollination site to the egg is not needed. In most species the germ cell can be more specifically described as a sperm cell which fuses with the egg cell during fertilization, though that is not always the case. In some Gnetophyta species, the germ cell will release two sperm nuclei that undergo a rare gymnosperm double fertilization process occurring solely with sperm nuclei and not with the fusion of developed cells. After fertilization is complete in all orders, the remaining male gametophyte tissue will deteriorate. The female gametophyte in gymnosperms differs from the male gametophyte as it spends its whole life cycle in one organ, the ovule located inside the megastrobilus or female cone. Similar to the male gametophyte, the female gametophyte normally is fully dependent on the surrounding sporophytic tissue for nutrients and the two organisms cannot be separated. However, the female gametophytes of Ginkgo biloba do contain chlorophyll and can produce some of their own energy, though not enough to support themselves without being supplemented by the sporophyte. The female gametophyte forms from a diploid megaspore that undergoes meiosis and starts out as a single cell. The size of the mature female gametophyte varies drastically between gymnosperm orders. In Cycadophyta, Ginkgophyta, Coniferophyta, and some Gnetophyta, the single-celled female gametophyte undergoes many cycles of mitosis, ending up consisting of thousands of cells once mature. At a minimum, two of these cells are egg cells and the rest are haploid somatic cells, but more egg cells may be present and their ploidy, though typically haploid, may vary. In select Gnetophyta, the female gametophyte stays single-celled. Mitosis does occur, but no cell divisions are ever made. This results in the mature female gametophyte in some Gnetophyta having many free nuclei in one cell. Once mature, this single-celled gametophyte is 90% smaller than the female gametophytes in other gymnosperm orders. After fertilization, the remaining female gametophyte tissue in gymnosperms serves as the nutrient source for the developing zygote (even in Gnetophyta, where the diploid zygote cell is much smaller at that stage, and for a while lives within the single-celled gametophyte). The precursor to the male angiosperm gametophyte is a diploid microspore mother cell located inside the anther. Once the microspore mother cell undergoes meiosis, 4 haploid cells are formed, each of which is a single-celled male gametophyte. The male gametophyte will develop via one or two rounds of mitosis inside the anther. This creates a 2- or 3-celled male gametophyte which becomes known as the pollen grain once dehiscing occurs. One cell is the tube cell, and the remaining cell/cells are the sperm cells. The development of the three-celled male gametophyte prior to dehiscing has evolved multiple times and is present in about a third of angiosperm species, allowing for faster fertilization after pollination.
Once pollination occurs, the tube cell grows in size, and if the male gametophyte is only 2 cells at this stage, the single sperm cell undergoes mitosis to create a second sperm cell. Just like in gymnosperms, the tube cell in angiosperms obtains nutrients from the sporophytic tissue, and may branch out into the pistil tissue or grow directly towards the ovule. Once double fertilization is completed, the tube cell and other vegetative cells, if present, are all that remains of the male gametophyte and soon degrade. The female gametophyte of angiosperms develops in the ovule (located inside the female or hermaphrodite flower). Its precursor is a diploid megaspore that undergoes meiosis which produces four haploid daughter cells. Three of these independent gametophyte cells degenerate and the one that remains is the gametophyte mother cell which normally contains one nucleus. In general, it will then divide by mitosis until it consists of 8 nuclei separated into 1 egg cell, 3 antipodal cells, 2 synergid cells, and a central cell that contains two nuclei. In select angiosperms, special cases occur in which the female gametophyte is not 7-celled with 8 nuclei. On the small end of the spectrum, some species have mature female gametophytes with only 4 cells, each with one nucleus. Conversely, some species have 10-celled mature female gametophytes consisting of 16 total nuclei. Once double fertilization occurs, the egg cell becomes the zygote which is then considered sporophyte tissue. Scholars still disagree on whether the fertilized central cell is considered gametophyte tissue. Some botanists consider this endosperm as gametophyte tissue with typically 2/3 being female and 1/3 being male, but as the central cell before double fertilization can range from 1n to 8n in special cases, the fertilized central cells range from 2n (50% male/female) to 9n (1/9 male, 8/9 female). However, other botanists consider the fertilized endosperm as sporophyte tissue. Some believe it is neither. Heterospory In heterosporic plants, there are two distinct kinds of gametophytes. Because the two gametophytes differ in form and function, they are termed heteromorphic, from hetero- "different" and morph "form". The egg-producing gametophyte is known as a megagametophyte, because it is typically larger, and the sperm-producing gametophyte is known as a microgametophyte. Species which produce egg and sperm on separate gametophyte plants are termed dioicous, while those that produce both eggs and sperm on the same gametophyte are termed monoicous. In heterosporous plants (water ferns, some lycophytes, as well as all gymnosperms and angiosperms), there are two distinct types of sporangia, each of which produces a single kind of spore that germinates to produce a single kind of gametophyte. However, not all heteromorphic gametophytes come from heterosporous plants. That is, some plants have distinct egg-producing and sperm-producing gametophytes, but these gametophytes develop from the same kind of spore inside the same sporangium; Sphaerocarpos is an example of such a plant. In seed plants, the microgametophyte is called pollen. Seed plant microgametophytes consist of several (typically two to five) cells when the pollen grains exit the sporangium. The megagametophyte develops within the megaspore of extant seedless vascular plants and within the megasporangium in a cone or flower in seed plants.
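The ploidy figures quoted in this passage for the fertilized central cell (2n with half the material maternal, the usual 3n endosperm with two-thirds maternal, and up to 9n with eight-ninths maternal) are simple nucleus counting, and the short sketch below reproduces that arithmetic. It assumes every contributing nucleus is haploid, and the function name is an illustrative choice rather than a term from the source.

# Ploidy bookkeeping for the fertilized central cell, as described above.
# Assumption: each maternal nucleus and the single sperm nucleus are haploid (1n).

def fertilized_central_cell(maternal_nuclei, paternal_nuclei=1):
    total = maternal_nuclei + paternal_nuclei          # resulting ploidy in multiples of n
    maternal_fraction = maternal_nuclei / total
    return total, maternal_fraction

for maternal in (1, 2, 8):   # special case, typical central cell with two nuclei, extreme case
    ploidy, fraction = fertilized_central_cell(maternal)
    print(f"{maternal} maternal nuclei + 1 sperm nucleus -> {ploidy}n, {fraction:.0%} maternal")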
In seed plants, the microgametophyte (pollen) travels to the vicinity of the egg cell (carried by a physical or animal vector) and produces two sperm by mitosis. In gymnosperms, the megagametophyte consists of several thousand cells and produces one to several archegonia, each with a single egg cell. The gametophyte becomes a food storage tissue in the seed. In angiosperms, the megagametophyte is reduced to only a few cells, and is sometimes called the embryo sac. A typical embryo sac contains seven cells and eight nuclei, one of which is the egg cell. Two nuclei fuse with a sperm nucleus to form the primary endospermic nucleus which develops to form triploid endosperm, which becomes the food storage tissue in the seed. See also References Further reading Plant morphology Plant anatomy Plant reproduction
Gametophyte
[ "Biology" ]
3,492
[ "Behavior", "Plant reproduction", "Plants", "Reproduction", "Plant morphology" ]
13,118
https://en.wikipedia.org/wiki/Grazia%20Deledda
Grazia Maria Cosima Damiana Deledda (or Gràtzia Deledda; 27 September 1871 – 15 August 1936) was an Italian writer who received the Nobel Prize for Literature in 1926 "for her idealistically inspired writings which with plastic clarity picture the life on her native island [i.e. Sardinia] and with depth and sympathy deal with human problems in general". She was the first Italian woman to receive the prize, and only the second woman to do so, after Selma Lagerlöf, who was awarded hers in 1909. Biography Deledda was born in Nuoro, Sardinia, into a middle-class family, to Giovanni Antonio Deledda and Francesca Cambosu, as the fourth of seven siblings. She attended elementary school (the minimum required at the time) and was then educated by a private tutor (a guest of one of her relatives) and moved on to study literature on her own. It was during this time that she started displaying an interest in writing short novels, mostly inspired by the life of Sardinian peasants and their struggles. Her teacher encouraged her to submit her writing to a newspaper and, at age 13, her first story was published in a local journal. Some of Deledda's early works were published in the fashion magazine L'ultima moda between 1888 and 1889. In 1890 Trevisani published Nell'azzurro (Into the Blue), her first collection of short stories. Deledda's main focus was the representation of poverty and the struggles associated with it through a combination of imaginary and autobiographical elements. Her family was not particularly supportive of her desire to write. Deledda's first novel, Fiori di Sardegna (Flowers of Sardinia), was published in 1892. Her 1896 book Paesaggi sardi, published by Speirani, is characterized by prose informed by both fiction and poetry. Around this time Deledda initiated a regular collaboration with newspapers and magazines, most notably La Sardegna, Piccola Rivista and Nuova Antologia. Her work earned significant visibility as well as critical interest. In October 1899, Deledda met Palmiro Madesani, a functionary of the Ministry of Finance, in Cagliari. Madesani and Deledda were married in 1900 and the couple moved to Rome right after the publication of Deledda's Il vecchio della montagna (The Old Man from the Mountain, 1900). Despite the birth of her two sons, Sardus (1901) and Francesco "Franz" (1904), Deledda managed to continue to write prolifically, publishing about a novel a year. In 1903 she published Elias Portolu, which was met with commercial and critical success, boosting her reputation as a writer. This was followed by Cenere (Ashes, 1904); L'edera (The Ivy, 1908); Sino al confine (To the Border, 1910); Colombi e sparvieri (Doves and Sparrows, 1912); and her most popular book, Canne al vento (Reeds in the Wind, 1913). In 1916 Cenere was the inspiration for a silent movie with famed Italian actress Eleonora Duse. It was the first and only time that Duse, a theatre performer, appeared in a film. Deledda was one of the contributors of the nationalist women's magazine Lidel, which was established in 1919. In 1926 Henrik Schück, a member of the Swedish Academy, nominated Deledda for the Nobel Prize in Literature. Deledda won "for her idealistically inspired writings which with plastic clarity picture the life on her native island and with depth and sympathy deal with human problems in general." She was awarded the Prize in a ceremony in Stockholm in 1926. Her initial response to the news was "Già?" ("Already?"). Deledda's win contributed to an increase in popularity of her writing.
Benito Mussolini, who had just consolidated his grip on power, sent Deledda a signed portrait of himself with a dedication in which he expressed his "profound admiration" for the writer. Flocks of journalists and photographers started visiting her home in Rome. Deledda initially welcomed them but eventually grew tired of the attention. One day she noticed that her beloved pet crow, Checca, was visibly irritated by the commotion, with people constantly coming in and out of the house. "If Checca has had enough, so have I," Deledda was quoted as saying, and she returned to a more secluded routine. The events also put a strain on Deledda's extremely methodical writing schedule. Her day would start with a late breakfast, followed by a morning of hard reading, lunch, a quick nap and a few hours of writing before dinner. Deledda continued to write even as she grew older and more fragile. Her subsequent works, La Casa del Poeta (The House of the Poet, 1930) and Sole d'Estate (Summer Sun, 1933), indicate a more optimistic view of life even as she was experiencing serious health issues. Deledda died in Rome at the age of 64 of breast cancer. La chiesa della solitudine (The Church of Solitude, 1936), Deledda's last novel, is a semi-autobiographical depiction of a young Italian woman coming to terms with a fatal disease. A completed manuscript of the novel Cosima was discovered after her death and published posthumously in 1937. Accolades Deledda's work has been highly regarded by writers of Italian literature, including Luigi Capuana, Giovanni Verga and Enrico Thovez. Sardinian writers, including Sergio Atzeni, Giulio Angioni and Salvatore Mannuzzu, were greatly influenced by her work, prompting them to found what later became known as the Sardinian Literary Spring. In 1947 the artist Amelia Camboni was commissioned to create a portrait of Deledda, which currently stands close to her home in Rome in the Pincio neighbourhood. Deledda's birthplace and childhood home in Nuoro was declared a national heritage building and purchased in 1968 by the Municipality of Nuoro, which in 1979 handed it over to the Regional Ethnographic Institute (ISRE) for the symbolic price of 1,000 Italian Lire. The Institute transformed the house into a museum commemorating the writer, now called the Museo Deleddiano. The museum consists of ten rooms showcasing the most important episodes in Deledda's life. A coal power plant named Grazia Deledda opened in Portoscuso in 1965; it has a capacity of 590 MW. Tribute On 10 December 2017 Google celebrated her with a Google Doodle. Work The life, customs, and traditions of the Sardinian people are prominent in Deledda's writing. She often relies on detailed geographical descriptions, and her characters often present a strong connection with their place of origin. Many of her characters are outcasts who silently struggle with isolation. Overall, Deledda's work focuses on love, pain and death, upon which rest feelings of sin and fatality. Her novels tend to criticize social values and moral norms rather than the people who are victims of such circumstances. The influence of the verism of Giovanni Verga, and at times the decadentism of Gabriele D'Annunzio, can be recognized in her works, although her writing style is not as ornate. Despite her groundbreaking role in Italian and world literature, Deledda has failed to be acknowledged as a feminist writer, possibly due to her tendency to depict women's pain and suffering as opposed to women's autonomy.
Complete list of works Below is a complete list of Deledda's works: Stella d'Oriente (1890) Nell'azzuro (1890) Fior di Sardegna (1891) Racconti sardi (1894) Tradizioni popolari di Nuoro in Sardegna (1894) La via del male (1896) Anime oneste (1895) Paesaggi sardi (1897) La tentazioni (1899) Il tesoro (1897) L'ospite (1897) La giustizia (1899) Nostra Signora del buon consiglio: leggenda sarda (1899) Le disgrazie che può causare il denaro (1899) Il Vecchio della montagna (1900) Dopo il divorzio (1902; English translation: After the Divorce, 1905) La regina delle tenebre (1902) Elias Portolu (1900) Cenere (1904; English translation: Ashes, 1908) Odio Vince (1904) Nostalgie (1905) L'ombra del passato (1907) Amori moderni (1907) L'edera (1908), English translation as Ivy by Mary Ann Frese Witt and Martha Witt (2019) Il nonno (1908), English translation of the short story "Il ciclamino" as "The Cyclamen" by Maria Di Salvatore and Pan Skordos, in "Journal of Italian Translation", Volume XIV, Number 1, Spring 2019 Il nostro padrone (1910) Sino al confine (1910) I giuochi della vita (1911) Nel deserto (1911) L'edera: dramma in tre atti (1912) Colombi e sparvieri (1912) Chiaroscuro (1912) Canne al vento (1913), Reeds in the Wind (1999 English translation by Martha King) Le colpe altrui (1914) Marianna Sirca (1915) Il fanciullo nascosto (1915) L'incendio nell'oliveto (1918) Il ritorno del figlio (1919) Naufraghi in porto (1920) La madre (1920; English translation: The Woman and the Priest, 1922; English translation: The Mother, by Mary G. Steegman, 1923) Il segreto dell'uomo solitario (1921) Cattive compagnie: novelle (1921) La grazia (1921) Il Dio dei viventi (1922) Silvio Pellico (1923) Il flauto nel bosco (1923) La danza della collana; A sinistra (1924) La fuga in Egitto (1925) Il sigillo d'amore (1926) Annalena Bilsini (1927) Il vecchio e i fanciulli (1928) Il dono di natale (1930) La casa del poeta (1930) Eugenia Grandet, Onorato di Balzac (1930) Il libro della terza classe elementare: letture, religione, storia, geografia, aritmetica (1931) Giaffa: racconti per ragazzi (1931) Il paese del vento (1931) Sole d'estate (1933) L'argine (1934) La chiesa della solitudine (1936); English translation by E. Ann Matter, The Church of Solitude (University of New York Press, 2002) Cosima (1937) published posthumously, English translation by Martha King (1988) Il cedro del Libano (1939) published posthumously Grazia Deledda: premio Nobel per la letteratura 1926 (1966) Opere scelte (1968) Letter inedite di Grazia Deledda ad Arturo Giordano direttore della rivista letteraria (Alchero: Nemaprress, 2004) See also List of female Nobel laureates References Bibliography Attilio Momigliano, "Intorno a Grazia Deledda", in Ultimi studi, La Nuova Italia, Florence, 1954. Emilio Cecchi, "Grazia Deledda", in Storia della Letteratura Italiana: Il Novecento, Garzanti, Milan, 1967. Antonio Piromalli, "Grazia Deledda", La Nuova Italia, Florence, 1968. Natalino Sapegno, "Prefazione", in Romanzi e novelle, Mondadori, Milan, 1972. Giulio Angioni, "Grazia Deledda: l'antropologia positivistica e la diversità della Sardegna", in Grazia Deledda nella cultura contemporanea, Satta, Nuoro, 1992 Giulio Angioni, "Introduzione", in Tradizioni popolari di Nuoro, Ilisso, Biblioteca Sarda, Nuoro, 2010. Voice recording The voice of Grazia Deledda speaking (in Italian) at the Nobel Prize Ceremony in 1926. External links Werkverzeichnis Summary of works by Grazia Deledda and complete texts Martha King's English translation of Cosima. 
Martha King's English translation of Canne al vento as Reeds in the Wind. BBC Radio 4's 10-part dramatisation of Reeds in the Wind 2012 Il bilinguismo di Grazia Deledda - Il Manifesto Sardo (article written in Italian) Biography: Deledda, Grazia at The Italian Women Writers project 1871 births 1936 deaths People from Nuoro 19th-century Italian novelists 20th-century Italian novelists Italian women poets Italian dramatists and playwrights Italian women dramatists and playwrights Italian women novelists Sardinian literature Nobel laureates in Literature Italian Nobel laureates Women Nobel laureates 20th-century Italian women writers 19th-century Italian women writers Sardinian women
Grazia Deledda
[ "Technology" ]
2,840
[ "Women Nobel laureates", "Women in science and technology" ]
13,120
https://en.wikipedia.org/wiki/Glenn%20T.%20Seaborg
Glenn Theodore Seaborg ( ; April 19, 1912February 25, 1999) was an American chemist whose involvement in the synthesis, discovery and investigation of ten transuranium elements earned him a share of the 1951 Nobel Prize in Chemistry. His work in this area also led to his development of the actinide concept and the arrangement of the actinide series in the periodic table of the elements. Seaborg spent most of his career as an educator and research scientist at the University of California, Berkeley, serving as a professor, and, between 1958 and 1961, as the university's second chancellor. He advised ten US presidents—from Harry S. Truman to Bill Clinton—on nuclear policy and was Chairman of the United States Atomic Energy Commission from 1961 to 1971, where he pushed for commercial nuclear energy and the peaceful applications of nuclear science. Throughout his career, Seaborg worked for arms control. He was a signatory to the Franck Report and contributed to the Limited Test Ban Treaty, the Nuclear Non-Proliferation Treaty and the Comprehensive Test Ban Treaty. He was a well-known advocate of science education and federal funding for pure research. Toward the end of the Eisenhower administration, he was the principal author of the Seaborg Report on academic science, and, as a member of President Ronald Reagan's National Commission on Excellence in Education, he was a key contributor to its 1983 report "A Nation at Risk". Seaborg was the principal or co-discoverer of ten elements: plutonium, americium, curium, berkelium, californium, einsteinium, fermium, mendelevium, nobelium and element 106, which, while he was still living, was named seaborgium in his honor. He said about this naming, "This is the greatest honor ever bestowed upon me—even better, I think, than winning the Nobel Prize. Future students of chemistry, in learning about the periodic table, may have reason to ask why the element was named for me, and thereby learn more about my work." He also discovered more than 100 isotopes of transuranium elements and is credited with important contributions to the chemistry of plutonium, originally as part of the Manhattan Project where he developed the extraction process used to isolate the plutonium fuel for the implosion-type atomic bomb. Early in his career, he was a pioneer in nuclear medicine and discovered isotopes of elements with important applications in the diagnosis and treatment of diseases, including iodine-131, which is used in the treatment of thyroid disease. In addition to his theoretical work in the development of the actinide concept, which placed the actinide series beneath the lanthanide series on the periodic table, he postulated the existence of super-heavy elements in the transactinide and superactinide series. After sharing the 1951 Nobel Prize in Chemistry with Edwin McMillan, he received approximately 50 honorary doctorates and numerous other awards and honors. The list of things named after Seaborg ranges from the chemical element seaborgium to the asteroid 4856 Seaborg. He was a prolific author, penning numerous books and 500 journal articles, often in collaboration with others. He was once listed in the Guinness Book of World Records as the person with the longest entry in Who's Who in America. Early life Glenn Theodore Seaborg was born in Ishpeming, Michigan, on April 19, 1912, the son of Herman Theodore (Ted) and Selma Olivia Erickson Seaborg. He had one sister, Jeanette, who was two years younger. His family spoke Swedish at home. 
When Glenn Seaborg was a boy, the family moved to Los Angeles County, California, settling in a subdivision called Home Gardens, later annexed to the City of South Gate, California. About this time he changed the spelling of his first name from Glen to Glenn. Seaborg kept a daily journal from 1927 until he suffered a stroke in 1998. As a youth, Seaborg was both a devoted sports fan and an avid movie buff. His mother encouraged him to become a bookkeeper as she felt his literary interests were impractical. He did not take an interest in science until his junior year when he was inspired by Dwight Logan Reid, a chemistry and physics teacher at David Starr Jordan High School in Watts. Seaborg graduated from Jordan in 1929 at the top of his class and received a Bachelor of Arts (BA) degree in chemistry at the University of California, Los Angeles, in 1933. He worked his way through school as a stevedore and a laboratory assistant at Firestone. Seaborg received his PhD in chemistry at the University of California, Berkeley, in 1937 with a doctoral thesis on the "Interaction of Fast Neutrons with Lead", in which he coined the term "nuclear spallation". Seaborg was a member of the professional chemistry fraternity Alpha Chi Sigma. As a graduate student in the 1930s Seaborg performed wet chemistry research for his advisor Gilbert Newton Lewis, and published three papers with him on the theory of acids and bases. Seaborg studied the text Applied Radiochemistry by Otto Hahn, of the Kaiser Wilhelm Institute for Chemistry in Berlin, and it had a major impact on his developing interests as a research scientist. For several years, Seaborg conducted important research in artificial radioactivity using the Lawrence cyclotron at UC Berkeley. He was excited to learn from others that nuclear fission was possible—but also chagrined, as his own research might have led him to the same discovery. Seaborg also became an adept interlocutor of Berkeley physicist Robert Oppenheimer. Oppenheimer had a daunting reputation and often answered a junior colleague's question before it had even been stated. Often the question answered was more profound than the one asked, but of little practical help. Seaborg learned to state his questions to Oppenheimer quickly and succinctly. Pioneering work in nuclear chemistry Seaborg remained at the University of California, Berkeley, for post-doctoral research. He followed Frederick Soddy's work investigating isotopes and contributed to the discovery of more than 100 isotopes of elements. Using one of Lawrence's advanced cyclotrons, John Livingood, Fred Fairbrother, and Seaborg created a new isotope of iron, iron-59 in 1937. Iron-59 was useful in the studies of the hemoglobin in human blood. In 1938, Livingood and Seaborg collaborated (as they did for five years) to create an important isotope of iodine, iodine-131, which is still used to treat thyroid disease. (Many years later, it was credited with prolonging the life of Seaborg's mother.) As a result of these and other contributions, Seaborg is regarded as a pioneer in nuclear medicine and is one of its most prolific discoverers of isotopes. In 1939 he became an instructor in chemistry at Berkeley, was promoted to assistant professor in 1941 and professor in 1945. University of California, Berkeley, physicist Edwin McMillan led a team that discovered element 93, which he named neptunium in 1940. In November, he was persuaded to leave Berkeley temporarily to assist with urgent research in radar technology. 
Since Seaborg and his colleagues had perfected McMillan's oxidation-reduction technique for isolating neptunium, he asked McMillan for permission to continue the research and search for element 94. McMillan agreed to the collaboration. Seaborg first reported alpha decay proportionate to only a fraction of the element 93 under observation. The first hypothesis for this alpha particle accumulation was contamination by uranium, which produces alpha-decay particles; analysis of alpha-decay particles ruled this out. Seaborg then postulated that a distinct alpha-producing element was being formed from element 93. In February 1941, Seaborg and his collaborators produced plutonium-239 through the bombardment of uranium. In their experiments bombarding uranium with deuterons, they observed the creation of neptunium, element 93. But it then underwent beta-decay, forming a new element, plutonium, with 94 protons. Plutonium is fairly stable, but undergoes alpha-decay, which explained the presence of alpha particles coming from neptunium. Thus, on March 28, 1941, Seaborg, physicist Emilio Segrè and Berkeley chemist Joseph W. Kennedy were able to show that plutonium (then known only as element 94) was fissile, an important distinction that was crucial to the decisions made in directing Manhattan Project research. In 1966, Room 307 of Gilman Hall on the campus at the Berkeley, where Seaborg did his work, was declared a US National Historic Landmark. In addition to plutonium, he is credited as a lead discoverer of americium, curium, and berkelium, and as a co-discoverer of californium, einsteinium, fermium, mendelevium, nobelium and seaborgium, the first element named after a living person. He shared the Nobel Prize in Chemistry in 1951 with Edwin McMillan for "their discoveries in the chemistry of the first transuranium elements." Scientific contributions during the Manhattan Project On April 19, 1942, Seaborg reached Chicago and joined the chemistry group at the Metallurgical Laboratory of the Manhattan Project at the University of Chicago, where Enrico Fermi and his group would later convert uranium-238 to plutonium-239 in a controlled nuclear chain reaction. Seaborg's role was to figure out how to extract the tiny bit of plutonium from the mass of uranium. Plutonium-239 was isolated in visible amounts using a transmutation reaction on August 20, 1942, and weighed on September 10, 1942, in Seaborg's Chicago laboratory. He was responsible for the multi-stage chemical process that separated, concentrated and isolated plutonium. This process was further developed at the Clinton Engineering Works in Oak Ridge, Tennessee, and then entered full-scale production at the Hanford Engineer Works, in Richland, Washington. Seaborg's theoretical development of the actinide concept resulted in a redrawing of the periodic table into its current configuration with the actinide series appearing below the lanthanide series. Seaborg developed the chemical elements americium and curium while in Chicago. He managed to secure patents for both elements. His patent on curium never proved commercially viable because of the element's short half-life, but americium is commonly used in household smoke detectors and thus provided a good source of royalty income to Seaborg in later years. 
Prior to the test of the first nuclear weapon, Seaborg joined with several other leading scientists in a written statement known as the Franck Report (secret at the time but since published) unsuccessfully calling on President Truman to conduct a public demonstration of the atomic bomb witnessed by the Japanese. Professor and Chancellor at the University of California, Berkeley After the conclusion of World War II and the Manhattan Project, Seaborg was eager to return to academic life and university research free from the restrictions of wartime secrecy. In 1946, he added to his responsibilities as a professor by heading the nuclear chemistry research at the Lawrence Radiation Laboratory operated by the University of California on behalf of the United States Atomic Energy Commission (AEC). Seaborg was named one of the "Ten Outstanding Young Men in America" by the US Junior Chamber of Commerce in 1947 (along with Richard Nixon and others). Seaborg was elected a Member of the National Academy of Sciences in 1948. From 1954 to 1961 he served as associate director of the radiation laboratory. He was appointed by President Truman to serve as a member of the General Advisory Committee of the AEC, an assignment he retained until 1960. Seaborg served as chancellor at the University of California, Berkeley, from 1958 to 1961. His term coincided with a relaxation of McCarthy-era restrictions on students' freedom of expression that had begun under his predecessor, Clark Kerr. In October 1958, Seaborg announced that the university had relaxed its prior prohibitions on political activity on a trial basis, and the ban on communists speaking on campus was lifted. This paved the way for the Free Speech Movement of 1964–65. Seaborg was an enthusiastic supporter of Cal's sports teams. San Francisco columnist Herb Caen was fond of pointing out that Seaborg's surname is an anagram of "Go Bears", a popular cheer at UC Berkeley. Seaborg was proud of the fact that the Cal Bears won their first and only National Collegiate Athletic Association (NCAA) basketball championship in 1959, while he was chancellor. The football team also won the conference title and played in the Rose Bowl that year. He served on the Faculty Athletic Committee for several years and was the co-author of a book, Roses from the Ashes: Breakup and Rebirth in Pacific Coast Intercollegiate Athletics (2000), concerning the Pacific Coast Conference recruiting scandal, and the founding of what is now the Pac-12, in which he played a role in restoring confidence in the integrity of collegiate sports. Seaborg served on the President's Science Advisory Committee (PSAC) during the Eisenhower administration. PSAC produced a report on "Scientific Progress, the Universities, and the Federal Government", also known as the "Seaborg Report", in November 1960, that urged greater federal funding of science. In 1959, he helped found the Berkeley Space Sciences Laboratory with Clark Kerr. Chairman of the Atomic Energy Commission After appointment by President John F. Kennedy and confirmation by the United States Senate, Seaborg was chairman of the Atomic Energy Commission (AEC) from 1961 to 1971. His pending appointment by President-elect Kennedy was nearly derailed in late 1960 when members of the Kennedy transition team learned that Seaborg had been listed in a U.S. News & World Report article as a member of "Nixon's Idea Men". 
Seaborg said that as a lifetime Democrat he was baffled when the article appeared associating him with outgoing Vice President Richard Nixon, a Republican whom Seaborg considered a casual acquaintance. During the early 1960s, Seaborg became concerned with the ecological and biological effects of nuclear weapons, especially those that would impact human life significantly. In response, he commissioned the Technical Analysis Branch of the AEC to study these matters further. Seaborg's provision for these innovative studies led the US Government to more seriously pursue the development and possible use of "clean" nuclear weapons. While chairman of the AEC, Seaborg participated on the negotiating team for the Limited Test Ban Treaty (LTBT), in which the US, UK, and USSR agreed to ban all above-ground test detonations of nuclear weapons. Seaborg considered his contributions to the achievement of the LTBT as one of his greatest accomplishments. Despite strict rules from the Soviets about photography at the signing ceremony, Seaborg used a tiny camera to take a close-up photograph of Soviet Premier Nikita Khrushchev as he signed the treaty. Seaborg enjoyed a close relationship with President Lyndon Johnson and influenced the administration to pursue the Nuclear Non-Proliferation Treaty. Seaborg was called to the White House in the first week of the Nixon Administration in January 1969 to advise President Richard Nixon on his first diplomatic crisis involving the Soviets and nuclear testing. He clashed with Nixon presidential adviser John Ehrlichman over the treatment of a Jewish scientist, Zalman Shapiro, whom the Nixon administration suspected of leaking nuclear secrets to Israel. Seaborg published several books and journal articles during his tenure at the AEC. He predicted the existence of elements beyond those on the periodic table, the transactinide series and the superactinide series of undiscovered synthetic elements. While most of these theoretical future elements have extremely short half-lives and thus no expected practical applications, he also hypothesized the existence of stable super-heavy isotopes of certain elements in an island of stability. Seaborg served as chairman of the AEC until 1971. Return to California Following his service as Chairman of the AEC, Seaborg returned to UC Berkeley where he was awarded the position of University Professor. At the time, there had been fewer University Professors at UC Berkeley than Nobel Prize winners. He also served as Chairman of the Lawrence Hall of Science where he became the principal investigator for Great Explorations in Math and Science (GEMS) working with director Jacqueline Barber. Seaborg served as chancellor at the University of California, Berkeley, from 1958 to 1961, and served as president of the American Association for the Advancement of Science in 1972 and as president of the American Chemical Society in 1976. In 1980, he transmuted several thousand atoms of bismuth-209 into gold () at the Lawrence Berkeley Laboratory. His experimental technique, using the lab's Bevalac particle accelerator, was able to remove protons and neutrons from the bismuth atoms by bombarding it with carbon and neon nuclei traveling near the speed of light. Seaborg's technique would have been far too expensive to enable routine manufacturing of gold, but his work was close to the mythical Philosopher's Stone. 
As gold has four fewer protons and (taking the only naturally occurring bulk isotopes of either) eight fewer neutrons than bismuth, a total of twelve nucleons have to be removed from the bismuth nucleus to produce gold using Seaborg's method. In 1981, Seaborg became a founding member of the World Cultural Council. In 1983, President Ronald Reagan appointed Seaborg to serve on the National Commission on Excellence in Education. The commission produced a report "A Nation at Risk: The Imperative for Educational Reform", which focused national attention on education as a national issue germane to the federal government. In 2008, Margaret Spellings wrote that Seaborg lived most of his later life in Lafayette, California, where he devoted himself to editing and publishing the journals that documented both his early life and later career. He rallied a group of scientists who criticized the science curriculum in the state of California, which he viewed as far too socially oriented and not nearly focused enough on hard science. California Governor Pete Wilson appointed Seaborg to head a committee that proposed changes to California's science curriculum despite outcries from labor organizations and others. Personal life In 1942, Seaborg married Helen Griggs, the secretary of physicist Ernest Lawrence. Under wartime pressure, Seaborg had moved to Chicago while engaged to Griggs. When Seaborg returned to accompany Griggs for the journey back to Chicago, friends expected them to marry in Chicago. But, eager to be married, Seaborg and Griggs impulsively got off the train in the town of Caliente, Nevada, for what they thought would be a quick wedding. When they asked for City Hall, they found Caliente had none—they would have to travel north to Pioche, the county seat. With no car, this was no easy feat, but one of Caliente's newest deputy sheriffs turned out to be a recent graduate of the Cal Berkeley chemistry department and was more than happy to do a favor for Seaborg. The deputy sheriff arranged for the wedding couple to ride up and back to Pioche in a mail truck. The witnesses at the Seaborg wedding were a clerk and a janitor. Glenn Seaborg and Helen Griggs Seaborg had seven children, of whom the first, Peter Glenn Seaborg, died in 1997 (his twin Paulette having died in infancy). The others were Lynne Seaborg Cobb, David Seaborg, Steve Seaborg, Eric Seaborg, and Dianne Seaborg. Seaborg was an avid hiker. Upon becoming Chairman of the AEC in 1961, he commenced taking daily hikes through a trail that he blazed at the headquarters site in Germantown, Maryland. He frequently invited colleagues and visitors to accompany him, and the trail became known as the "Glenn Seaborg Trail." He and his wife Helen are credited with blazing a trail in the East Bay area near their home in Lafayette, California. This trail has since become a part of the American Hiking Association's cross-country network of trails. Seaborg and his wife walked the trail network from Contra Costa County all the way to the California–Nevada border. Seaborg was elected a foreign member of the Royal Swedish Academy of Sciences in 1972 and a Foreign Member of the Royal Society (ForMemRS) of London in 1985. He was honored as Swedish-American of the Year in 1962 by the Vasa Order of America. In 1991, the organization named "Local Lodge Glenn T. Seaborg No. 719" in his honor during the Seaborg Honors ceremony at which he appeared. This lodge maintains a scholarship fund in his name, as does the unrelated Swedish-American Club of Los Angeles. 
Seaborg kept a close bond with his Swedish origins. He visited Sweden every so often, and his family were members of the Swedish Pemer Genealogical Society, a family association open to every descendant of the Pemer family, a Swedish family with German origin, from which Seaborg was descended on his mother's side. (In recent years, after both men's passings, it has been discovered that physicist colleague Edward J. Lofgren was also descended from the Pemer family.) Seaborg even responded to the Swedish king's Nobel prize toast in his mother's native region's dialect, which he said was "as if a Swede had 'y'alled' in English with a Southern accent." Death On August 24, 1998, while in Boston to attend a meeting of the American Chemical Society, Seaborg suffered a stroke, which led to his death six months later on February 25, 1999, at his home in Lafayette. Honors and awards During his lifetime, Seaborg is said to have been the author or co-author of numerous books and 500 scientific journal articles, many of them brief reports on fast-breaking discoveries in nuclear science, while others, most notably the actinide concept, represented major theoretical contributions in the history of science. He held more than 40 patents, among them the only patents ever issued for chemical elements (americium and curium), and received more than 50 doctorates and honorary degrees in his lifetime. At one time, he was listed in the Guinness Book of World Records as having the longest entry in Marquis Who's Who in America. In February 2005, he was posthumously inducted into the National Inventors Hall of Fame. In April 2011 the executive council of the Committee for Skeptical Inquiry (CSI) selected Seaborg for inclusion in CSI's Pantheon of Skeptics. The Pantheon of Skeptics was created by CSI to remember the legacy of deceased fellows of CSI and their contributions to the cause of scientific skepticism. His papers are in the Library of Congress. Seaborg was elected to the United States National Academy of Sciences in 1948, the American Philosophical Society in 1952, and the American Academy of Arts and Sciences in 1958. The American Chemical Society-Chicago Section honored him with the Willard Gibbs Award in 1966. The American Academy of Achievement presented Seaborg with the Golden Plate Award in 1972. The element seaborgium was named after Seaborg by Albert Ghiorso, E. Kenneth Hulet, and others, who also credited Seaborg as a co-discoverer. It was named while Seaborg was still alive, which proved controversial. He influenced the naming of so many elements that with the announcement of seaborgium, it was noted in Discover magazine's review of the year in science that he could receive a letter addressed in chemical elements: seaborgium, lawrencium (for the Lawrence Berkeley Laboratory where he worked), berkelium, californium, americium. Seaborgium is the first element ever to have been officially named after a living person. The second element to be so named is oganesson, in 2016, after Yuri Oganessian. Selected bibliography Citations General references Further reading External links including the Nobel Lecture on December 12, 1951 "The Transuranium Elements: Present Status" 1965 Audio Interview with Glenn Seaborg by Stephane Groueff Voices of the Manhattan Project National Academy of Sciences biography Annotated bibliography for Glenn Seaborg from the Alsos Digital Library Nobel Institute Official Biography UC Berkeley Biography of Chancellor Glenn T. 
Seaborg Lawrence Berkeley Laboratory's Glenn T. Seaborg website American Association for the Advancement of Science, List of Presidents Glenn Seaborg Trail, at Department of Energy official site Glenn T. Seaborg Center at Northern Michigan University Glenn T. Seaborg Medal and Symposium at the University of California, Los Angeles Video interview with Glenn Seaborg from 1986 with transcript "Clean" Nukes and the Ecology of Nuclear War , published by the National Security Archive 1912 births 1999 deaths 20th-century American chemists American Nobel laureates American people of Swedish descent American skeptics Chairpersons of the United States Atomic Energy Commission Leaders of the University of California, Berkeley Discoverers of chemical elements Enrico Fermi Award recipients Fellows of the American Association for the Advancement of Science Fellows of the American Physical Society Foreign members of the Royal Society Foreign members of the USSR Academy of Sciences Foreign members of the Russian Academy of Sciences Founding members of the World Cultural Council Manhattan Project people Members of the Royal Swedish Academy of Sciences Members of the United States National Academy of Sciences Members of the Serbian Academy of Sciences and Arts National Medal of Science laureates Nobel laureates in Chemistry People from Ishpeming, Michigan People involved with the periodic table South Gate, California University of California, Berkeley alumni University of California, Berkeley faculty University of California, Los Angeles alumni Vannevar Bush Award recipients Foreign members of the Serbian Academy of Sciences and Arts Members of the American Philosophical Society Recipients of Franklin Medal
Glenn T. Seaborg
[ "Chemistry" ]
5,212
[ "Periodic table", "People involved with the periodic table" ]
13,143
https://en.wikipedia.org/wiki/Generalized%20mean
In mathematics, generalized means (or power mean or Hölder mean from Otto Hölder) are a family of functions for aggregating sets of numbers. These include as special cases the Pythagorean means (arithmetic, geometric, and harmonic means). Definition If p is a non-zero real number, and x_1, ..., x_n are positive real numbers, then the generalized mean or power mean with exponent p of these positive real numbers is M_p(x_1, ..., x_n) = ((1/n)(x_1^p + ... + x_n^p))^(1/p) (see p-norm). For p = 0 we set it equal to the geometric mean (which is the limit of means with exponents approaching zero, as proved below): M_0(x_1, ..., x_n) = (x_1 · x_2 · ... · x_n)^(1/n). Furthermore, for a sequence of positive weights w_1, ..., w_n with w_1 + ... + w_n = 1 we define the weighted power mean as M_p(x_1, ..., x_n) = (w_1 x_1^p + ... + w_n x_n^p)^(1/p), and when p = 0, it is equal to the weighted geometric mean: M_0(x_1, ..., x_n) = x_1^(w_1) · x_2^(w_2) · ... · x_n^(w_n). The unweighted means correspond to setting all w_i = 1/n. Special cases A few particular values of p yield special cases with their own names: p → −∞: minimum; p = −1: harmonic mean; p = 0: geometric mean; p = 1: arithmetic mean; p = 2: root mean square or quadratic mean; p = 3: cubic mean; p → +∞: maximum. Properties Let x_1, ..., x_n be a sequence of positive real numbers, then the following properties hold: min(x_1, ..., x_n) ≤ M_p(x_1, ..., x_n) ≤ max(x_1, ..., x_n); M_p(x_1, ..., x_n) = M_p(P(x_1, ..., x_n)), where P is a permutation operator; M_p(b x_1, ..., b x_n) = b · M_p(x_1, ..., x_n) for every positive real b; M_p(x_1, ..., x_{n·k}) = M_p(M_p(x_1, ..., x_k), M_p(x_{k+1}, ..., x_{2k}), ..., M_p(x_{(n−1)·k+1}, ..., x_{n·k})). Generalized mean inequality In general, if p < q, then M_p(x_1, ..., x_n) ≤ M_q(x_1, ..., x_n), and the two means are equal if and only if x_1 = x_2 = ... = x_n. The inequality is true for real values of p and q, as well as positive and negative infinity values. It follows from the fact that, for all real p, dM_p(x_1, ..., x_n)/dp ≥ 0, which can be proved using Jensen's inequality. In particular, for p in {−1, 0, 1}, the generalized mean inequality implies the Pythagorean means inequality as well as the inequality of arithmetic and geometric means. Proof of the weighted inequality We will prove the weighted power mean inequality. For the purpose of the proof we will assume the following without loss of generality: w_i ∈ [0, 1] and w_1 + ... + w_n = 1. The proof for unweighted power means can be easily obtained by substituting w_i = 1/n. Equivalence of inequalities between means of opposite signs Suppose an average between power means with exponents p and q holds: (w_1 x_1^p + ... + w_n x_n^p)^(1/p) ≥ (w_1 x_1^q + ... + w_n x_n^q)^(1/q). Applying this to the reciprocals of the numbers, then: (w_1 x_1^(−p) + ... + w_n x_n^(−p))^(1/p) ≥ (w_1 x_1^(−q) + ... + w_n x_n^(−q))^(1/q). We raise both sides to the power of −1 (strictly decreasing function in positive reals): (w_1 x_1^(−p) + ... + w_n x_n^(−p))^(−1/p) ≤ (w_1 x_1^(−q) + ... + w_n x_n^(−q))^(−1/q). We get the inequality for means with exponents −p and −q, and we can use the same reasoning backwards, thus proving the inequalities to be equivalent, which will be used in some of the later proofs. Geometric mean For any q > 0 and non-negative weights summing to 1, the following inequality holds: (w_1 x_1^(−q) + ... + w_n x_n^(−q))^(−1/q) ≤ x_1^(w_1) · ... · x_n^(w_n) ≤ (w_1 x_1^q + ... + w_n x_n^q)^(1/q). The proof follows from Jensen's inequality, making use of the fact the logarithm is concave: log(x_1^(w_1) · ... · x_n^(w_n)) = w_1 log x_1 + ... + w_n log x_n ≤ log(w_1 x_1 + ... + w_n x_n). By applying the exponential function to both sides and observing that as a strictly increasing function it preserves the sign of the inequality, we get x_1^(w_1) · ... · x_n^(w_n) ≤ w_1 x_1 + ... + w_n x_n. Taking q-th powers of the x_i yields x_1^(q w_1) · ... · x_n^(q w_n) ≤ w_1 x_1^q + ... + w_n x_n^q, and hence x_1^(w_1) · ... · x_n^(w_n) ≤ (w_1 x_1^q + ... + w_n x_n^q)^(1/q). Thus, we are done for the inequality with positive q; the case for negatives is identical but for the swapped signs in the last step: x_1^(−q w_1) · ... · x_n^(−q w_n) ≤ w_1 x_1^(−q) + ... + w_n x_n^(−q). Of course, taking each side to the power of the negative number −1/q swaps the direction of the inequality: x_1^(w_1) · ... · x_n^(w_n) ≥ (w_1 x_1^(−q) + ... + w_n x_n^(−q))^(−1/q). Inequality between any two power means We are to prove that for any p < q the following inequality holds: (w_1 x_1^p + ... + w_n x_n^p)^(1/p) ≤ (w_1 x_1^q + ... + w_n x_n^q)^(1/q). If p is negative, and q is positive, the inequality is equivalent to the one proved above: (w_1 x_1^p + ... + w_n x_n^p)^(1/p) ≤ x_1^(w_1) · ... · x_n^(w_n) ≤ (w_1 x_1^q + ... + w_n x_n^q)^(1/q). The proof for positive p and q is as follows: Define the following function: f : R_+ → R_+, f(x) = x^(q/p). f is a power function, so it does have a second derivative: f''(x) = (q/p)((q/p) − 1) x^((q/p) − 2), which is strictly positive within the domain of f, since q > p, so we know f is convex. Using this, and the Jensen's inequality we get: f(w_1 x_1^p + ... + w_n x_n^p) ≤ w_1 f(x_1^p) + ... + w_n f(x_n^p), that is, (w_1 x_1^p + ... + w_n x_n^p)^(q/p) ≤ w_1 x_1^q + ... + w_n x_n^q. After raising both sides to the power of 1/q (an increasing function, since 1/q is positive) we get the inequality which was to be proven: (w_1 x_1^p + ... + w_n x_n^p)^(1/p) ≤ (w_1 x_1^q + ... + w_n x_n^q)^(1/q). Using the previously shown equivalence we can prove the inequality for negative p and q by replacing them with −q and −p, respectively. 
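The definition above translates directly into code. The following short Haskell sketch (the names powerMean and geometricMean are illustrative, not from any standard library) treats the exponent p = 0 as the geometric-mean limit and can be used to check numerically that the mean is non-decreasing in p, as the generalized mean inequality asserts.

powerMean :: Double -> [Double] -> Double
powerMean p xs
  | p == 0    = geometricMean xs
  | otherwise = (sum (map (** p) xs) / n) ** (1 / p)
  where
    n = fromIntegral (length xs)

-- Geometric mean via logarithms, which avoids overflow for long lists.
geometricMean :: [Double] -> Double
geometricMean xs = exp (sum (map log xs) / fromIntegral (length xs))

-- Example: for [1, 2, 4] this prints increasing values for p = -1, 0, 1, 2,
-- i.e. harmonic <= geometric <= arithmetic <= quadratic mean.
main :: IO ()
main = mapM_ (\p -> print (p, powerMean p [1, 2, 4])) [-1, 0, 1, 2]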
Generalized f-mean The power mean could be generalized further to the generalized -mean: This covers the geometric mean without using a limit with . The power mean is obtained for . Properties of these means are studied in de Carvalho (2016). Applications Signal processing A power mean serves a non-linear moving average which is shifted towards small signal values for small and emphasizes big signal values for big . Given an efficient implementation of a moving arithmetic mean called smooth one can implement a moving power mean according to the following Haskell code. powerSmooth :: Floating a => ([a] -> [a]) -> a -> [a] -> [a] powerSmooth smooth p = map (** recip p) . smooth . map (**p) For big it can serve as an envelope detector on a rectified signal. For small it can serve as a baseline detector on a mass spectrum. See also Arithmetic–geometric mean Average Heronian mean Inequality of arithmetic and geometric means Lehmer mean – also a mean related to powers Minkowski distance Quasi-arithmetic mean – another name for the generalized f-mean mentioned above Root mean square Notes References Further reading External links Power mean at MathWorld Examples of Generalized Mean A proof of the Generalized Mean on PlanetMath Means Inequalities Articles with example Haskell code
Generalized mean
[ "Physics", "Mathematics" ]
986
[ "Means", "Mathematical analysis", "Point (geometry)", "Mathematical theorems", "Geometric centers", "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Symmetry" ]
13,146
https://en.wikipedia.org/wiki/Gabbro
Gabbro ( ) is a phaneritic (coarse-grained and magnesium- and iron-rich), mafic intrusive igneous rock formed from the slow cooling magma into a holocrystalline mass deep beneath the Earth's surface. Slow-cooling, coarse-grained gabbro is chemically equivalent to rapid-cooling, fine-grained basalt. Much of the Earth's oceanic crust is made of gabbro, formed at mid-ocean ridges. Gabbro is also found as plutons associated with continental volcanism. Due to its variant nature, the term gabbro may be applied loosely to a wide range of intrusive rocks, many of which are merely "gabbroic". By rough analogy, gabbro is to basalt as granite is to rhyolite. Etymology The term "gabbro" was used in the 1760s to name a set of rock types that were found in the ophiolites of the Apennine Mountains in Italy. It was named after Gabbro, a hamlet near Rosignano Marittimo in Tuscany. Then, in 1809, the German geologist Christian Leopold von Buch used the term more restrictively in his description of these Italian ophiolitic rocks. He assigned the name "gabbro" to rocks that geologists nowadays would more strictly call "metagabbro" (metamorphosed gabbro). Petrology Gabbro is a coarse-grained (phaneritic) igneous rock that is relatively low in silica and rich in iron, magnesium, and calcium. Such rock is described as mafic. Gabbro is composed of pyroxene (mostly clinopyroxene) and calcium-rich plagioclase, with minor amounts of hornblende, olivine, orthopyroxene and accessory minerals. With significant (>10%) olivine or orthopyroxene it is classified as olivine gabbro or gabbronorite respectively. Where present, hornblende is typically found as a rim around augite crystals or as large grains enclosing smaller grains of other minerals (poikilitic grains). Geologists use rigorous quantitative definitions to classify coarse-grained igneous rocks, based on the mineral content of the rock. For igneous rocks composed mostly of silicate minerals, and in which at least 10% of the mineral content consists of quartz, feldspar, or feldspathoid minerals, classification begins with the QAPF diagram. The relative abundances of quartz (Q), alkali feldspar (A), plagioclase (P), and feldspathoid (F), are used to plot the position of the rock on the diagram. The rock will be classified as either a gabbroid or a dioritoid if quartz makes up less than 20% of the QAPF content, feldspathoid makes up less than 10% of the QAPF content, and plagioclase makes up more than 65% of the total feldspar content. Gabbroids are distinguished from dioritoids by an anorthite (calcium plagioclase) fraction of their total plagioclase of greater than 50%. The composition of the plagioclase cannot easily be determined in the field, and then a preliminary distinction is made between dioritoid and gabbroid based on the content of mafic minerals. A gabbroid typically has over 35% mafic minerals, mostly pyroxenes or olivine, while a dioritoid typically has less than 35% mafic minerals, which typically includes hornblende. Gabbroids form a family of rock types similar to gabbro, such as monzogabbro, quartz gabbro, or nepheline-bearing gabbro. Gabbro itself is more narrowly defined, as a gabbroid in which quartz makes up less than 5% of the QAPF content, feldspathoids are not present, and plagioclase makes up more than 90% of the feldspar content. Gabbro is distinct from anorthosite, which contains less than 10% mafic minerals. 
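The classification thresholds just described can be summarized as a small decision procedure. The Haskell sketch below only illustrates those cut-offs under simplifying assumptions; the record fields and function name are hypothetical, and a real classification would use the full QAPF diagram rather than a handful of comparisons.

-- All quantities are fractions between 0 and 1.
data ModalComposition = ModalComposition
  { quartzOfQAPF       :: Double  -- quartz share of the QAPF minerals
  , feldspathoidOfQAPF :: Double  -- feldspathoid share of the QAPF minerals
  , plagOfFeldspar     :: Double  -- plagioclase share of the total feldspar
  , anorthiteOfPlag    :: Double  -- anorthite share of the plagioclase
  , maficFraction      :: Double  -- mafic minerals share of the whole rock
  }

classifyCoarse :: ModalComposition -> String
classifyCoarse r
  | quartzOfQAPF r >= 0.20 || feldspathoidOfQAPF r >= 0.10
      || plagOfFeldspar r <= 0.65 = "neither gabbroid nor dioritoid"
  | maficFraction r < 0.10        = "anorthosite"
  | anorthiteOfPlag r > 0.50      = "gabbroid"   -- in the field, >35% mafic minerals is the usual proxy
  | otherwise                     = "dioritoid"

-- Example: 5% quartz, no feldspathoid, 95% plagioclase among feldspar,
-- 70% anorthite in that plagioclase, 45% mafic minerals -> "gabbroid".
example :: String
example = classifyCoarse (ModalComposition 0.05 0.00 0.95 0.70 0.45)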
Coarse-grained gabbroids are produced by slow crystallization of magma having the same composition as the lava that solidifies rapidly to form fine-grained (aphanitic) basalt. Subtypes There are a number of subtypes of gabbro recognized by geologists. Gabbros can be broadly divided into leucogabbros, with less than 35% mafic mineral content; mesogabbros, with 35% to 65% mafic mineral content; and melagabbros with more than 65% mafic mineral content. A rock with over 90% mafic mineral content will be classified instead as an ultramafic rock. A gabbroic rock with less than 10% mafic mineral content will be classified as an anorthosite. A more detailed classification is based on the relative percentages of plagioclase, pyroxene, hornblende, and olivine. The end members are: Normal gabbro (gabbro sensu stricto) is composed almost entirely of plagioclase and clinopyroxene (typically augite), with less than 5% each of hornblende, olivine, or orthopyroxene. Norite is composed almost entirely of plagioclase and orthopyroxene, with less than 5% each of hornblende, clinopyroxene, or olivine. Troctolite is composed almost entirely of plagioclase and olivine, with less than 5% each of pyroxene or hornblende. Hornblende gabbro is composed almost entirely of plagioclase and hornblende, with less than 5% each of pyroxene or olivine. Gabbros intermediate between these compositions are given names such as gabbronorite (for a gabbro intermediate between normal gabbro and norite, with almost equal amounts of clinopyroxene and orthopyroxene) or olivine gabbro (for a gabbro containing significant olivine, but almost no clinopyroxene or hornblende). A rock similar to normal gabbro but containing more orthopyroxene is called an orthopyroxene gabbro, while a rock similar to norite but containing more clinopyroxene is called a clinopyroxene norite. Gabbros are also sometimes classified as alkali or tholleiitic gabbros, by analogy with alkali or tholeiitic basalts, of which they are considered the intrusive equivalents. Alkali gabbro usually contains olivine, nepheline, or analcime, up to 10% of the mineral content, while tholeiitic gabbro contains both clinopyroxene and orthopyroxene, making it a gabbronorite. Gabbroids Gabbroids (also known as gabbroic-rocks) are a family of coarse-grained igneous rocks similar to gabbro: Quartz gabbro contains 5% to 20% quartz in its QAPF fraction. One example is the cizlakite at Pohorje in northeastern Slovenia, Monzogabbro contains 65% to 90% plagioclase out of its total feldspar content. Quartz monzogabbro combines the features of quartz gabbro and monzogabbro. It contains 5% to 20% quartz in its QAPF fraction, and 65% to 90% of its feldspar is plagioclase. Foid-bearing gabbro contains up to 10% feldspathoids rather than quartz. "Foid" in the name is usually replaced by the specific feldspathoid that is most abundant in the rock. For example, a nepheline-bearing gabbro is a foid-bearing gabbro in which the most abundant feldspathoid is nepheline. Foid-bearing monzogabbro resembles monzogabbro, but containing up to 10% feldspathoids in place of quartz. The same naming conventions apply as for foid-bearing gabbro, so that a gabbroid might be classified as a leucite-bearing monzogabbro. Gabbroids contain minor amounts, typically a few percent, of iron-titanium oxides such as magnetite, ilmenite, and ulvospinel. Apatite, zircon, and biotite may also be present as accessory minerals. Gabbro is generally coarse-grained, with crystals in the size range of 1 mm or larger. 
Finer-grained equivalents of gabbro are called diabase (also known as dolerite), although the term microgabbro is often used when extra descriptiveness is desired. Gabbro may be extremely coarse-grained to pegmatitic. Some pyroxene-plagioclase cumulates are essentially coarse-grained gabbro, and may exhibit acicular crystal habits. Gabbro is usually equigranular in texture, although it may also show ophitic texture (with laths of plagioclase enclosed in pyroxene). Distribution Nearly all gabbros are found in plutonic bodies, and the term (as the International Union of Geological Sciences recommends) is normally restricted just to plutonic rocks, although gabbro may be found as a coarse-grained interior facies of certain thick lavas. Gabbro can be formed as a massive, uniform intrusion via in-situ crystallisation of pyroxene and plagioclase, or as part of a layered intrusion as a cumulate formed by settling of pyroxene and plagioclase. An alternative name for gabbros formed by crystal settling is pyroxene-plagioclase adcumulate. Gabbro is much less common than more silica-rich intrusive rocks in the continental crust of the Earth. Gabbro and gabbroids occur in some batholiths but these rocks are relatively minor components of these very large intrusions because their iron and calcium content usually makes gabbro and gabbroid magmas too dense to have the necessary buoyancy. However, gabbro is an essential part of the oceanic crust, and can be found in many ophiolite complexes as layered gabbro underling sheeted dike complexes and overlying ultramafic rock derived from the Earth's mantle. These layered gabbros may have formed from relatively small but long-lived magma chambers underlying mid-ocean ridges. Layered gabbros are also characteristic of lopoliths, which are large, saucer-shaped intrusions that are primarily Precambrian in age. Prominent examples of lopoliths include the Bushveld Complex of South Africa, the Muskox intrusion of the Northwest Territories of Canada, the Rum layered intrusion of Scotland, the Stillwater complex of Montana, and the layered gabbros near Stavanger, Norway. Gabbros are also present in stocks associated with alkaline volcanism of continental rifting. Uses Gabbro often contains valuable amounts of chromium, nickel, cobalt, gold, silver, platinum, and copper sulfides. For example, the Merensky Reef is the world's most important source of platinum. Gabbro is known in the construction industry by the trade name of black granite. However, gabbro is hard and difficult to work, which limits its use. The term "indigo gabbro" is used as a common name for a mineralogically complex rock type often found in mottled tones of black and lilac-grey. It is mined in central Madagascar for use as a semi-precious stone. Indigo Gabbro can contain numerous minerals, including quartz and feldspar. Reports state that the dark matrix of the rock is composed of a mafic igneous rock, but whether this is basalt or gabbro is unclear. See also References External links Ocean drilling program gabbro petrology Scientists find the elusive gabbro Mafic rocks Plutonic rocks Phaneritic rocks
Gabbro
[ "Chemistry" ]
2,635
[ "Mafic rocks", "Igneous rocks by composition" ]
13,160
https://en.wikipedia.org/wiki/Gelatin
Gelatin or gelatine () is a translucent, colorless, flavorless food ingredient, commonly derived from collagen taken from animal body parts. It is brittle when dry and rubbery when moist. It may also be referred to as hydrolyzed collagen, collagen hydrolysate, gelatine hydrolysate, hydrolyzed gelatine, and collagen peptides after it has undergone hydrolysis. It is commonly used as a gelling agent in food, beverages, medications, drug or vitamin capsules, photographic films, papers, and cosmetics. Substances containing gelatin or functioning in a similar way are called gelatinous substances. Gelatin is an irreversibly hydrolyzed form of collagen, wherein the hydrolysis reduces protein fibrils into smaller peptides; depending on the physical and chemical methods of denaturation, the molecular weight of the peptides falls within a broad range. Gelatin is present in gelatin desserts, most gummy candy and marshmallows, ice creams, dips, and yogurts. Gelatin for cooking comes as powder, granules, and sheets. Instant types can be added to the food as they are; others must soak in water beforehand. Characteristics Properties Gelatin is a collection of peptides and proteins produced by partial hydrolysis of collagen extracted from the skin, bones, and connective tissues of animals such as domesticated cattle, chicken, pigs, and fish. During hydrolysis, some of the bonds between and within component proteins are broken. Its chemical composition is, in many aspects, closely similar to that of its parent collagen. Photographic and pharmaceutical grades of gelatin generally are sourced from cattle bones and pig skin. Gelatin is classified as a hydrogel. Gelatin is nearly tasteless and odorless with a colorless or slightly yellow appearance. It is transparent and brittle, and it can come as sheets, flakes, or as a powder. Polar solvents like hot water, glycerol, and acetic acid can dissolve gelatin, but it is insoluble in organic solvents like alcohol. Gelatin absorbs 5–10 times its weight in water to form a gel. The gel formed by gelatin can be melted by reheating, and it has an increasing viscosity under stress (thixotropic). The upper melting point of gelatin is below human body temperature, a factor that is important for mouthfeel of foods produced with gelatin. The viscosity of the gelatin-water mixture is greatest when the gelatin concentration is high and the mixture is kept cool at about . Commercial gelatin will have a gel strength of around 90 to 300 grams Bloom using the Bloom test of gel strength. Gelatin's strength (but not viscosity) declines if it is subjected to temperatures above , or if it is held at temperatures near 100 °C for an extended period of time. Gelatins have diverse melting points and gelation temperatures, depending on the source. For example, gelatin derived from fish has a lower melting and gelation point than gelatin derived from beef or pork. Composition When dry, gelatin consists of 98–99% protein, but it is not a nutritionally complete protein since it is missing tryptophan and is deficient in isoleucine, threonine, and methionine. The amino acid content of hydrolyzed collagen is the same as collagen. Hydrolyzed collagen contains 19 amino acids, predominantly glycine (Gly) 26–34%, proline (Pro) 10–18%, and hydroxyproline (Hyp) 7–15%, which together represent around 50% of the total amino acid content. Glycine is responsible for close packing of the chains. Presence of proline restricts the conformation. This is important for gelation properties of gelatin. 
Other amino acids that contribute highly include: alanine (Ala) 8–11%; arginine (Arg) 8–9%; aspartic acid (Asp) 6–7%; and glutamic acid (Glu) 10–12%. Research In 2011, the European Food Safety Authority Panel on Dietetic Products, Nutrition and Allergies concluded that "a cause and effect relationship has not been established between the consumption of collagen hydrolysate and maintenance of joints". Hydrolyzed collagen has been investigated as a type of wound dressing aimed at correcting imbalances in the wound microenvironment and the treatment of refractory wounds (chronic wounds that do not respond to normal treatment), as well as deep second-degree burn wounds. Safety concerns Hydrolyzed collagen, like gelatin, is made from animal by-products from the meat industry or sometimes animal carcasses removed and cleared by knackers, including skin, bones, and connective tissue. In 1997, the U.S. Food and Drug Administration (FDA), with support from the TSE (transmissible spongiform encephalopathy) Advisory Committee, began monitoring the potential risk of transmitting animal diseases, especially bovine spongiform encephalopathy (BSE), commonly known as mad cow disease. An FDA study from that year stated: "... steps such as heat, alkaline treatment, and filtration could be effective in reducing the level of contaminating TSE agents; however, scientific evidence is insufficient at this time to demonstrate that these treatments would effectively remove the BSE infectious agent if present in the source material." On 18 March 2016, the FDA finalized three previously issued interim final rules designed to further reduce the potential risk of BSE in human food. The final rule clarified that "gelatin is not considered a prohibited cattle material if it is manufactured using the customary industry processes specified." The Scientific Steering Committee (SSC) of the European Union in 2003 stated that the risk associated with bovine bone gelatin is very low or zero. In 2006, the European Food Safety Authority stated that the SSC opinion was confirmed, that the BSE risk of bone-derived gelatin was small, and that it recommended removal of the 2003 request to exclude the skull, brain, and vertebrae of bovine origin older than 12 months from the material used in gelatin manufacturing. Production In 2019, the worldwide demand of gelatin was about . On a commercial scale, gelatin is made from by-products of the meat and leather industries. Most gelatin is derived from pork skins, pork and cattle bones, or split cattle hides. Gelatin made from fish by-products avoids some of the religious objections to gelatin consumption. The raw materials are prepared by different curing, acid, and alkali processes that are employed to extract the dried collagen hydrolysate. These processes may take several weeks, and differences in such processes have great effects on the properties of the final gelatin products. Gelatin also can be prepared at home. Boiling certain cartilaginous cuts of meat or bones results in gelatin being dissolved into the water. Depending on the concentration, the resulting stock (when cooled) will form a jelly or gel naturally. This process is used for aspic. While many processes exist whereby collagen may be converted to gelatin, they all have several factors in common. The intermolecular and intramolecular bonds that stabilize insoluble collagen must be broken, and also, the hydrogen bonds that stabilize the collagen helix must be broken. 
The manufacturing processes of gelatin consists of several main stages: Pretreatments to make the raw materials ready for the main extraction step and to remove impurities that may have negative effects on physicochemical properties of the final gelatin product. Hydrolysis of collagen into gelatin. Extraction of gelatin from the hydrolysis mixture, which usually is done with hot water or dilute acid solutions as a multistage process. The refining and recovering treatments including filtration, clarification, evaporation, sterilization, drying, rutting, grinding, and sifting to remove the water from the gelatin solution, to blend the gelatin extracted, and to obtain dried, blended, ground final product. Pretreatments If the raw material used in the production of the gelatin is derived from bones, dilute acid solutions are used to remove calcium and other salts. Hot water or several solvents may be used to reduce the fat content, which should not exceed 1% before the main extraction step. If the raw material consists of hides and skin, then size reduction, washing, hair removal, and degreasing are necessary to prepare the materials for the hydrolysis step. Hydrolysis After preparation of the raw material, i.e., removing some of the impurities such as fat and salts, partially purified collagen is converted into gelatin through hydrolysis. Collagen hydrolysis is performed by one of three different methods: acid-, alkali-, and enzymatic hydrolysis. Acid treatment is especially suitable for less fully cross-linked materials such as pig skin collagen and normally requires 10 to 48 hours. Alkali treatment is suitable for more complex collagen such as that found in bovine hides and requires more time, normally several weeks. The purpose of the alkali treatment is to destroy certain chemical crosslinks still present in collagen. Within the gelatin industry, the gelatin obtained from acid-treated raw material has been called type-A gelatin and the gelatin obtained from alkali-treated raw material is referred to as type-B gelatin. Advances are occurring to optimize the yield of gelatin using enzymatic hydrolysis of collagen. The treatment time is shorter than that required for alkali treatment, and results in almost complete conversion to the pure product. The physical properties of the final gelatin product are considered better. Extraction Extraction is performed with either water or acid solutions at appropriate temperatures. All industrial processes are based on neutral or acid pH values because although alkali treatments speed up conversion, they also promote degradation processes. Acidic extraction conditions are extensively used in the industry, but the degree of acid varies with different processes. This extraction step is a multistage process, and the extraction temperature usually is increased in later extraction steps, which ensures minimum thermal degradation of the extracted gelatin. Recovery This process includes several steps such as filtration, evaporation, drying, grinding, and sifting. These operations are concentration-dependent and also dependent on the particular gelatin used. Gelatin degradation should be avoided and minimized, so the lowest temperature possible is used for the recovery process. Most recoveries are rapid, with all of the processes being done in several stages to avoid extensive deterioration of the peptide structure. A deteriorated peptide structure would result in a low gel strength, which is not generally desired. 
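As a toy summary of the naming convention just described (purely illustrative; the data types below do not come from any real process-control code), the pretreatment route determines the commercial designation of the product:

data Pretreatment = AcidTreated | AlkaliTreated | EnzymaticTreated

-- Industry convention stated above: acid-treated raw material gives type-A
-- gelatin, alkali-treated raw material gives type-B; the text assigns no
-- letter designation to the enzymatic route.
gelatinType :: Pretreatment -> Maybe String
gelatinType AcidTreated      = Just "type-A gelatin"
gelatinType AlkaliTreated    = Just "type-B gelatin"
gelatinType EnzymaticTreated = Nothing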
Uses Early history of food applications The 10th-century Kitab al-Tabikh includes a recipe for a fish aspic, made by boiling fish heads. A recipe for jelled meat broth is found in Le Viandier, written in or around 1375. In 15th century Britain, cattle hooves were boiled to produce a gel. By the late 17th century, the French inventor Denis Papin had discovered another method of gelatin extraction via boiling of bones. An English patent for gelatin production was granted in 1754. In 1812, the chemist further experimented with the use of hydrochloric acid to extract gelatin from bones, and later with steam extraction, which was much more efficient. The French government viewed gelatin as a potential source of cheap, accessible protein for the poor, particularly in Paris. Food applications in France and the United States during the 19th century appear to have established the versatility of gelatin, including the origin of its popularity in the US as Jell-O. In the mid-19th century, the American industrialist and inventor, Peter Cooper, registered a patent for a gelatin dessert powder he called "Portable Gelatin", which only needed the addition of water. In the late 19th century, Charles and Rose Knox set up the Charles B. Knox Gelatin Company in New York, which promoted and popularized the use of gelatin. Culinary uses Probably best known as a gelling agent in cooking, different types and grades of gelatin are used in a wide range of food and nonfood products. Common examples of foods that contain gelatin are gelatin desserts, trifles, aspic, marshmallows, candy corn, and confections such as Peeps, gummy bears, fruit snacks, and jelly babies. Gelatin may be used as a stabilizer, thickener, or texturizer in foods such as yogurt, cream cheese, and margarine; it is used, as well, in fat-reduced foods to simulate the mouthfeel of fat and to create volume. It also is used in the production of several types of Chinese soup dumplings, specifically Shanghainese soup dumplings, or xiaolongbao, as well as Shengjian mantou, a type of fried and steamed dumpling. The fillings of both are made by combining ground pork with gelatin cubes, and in the process of cooking, the gelatin melts, creating a soupy interior with a characteristic gelatinous stickiness. Gelatin is used for the clarification of juices, such as apple juice, and of vinegar. Isinglass is obtained from the swim bladders of fish. It is used as a fining agent for wine and beer. Besides hartshorn jelly, from deer antlers (hence the name "hartshorn"), isinglass was one of the oldest sources of gelatin. Cosmetics In cosmetics, hydrolyzed collagen may be found in topical creams, acting as a product texture conditioner, and moisturizer. Collagen implants or dermal fillers are also used to address the appearance of wrinkles, contour deficiencies, and acne scars, among others. The U.S. Food and Drug Administration has approved its use, and identifies cow (bovine) and human cells as the sources of these fillers. According to the FDA, the desired effects can last for 3–4 months, which is relatively the most short-lived compared to other materials used for the same purpose. Medicine Stabilizer in vaccines. Originally, gelatin constituted the shells of all drug and vitamin capsules to make them easier to swallow. Now, a vegetarian-acceptable alternative to gelatin, hypromellose, is also used, and is less expensive than gelatin to produce. Other technical uses Certain professional and theatrical lighting equipment use color gels to change the beam color. 
Historically, these were made with gelatin, hence the term, color gel. Some animal glues such as hide glue may be unrefined gelatin. It is used to hold silver halide crystals in an emulsion in virtually all photographic films and photographic papers. Despite significant effort, no suitable substitutes with the stability and low cost of gelatin have been found. Used as a carrier, coating, or separating agent for other substances, for example, it makes β-carotene water-soluble, thus imparting a yellow color to any soft drinks containing β-carotene. Ballistic gelatin is used to test and measure the performance of bullets shot from firearms. Gelatin is used as a binder in match heads and sandpaper. Cosmetics may contain a non-gelling variant of gelatin under the name hydrolyzed collagen (hydrolysate). Gelatin was first used as an external surface sizing for paper in 1337 and continued as a dominant sizing agent of all European papers through the mid-nineteenth century. In modern times, it is mostly found in watercolor paper, and occasionally in glossy printing papers, artistic papers, and playing cards. It maintains the wrinkles in crêpe paper. Biotechnology: Gelatin is also used in synthesizing hydrogels for tissue engineering applications. Gelatin is also used as a saturating agent in immunoassays, and as a coat. Gelatin degradation assay allows visualizing and quantifying invasion at the subcellular level instead of analyzing the invasive behavior of whole cells, for the study of cellular protrusions called invadopodia and podosomes, which are protrusive structures in cancer cells and play an important role in cell attachment and remodeling of the extracellular matrix (ECM). Religious considerations The consumption of gelatin from particular animals may be forbidden by religious rules or cultural taboos. Islamic halal and Jewish kosher customs generally require gelatin from sources other than pigs, such as cattle that have been slaughtered according to religious regulations (halal or kosher), or fish (that Jews and Muslims are allowed to consume). On the other hand, some Islamic jurists have argued that the chemical treatment "purifies" the gelatin enough to always be halal, an argument most common in the field of medicine. It has similarly been argued that gelatin in medicine is permissible in Judaism, as it is not used as food. According to The Jewish Dietary Laws, the book of kosher guidelines published by the Rabbinical Assembly, the organization of Conservative Jewish rabbis, all gelatin is kosher and pareve because the chemical transformation undergone in the manufacturing process renders it a different physical and chemical substance. Buddhist, Hindu, and Jain customs may require gelatin alternatives from sources other than animals, as many Hindus, almost all Jains and some Buddhists are vegetarian. See also Agar Carrageenan Konjac Pectin Gulaman References External links Animal products Conservation and restoration materials Dietary supplements Edible thickening agents Excipients Gels Skin care Structural proteins Photographic chemicals
Gelatin
[ "Physics", "Chemistry" ]
3,715
[ "Natural products", "Animal products", "Colloids", "Conservation and restoration materials", "Materials", "Gels", "Matter" ]
13,255
https://en.wikipedia.org/wiki/Hydrogen
Hydrogen is a chemical element; it has symbol H and atomic number 1. It is the lightest element and, at standard conditions, is a gas of diatomic molecules with the formula H2, sometimes called dihydrogen, hydrogen gas, molecular hydrogen, or simply hydrogen. It is colorless, odorless, non-toxic, and highly combustible. Constituting about 75% of all normal matter, hydrogen is the most abundant chemical element in the universe. Stars, including the Sun, mainly consist of hydrogen in a plasma state, while on Earth, hydrogen is found in water, organic compounds, as dihydrogen, and in other molecular forms. The most common isotope of hydrogen (protium, 1H) consists of one proton, one electron, and no neutrons. In the early universe, the formation of hydrogen's protons occurred in the first second after the Big Bang; neutral hydrogen atoms only formed about 370,000 years later during the recombination epoch as the universe expanded and plasma had cooled enough for electrons to remain bound to protons. Hydrogen gas was first produced artificially in the early 16th century by the reaction of acids with metals. Henry Cavendish, in 1766–81, identified hydrogen gas as a distinct substance and discovered its property of producing water when burned; hence its name means "water-former" in Greek. Understanding the colors of light absorbed and emitted by hydrogen was a crucial part of developing quantum mechanics. Hydrogen, typically nonmetallic except under extreme pressure, readily forms covalent bonds with most nonmetals, contributing to the formation of compounds like water and various organic substances. Its role is crucial in acid-base reactions, which mainly involve proton exchange among soluble molecules. In ionic compounds, hydrogen can take the form of either a negatively charged anion, where it is known as hydride, or a positively charged cation, H+, called a proton. Although tightly bonded to water molecules, protons strongly affect the behavior of aqueous solutions, as reflected in the importance of pH. Hydride, on the other hand, is rarely observed because it tends to deprotonate solvents, yielding H2. Industrial hydrogen production occurs through steam reforming of natural gas. The more familiar electrolysis of water is uncommon because it is energy-intensive, i.e. expensive. Its main industrial uses include fossil fuel processing, such as hydrocracking and hydrodesulfurization. Ammonia production also is a major consumer of hydrogen. Fuel cells for generating electricity from hydrogen are rapidly emerging. Properties Combustion Hydrogen gas is highly flammable: 2 H2(g) + O2(g) → 2 H2O(l) + 572 kJ (572 kJ/2 mol = 286 kJ/mol = 141.865 MJ/kg). Enthalpy of combustion: −286 kJ/mol. Hydrogen gas forms explosive mixtures with air in concentrations from 4–74% and with chlorine at 5–95%. The hydrogen autoignition temperature, the temperature of spontaneous ignition in air, is . Flame Pure hydrogen-oxygen flames emit ultraviolet light and with high oxygen mix are nearly invisible to the naked eye, as illustrated by the faint plume of the Space Shuttle Main Engine, compared to the highly visible plume of a Space Shuttle Solid Rocket Booster, which uses an ammonium perchlorate composite. The detection of a burning hydrogen leak may require a flame detector; such leaks can be very dangerous. Hydrogen flames in other conditions are blue, resembling blue natural gas flames. The destruction of the Hindenburg airship was a notorious example of hydrogen combustion and the cause is still debated. 
The visible flames in the photographs were the result of carbon compounds in the airship skin burning. Electron energy levels The ground state energy level of the electron in a hydrogen atom is −13.6 eV, equivalent to an ultraviolet photon of roughly 91 nm wavelength. The energy levels of hydrogen are referred to by consecutive quantum numbers, with being the ground state. The hydrogen spectral series corresponds to emission of light due to transitions from higher to lower energy levels. The energy levels of hydrogen can be calculated fairly accurately using the Bohr model of the atom, in which the electron "orbits" the proton, like how Earth orbits the Sun. However, the electron and proton are held together by electrostatic attraction, while planets and celestial objects are held by gravity. Due to the discretization of angular momentum postulated in early quantum mechanics by Bohr, the electron in the Bohr model can only occupy certain allowed distances from the proton, and therefore only certain allowed energies. An accurate description of the hydrogen atom comes from a quantum analysis that uses the Schrödinger equation, Dirac equation or Feynman path integral formulation to calculate the probability density of the electron around the proton. The most complex formulas include the small effects of special relativity and vacuum polarization. In the quantum mechanical treatment, the electron in a ground state hydrogen atom has no angular momentum—illustrating how the "planetary orbit" differs from electron motion. Spin isomers Molecular exists as two nuclear isomers that differ in the spin states of their nuclei. In the orthohydrogen form, the spins of the two nuclei are parallel, forming a spin triplet state having a total molecular spin ; in the parahydrogen form the spins are antiparallel and form a spin singlet state having spin . The equilibrium ratio of ortho- to para-hydrogen depends on temperature. At room temperature or warmer, equilibrium hydrogen gas contains about 25% of the para form and 75% of the ortho form. The ortho form is an excited state, having higher energy than the para form by 1.455 kJ/mol, and it converts to the para form over the course of several minutes when cooled to low temperature. The thermal properties of these isomers differ because each has distinct rotational quantum states. The ortho-to-para ratio in is an important consideration in the liquefaction and storage of liquid hydrogen: the conversion from ortho to para is exothermic and produces sufficient heat to evaporate most of the liquid if not converted first to parahydrogen during the cooling process. Catalysts for the ortho-para interconversion, such as ferric oxide and activated carbon compounds, are used during hydrogen cooling to avoid this loss of liquid. Phases Liquid hydrogen can exist at temperatures below hydrogen's critical point of 33 K. However, for it to be in a fully liquid state at atmospheric pressure, H2 needs to be cooled to . Hydrogen was liquefied by James Dewar in 1898 by using regenerative cooling and his invention, the vacuum flask. Liquid hydrogen is a common rocket propellant, and it can also be used as the fuel for an internal combustion engine or fuel cell. Solid hydrogen can be made at standard pressure, by decreasing the temperature below hydrogen's melting point of . It was collected for the first time by James Dewar in 1899. Multiple distinct solid phases exist, known as Phase I through Phase V, each exhibiting a characteristic molecular arrangement. 
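As a consistency check on the figures quoted in the energy-level discussion above (a standard textbook calculation, not a value taken from this article), the hydrogen energy levels and the corresponding ionization wavelength are:

E_n = −13.6 eV / n², for n = 1, 2, 3, ...
λ = hc / |E_1| ≈ (1240 eV·nm) / (13.6 eV) ≈ 91 nm

so a photon of roughly 91 nm is what is needed to remove the electron from the n = 1 ground state, matching the figure given above.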
Liquid and solid phases can exist in combination at the triple point, a substance known as slush hydrogen. Metallic hydrogen, a phase obtained at extremely high pressures (in excess of ), is an electrical conductor. It is believed to exist deep within giant planets like Jupiter. When ionized, hydrogen becomes a plasma. This is the form in which hydrogen exists within stars. Isotopes Hydrogen has three naturally occurring isotopes, denoted 1H, 2H, and 3H. Other, highly unstable nuclei (4H to 7H) have been synthesized in the laboratory but not observed in nature. 1H is the most common hydrogen isotope, with an abundance of >99.98%. Because the nucleus of this isotope consists of only a single proton, it is given the descriptive but rarely used formal name protium. It is the only stable isotope with no neutrons; see diproton for a discussion of why others do not exist. 2H, the other stable hydrogen isotope, is known as deuterium and contains one proton and one neutron in the nucleus. Nearly all deuterium in the universe is thought to have been produced at the time of the Big Bang, and has endured since then. Deuterium is not radioactive, and is not a significant toxicity hazard. Water enriched in molecules that include deuterium instead of normal hydrogen is called heavy water. Deuterium and its compounds are used as a non-radioactive label in chemical experiments and in solvents for 1H-NMR spectroscopy. Heavy water is used as a neutron moderator and coolant for nuclear reactors. Deuterium is also a potential fuel for commercial nuclear fusion. 3H is known as tritium and contains one proton and two neutrons in its nucleus. It is radioactive, decaying into helium-3 through beta decay with a half-life of 12.32 years. It is radioactive enough to be used in luminous paint to enhance the visibility of data displays, such as for painting the hands and dial-markers of watches. The watch glass prevents the small amount of radiation from escaping the case. Small amounts of tritium are produced naturally by cosmic rays striking atmospheric gases; tritium has also been released in nuclear weapons tests. It is used in nuclear fusion, as a tracer in isotope geochemistry, and in specialized self-powered lighting devices. Tritium has also been used in chemical and biological labeling experiments as a radiolabel. Unique among the elements, distinct names are assigned to its isotopes in common use. During the early study of radioactivity, heavy radioisotopes were given their own names, but these are mostly no longer used. The symbols D and T (instead of 2H and 3H) are sometimes used for deuterium and tritium, but the symbol P was already used for phosphorus and thus was not available for protium. In its nomenclatural guidelines, the International Union of Pure and Applied Chemistry (IUPAC) allows any of D, T, 2H, and 3H to be used, though 2H and 3H are preferred. The exotic atom muonium (symbol Mu), composed of an antimuon and an electron, can also be considered a light radioisotope of hydrogen. Because muons decay with a lifetime of about 2.2 μs, muonium is too unstable for observable chemistry. Nevertheless, muonium compounds are important test cases for quantum simulation, due to the mass difference between the antimuon and the proton, and IUPAC nomenclature incorporates such hypothetical compounds as muonium chloride (MuCl) and sodium muonide (NaMu), analogous to hydrogen chloride and sodium hydride respectively. Antihydrogen is the antimatter counterpart to hydrogen. It consists of an antiproton with a positron. 
Antihydrogen is the only type of antimatter atom to have been produced. Thermal and physical properties Table of thermal and physical properties of hydrogen (H) at atmospheric pressure: History 18th century In 1671, Irish scientist Robert Boyle discovered and described the reaction between iron filings and dilute acids, which results in the production of hydrogen gas. Boyle did not note that the gas was inflammable, but hydrogen would play a key role in overturning the phlogiston theory of combustion. In 1766, Henry Cavendish was the first to recognize hydrogen gas as a discrete substance, by naming the gas from a metal-acid reaction "inflammable air". He speculated that "inflammable air" was in fact identical to the hypothetical substance "phlogiston" and found in 1781 that the gas produces water when burned. He is usually given credit for the discovery of hydrogen as an element. In 1783, Antoine Lavoisier identified the element that came to be known as hydrogen when he and Laplace reproduced Cavendish's finding that water is produced when hydrogen is burned. Lavoisier produced hydrogen for his experiments on mass conservation by reacting a flux of steam with metallic iron through an incandescent iron tube heated in a fire. Anaerobic oxidation of iron by the protons of water at high temperature can be schematically represented by the set of following reactions: 1) Fe + H2O → FeO + H2; 2) 2 Fe + 3 H2O → Fe2O3 + 3 H2; 3) 3 Fe + 4 H2O → Fe3O4 + 4 H2. Many metals react similarly with water, leading to the production of hydrogen. In some situations, this H2-producing process is problematic, as is the case for zirconium cladding on nuclear fuel rods. 
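As a rough sense of scale for the steam–iron route described above (an illustrative calculation, not a figure from the source), the last of the reactions listed gives:

3 Fe + 4 H2O → Fe3O4 + 4 H2
mass of H2 per gram of iron ≈ (4 × 2.016 g/mol) / (3 × 55.85 g/mol) ≈ 0.048 g

so complete conversion of a kilogram of iron by steam yields on the order of 48 grams of hydrogen gas.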
Furthermore, study of the comparable simplicity of the hydrogen molecule and of the corresponding cation brought understanding of the nature of the chemical bond, which followed shortly after the quantum mechanical treatment of the hydrogen atom had been developed in the mid-1920s. Hydrogen-lifted airship The first hydrogen-filled balloon was invented by Jacques Charles in 1783. Hydrogen provided the lift for the first reliable form of air travel following the 1852 invention of the first hydrogen-lifted airship by Henri Giffard. German count Ferdinand von Zeppelin promoted the idea of rigid airships lifted by hydrogen that were later called Zeppelins, the first of which had its maiden flight in 1900. Regularly scheduled flights started in 1910, and by the outbreak of World War I in August 1914 they had carried 35,000 passengers without a serious incident. Hydrogen-lifted airships were used as observation platforms and bombers during the war. The first non-stop transatlantic crossing was made by the British airship R34 in 1919. Regular passenger service resumed in the 1920s and the discovery of helium reserves in the United States promised increased safety, but the U.S. government refused to sell the gas for this purpose. Therefore, hydrogen was used in the Hindenburg airship, which was destroyed in a midair fire over New Jersey on 6 May 1937. The incident was broadcast live on radio and filmed. Ignition of leaking hydrogen is widely assumed to be the cause, but later investigations pointed to the ignition of the aluminized fabric coating by static electricity. Regardless, the damage to hydrogen's reputation as a lifting gas was already done, and commercial hydrogen airship travel ceased. Hydrogen is still used, in preference to non-flammable but more expensive helium, as a lifting gas for weather balloons. Deuterium and tritium Deuterium was discovered in December 1931 by Harold Urey, and tritium was prepared in 1934 by Ernest Rutherford, Mark Oliphant, and Paul Harteck. Heavy water, which consists of deuterium in the place of regular hydrogen, was discovered by Urey's group in 1932. Hydrogen-cooled turbogenerator The first hydrogen-cooled turbogenerator, using gaseous hydrogen as a coolant in the rotor and the stator, went into service in 1937 at Dayton, Ohio, owned by the Dayton Power & Light Co. The choice was justified by the high thermal conductivity and very low viscosity of hydrogen gas, which give it lower drag than air. Hydrogen is the most common coolant used for generators of 60 MW and larger; smaller generators are usually air-cooled. Nickel–hydrogen battery The nickel–hydrogen battery was used for the first time in 1977 aboard the U.S. Navy's Navigation Technology Satellite-2 (NTS-2). The International Space Station, Mars Odyssey and the Mars Global Surveyor are equipped with nickel–hydrogen batteries. In the dark part of its orbit, the Hubble Space Telescope is also powered by nickel–hydrogen batteries, which were finally replaced in May 2009, more than 19 years after launch and 13 years beyond their design life. Chemistry Laboratory syntheses Hydrogen is produced in labs, often as a by-product of other reactions. Many metals react with water to produce hydrogen, but the rate of hydrogen evolution depends on the metal, the pH, and the presence of alloying agents. Most often, hydrogen evolution is induced by acids. The alkali and alkaline earth metals, aluminium, zinc, manganese, and iron react readily with aqueous acids.
This reaction is the basis of Kipp's apparatus, which was once used as a laboratory gas source: In the absence of acid, the evolution of hydrogen is slower. Because iron is a widely used structural material, its anaerobic corrosion is of technological significance: Many metals, such as aluminium, are slow to react with water because they form passivating oxide coatings. An alloy of aluminium and gallium, however, does react with water. At high pH, aluminium can produce hydrogen: Reactions of H2 H2 is relatively unreactive. The thermodynamic basis of this low reactivity is the very strong H–H bond, with a bond dissociation energy of 435.7 kJ/mol. It does form coordination complexes called dihydrogen complexes. These species provide insights into the early steps in the interactions of hydrogen with metal catalysts. According to neutron diffraction, the metal and two H atoms form a triangle in these complexes. The H–H bond remains intact but is elongated. They are acidic. Although exotic on Earth, the ion is common in the universe. It is a triangular species, like the aforementioned dihydrogen complexes. It is known as protonated molecular hydrogen or the trihydrogen cation. Hydrogen reacts directly with fluorine, chlorine, and bromine to give HF, HCl, and HBr, respectively. The conversion involves a radical chain mechanism. With heating, H2 reacts efficiently with the alkali and alkaline earth metals to give the saline hydrides of the formula MH and MH2, respectively. One of the striking properties of H2 is its inertness toward unsaturated organic compounds, such as alkenes and alkynes. These species react with H2 only in the presence of catalysts. Especially active catalysts are the platinum metals (platinum, rhodium, palladium, etc.). A major driver for the mining of these rare and expensive elements is their use as catalysts. Hydrogen-containing compounds Most known compounds contain hydrogen, not as H2, but as covalently bonded H atoms. This bonding is the basis of organic chemistry and biochemistry. Hydrogen forms many compounds with carbon, called hydrocarbons; hydrocarbons and their derivatives are known as organic compounds, and in nature these almost always also contain "heteroatoms" such as nitrogen, oxygen, and sulfur. The study of their properties is known as organic chemistry, and their study in the context of living organisms is called biochemistry. By some definitions, "organic" compounds are only required to contain carbon. However, most of them also contain hydrogen, and because it is the carbon-hydrogen bond that gives this class of compounds most of its particular chemical characteristics, carbon-hydrogen bonds are required in some definitions of the word "organic" in chemistry. Millions of hydrocarbons are known, and they are usually formed by complicated pathways that seldom involve elemental hydrogen. Hydrides Hydrogen forms compounds with less electronegative elements, such as metals and main group elements. In these compounds, hydrogen takes on a partial negative charge. The term "hydride" suggests that the H atom has acquired a negative or anionic character, denoted . Usually hydride refers to hydrogen in a compound with a more electropositive element. For hydrides other than those of the group 1 and 2 metals, the term can be misleading, considering the low electronegativity of hydrogen. A well-known hydride is lithium aluminium hydride, whose anion carries hydridic centers firmly attached to the Al(III). Perhaps the most extensive series of hydrides are the boranes, compounds consisting only of boron and hydrogen.
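Returning to the acid-driven hydrogen evolution described at the start of this subsection, the following minimal Python sketch (illustrative only; zinc and hydrochloric acid are assumed as the example reagents, a classic charge for a Kipp's apparatus) estimates the volume of hydrogen evolved per gram of metal from the reaction Zn + 2 HCl → ZnCl2 + H2, treating H2 as an ideal gas.

M_ZN = 65.38     # g/mol, molar mass of zinc
VM_STP = 22.4    # L/mol, molar volume of an ideal gas at 0 °C and 1 atm

def h2_volume_from_zinc(grams_zn):
    # One mole of H2 is evolved per mole of zinc dissolved in excess acid.
    moles_zn = grams_zn / M_ZN
    return moles_zn * VM_STP

print(f"{h2_volume_from_zinc(1.0):.2f} L of H2 per gram of zinc")  # about 0.34 L

The same bookkeeping applies to any of the metal–acid reactions mentioned above once the stoichiometry is known.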
In such compounds, hydride can bond to the electropositive element not only as a terminal ligand but also as a bridging ligand. In diborane (), four H atoms are terminal and two bridge between the two B atoms. Protons and acids When bonded to a more electronegative element, particularly fluorine, oxygen, or nitrogen, hydrogen can participate in a form of medium-strength noncovalent bonding with another electronegative element with a lone pair, a phenomenon called hydrogen bonding that is critical to the stability of many biological molecules. Protons can also be obtained by the oxidation of H2. Under the Brønsted–Lowry acid–base theory, acids are proton donors, while bases are proton acceptors. A bare proton essentially cannot exist in anything other than a vacuum; otherwise it attaches to other atoms, ions, or molecules. Even species as inert as methane can be protonated. The term "proton" is used loosely to refer to such solvated hydrogen cations, without any implication that any single protons exist freely as a species. To avoid the implication of a naked proton in solution, acidic aqueous solutions are sometimes considered to contain the "hydronium ion" () or, still more accurately, larger hydrated clusters. Other oxonium ions are found when water is in acidic solution with other solvents. Occurrence Cosmic Hydrogen, as atomic H, is the most abundant chemical element in the universe, making up 75% of normal matter by mass and >90% by number of atoms. In astrophysics, neutral hydrogen in the interstellar medium is called H I and ionized hydrogen is called H II. Radiation from stars ionizes H I to H II, creating spheres of ionized H II around stars. In the chronology of the universe, neutral hydrogen dominated until the birth of stars during the era of reionization, which led to bubbles of ionized hydrogen that grew and merged over roughly 500 million years. Neutral hydrogen atoms are the source of the 21-cm hydrogen line at 1420 MHz, which is detected in order to probe primordial hydrogen. The large amount of neutral hydrogen found in the damped Lyman-alpha systems is thought to dominate the cosmological baryonic density of the universe up to a redshift of z = 4. Hydrogen is found in great abundance in stars and gas giant planets. Molecular clouds of hydrogen are associated with star formation. Hydrogen plays a vital role in powering stars through the proton–proton reaction in lower-mass stars, and through the CNO cycle of nuclear fusion in the case of stars more massive than the Sun. Hydrogen plasma states have properties quite distinct from those of molecular or atomic hydrogen. As a plasma, hydrogen's electron and proton are not bound together, resulting in very high electrical conductivity and high emissivity (producing the light from the Sun and other stars). The charged particles are highly influenced by magnetic and electric fields. For example, in the solar wind they interact with the Earth's magnetosphere, giving rise to Birkeland currents and the aurora. A molecular form called protonated molecular hydrogen () is found in the interstellar medium, where it is generated by ionization of molecular hydrogen from cosmic rays. This ion has also been observed in the upper atmosphere of Jupiter. The ion is long-lived in outer space due to the low temperature and density. It is one of the most abundant ions in the universe, and it plays a notable role in the chemistry of the interstellar medium. Neutral triatomic hydrogen can exist only in an excited form and is unstable. By contrast, the positive hydrogen molecular ion () is rare in the universe.
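As a quick check on the 21-cm line mentioned above (simple arithmetic, not an additional claim from the source), the wavelength corresponding to the 1420 MHz hyperfine transition is

\[ \lambda = \frac{c}{\nu} = \frac{2.998 \times 10^{8}\ \mathrm{m\,s^{-1}}}{1.420 \times 10^{9}\ \mathrm{Hz}} \approx 0.211\ \mathrm{m} \approx 21\ \mathrm{cm}, \]

which is why the transition is referred to as the 21-cm line.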
Terrestrial Under ordinary conditions on Earth, elemental hydrogen exists as the diatomic gas H2. Hydrogen gas is very rare in Earth's atmosphere (around 0.53 ppm on a molar basis) because of its light weight, which enables it to escape the atmosphere more rapidly than heavier gases. However, hydrogen is the third most abundant element on the Earth's surface, mostly in the form of chemical compounds such as hydrocarbons and water. Despite its low concentration in our atmosphere, terrestrial hydrogen is sufficiently abundant to support the metabolism of several bacteria. Deposits of hydrogen gas have been discovered in several countries, including Mali, France and Australia. Production and storage Industrial routes Many methods exist for producing H2, but three dominate commercially: steam reforming, often coupled to the water-gas shift reaction; partial oxidation of hydrocarbons; and water electrolysis. Steam reforming Hydrogen is mainly produced by steam methane reforming (SMR), the reaction of water and methane. Thus, at high temperature (1000–1400 K, 700–1100 °C or 1300–2000 °F), steam (water vapor) reacts with methane to yield carbon monoxide and hydrogen (CH4 + H2O → CO + 3 H2). Steam reforming is also used for the industrial preparation of ammonia. This reaction is favored at low pressures but is nonetheless conducted at high pressures (2.0 MPa, 20 atm or 600 inHg), because high-pressure hydrogen is the most marketable product and pressure swing adsorption (PSA) purification systems work better at higher pressures. The product mixture is known as "synthesis gas" because it is often used directly for the production of methanol and many other compounds. Hydrocarbons other than methane can be used to produce synthesis gas with varying product ratios. One of the many complications to this highly optimized technology is the formation of coke or carbon (CH4 → C + 2 H2); therefore, steam reforming typically employs an excess of steam. Additional hydrogen can be recovered from the steam by using carbon monoxide through the water-gas shift reaction (WGS), CO + H2O → CO2 + H2, which requires an iron oxide catalyst. Hydrogen is sometimes produced and consumed in the same industrial process, without being separated. In the Haber process for ammonia production, hydrogen is generated from natural gas. Partial oxidation of hydrocarbons Other methods for CO and hydrogen production include the partial oxidation of hydrocarbons. Although less important commercially, coal can serve as a prelude to the shift reaction above: C + H2O → CO + H2. Olefin production units may produce substantial quantities of byproduct hydrogen, particularly when cracking light feedstocks such as ethane or propane. Water electrolysis Electrolysis of water is a conceptually simple method of producing hydrogen. Commercial electrolyzers use nickel-based catalysts in strongly alkaline solution. Platinum is a better catalyst but is expensive. Electrolysis of brine to yield chlorine also produces high-purity hydrogen as a co-product, which is used for a variety of transformations such as hydrogenations. The electrolysis process is more expensive than producing hydrogen from methane without carbon capture and storage (CCS), and the efficiency of energy conversion is inherently low. Innovation in hydrogen electrolyzers could make large-scale production of hydrogen from electricity more cost-competitive. Hydrogen produced in this manner could play a significant role in decarbonizing energy systems where there are challenges and limitations to replacing fossil fuels with direct use of electricity.
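To put rough numbers on the electrolysis route just described, the Python sketch below (a back-of-the-envelope estimate using standard thermodynamic data, not figures from this article) computes the reversible cell voltage and the theoretical minimum electrical energy needed per kilogram of hydrogen.

F = 96485.0      # C/mol, Faraday constant
DG = 237.1e3     # J/mol, standard Gibbs energy required to split one mole of liquid water
M_H2 = 2.016e-3  # kg/mol, molar mass of H2

volts = DG / (2 * F)            # two electrons are transferred per H2 molecule
kwh_per_kg = DG / M_H2 / 3.6e6  # convert J per kg of H2 into kWh per kg

print(f"reversible cell voltage: about {volts:.2f} V")               # about 1.23 V
print(f"theoretical minimum: about {kwh_per_kg:.0f} kWh per kg H2")  # about 33 kWh/kg
# Real electrolyzers need substantially more because of overpotentials and other losses,
# which is one reason the route is more expensive than steam reforming, as noted above.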
Methane pyrolysis Hydrogen can be produced by pyrolysis of natural gas (methane). This route has a lower carbon footprint than established commercial hydrogen production processes. Developing a commercial methane pyrolysis process could expedite the expanded use of hydrogen in industrial and transportation applications. Methane pyrolysis is accomplished by passing methane through a molten metal catalyst containing dissolved nickel. Methane is converted to hydrogen gas and solid carbon: CH4(g) → C(s) + 2 H2(g) (ΔH° = 74 kJ/mol). The carbon may be sold as a manufacturing feedstock or fuel, or landfilled. Further research continues in several laboratories, including at the Karlsruhe Liquid-metal Laboratory and at the University of California, Santa Barbara. BASF has built a methane pyrolysis pilot plant. Thermochemical Water splitting is the process by which water is decomposed into its components. Relevant to the biological scenario is this simple equation: 2 H2O → O2 + 4 H+ + 4 e-. The reaction occurs in the light reactions of all photosynthetic organisms. A few organisms, including the alga Chlamydomonas reinhardtii and cyanobacteria, have evolved a second step in the dark reactions in which protons and electrons are reduced to form H2 gas by specialized hydrogenases in the chloroplast. Efforts have been undertaken to genetically modify cyanobacterial hydrogenases to more efficiently generate H2 gas even in the presence of oxygen. Efforts have also been undertaken with genetically modified algae in a bioreactor. Relevant to the thermal water-splitting scenario is this simple equation: 2 H2O → 2 H2 + O2. More than 200 thermochemical cycles can be used for water splitting. Many of these cycles, such as the iron oxide cycle, the cerium(IV) oxide–cerium(III) oxide cycle, the zinc–zinc oxide cycle, the sulfur–iodine cycle, the copper–chlorine cycle and the hybrid sulfur cycle, have been evaluated for their commercial potential to produce hydrogen and oxygen from water and heat without using electricity. A number of labs (including in France, Germany, Greece, Japan, and the United States) are developing thermochemical methods to produce hydrogen from solar energy and water. Natural routes Biohydrogen is produced by enzymes called hydrogenases. This process allows the host organism to use fermentation as a source of energy. These same enzymes can also oxidize H2, such that the host organisms can subsist by reducing oxidized substrates using electrons extracted from H2. Hydrogenase enzymes feature iron or nickel–iron centers at their active sites. The natural cycle of hydrogen production and consumption by organisms is called the hydrogen cycle. Some bacteria, such as Mycobacterium smegmatis, can use the small amount of hydrogen in the atmosphere as a source of energy when other sources are lacking. Their hydrogenases have small channels that exclude oxygen and so permit the reaction to occur even though the hydrogen concentration is very low and the oxygen concentration is that of normal air. Reflecting the hydrogenase activity of bacteria in the human gut, hydrogen occurs in human breath. The concentration in the breath of fasting people at rest is typically less than 5 parts per million (ppm) but can reach 50 ppm when people with intestinal disorders consume molecules they cannot absorb during diagnostic hydrogen breath tests. Serpentinization Serpentinization is a geological mechanism that produces highly reducing conditions. Under these conditions, water is capable of oxidizing the ferrous () ions in fayalite.
The process is of interest because it generates hydrogen gas: 3 Fe2SiO4 + 2 H2O → 2 Fe3O4 + 3 SiO2 + 2 H2. Closely related to this geological process is the Schikorr reaction, 3 Fe(OH)2 → Fe3O4 + 2 H2O + H2. This process is also relevant to the corrosion of iron and steel in oxygen-free groundwater and in reducing soils below the water table. Storage Hydrogen produced when there is a surplus of variable renewable electricity could in principle be stored and later used to generate heat or to re-generate electricity. The hydrogen created through electrolysis using renewable energy is commonly referred to as "green hydrogen". It can be further transformed into synthetic fuels such as ammonia and methanol. Disadvantages of hydrogen as an energy carrier include high costs of storage and distribution due to hydrogen's explosivity, its large volume compared to other fuels, and its tendency to make pipes brittle. If H2 is to be used as an energy source, its storage is important. It dissolves only poorly in solvents; for example, at room temperature and 0.1 MPa, ca. 0.05 mol of H2 dissolves in one kilogram of diethyl ether. H2 can be stored in compressed form, although compression costs energy. Liquefaction is impractical given hydrogen's low critical temperature; in contrast, ammonia and many hydrocarbons can be liquefied at room temperature under pressure. For these reasons, hydrogen carriers (materials that reversibly bind H2) have attracted much attention. The key question is then the weight percent of H2-equivalents within the carrier material. For example, hydrogen can be reversibly absorbed into many rare earth and transition metals and is soluble in both nanocrystalline and amorphous metals. Hydrogen solubility in metals is influenced by local distortions or impurities in the crystal lattice. These properties may be useful when hydrogen is purified by passage through hot palladium disks, but the gas's high solubility is also a metallurgical problem, contributing to the embrittlement of many metals and complicating the design of pipelines and storage tanks. The most problematic aspect of metal hydrides for storage is their modest H2 content, often on the order of 1%. For this reason, there is interest in the storage of H2 in compounds of low molecular weight. For example, ammonia borane () contains 19.8 weight percent of H2. The problem with this material is that after release of H2, the resulting boron nitride does not re-add H2, i.e., ammonia borane is an irreversible hydrogen carrier. More attractive, somewhat ironically, are hydrocarbons such as tetrahydroquinoline, which reversibly release some H2 when heated in the presence of a catalyst. Applications Petrochemical industry Large quantities of hydrogen are used in the "upgrading" of fossil fuels. Key consumers of hydrogen include hydrodesulfurization and hydrocracking. Many of these reactions can be classified as hydrogenolysis, i.e., the cleavage of bonds by hydrogen. Illustrative is the separation of sulfur from liquid fossil fuels. Hydrogenation Hydrogenation, the addition of H2 to various substrates, is done on a large scale. Hydrogenation of nitrogen to produce ammonia by the Haber process consumes a few percent of the energy budget of the entire industry. The resulting ammonia is used to supply most of the protein consumed by humans. Hydrogenation is used to convert unsaturated fats and oils to saturated fats and oils. The major application is the production of margarine. Methanol is produced by hydrogenation of carbon dioxide. Hydrogen is similarly the source of the hydrogen in the manufacture of hydrochloric acid.
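For concreteness, the two large-scale hydrogenations mentioned above are conventionally written as follows (standard textbook equations, supplied here for illustration rather than quoted from this article):

\[ \mathrm{N_2 + 3\,H_2 \rightarrow 2\,NH_3} \quad \text{(Haber process)} \]
\[ \mathrm{CO_2 + 3\,H_2 \rightarrow CH_3OH + H_2O} \quad \text{(methanol by hydrogenation of carbon dioxide)} \]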
Hydrogen is also used as a reducing agent for the conversion of some ores to the metals. Coolant Hydrogen is commonly used in power stations as a coolant in generators due to a number of favorable properties that are a direct result of its light diatomic molecules. These include low density, low viscosity, and the highest specific heat and thermal conductivity of all gases. Fuel Hydrogen (H2) is widely discussed as a carrier of energy with the potential to help decarbonize economies and mitigate greenhouse gas emissions. This scenario requires the efficient production and storage of hydrogen. Hydrogen fuel can produce the intense heat required for industrial production of steel, cement, glass, and chemicals, thus contributing to the decarbonisation of industry alongside other technologies, such as electric arc furnaces for steelmaking. However, it is likely to play a larger role in providing industrial feedstock for cleaner production of ammonia and organic chemicals. For example, in steelmaking, hydrogen could function as a clean energy carrier and also as a low-carbon reductant, replacing coal-derived coke (carbon): Fe2O3 + 3 C → 2 Fe + 3 CO versus Fe2O3 + 3 H2 → 2 Fe + 3 H2O. Hydrogen used to decarbonise transportation is likely to find its largest applications in shipping, aviation and, to a lesser extent, heavy goods vehicles, through the use of hydrogen-derived synthetic fuels such as ammonia and methanol, and through fuel cell technology. For light-duty vehicles, including cars, hydrogen is far behind other alternative fuel vehicles, especially compared with the rate of adoption of battery electric vehicles, and may not play a significant role in the future. Liquid hydrogen and liquid oxygen together serve as cryogenic propellants in liquid-propellant rockets, as in the Space Shuttle main engines. NASA has investigated the use of rocket propellant made from atomic hydrogen, boron or carbon that is frozen into solid molecular hydrogen particles suspended in liquid helium. Upon warming, the mixture vaporizes to allow the atomic species to recombine, heating the mixture to high temperature. Semiconductor industry Hydrogen is employed to saturate broken ("dangling") bonds of amorphous silicon and amorphous carbon, which helps stabilize material properties. It is also a potential electron donor in various oxide materials, including ZnO, CdO, MgO, and a number of other oxides. Niche and evolving uses Shielding gas: Hydrogen is used as a shielding gas in welding methods such as atomic hydrogen welding. Cryogenic research: Liquid hydrogen is used in cryogenic research, including superconductivity studies. Buoyant lifting: Because hydrogen is only about 7% the density of air, it was once widely used as a lifting gas in balloons and airships. Leak detection: Pure or mixed with nitrogen (sometimes called forming gas), hydrogen is a tracer gas for detection of minute leaks. Applications can be found in the automotive, chemical, power generation, aerospace, and telecommunications industries. Hydrogen is an authorized food additive (E 949) that allows food package leak testing, and it also has anti-oxidizing properties. Neutron moderation: Deuterium (hydrogen-2) is used in nuclear fission applications as a moderator to slow neutrons. Nuclear fusion fuel: Deuterium is used in nuclear fusion reactions. Isotopic labeling: Deuterium compounds have applications in chemistry and biology in studies of isotope effects on reaction rates.
Tritium uses: Tritium (hydrogen-3), produced in nuclear reactors, is used in the production of hydrogen bombs, as an isotopic label in the biosciences, and as a source of beta radiation in radioluminescent paint for instrument dials and emergency signage. Safety and precautions Hydrogen poses few hazards to human safety. The chief hazards are detonation and asphyxiation, but both are mitigated by its high diffusivity. Because hydrogen has been intensively investigated as a fuel, there is extensive documentation on the risks. Because H2 reacts with very few substrates, it is nontoxic, as evidenced by the fact that humans exhale small amounts of it. See also Combined cycle hydrogen power plant
Helium (from ) is a chemical element; it has symbol He and atomic number 2. It is a colorless, odorless, non-toxic, inert, monatomic gas and the first in the noble gas group in the periodic table. Its boiling point is the lowest among all the elements, and it does not have a melting point at standard pressures. It is the second-lightest and second most abundant element in the observable universe, after hydrogen. It is present at about 24% of the total elemental mass, which is more than 12 times the mass of all the heavier elements combined. Its abundance is similar to this in both the Sun and Jupiter, because of the very high nuclear binding energy (per nucleon) of helium-4, with respect to the next three elements after helium. This helium-4 binding energy also accounts for why it is a product of both nuclear fusion and radioactive decay. The most common isotope of helium in the universe is helium-4, the vast majority of which was formed during the Big Bang. Large amounts of new helium are created by nuclear fusion of hydrogen in stars. Helium was first detected as an unknown, yellow spectral line signature in sunlight during a solar eclipse in 1868 by Georges Rayet, Captain C. T. Haig, Norman R. Pogson, and Lieutenant John Herschel, and was subsequently confirmed by French astronomer Jules Janssen. Janssen is often jointly credited with detecting the element, along with Norman Lockyer. Janssen recorded the helium spectral line during the solar eclipse of 1868, while Lockyer observed it from Britain. However, only Lockyer proposed that the line was due to a new element, which he named after the Sun. The formal discovery of the element was made in 1895 by chemists Sir William Ramsay, Per Teodor Cleve, and Nils Abraham Langlet, who found helium emanating from the uranium ore cleveite, which is now not regarded as a separate mineral species, but as a variety of uraninite. In 1903, large reserves of helium were found in natural gas fields in parts of the United States, by far the largest supplier of the gas today. Liquid helium is used in cryogenics (its largest single use, consuming about a quarter of production), and in the cooling of superconducting magnets, with its main commercial application in MRI scanners. Helium's other industrial uses—as a pressurizing and purge gas, as a protective atmosphere for arc welding, and in processes such as growing crystals to make silicon wafers—account for half of the gas produced. A small but well-known use is as a lifting gas in balloons and airships. As with any gas whose density differs from that of air, inhaling a small volume of helium temporarily changes the timbre and quality of the human voice. In scientific research, the behavior of the two fluid phases of helium-4 (helium I and helium II) is important to researchers studying quantum mechanics (in particular the property of superfluidity) and to those looking at the phenomena, such as superconductivity, produced in matter near absolute zero. On Earth, it is relatively rare—5.2 ppm by volume in the atmosphere. Most terrestrial helium present today is created by the natural radioactive decay of heavy radioactive elements (thorium and uranium, although there are other examples), as the alpha particles emitted by such decays consist of helium-4 nuclei. This radiogenic helium is trapped with natural gas in concentrations as great as 7% by volume, from which it is extracted commercially by a low-temperature separation process called fractional distillation. 
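To give a rough feel for the lifting-gas use mentioned above, the Python sketch below (an illustrative calculation using typical sea-level gas densities, which are assumptions rather than figures from this article) compares the buoyant lift per cubic metre of helium and of hydrogen.

RHO_AIR = 1.204  # kg/m^3, dry air at about 20 °C and 1 atm (assumed typical value)
RHO_HE = 0.166   # kg/m^3, helium under the same conditions
RHO_H2 = 0.084   # kg/m^3, hydrogen under the same conditions

def lift_per_m3(rho_gas):
    # Net buoyant lift, in kilograms supported per cubic metre of lifting gas.
    return RHO_AIR - rho_gas

print(f"helium:   about {lift_per_m3(RHO_HE):.2f} kg per cubic metre")   # about 1.04
print(f"hydrogen: about {lift_per_m3(RHO_H2):.2f} kg per cubic metre")   # about 1.12
# Helium provides only slightly less lift than hydrogen even though it is twice as dense.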
Terrestrial helium is a non-renewable resource because once released into the atmosphere, it promptly escapes into space. Its supply is thought to be rapidly diminishing. However, some studies suggest that helium produced deep in the Earth by radioactive decay can collect in natural gas reserves in larger-than-expected quantities, in some cases having been released by volcanic activity. History Scientific discoveries The first evidence of helium was observed on August 18, 1868, as a bright yellow line with a wavelength of 587.49 nanometers in the spectrum of the chromosphere of the Sun. The line was detected by French astronomer Jules Janssen during a total solar eclipse in Guntur, India. This line was initially assumed to be sodium. On October 20 of the same year, English astronomer Norman Lockyer observed a yellow line in the solar spectrum, which he named the D3 because it was near the known D1 and D2 Fraunhofer lines of sodium. He concluded that it was caused by an element in the Sun unknown on Earth. Lockyer named the element with the Greek word for the Sun, ἥλιος (helios). It is sometimes said that English chemist Edward Frankland was also involved in the naming, but this is unlikely as he doubted the existence of this new element. The ending "-ium" is unusual, as it normally applies only to metallic elements; probably Lockyer, being an astronomer, was unaware of the chemical conventions. In 1881, Italian physicist Luigi Palmieri detected helium on Earth for the first time through its D3 spectral line, when he analyzed a material that had been sublimated during a recent eruption of Mount Vesuvius. On March 26, 1895, Scottish chemist Sir William Ramsay isolated helium on Earth by treating the mineral cleveite (a variety of uraninite with at least 10% rare-earth elements) with mineral acids. Ramsay was looking for argon but, after separating nitrogen and oxygen from the gas, liberated by sulfuric acid, he noticed a bright yellow line that matched the D3 line observed in the spectrum of the Sun. These samples were identified as helium by Lockyer and British physicist William Crookes. It was independently isolated from cleveite in the same year by chemists Per Teodor Cleve and Abraham Langlet in Uppsala, Sweden, who collected enough of the gas to accurately determine its atomic weight. Helium was also isolated by American geochemist William Francis Hillebrand prior to Ramsay's discovery, when he noticed unusual spectral lines while testing a sample of the mineral uraninite. Hillebrand, however, attributed the lines to nitrogen. His letter of congratulations to Ramsay offers an interesting case of discovery, and near-discovery, in science. In 1907, Ernest Rutherford and Thomas Royds demonstrated that alpha particles are helium nuclei by allowing the particles to penetrate the thin glass wall of an evacuated tube, then creating a discharge in the tube, to study the spectrum of the new gas inside. In 1908, helium was first liquefied by Dutch physicist Heike Kamerlingh Onnes by cooling the gas to less than . He tried to solidify it by further reducing the temperature but failed, because helium does not solidify at atmospheric pressure. Onnes' student Willem Hendrik Keesom was eventually able to solidify 1 cm3 of helium in 1926 by applying additional external pressure. In 1913, Niels Bohr published his "trilogy" on atomic structure that included a reconsideration of the Pickering–Fowler series as central evidence in support of his model of the atom. 
This series is named for Edward Charles Pickering, who in 1896 published observations of previously unknown lines in the spectrum of the star ζ Puppis (these are now known to occur with Wolf–Rayet and other hot stars). Pickering attributed the observation (lines at 4551, 5411, and 10123 Å) to a new form of hydrogen with half-integer transition levels. In 1912, Alfred Fowler managed to produce similar lines from a hydrogen-helium mixture, and supported Pickering's conclusion as to their origin. Bohr's model does not allow for half-integer transitions (nor does quantum mechanics) and Bohr concluded that Pickering and Fowler were wrong, and instead assigned these spectral lines to ionised helium, He+. Fowler was initially skeptical but was ultimately convinced that Bohr was correct, and by 1915 "spectroscopists had transferred [the Pickering–Fowler series] definitively [from hydrogen] to helium." Bohr's theoretical work on the Pickering series had demonstrated the need for "a re-examination of problems that seemed already to have been solved within classical theories" and provided important confirmation for his atomic theory. In 1938, Russian physicist Pyotr Leonidovich Kapitsa discovered that helium-4 has almost no viscosity at temperatures near absolute zero, a phenomenon now called superfluidity. This phenomenon is related to Bose–Einstein condensation. In 1972, the same phenomenon was observed in helium-3, but at temperatures much closer to absolute zero, by American physicists Douglas D. Osheroff, David M. Lee, and Robert C. Richardson. The phenomenon in helium-3 is thought to be related to pairing of helium-3 fermions to make bosons, in analogy to Cooper pairs of electrons producing superconductivity. In 1961, Vignos and Fairbank reported the existence of a different phase of solid helium-4, designated the gamma-phase. It exists for a narrow range of pressure between 1.45 and 1.78 K. Extraction and use After an oil drilling operation in 1903 in Dexter, Kansas produced a gas geyser that would not burn, Kansas state geologist Erasmus Haworth collected samples of the escaping gas and took them back to the University of Kansas at Lawrence where, with the help of chemists Hamilton Cady and David McFarland, he discovered that the gas consisted of, by volume, 72% nitrogen, 15% methane (a combustible percentage only with sufficient oxygen), 1% hydrogen, and 12% an unidentifiable gas. With further analysis, Cady and McFarland discovered that 1.84% of the gas sample was helium. This showed that despite its overall rarity on Earth, helium was concentrated in large quantities under the American Great Plains, available for extraction as a byproduct of natural gas. Following a suggestion by Sir Richard Threlfall, the United States Navy sponsored three small experimental helium plants during World War I. The goal was to supply barrage balloons with the non-flammable, lighter-than-air gas. A total of of 92% helium was produced in the program even though less than a cubic meter of the gas had previously been obtained. Some of this gas was used in the world's first helium-filled airship, the U.S. Navy's C-class blimp C-7, which flew its maiden voyage from Hampton Roads, Virginia, to Bolling Field in Washington, D.C., on December 1, 1921, nearly two years before the Navy's first rigid helium-filled airship, the Naval Aircraft Factory-built USS Shenandoah, flew in September 1923. 
Although the extraction process using low-temperature gas liquefaction was not developed in time to be significant during World War I, production continued. Helium was primarily used as a lifting gas in lighter-than-air craft. During World War II, the demand increased for helium for lifting gas and for shielded arc welding. The helium mass spectrometer was also vital in the atomic bomb Manhattan Project. The government of the United States set up the National Helium Reserve in 1925 at Amarillo, Texas, with the goal of supplying military airships in time of war and commercial airships in peacetime. Because of the Helium Act of 1925, which banned the export of scarce helium on which the US then had a production monopoly, together with the prohibitive cost of the gas, German Zeppelins were forced to use hydrogen as lifting gas, which would gain infamy in the Hindenburg disaster. The helium market after World War II was depressed but the reserve was expanded in the 1950s to ensure a supply of liquid helium as a coolant to create oxygen/hydrogen rocket fuel (among other uses) during the Space Race and Cold War. Helium use in the United States in 1965 was more than eight times the peak wartime consumption. After the Helium Acts Amendments of 1960 (Public Law 86–777), the U.S. Bureau of Mines arranged for five private plants to recover helium from natural gas. For this helium conservation program, the Bureau built a pipeline from Bushton, Kansas, to connect those plants with the government's partially depleted Cliffside gas field near Amarillo, Texas. This helium-nitrogen mixture was injected and stored in the Cliffside gas field until needed, at which time it was further purified. By 1995, a billion cubic meters of the gas had been collected and the reserve was US$1.4 billion in debt, prompting the Congress of the United States in 1996 to discontinue the reserve. The resulting Helium Privatization Act of 1996 (Public Law 104–273) directed the United States Department of the Interior to empty the reserve, with sales starting by 2005. Helium produced between 1930 and 1945 was about 98.3% pure (2% nitrogen), which was adequate for airships. In 1945, a small amount of 99.9% helium was produced for welding use. By 1949, commercial quantities of Grade A 99.95% helium were available. For many years, the United States produced more than 90% of commercially usable helium in the world, while extraction plants in Canada, Poland, Russia, and other nations produced the remainder. In the mid-1990s, a new plant in Arzew, Algeria, producing began operation, with enough production to cover all of Europe's demand. Meanwhile, by 2000, the consumption of helium within the U.S. had risen to more than 15 million kg per year. In 2004–2006, additional plants in Ras Laffan, Qatar, and Skikda, Algeria were built. Algeria quickly became the second leading producer of helium. Through this time, both helium consumption and the costs of producing helium increased. From 2002 to 2007 helium prices doubled. , the United States National Helium Reserve accounted for 30 percent of the world's helium. The reserve was expected to run out of helium in 2018. Despite that, a proposed bill in the United States Senate would allow the reserve to continue to sell the gas. Other large reserves were in the Hugoton in Kansas, United States, and nearby gas fields of Kansas and the panhandles of Texas and Oklahoma. 
New helium plants were scheduled to open in 2012 in Qatar, Russia, and the US state of Wyoming, but they were not expected to ease the shortage. In 2013, Qatar started up the world's largest helium unit, although the 2017 Qatar diplomatic crisis severely affected helium production there. 2014 was widely acknowledged to be a year of over-supply in the helium business, following years of renowned shortages. Nasdaq reported (2015) that for Air Products, an international corporation that sells gases for industrial use, helium volumes remain under economic pressure due to feedstock supply constraints. Characteristics Atom In quantum mechanics In the perspective of quantum mechanics, helium is the second simplest atom to model, following the hydrogen atom. Helium is composed of two electrons in atomic orbitals surrounding a nucleus containing two protons and (usually) two neutrons. As in Newtonian mechanics, no system that consists of more than two particles can be solved with an exact analytical mathematical approach (see 3-body problem) and helium is no exception. Thus, numerical mathematical methods are required, even to solve the system of one nucleus and two electrons. Such computational chemistry methods have been used to create a quantum mechanical picture of helium electron binding which is accurate to within < 2% of the correct value, in a few computational steps. Such models show that each electron in helium partly screens the nucleus from the other, so that the effective nuclear charge Zeff which each electron sees is about 1.69 units, not the 2 charges of a classic "bare" helium nucleus. Related stability of the helium-4 nucleus and electron shell The nucleus of the helium-4 atom is identical with an alpha particle. High-energy electron-scattering experiments show its charge to decrease exponentially from a maximum at a central point, exactly as does the charge density of helium's own electron cloud. This symmetry reflects similar underlying physics: the pair of neutrons and the pair of protons in helium's nucleus obey the same quantum mechanical rules as do helium's pair of electrons (although the nuclear particles are subject to a different nuclear binding potential), so that all these fermions fully occupy 1s orbitals in pairs, none of them possessing orbital angular momentum, and each cancelling the other's intrinsic spin. This arrangement is thus energetically extremely stable for all these particles and has astrophysical implications. Namely, adding another particle – proton, neutron, or alpha particle – would consume rather than release energy; all systems with mass number 5, as well as beryllium-8 (comprising two alpha particles), are unbound. For example, the stability and low energy of the electron cloud state in helium accounts for the element's chemical inertness, and also the lack of interaction of helium atoms with each other, producing the lowest melting and boiling points of all the elements. In a similar way, the particular energetic stability of the helium-4 nucleus, produced by similar effects, accounts for the ease of helium-4 production in atomic reactions that involve either heavy-particle emission or fusion. Some stable helium-3 (two protons and one neutron) is produced in fusion reactions from hydrogen, though its estimated abundance in the universe is about relative to helium-4. 
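Returning to the electron screening described earlier in this section, the figure of about 1.69 can be motivated by the simplest variational treatment of the helium atom (a standard textbook estimate, not a calculation reported in this article), in which hydrogen-like orbitals with an adjustable nuclear charge give

\[ Z_{\mathrm{eff}} = Z - \tfrac{5}{16} = 2 - 0.3125 = 1.6875, \]

in good agreement with the value quoted above from more elaborate numerical treatments.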
The unusual stability of the helium-4 nucleus is also important cosmologically: it explains the fact that in the first few minutes after the Big Bang, as the "soup" of free protons and neutrons which had initially been created in about 6:1 ratio cooled to the point that nuclear binding was possible, almost all first compound atomic nuclei to form were helium-4 nuclei. Owing to the relatively tight binding of helium-4 nuclei, its production consumed nearly all of the free neutrons in a few minutes, before they could beta-decay, and thus few neutrons were available to form heavier atoms such as lithium, beryllium, or boron. Helium-4 nuclear binding per nucleon is stronger than in any of these elements (see nucleogenesis and binding energy) and thus, once helium had been formed, no energetic drive was available to make elements 3, 4 and 5. It is barely energetically favorable for helium to fuse into the next element with a lower energy per nucleon, carbon. However, due to the short lifetime of the intermediate beryllium-8, this process requires three helium nuclei striking each other nearly simultaneously (see triple-alpha process). There was thus no time for significant carbon to be formed in the few minutes after the Big Bang, before the early expanding universe cooled to the temperature and pressure point where helium fusion to carbon was no longer possible. This left the early universe with a very similar ratio of hydrogen/helium as is observed today (3 parts hydrogen to 1 part helium-4 by mass), with nearly all the neutrons in the universe trapped in helium-4. All heavier elements (including those necessary for rocky planets like the Earth, and for carbon-based or other life) have thus been created since the Big Bang in stars which were hot enough to fuse helium itself. All elements other than hydrogen and helium today account for only 2% of the mass of atomic matter in the universe. Helium-4, by contrast, comprises about 24% of the mass of the universe's ordinary matter—nearly all the ordinary matter that is not hydrogen. Gas and plasma phases Helium is the second least reactive noble gas after neon, and thus the second least reactive of all elements. It is chemically inert and monatomic in all standard conditions. Because of helium's relatively low molar (atomic) mass, its thermal conductivity, specific heat, and sound speed in the gas phase are all greater than any other gas except hydrogen. For these reasons and the small size of helium monatomic molecules, helium diffuses through solids at a rate three times that of air and around 65% that of hydrogen. Helium is the least water-soluble monatomic gas, and one of the least water-soluble of any gas (CF4, SF6, and C4F8 have lower mole fraction solubilities: 0.3802, 0.4394, and 0.2372 x2/10−5, respectively, versus helium's 0.70797 x2/10−5), and helium's index of refraction is closer to unity than that of any other gas. Helium has a negative Joule–Thomson coefficient at normal ambient temperatures, meaning it heats up when allowed to freely expand. Only below its Joule–Thomson inversion temperature (of about 32 to 50 K at 1 atmosphere) does it cool upon free expansion. Once precooled below this temperature, helium can be liquefied through expansion cooling. Most extraterrestrial helium is plasma in stars, with properties quite different from those of atomic helium. In a plasma, helium's electrons are not bound to its nucleus, resulting in very high electrical conductivity, even when the gas is only partially ionized. 
The charged particles are highly influenced by magnetic and electric fields. For example, in the solar wind together with ionized hydrogen, the particles interact with the Earth's magnetosphere, giving rise to Birkeland currents and the aurora. Liquid phase Helium liquifies when cooled below 4.2 K at atmospheric pressure. Unlike any other element, however, helium remains liquid down to a temperature of absolute zero. This is a direct effect of quantum mechanics: specifically, the zero point energy of the system is too high to allow freezing. Pressures above about 25 atmospheres are required to freeze it. There are two liquid phases: Helium I is a conventional liquid, and Helium II, which occurs at a lower temperature, is a superfluid. Helium I Below its boiling point of and above the lambda point of , the isotope helium-4 exists in a normal colorless liquid state, called helium I. Like other cryogenic liquids, helium I boils when it is heated and contracts when its temperature is lowered. Below the lambda point, however, helium does not boil, and it expands as the temperature is lowered further. Helium I has a gas-like index of refraction of 1.026 which makes its surface so hard to see that floats of Styrofoam are often used to show where the surface is. This colorless liquid has a very low viscosity and a density of 0.145–0.125 g/mL (between about 0 and 4 K), which is only one-fourth the value expected from classical physics. Quantum mechanics is needed to explain this property and thus both states of liquid helium (helium I and helium II) are called quantum fluids, meaning they display atomic properties on a macroscopic scale. This may be an effect of its boiling point being so close to absolute zero, preventing random molecular motion (thermal energy) from masking the atomic properties. Helium II Liquid helium below its lambda point (called helium II) exhibits very unusual characteristics. Due to its high thermal conductivity, when it boils, it does not bubble but rather evaporates directly from its surface. Helium-3 also has a superfluid phase, but only at much lower temperatures; as a result, less is known about the properties of the isotope. Helium II is a superfluid, a quantum mechanical state of matter with strange properties. For example, when it flows through capillaries as thin as 10 to 100 nm it has no measurable viscosity. However, when measurements were done between two moving discs, a viscosity comparable to that of gaseous helium was observed. Existing theory explains this using the two-fluid model for helium II. In this model, liquid helium below the lambda point is viewed as containing a proportion of helium atoms in a ground state, which are superfluid and flow with exactly zero viscosity, and a proportion of helium atoms in an excited state, which behave more like an ordinary fluid. In the fountain effect, a chamber is constructed which is connected to a reservoir of helium II by a sintered disc through which superfluid helium leaks easily but through which non-superfluid helium cannot pass. If the interior of the container is heated, the superfluid helium changes to non-superfluid helium. In order to maintain the equilibrium fraction of superfluid helium, superfluid helium leaks through and increases the pressure, causing liquid to fountain out of the container. The thermal conductivity of helium II is greater than that of any other known substance, a million times that of helium I and several hundred times that of copper. 
This is because heat conduction occurs by an exceptional quantum mechanism. Most materials that conduct heat well have a valence band of free electrons which serve to transfer the heat. Helium II has no such valence band but nevertheless conducts heat well. The flow of heat is governed by equations that are similar to the wave equation used to characterize sound propagation in air. When heat is introduced, it moves at 20 meters per second at 1.8 K through helium II as waves in a phenomenon known as second sound. Helium II also exhibits a creeping effect. When a surface extends past the level of helium II, the helium II moves along the surface, against the force of gravity. Helium II will escape from a vessel that is not sealed by creeping along the sides until it reaches a warmer region where it evaporates. It moves in a 30 nm-thick film regardless of surface material. This film is called a Rollin film and is named after the man who first characterized this trait, Bernard V. Rollin. As a result of this creeping behavior and helium II's ability to leak rapidly through tiny openings, it is very difficult to confine. Unless the container is carefully constructed, the helium II will creep along the surfaces and through valves until it reaches somewhere warmer, where it will evaporate. Waves propagating across a Rollin film are governed by the same equation as gravity waves in shallow water, but rather than gravity, the restoring force is the van der Waals force. These waves are known as third sound. Solid phases Helium remains liquid down to absolute zero at atmospheric pressure, but it freezes at high pressure. Solid helium requires a temperature of 1–1.5 K (about −272 °C or −457 °F) at about 25 bar (2.5 MPa) of pressure. It is often hard to distinguish solid from liquid helium since the refractive index of the two phases are nearly the same. The solid has a sharp melting point and has a crystalline structure, but it is highly compressible; applying pressure in a laboratory can decrease its volume by more than 30%. With a bulk modulus of about 27 MPa it is ~100 times more compressible than water. Solid helium has a density of at 1.15 K and 66 atm; the projected density at 0 K and 25 bar (2.5 MPa) is . At higher temperatures, helium will solidify with sufficient pressure. At room temperature, this requires about 114,000 atm. Helium-4 and helium-3 both form several crystalline solid phases, all requiring at least 25 bar. They both form an α phase, which has a hexagonal close-packed (hcp) crystal structure, a β phase, which is face-centered cubic (fcc), and a γ phase, which is body-centered cubic (bcc). Isotopes There are nine known isotopes of helium of which two, helium-3 and helium-4, are stable. In the Earth's atmosphere, one atom is for every million that are . Unlike most elements, helium's isotopic abundance varies greatly by origin, due to the different formation processes. The most common isotope, helium-4, is produced on Earth by alpha decay of heavier radioactive elements; the alpha particles that emerge are fully ionized helium-4 nuclei. Helium-4 is an unusually stable nucleus because its nucleons are arranged into complete shells. It was also formed in enormous quantities during Big Bang nucleosynthesis. Helium-3 is present on Earth only in trace amounts. Most of it has been present since Earth's formation, though some falls to Earth trapped in cosmic dust. Trace amounts are also produced by the beta decay of tritium. 
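As a quick sanity check on the compressibility comparison above, the Python lines below redo the arithmetic with a commonly quoted bulk modulus for liquid water (an assumed reference value; the exact factor depends on the conditions chosen).

K_SOLID_HE = 27e6  # Pa, bulk modulus of solid helium quoted above
K_WATER = 2.2e9    # Pa, commonly quoted bulk modulus of liquid water (assumption)

ratio = K_WATER / K_SOLID_HE
print(f"solid helium is roughly {ratio:.0f} times more compressible than water")  # roughly 80, i.e. of order 100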
Rocks from the Earth's crust have isotope ratios varying by as much as a factor of ten, and these ratios can be used to investigate the origin of rocks and the composition of the Earth's mantle. is much more abundant in stars as a product of nuclear fusion. Thus in the interstellar medium, the proportion of to is about 100 times higher than on Earth. Extraplanetary material, such as lunar and asteroid regolith, have trace amounts of helium-3 from being bombarded by solar winds. The Moon's surface contains helium-3 at concentrations on the order of 10 ppb, much higher than the approximately 5 ppt found in the Earth's atmosphere. A number of people, starting with Gerald Kulcinski in 1986, have proposed to explore the Moon, mine lunar regolith, and use the helium-3 for fusion. Liquid helium-4 can be cooled to about using evaporative cooling in a 1-K pot. Similar cooling of helium-3, which has a lower boiling point, can achieve about in a helium-3 refrigerator. Equal mixtures of liquid and below separate into two immiscible phases due to their dissimilarity (they follow different quantum statistics: helium-4 atoms are bosons while helium-3 atoms are fermions). Dilution refrigerators use this immiscibility to achieve temperatures of a few millikelvins. It is possible to produce exotic helium isotopes, which rapidly decay into other substances. The shortest-lived heavy helium isotope is the unbound helium-10 with a half-life of . Helium-6 decays by emitting a beta particle and has a half-life of 0.8 second. Helium-7 and helium-8 are created in certain nuclear reactions. Helium-6 and helium-8 are known to exhibit a nuclear halo. Properties Table of thermal and physical properties of helium gas at atmospheric pressure: Compounds Helium has a valence of zero and is chemically unreactive under all normal conditions. It is an electrical insulator unless ionized. As with the other noble gases, helium has metastable energy levels that allow it to remain ionized in an electrical discharge with a voltage below its ionization potential. Helium can form unstable compounds, known as excimers, with tungsten, iodine, fluorine, sulfur, and phosphorus when it is subjected to a glow discharge, to electron bombardment, or reduced to plasma by other means. The molecular compounds HeNe, HgHe10, and WHe2, and the molecular ions , , , and have been created this way. HeH+ is also stable in its ground state but is extremely reactive—it is the strongest Brønsted acid known, and therefore can exist only in isolation, as it will protonate any molecule or counteranion it contacts. This technique has also produced the neutral molecule He2, which has a large number of band systems, and HgHe, which is apparently held together only by polarization forces. Van der Waals compounds of helium can also be formed with cryogenic helium gas and atoms of some other substance, such as LiHe and He2. Theoretically, other true compounds may be possible, such as helium fluorohydride (HHeF), which would be analogous to HArF, discovered in 2000. Calculations show that two new compounds containing a helium-oxygen bond could be stable. Two new molecular species, predicted using theory, CsFHeO and N(CH3)4FHeO, are derivatives of a metastable FHeO− anion first theorized in 2005 by a group from Taiwan. Helium atoms have been inserted into the hollow carbon cage molecules (the fullerenes) by heating under high pressure. The endohedral fullerene molecules formed are stable at high temperatures. 
When chemical derivatives of these fullerenes are formed, the helium stays inside. If helium-3 is used, it can be readily observed by helium nuclear magnetic resonance spectroscopy. Many fullerenes containing helium-3 have been reported. Although the helium atoms are not attached by covalent or ionic bonds, these substances have distinct properties and a definite composition, like all stoichiometric chemical compounds. Under high pressures helium can form compounds with various other elements. Helium-nitrogen clathrate (He(N2)11) crystals have been grown at room temperature at pressures ca. 10 GPa in a diamond anvil cell. The insulating electride Na2He has been shown to be thermodynamically stable at pressures above 113 GPa. It has a fluorite structure. Occurrence and production Natural abundance Although it is rare on Earth, helium is the second most abundant element in the known Universe, constituting 23% of its baryonic mass. Only hydrogen is more abundant. The vast majority of helium was formed by Big Bang nucleosynthesis one to three minutes after the Big Bang. As such, measurements of its abundance contribute to cosmological models. In stars, it is formed by the nuclear fusion of hydrogen in proton–proton chain reactions and the CNO cycle, part of stellar nucleosynthesis. In the Earth's atmosphere, the concentration of helium by volume is only 5.2 parts per million. The concentration is low and fairly constant despite the continuous production of new helium because most helium in the Earth's atmosphere escapes into space by several processes. In the Earth's heterosphere, a part of the upper atmosphere, helium and hydrogen are the most abundant elements. Most helium on Earth is a result of radioactive decay. Helium is found in large amounts in minerals of uranium and thorium, including uraninite and its varieties cleveite and pitchblende, carnotite and monazite (a group name; "monazite" usually refers to monazite-(Ce)), because they emit alpha particles (helium nuclei, He2+) to which electrons immediately combine as soon as the particle is stopped by the rock. In this way an estimated 3000 metric tons of helium are generated per year throughout the lithosphere. In the Earth's crust, the concentration of helium is 8 parts per billion. In seawater, the concentration is only 4 parts per trillion. There are also small amounts in mineral springs, volcanic gas, and meteoric iron. Because helium is trapped in the subsurface under conditions that also trap natural gas, the greatest natural concentrations of helium on the planet are found in natural gas, from which most commercial helium is extracted. The concentration varies in a broad range from a few ppm to more than 7% in a small gas field in San Juan County, New Mexico. , the world's helium reserves were estimated at 31 billion cubic meters, with a third of that being in Qatar. In 2015 and 2016 additional probable reserves were announced to be under the Rocky Mountains in North America and in the East African Rift. The Bureau of Land Management (BLM) has proposed an October 2024 plan for managing natural resources in western Colorado. The plan involves closing 543,000 acres to oil and gas leasing while keeping 692,300 acres open. Among the open areas, 165,700 acres have been identified as suitable for helium recovery. The United States possesses an estimated 306 billion cubic feet of recoverable helium, sufficient to meet current consumption rates of 2.15 billion cubic feet per year for approximately 150 years. 
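The supply horizon quoted above follows directly from the two figures given for the United States; a minimal Python sketch of the arithmetic, using only the numbers stated in the text:

```python
# Checking the "approximately 150 years" figure from the quoted US helium numbers.

recoverable_cf = 306e9     # cubic feet of recoverable helium (from the text)
annual_use_cf = 2.15e9     # cubic feet consumed per year (from the text)

print(f"Years of supply at current use: {recoverable_cf / annual_use_cf:.0f}")
# -> about 142 years, i.e. roughly 150 years at a constant rate of consumption
```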
Modern extraction and distribution For large-scale use, helium is extracted by fractional distillation from natural gas, which can contain as much as 7% helium. Since helium has a lower boiling point than any other element, low temperatures and high pressure are used to liquefy nearly all the other gases (mostly nitrogen and methane). The resulting crude helium gas is purified by successive exposures to lowering temperatures, in which almost all of the remaining nitrogen and other gases are precipitated out of the gaseous mixture. Activated charcoal is used as a final purification step, usually resulting in 99.995% pure Grade-A helium. The principal impurity in Grade-A helium is neon. In a final production step, most of the helium that is produced is liquefied via a cryogenic process. This is necessary for applications requiring liquid helium and also allows helium suppliers to reduce the cost of long-distance transportation, as the largest liquid helium containers have more than five times the capacity of the largest gaseous helium tube trailers. In 2008, approximately 169 million standard cubic meters (SCM) of helium were extracted from natural gas or withdrawn from helium reserves, with approximately 78% from the United States, 10% from Algeria, and most of the remainder from Russia, Poland, and Qatar. By 2013, increases in helium production in Qatar (under the company Qatargas managed by Air Liquide) had increased Qatar's fraction of world helium production to 25%, making it the second largest exporter after the United States. An estimated deposit of helium was found in Tanzania in 2016. A large-scale helium plant was opened in Ningxia, China in 2020. In the United States, most helium is extracted from the natural gas of the Hugoton and nearby gas fields in Kansas, Oklahoma, and the Panhandle Field in Texas. Much of this gas was once sent by pipeline to the National Helium Reserve, but since 2005, this reserve has been depleted and sold off, and it is expected to be largely depleted by 2021 under the October 2013 Responsible Helium Administration and Stewardship Act (H.R. 527). The helium fields of the western United States are emerging as an alternate source of helium supply, particularly those of the "Four Corners" region (the states of Arizona, Colorado, New Mexico and Utah). Diffusion of crude natural gas through special semipermeable membranes and other barriers is another method to recover and purify helium. In 1996, the U.S. had proven helium reserves in such gas well complexes of about 147 billion standard cubic feet (4.2 billion SCM). At rates of use at that time (72 million SCM per year in the U.S.; see pie chart below) this would have been enough helium for about 58 years of U.S. use, and less than this (perhaps 80% of the time) at world use rates, although factors in saving and processing impact effective reserve numbers. Helium is generally extracted from natural gas because it is present in air at only a fraction of that of neon, yet the demand for it is far higher. It is estimated that if all neon production were retooled to save helium, 0.1% of the world's helium demands would be satisfied. Similarly, only 1% of the world's helium demands could be satisfied by re-tooling all air distillation plants. Helium can be synthesized by bombardment of lithium or boron with high-velocity protons, or by bombardment of lithium with deuterons, but these processes are a completely uneconomical method of production. Helium is commercially available in either liquid or gaseous form. 
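The 1996 reserve estimate discussed above can be cross-checked in the same way. The sketch below uses the approximate conversion 1 standard cubic metre (SCM) to 35.3 standard cubic feet, which is a standard factor assumed here rather than a figure from the text.

```python
# Reproducing the "about 58 years" estimate for the 1996 US reserves quoted above.
# Assumption: 1 SCM ~ 35.3 standard cubic feet (standard conversion, not from the text).

reserves_scm = 4.2e9          # 1996 proven US reserves, SCM (from the text)
us_use_scm_per_year = 72e6    # 1996 US consumption, SCM per year (from the text)

print(f"Years at 1996 US usage: {reserves_scm / us_use_scm_per_year:.0f}")   # -> ~58
print(f"Reserves in cubic feet: {reserves_scm * 35.3:.2e}")                  # -> ~1.5e11 (147 billion)
```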
As a liquid, it can be supplied in small insulated containers called dewars which hold as much as 1,000 liters of helium, or in large ISO containers, which have nominal capacities as large as 42 m3 (around 11,000 U.S. gallons). In gaseous form, small quantities of helium are supplied in high-pressure cylinders holding as much as 8 m3 (approximately . 282 standard cubic feet), while large quantities of high-pressure gas are supplied in tube trailers, which have capacities of as much as 4,860 m3 (approx. 172,000 standard cubic feet). Conservation advocates According to helium conservationists like Nobel laureate physicist Robert Coleman Richardson, writing in 2010, the free market price of helium has contributed to "wasteful" usage (e.g. for helium balloons). Prices in the 2000s had been lowered by the decision of the U.S. Congress to sell off the country's large helium stockpile by 2015. According to Richardson, the price needed to be multiplied by 20 to eliminate the excessive wasting of helium. In the 2012 Nuttall et al. paper titled "Stop squandering helium", it was also proposed to create an International Helium Agency that would build a sustainable market for "this precious commodity". Applications While balloons are perhaps the best-known use of helium, they are a minor part of all helium use. Helium is used for many purposes that require some of its unique properties, such as its low boiling point, low density, low solubility, high thermal conductivity, or inertness. Of the 2014 world helium total production of about 32 million kg (180 million standard cubic meters) helium per year, the largest use (about 32% of the total in 2014) is in cryogenic applications, most of which involves cooling the superconducting magnets in medical MRI scanners and NMR spectrometers. Other major uses were pressurizing and purging systems, welding, maintenance of controlled atmospheres, and leak detection. Other uses by category were relatively minor fractions. Controlled atmospheres Helium is used as a protective gas in growing silicon and germanium crystals, in titanium and zirconium production, and in gas chromatography, because it is inert. Because of its inertness, thermally and calorically perfect nature, high speed of sound, and high value of the heat capacity ratio, it is also useful in supersonic wind tunnels and impulse facilities. Gas tungsten arc welding Helium is used as a shielding gas in arc welding processes on materials that, at welding temperatures are contaminated and weakened by air or nitrogen. A number of inert shielding gases are used in gas tungsten arc welding, but helium is used instead of cheaper argon especially for welding materials that have higher heat conductivity, like aluminium or copper. Minor uses Industrial leak detection One industrial application for helium is leak detection. Because helium diffuses through solids three times faster than air, it is used as a tracer gas to detect leaks in high-vacuum equipment (such as cryogenic tanks) and high-pressure containers. The tested object is placed in a chamber, which is then evacuated and filled with helium. The helium that escapes through the leaks is detected by a sensitive device (helium mass spectrometer), even at the leak rates as small as 10−9 mbar·L/s (10−10 Pa·m3/s). The measurement procedure is normally automatic and is called helium integral test. A simpler procedure is to fill the tested object with helium and to manually search for leaks with a hand-held device. 
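The two leak-rate units quoted above differ only by fixed conversion factors; a short sketch of the conversion, using the standard definitions 1 mbar = 100 Pa and 1 L = 0.001 m³:

```python
# Converting the leak-rate threshold quoted above between the two unit systems used
# in helium leak testing: millibar-litres per second and pascal-cubic-metres per second.

MBAR_L_TO_PA_M3 = 100 * 1e-3   # 1 mbar = 100 Pa and 1 L = 1e-3 m^3, so 1 mbar*L = 0.1 Pa*m^3

leak_rate_mbar_l_s = 1e-9
print(f"{leak_rate_mbar_l_s:.0e} mbar*L/s = {leak_rate_mbar_l_s * MBAR_L_TO_PA_M3:.0e} Pa*m^3/s")
# -> 1e-10 Pa*m^3/s, matching the figure given for helium mass-spectrometer leak detection
```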
Helium leaks through cracks should not be confused with gas permeation through a bulk material. While helium has documented permeation constants (thus a calculable permeation rate) through glasses, ceramics, and synthetic materials, inert gases such as helium will not permeate most bulk metals. Flight Because it is lighter than air, airships and balloons are inflated with helium for lift. While hydrogen gas is more buoyant and escapes permeating through a membrane at a lower rate, helium has the advantage of being non-flammable, and indeed fire-retardant. Another minor use is in rocketry, where helium is used as an ullage medium to backfill rocket propellant tanks in flight and to condense hydrogen and oxygen to make rocket fuel. It is also used to purge fuel and oxidizer from ground support equipment prior to launch and to pre-cool liquid hydrogen in space vehicles. For example, the Saturn V rocket used in the Apollo program needed about of helium to launch. Minor commercial and recreational uses Helium as a breathing gas has no narcotic properties, so helium mixtures such as trimix, heliox and heliair are used for deep diving to reduce the effects of narcosis, which worsen with increasing depth. As pressure increases with depth, the density of the breathing gas also increases, and the low molecular weight of helium is found to considerably reduce the effort of breathing by lowering the density of the mixture. This reduces the Reynolds number of flow, leading to a reduction of turbulent flow and an increase in laminar flow, which requires less breathing. At depths below divers breathing helium-oxygen mixtures begin to experience tremors and a decrease in psychomotor function, symptoms of high-pressure nervous syndrome. This effect may be countered to some extent by adding an amount of narcotic gas such as hydrogen or nitrogen to a helium–oxygen mixture. Helium–neon lasers, a type of low-powered gas laser producing a red beam, had various practical applications which included barcode readers and laser pointers, before they were almost universally replaced by cheaper diode lasers. For its inertness and high thermal conductivity, neutron transparency, and because it does not form radioactive isotopes under reactor conditions, helium is used as a heat-transfer medium in some gas-cooled nuclear reactors. Helium, mixed with a heavier gas such as xenon, is useful for thermoacoustic refrigeration due to the resulting high heat capacity ratio and low Prandtl number. The inertness of helium has environmental advantages over conventional refrigeration systems which contribute to ozone depletion or global warming. Helium is also used in some hard disk drives. Scientific uses The use of helium reduces the distorting effects of temperature variations in the space between lenses in some telescopes due to its extremely low index of refraction. This method is especially used in solar telescopes where a vacuum tight telescope tube would be too heavy. Helium is a commonly used carrier gas for gas chromatography. The age of rocks and minerals that contain uranium and thorium can be estimated by measuring the level of helium with a process known as helium dating. Helium at low temperatures is used in cryogenics and in certain cryogenic applications. As examples of applications, liquid helium is used to cool certain metals to the extremely low temperatures required for superconductivity, such as in superconducting magnets for magnetic resonance imaging. 
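The effect of helium on breathing-gas density described in the diving passage above can be illustrated with the ideal gas law. The sketch below is only an approximation under assumptions not taken from the text: heliox is treated as 79% helium and 21% oxygen by mole, air as 79% nitrogen and 21% oxygen, the temperature is fixed at 293 K, and pressure is taken to rise by roughly one atmosphere per 10 m of seawater.

```python
# Illustrating why helium lowers breathing-gas density at depth, using the ideal gas law.

R = 8.314          # J/(mol K)
T = 293.0          # K, assumed
MOLAR_MASS = {"He": 4.0e-3, "N2": 28.0e-3, "O2": 32.0e-3}   # kg/mol, standard values

def gas_density(fractions, depth_m):
    pressure = 101325 * (1 + depth_m / 10.0)                 # Pa, rough hydrostatic estimate
    mean_molar_mass = sum(f * MOLAR_MASS[g] for g, f in fractions.items())
    return pressure * mean_molar_mass / (R * T)              # kg/m^3

for depth in (0, 40):
    air = gas_density({"N2": 0.79, "O2": 0.21}, depth)
    heliox = gas_density({"He": 0.79, "O2": 0.21}, depth)
    print(f"{depth} m: air {air:.1f} kg/m^3, heliox {heliox:.1f} kg/m^3")
# heliox remains roughly three times less dense than air at every depth
```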
The Large Hadron Collider at CERN uses 96 metric tons of liquid helium to maintain the temperature at . Medical uses Helium was approved for medical use in the United States in April 2020 for humans and animals. As a contaminant While chemically inert, helium contamination impairs the operation of microelectromechanical systems (MEMS) such that iPhones may fail. Inhalation and safety Effects Neutral helium at standard conditions is non-toxic, plays no biological role and is found in trace amounts in human blood. The speed of sound in helium is nearly three times the speed of sound in air. Because the natural resonance frequency of a gas-filled cavity is proportional to the speed of sound in the gas, when helium is inhaled, a corresponding increase occurs in the resonant frequencies of the vocal tract, which is the amplifier of vocal sound. This increase in the resonant frequency of the amplifier (the vocal tract) gives increased amplification to the high-frequency components of the sound wave produced by the direct vibration of the vocal folds, compared to the case when the voice box is filled with air. When a person speaks after inhaling helium gas, the muscles that control the voice box still move in the same way as when the voice box is filled with air; therefore the fundamental frequency (sometimes called pitch) produced by direct vibration of the vocal folds does not change. However, the high-frequency-preferred amplification causes a change in timbre of the amplified sound, resulting in a reedy, duck-like vocal quality. The opposite effect, lowering resonant frequencies, can be obtained by inhaling a dense gas such as sulfur hexafluoride or xenon. Hazards Inhaling helium can be dangerous if done to excess, since helium is a simple asphyxiant and so displaces oxygen needed for normal respiration. Fatalities have been recorded, including a youth who suffocated in Vancouver in 2003 and two adults who suffocated in South Florida in 2006. In 1998, an Australian girl from Victoria fell unconscious and temporarily turned blue after inhaling the entire contents of a party balloon. Inhaling helium directly from pressurized cylinders or even balloon filling valves is extremely dangerous, as high flow rate and pressure can result in barotrauma, fatally rupturing lung tissue. Death caused by helium is rare. The first media-recorded case was that of a 15-year-old girl from Texas who died in 1998 from helium inhalation at a friend's party; the exact type of helium death is unidentified. In the United States, only two fatalities were reported between 2000 and 2004, including a man who died in North Carolina of barotrauma in 2002. A youth asphyxiated in Vancouver during 2003, and a 27-year-old man in Australia had an embolism after breathing from a cylinder in 2000. Since then, two adults asphyxiated in South Florida in 2006, and there were cases in 2009 and 2010, one of whom was a Californian youth who was found with a bag over his head, attached to a helium tank, and another teenager in Northern Ireland died of asphyxiation. At Eagle Point, Oregon a teenage girl died in 2012 from barotrauma at a party. A girl from Michigan died from hypoxia later in the year. 
On February 4, 2015, it was revealed that, during the recording of their main TV show on January 28, a 12-year-old member (name withheld) of Japanese all-girl singing group 3B Junior suffered from air embolism, losing consciousness and falling into a coma as a result of air bubbles blocking the flow of blood to the brain after inhaling huge quantities of helium as part of a game. The incident was not made public until a week later. The staff of TV Asahi held an emergency press conference to communicate that the member had been taken to the hospital and is showing signs of rehabilitation such as moving eyes and limbs, but her consciousness has not yet been sufficiently recovered. Police have launched an investigation due to a neglect of safety measures. The safety issues for cryogenic helium are similar to those of liquid nitrogen; its extremely low temperatures can result in cold burns, and the liquid-to-gas expansion ratio can cause explosions if no pressure-relief devices are installed. Containers of helium gas at 5 to 10 K should be handled as if they contain liquid helium due to the rapid and significant thermal expansion that occurs when helium gas at less than 10 K is warmed to room temperature. At high pressures (more than about 20 atm or two MPa), a mixture of helium and oxygen (heliox) can lead to high-pressure nervous syndrome, a sort of reverse-anesthetic effect; adding a small amount of nitrogen to the mixture can alleviate the problem. See also Abiogenic petroleum origin Helium-3 propulsion Leidenfrost effect Superfluid Tracer-gas leak testing method Hamilton Cady Notes References Bibliography External links General U.S. Government's Bureau of Land Management: Sources, Refinement, and Shortage. With some history of helium. U.S. Geological Survey publications on helium beginning 1996: Helium Where is all the helium? Aga website It's Elemental – Helium Chemistry in its element podcast (MP3) from the Royal Society of Chemistry's Chemistry World: Helium International Chemical Safety Cards – Helium includes health and safety information regarding accidental exposures to helium More detail Helium at The Periodic Table of Videos (University of Nottingham) Helium at the Helsinki University of Technology; includes pressure-temperature phase diagrams for helium-3 and helium-4 Lancaster University, Ultra Low Temperature Physics – includes a summary of some low temperature techniques Video: Demonstration of superfluid helium (Alfred Leitner, 1963, 38 min.) Miscellaneous Physics in Speech with audio samples that demonstrate the unchanged voice pitch Article about helium and other noble gases Helium shortage America's Helium Supply: Options for Producing More Helium from Federal Land: Oversight Hearing before the Subcommittee on Energy and Mineral Resources of the Committee on Natural Resources, U.S. House Of Representatives, One Hundred Thirteenth Congress, First Session, Thursday, July 11, 2013 Helium Program: Urgent Issues Facing BLM's Storage and Sale of Helium Reserves: Testimony before the Committee on Natural Resources, House of Representatives Government Accountability Office Chemical elements Noble gases Quantum phases Airship technology Coolants Nuclear reactor coolants Underwater diving equipment E-number additives Helios
Helium
[ "Physics", "Chemistry", "Materials_science" ]
10,905
[ "Quantum phases", "Noble gases", "Chemical elements", "Phases of matter", "Quantum mechanics", "Nonmetals", "Condensed matter physics", "Atoms", "Matter" ]
13,257
https://en.wikipedia.org/wiki/Hydrocarbon
In organic chemistry, a hydrocarbon is an organic compound consisting entirely of hydrogen and carbon. Hydrocarbons are examples of group 14 hydrides. Hydrocarbons are generally colourless and hydrophobic; their odor is usually faint, and may be similar to that of gasoline or lighter fluid. They occur in a diverse range of molecular structures and phases: they can be gases (such as methane and propane), liquids (such as hexane and benzene), low-melting solids (such as paraffin wax and naphthalene) or polymers (such as polyethylene and polystyrene). In the fossil fuel industries, hydrocarbon refers to naturally occurring petroleum, natural gas and coal, or their hydrocarbon derivatives and purified forms. Combustion of hydrocarbons is the main source of the world's energy. Petroleum is the dominant raw-material source for organic commodity chemicals such as solvents and polymers. Most anthropogenic (human-generated) emissions of greenhouse gases are either carbon dioxide released by the burning of fossil fuels, or methane released from the handling of natural gas or from agriculture. Types As defined by the International Union of Pure and Applied Chemistry's nomenclature of organic chemistry, hydrocarbons are classified as follows: Saturated hydrocarbons, which are the simplest of the hydrocarbon types. They are composed entirely of single bonds and are saturated with hydrogen. The formula for acyclic saturated hydrocarbons (i.e., alkanes) is CnH2n+2. The most general form of saturated hydrocarbons (whether linear or branched species, and whether with or without one or more rings) is CnH2n+2-2r, where r is the number of rings. Those with exactly one ring are the cycloalkanes. Saturated hydrocarbons are the basis of petroleum fuels and may be either linear or branched species. One or more of the hydrogen atoms can be replaced with other atoms, for example chlorine or another halogen: this is called a substitution reaction. An example is the conversion of methane to chloroform using a chlorination reaction. Halogenating a hydrocarbon produces something that is not a hydrocarbon. It is a very common and useful process. Hydrocarbons with the same molecular formula but different structural formulae are called structural isomers. As given in the example of 3-methylhexane and its higher homologues, branched hydrocarbons can be chiral. Chiral saturated hydrocarbons constitute the side chains of biomolecules such as chlorophyll and tocopherol. Unsaturated hydrocarbons, which have one or more double or triple bonds between carbon atoms. Those with one or more double bonds are called alkenes. Those with one double bond have the formula CnH2n (assuming non-cyclic structures). Those containing triple bonds are called alkynes. Those with one triple bond have the formula CnH2n-2. Aromatic hydrocarbons, also known as arenes, which are hydrocarbons that have at least one aromatic ring. About 10% of total nonmethane organic carbon emissions are aromatic hydrocarbons from the exhaust of gasoline-powered vehicles. The term 'aliphatic' refers to non-aromatic hydrocarbons. Saturated aliphatic hydrocarbons are sometimes referred to as 'paraffins'. Aliphatic hydrocarbons containing a double bond between carbon atoms are sometimes referred to as 'olefins'. Usage The predominant use of hydrocarbons is as a combustible fuel source. Methane is the predominant component of natural gas. 
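As a recap of the general formulas given under Types above, the short Python sketch below generates the molecular formula and an approximate molar mass for alkanes, alkenes with one double bond, and alkynes with one triple bond. The atomic masses used are standard values (C = 12.011, H = 1.008) and are an assumption rather than figures from the text.

```python
# General formulas: alkanes CnH(2n+2), alkenes with one double bond CnH(2n),
# alkynes with one triple bond CnH(2n-2).

ATOMIC_MASS = {"C": 12.011, "H": 1.008}   # assumed standard atomic masses

def hydrogen_count(n_carbons, family):
    return {"alkane": 2 * n_carbons + 2,
            "alkene": 2 * n_carbons,
            "alkyne": 2 * n_carbons - 2}[family]

def formula_and_mass(n, family):
    h = hydrogen_count(n, family)
    mass = n * ATOMIC_MASS["C"] + h * ATOMIC_MASS["H"]
    return f"C{n}H{h}", round(mass, 2)

for family in ("alkane", "alkene", "alkyne"):
    print(family, [formula_and_mass(n, family) for n in (2, 6, 10)])
# e.g. the alkane row gives C2H6 (ethane), C6H14 (hexane), C10H22 (decane)
```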
C6 through C10 alkanes, alkenes, cycloalkanes, and aromatic hydrocarbons are the main components of gasoline, naphtha, jet fuel, and specialized industrial solvent mixtures. With the progressive addition of carbon units, the simple non-ring structured hydrocarbons have higher viscosities, lubricating indices, boiling points, and solidification temperatures. At the opposite extreme from methane lie the heavy tars that remain as the lowest fraction in a crude oil refining retort. They are collected and widely utilized as roofing compounds, pavement material (bitumen), wood preservatives (the creosote series) and as extremely high viscosity shear-resisting liquids. Some large-scale non-fuel applications of hydrocarbons begin with ethane and propane, which are obtained from petroleum and natural gas. These two gases are converted either to syngas or to ethylene and propylene respectively. Global consumption of benzene in 2021 was estimated at more than 58 million metric tons, and was projected to increase to 60 million tons in 2022. Hydrocarbons are also prevalent in nature. Some eusocial arthropods, such as the Brazilian stingless bee, Schwarziana quadripunctata, use unique cuticular hydrocarbon "scents" to distinguish kin from non-kin. This hydrocarbon composition varies with age, sex, nest location, and hierarchical position. There is also potential to harvest hydrocarbons from plants like Euphorbia lathyris and E. tirucalli as an alternative and renewable energy source for vehicles that use diesel. Furthermore, endophytic bacteria from plants that naturally produce hydrocarbons have been used in hydrocarbon degradation in attempts to deplete hydrocarbon concentration in polluted soils. Reactions Saturated hydrocarbons are notable for their inertness. Unsaturated hydrocarbons (alkenes, alkynes, and aromatic compounds) react more readily, by means of substitution, addition, or polymerization. At higher temperatures they undergo dehydrogenation, oxidation and combustion. Saturated hydrocarbons Cracking The cracking of saturated hydrocarbons is the main industrial route to alkenes and alkynes. These reactions require heterogeneous catalysts and temperatures >500 °C. Oxidation Widely practiced conversions of hydrocarbons involve their reaction with oxygen. In the presence of excess oxygen, hydrocarbons combust. Under carefully controlled conditions that have been optimized over many years, however, partial oxidation results. Useful compounds can be obtained in this way: maleic acid from butane, terephthalic acid from xylenes, acetone together with phenol from cumene (isopropylbenzene), and cyclohexanone from cyclohexane. The process, which is called autoxidation, begins with the formation of hydroperoxides (ROOH). Combustion Combustion of hydrocarbons is currently the main source of the world's energy for electric power generation, heating (such as home heating), and transportation. Often this energy is used directly as heat such as in home heaters, which use either petroleum or natural gas. The hydrocarbon is burnt and the heat is used to heat water, which is then circulated. A similar principle is used to create electrical energy in power plants. Both saturated and unsaturated hydrocarbons undergo this process. Common properties of hydrocarbons are that they produce water vapour (steam), carbon dioxide, and heat during combustion, and that oxygen is required for combustion to take place. 
The simplest hydrocarbon, methane, burns as follows: CH4 (methane) + 2 O2 → CO2 + 2 H2O In an inadequate supply of air, carbon black and water vapour are formed: CH4 (methane) + O2 → C + 2 H2O And finally, for any linear alkane of n carbon atoms: CnH2n+2 + (3n+1)/2 O2 → n CO2 + (n+1) H2O Partial oxidation characterizes the reactions of alkenes and oxygen. This process is the basis of rancidification and paint drying. Benzene burns with a sooty flame when heated in air: C6H6 (benzene) + 15/2 O2 → 6 CO2 + 3 H2O Halogenation Saturated hydrocarbons react with chlorine and fluorine. In the case of chlorination, one of the chlorine atoms replaces a hydrogen atom. The reactions proceed via free-radical pathways, in which the halogen first dissociates into two neutral radical atoms (homolytic fission). CH4 + Cl2 → CH3Cl + HCl, then CH3Cl + Cl2 → CH2Cl2 + HCl, and so on all the way to CCl4 (carbon tetrachloride); similarly for ethane, C2H6 + Cl2 → C2H5Cl + HCl, then C2H5Cl + Cl2 → C2H4Cl2 + HCl, and so on all the way to C2Cl6 (hexachloroethane). Unsaturated hydrocarbons Substitution Aromatic compounds, almost uniquely for hydrocarbons, undergo substitution reactions. The chemical process practiced on the largest scale is the reaction of benzene and ethene to give ethylbenzene: C6H6 + CH2=CH2 → C6H5CH2CH3 The resulting ethylbenzene is dehydrogenated to styrene and then polymerized to manufacture polystyrene, a common thermoplastic material. Addition Addition reactions apply to alkenes and alkynes. It is because they add reagents that they are called unsaturated. In this reaction a variety of reagents add "across" the pi-bond(s). Chlorine, hydrogen chloride, water, and hydrogen are illustrative reagents. Polymerization is a form of addition. Alkenes and some alkynes also undergo polymerization by opening of the multiple bonds to produce polyethylene, polybutylene, and polystyrene. The alkyne acetylene polymerizes to produce polyacetylene. Oligomers (chains of a few monomers) may be produced, for example in the Shell higher olefin process, where α-olefins are extended to make longer α-olefins by adding ethylene repeatedly. Metathesis Some hydrocarbons undergo metathesis, in which substituents attached by C–C bonds are exchanged between molecules. For a single C–C bond it is alkane metathesis, for a double C–C bond it is alkene metathesis (olefin metathesis), and for a triple C–C bond it is alkyne metathesis. Origin The vast majority of hydrocarbons found on Earth occur in crude oil, petroleum, coal, and natural gas. For thousands of years they have been exploited and used for a vast range of purposes. Petroleum and coal are generally thought to be products of decomposition of organic matter. Coal, in contrast to petroleum, is richer in carbon and poorer in hydrogen. Natural gas is the product of methanogenesis. A seemingly limitless variety of compounds comprise petroleum, hence the necessity of refineries. These hydrocarbons consist of saturated hydrocarbons, aromatic hydrocarbons, or combinations of the two. Missing in petroleum are alkenes and alkynes. Their production requires refineries. Petroleum-derived hydrocarbons are mainly consumed for fuel, but they are also the source of virtually all synthetic organic compounds, including plastics and pharmaceuticals. Natural gas is consumed almost exclusively as fuel. Coal is used as a fuel and as a reducing agent in metallurgy. A small fraction of hydrocarbon found on earth, and all currently known hydrocarbon found on other planets and moons, is thought to be abiological. Hydrocarbons such as ethylene, isoprene, and monoterpenes are emitted by living vegetation. 
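The complete-combustion equation for a linear alkane given above fixes the stoichiometry for any chain length; the sketch below works through it for methane and octane. The molar masses used are approximate standard values and are an assumption rather than figures from the text.

```python
# Stoichiometry of CnH(2n+2) + (3n+1)/2 O2 -> n CO2 + (n+1) H2O.

M = {"C": 12.011, "H": 1.008, "O": 15.999}   # assumed standard atomic masses, g/mol

def combustion(n_carbons):
    fuel_mass = n_carbons * M["C"] + (2 * n_carbons + 2) * M["H"]   # g per mol of fuel
    o2_moles = (3 * n_carbons + 1) / 2                              # mol O2 per mol of fuel
    co2_mass = n_carbons * (M["C"] + 2 * M["O"])                    # g CO2 per mol of fuel
    return fuel_mass, o2_moles, co2_mass

for n, name in [(1, "methane"), (8, "octane")]:
    fuel, o2, co2 = combustion(n)
    print(f"{name}: 1 mol ({fuel:.1f} g) needs {o2} mol O2 and yields {co2:.1f} g CO2")
# methane: 1 mol (16.0 g) needs 2.0 mol O2 and yields 44.0 g CO2
```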
Some hydrocarbons also are widespread and abundant in the Solar System. Lakes of liquid methane and ethane have been found on Titan, Saturn's largest moon, as confirmed by the Cassini–Huygens space probe. Hydrocarbons are also abundant in nebulae forming polycyclic aromatic hydrocarbon compounds. Environmental impact Burning hydrocarbons as fuel, which produces carbon dioxide and water, is a major contributor to anthropogenic global warming. Hydrocarbons are introduced into the environment through their extensive use as fuels and chemicals as well as through leaks or accidental spills during exploration, production, refining, or transport of fossil fuels. Anthropogenic hydrocarbon contamination of soil is a serious global issue due to contaminant persistence and the negative impact on human health. When soil is contaminated by hydrocarbons, it can have a significant impact on its microbiological, chemical, and physical properties. This can serve to prevent, slow down or even accelerate the growth of vegetation depending on the exact changes that occur. Crude oil and natural gas are the two largest sources of hydrocarbon contamination of soil. Bioremediation Bioremediation of hydrocarbon from soil or water contaminated is a formidable challenge because of the chemical inertness that characterize hydrocarbons (hence they survived millions of years in the source rock). Nonetheless, many strategies have been devised, bioremediation being prominent. The basic problem with bioremediation is the paucity of enzymes that act on them. Nonetheless, the area has received regular attention. Bacteria in the gabbroic layer of the ocean's crust can degrade hydrocarbons; but the extreme environment makes research difficult. Other bacteria such as Lutibacterium anuloederans can also degrade hydrocarbons. Mycoremediation or breaking down of hydrocarbon by mycelium and mushrooms is possible. Safety Hydrocarbons are generally of low toxicity, hence the widespread use of gasoline and related volatile products. Aromatic compounds such as benzene and toluene are narcotic and chronic toxins, and benzene in particular is known to be carcinogenic. Certain rare polycyclic aromatic compounds are carcinogenic. Hydrocarbons are highly flammable. See also Abiogenic petroleum origin Biomass to liquid Carbohydrate Energy storage Fractional distillation Functional group Hydrocarbon mixtures Organic nuclear reactor References
Hydrocarbon
[ "Chemistry" ]
2,860
[ "Organic compounds", "Hydrocarbons" ]
13,258
https://en.wikipedia.org/wiki/Halogen
|- ! colspan=2 style="text-align:left;" | ↓ Period |- ! 2 | |- ! 3 | |- ! 4 | |- ! 5 | |- ! 6 | |- ! 7 | |- | colspan="2"| Legend |} The halogens () are a group in the periodic table consisting of six chemically related elements: fluorine (F), chlorine (Cl), bromine (Br), iodine (I), and the radioactive elements astatine (At) and tennessine (Ts), though some authors would exclude tennessine as its chemistry is unknown and is theoretically expected to be more like that of gallium. In the modern IUPAC nomenclature, this group is known as group 17. The word "halogen" means "salt former" or "salt maker". When halogens react with metals, they produce a wide range of salts, including calcium fluoride, sodium chloride (common table salt), silver bromide and potassium iodide. The group of halogens is the only periodic table group that contains elements in three of the main states of matter at standard temperature and pressure, though not far above room temperature the same becomes true of groups 1 and 15, assuming white phosphorus is taken as the standard state. All of the halogens form acids when bonded to hydrogen. Most halogens are typically produced from minerals or salts. The middle halogens—chlorine, bromine, and iodine—are often used as disinfectants. Organobromides are the most important class of flame retardants, while elemental halogens are dangerous and can be toxic. History The fluorine mineral fluorospar was known as early as 1529. Early chemists realized that fluorine compounds contain an undiscovered element, but were unable to isolate it. In 1860, George Gore, an English chemist, ran a current of electricity through hydrofluoric acid and probably produced fluorine, but he was unable to prove his results at the time. In 1886, Henri Moissan, a chemist in Paris, performed electrolysis on potassium bifluoride dissolved in anhydrous hydrogen fluoride, and successfully isolated fluorine. Hydrochloric acid was known to alchemists and early chemists. However, elemental chlorine was not produced until 1774, when Carl Wilhelm Scheele heated hydrochloric acid with manganese dioxide. Scheele called the element "dephlogisticated muriatic acid", which is how chlorine was known for 33 years. In 1807, Humphry Davy investigated chlorine and discovered that it is an actual element. Chlorine gas was used as a poisonous gas during World War I. It displaced oxygen in contaminated areas and replaced common oxygenated air with the toxic chlorine gas. The gas would burn human tissue externally and internally, especially the lungs, making breathing difficult or impossible depending on the level of contamination. Bromine was discovered in the 1820s by Antoine Jérôme Balard. Balard discovered bromine by passing chlorine gas through a sample of brine. He originally proposed the name muride for the new element, but the French Academy changed the element's name to bromine. Iodine was discovered by Bernard Courtois, who was using seaweed ash as part of a process for saltpeter manufacture. Courtois typically boiled the seaweed ash with water to generate potassium chloride. However, in 1811, Courtois added sulfuric acid to his process and found that his process produced purple fumes that condensed into black crystals. Suspecting that these crystals were a new element, Courtois sent samples to other chemists for investigation. Iodine was proven to be a new element by Joseph Gay-Lussac. 
In 1931, Fred Allison claimed to have discovered element 85 with a magneto-optical machine, and named the element Alabamine, but was mistaken. In 1937, Rajendralal De claimed to have discovered element 85 in minerals, and called the element dakine, but he was also mistaken. An attempt at discovering element 85 in 1939 by Horia Hulubei and Yvette Cauchois via spectroscopy was also unsuccessful, as was an attempt in the same year by Walter Minder, who discovered an iodine-like element resulting from beta decay of polonium. Element 85, now named astatine, was produced successfully in 1940 by Dale R. Corson, K.R. Mackenzie, and Emilio G. Segrè, who bombarded bismuth with alpha particles. In 2010, a team led by nuclear physicist Yuri Oganessian involving scientists from the JINR, Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, and Vanderbilt University successfully bombarded berkelium-249 atoms with calcium-48 atoms to make tennessine. Etymology In 1811, the German chemist Johann Schweigger proposed that the name "halogen" – meaning "salt producer", from αλς [hals] "salt" and γενειν [genein] "to beget" – replace the name "chlorine", which had been proposed by the English chemist Humphry Davy. Davy's name for the element prevailed. However, in 1826, the Swedish chemist Baron Jöns Jacob Berzelius proposed the term "halogen" for the elements fluorine, chlorine, and iodine, which produce a sea-salt-like substance when they form a compound with an alkaline metal. The English names of these elements all have the ending -ine. Fluorine's name comes from the Latin word fluere, meaning "to flow", because it was derived from the mineral fluorite, which was used as a flux in metalworking. Chlorine's name comes from the Greek word chloros, meaning "greenish-yellow". Bromine's name comes from the Greek word bromos, meaning "stench". Iodine's name comes from the Greek word iodes, meaning "violet". Astatine's name comes from the Greek word astatos, meaning "unstable". Tennessine is named after the US state of Tennessee, where it was synthesized. Characteristics Chemical The halogens fluorine, chlorine, bromine, and iodine are nonmetals; the chemical properties of astatine and tennessine, two heaviest group 17 members, have not been conclusively investigated. The halogens show trends in chemical bond energy moving from top to bottom of the periodic table column with fluorine deviating slightly. It follows a trend in having the highest bond energy in compounds with other atoms, but it has very weak bonds within the diatomic F2 molecule. This means that further down group 17 in the periodic table, the reactivity of elements decreases because of the increasing size of the atoms. Halogens are highly reactive, and as such can be harmful or lethal to biological organisms in sufficient quantities. This high reactivity is due to the high electronegativity of the atoms due to their high effective nuclear charge. Because the halogens have seven valence electrons in their outermost energy level, they can gain an electron by reacting with atoms of other elements to satisfy the octet rule. Fluorine is the most reactive of all elements; it is the only element more electronegative than oxygen, it attacks otherwise-inert materials such as glass, and it forms compounds with the usually inert noble gases. It is a corrosive and highly toxic gas. 
The reactivity of fluorine is such that, if used or stored in laboratory glassware, it can react with glass in the presence of small amounts of water to form silicon tetrafluoride (SiF4). Thus, fluorine must be handled with substances such as Teflon (which is itself an organofluorine compound), extremely dry glass, or metals such as copper or steel, which form a protective layer of fluoride on their surface. The high reactivity of fluorine allows some of the strongest bonds possible, especially to carbon. For example, Teflon is fluorine bonded with carbon and is extremely resistant to thermal and chemical attacks and has a high melting point. Molecules Diatomic halogen molecules The stable halogens form homonuclear diatomic molecules. Due to relatively weak intermolecular forces, chlorine and fluorine form part of the group known as "elemental gases". The elements become less reactive and have higher melting points as the atomic number increases. The higher melting points are caused by stronger London dispersion forces resulting from more electrons. Compounds Hydrogen halides All of the halogens have been observed to react with hydrogen to form hydrogen halides. For fluorine, chlorine, and bromine, this reaction is in the form of: H2 + X2 → 2HX However, hydrogen iodide and hydrogen astatide can split back into their constituent elements. The hydrogen-halogen reactions get gradually less reactive toward the heavier halogens. A fluorine-hydrogen reaction is explosive even when it is dark and cold. A chlorine-hydrogen reaction is also explosive, but only in the presence of light and heat. A bromine-hydrogen reaction is even less explosive; it is explosive only when exposed to flames. Iodine and astatine only partially react with hydrogen, forming equilibria. All halogens form binary compounds with hydrogen known as the hydrogen halides: hydrogen fluoride (HF), hydrogen chloride (HCl), hydrogen bromide (HBr), hydrogen iodide (HI), and hydrogen astatide (HAt). All of these compounds form acids when mixed with water. Hydrogen fluoride is the only hydrogen halide that forms hydrogen bonds. Hydrochloric acid, hydrobromic acid, hydroiodic acid, and acid are all strong acids, but hydrofluoric acid is a weak acid. All of the hydrogen halides are irritants. Hydrogen fluoride and hydrogen chloride are highly acidic. Hydrogen fluoride is used as an industrial chemical, and is highly toxic, causing pulmonary edema and damaging cells. Hydrogen chloride is also a dangerous chemical. Breathing in gas with more than fifty parts per million of hydrogen chloride can cause death in humans. Hydrogen bromide is even more toxic and irritating than hydrogen chloride. Breathing in gas with more than thirty parts per million of hydrogen bromide can be lethal to humans. Hydrogen iodide, like other hydrogen halides, is toxic. Metal halides All the halogens are known to react with sodium to form sodium fluoride, sodium chloride, sodium bromide, sodium iodide, and sodium astatide. Heated sodium's reaction with halogens produces bright-orange flames. Sodium's reaction with chlorine is in the form of: Iron reacts with fluorine, chlorine, and bromine to form iron(III) halides. These reactions are in the form of: However, when iron reacts with iodine, it forms only iron(II) iodide. Iron wool can react rapidly with fluorine to form the white compound iron(III) fluoride even in cold temperatures. When chlorine comes into contact with a heated iron, they react to form the black iron(III) chloride. 
However, if the reaction conditions are moist, this reaction will instead result in a reddish-brown product. Iron can also react with bromine to form iron(III) bromide. This compound is reddish-brown in dry conditions. Iron's reaction with bromine is less reactive than its reaction with fluorine or chlorine. A hot iron can also react with iodine, but it forms iron(II) iodide. This compound may be gray, but the reaction is always contaminated with excess iodine, so it is not known for sure. Iron's reaction with iodine is less vigorous than its reaction with the lighter halogens. Interhalogen compounds Interhalogen compounds are in the form of XYn where X and Y are halogens and n is one, three, five, or seven. Interhalogen compounds contain at most two different halogens. Large interhalogens, such as can be produced by a reaction of a pure halogen with a smaller interhalogen such as . All interhalogens except can be produced by directly combining pure halogens in various conditions. Interhalogens are typically more reactive than all diatomic halogen molecules except F2 because interhalogen bonds are weaker. However, the chemical properties of interhalogens are still roughly the same as those of diatomic halogens. Many interhalogens consist of one or more atoms of fluorine bonding to a heavier halogen. Chlorine and bromine can bond with up to five fluorine atoms, and iodine can bond with up to seven fluorine atoms. Most interhalogen compounds are covalent gases. However, some interhalogens are liquids, such as BrF3, and many iodine-containing interhalogens are solids. Organohalogen compounds Many synthetic organic compounds such as plastic polymers, and a few natural ones, contain halogen atoms; these are known as halogenated compounds or organic halides. Chlorine is by far the most abundant of the halogens in seawater, and the only one needed in relatively large amounts (as chloride ions) by humans. For example, chloride ions play a key role in brain function by mediating the action of the inhibitory transmitter GABA and are also used by the body to produce stomach acid. Iodine is needed in trace amounts for the production of thyroid hormones such as thyroxine. Organohalogens are also synthesized through the nucleophilic abstraction reaction. Polyhalogenated compounds Polyhalogenated compounds are industrially created compounds substituted with multiple halogens. Many of them are very toxic and bioaccumulate in humans, and have a very wide application range. They include PCBs, PBDEs, and perfluorinated compounds (PFCs), as well as numerous other compounds. Reactions Reactions with water Fluorine reacts vigorously with water to produce oxygen (O2) and hydrogen fluoride (HF): Chlorine has maximum solubility of ca. 7.1 g Cl2 per kg of water at ambient temperature (21 °C). Dissolved chlorine reacts to form hydrochloric acid (HCl) and hypochlorous acid, a solution that can be used as a disinfectant or bleach: Bromine has a solubility of 3.41 g per 100 g of water, but it slowly reacts to form hydrogen bromide (HBr) and hypobromous acid (HBrO): Iodine, however, is minimally soluble in water (0.03 g/100 g water at 20 °C) and does not react with it. However, iodine will form an aqueous solution in the presence of iodide ion, such as by addition of potassium iodide (KI), because the triiodide ion is formed. Physical and atomic The table below is a summary of the key physical and atomic properties of the halogens. 
Data marked with question marks are either uncertain or are estimations partially based on periodic trends rather than observations. Isotopes Fluorine has one stable and naturally occurring isotope, fluorine-19. However, there are trace amounts in nature of the radioactive isotope fluorine-23, which occurs via cluster decay of protactinium-231. A total of eighteen isotopes of fluorine have been discovered, with atomic masses ranging from 13 to 31. Chlorine has two stable and naturally occurring isotopes, chlorine-35 and chlorine-37. However, there are trace amounts in nature of the isotope chlorine-36, which occurs via spallation of argon-36. A total of 24 isotopes of chlorine have been discovered, with atomic masses ranging from 28 to 51. There are two stable and naturally occurring isotopes of bromine, bromine-79 and bromine-81. A total of 33 isotopes of bromine have been discovered, with atomic masses ranging from 66 to 98. There is one stable and naturally occurring isotope of iodine, iodine-127. However, there are trace amounts in nature of the radioactive isotope iodine-129, which occurs via spallation and from the radioactive decay of uranium in ores. Several other radioactive isotopes of iodine have also been created naturally via the decay of uranium. A total of 38 isotopes of iodine have been discovered, with atomic masses ranging from 108 to 145. There are no stable isotopes of astatine. However, there are four naturally occurring radioactive isotopes of astatine produced via radioactive decay of uranium, neptunium, and plutonium. These isotopes are astatine-215, astatine-217, astatine-218, and astatine-219. A total of 31 isotopes of astatine have been discovered, with atomic masses ranging from 191 to 227. There are no stable isotopes of tennessine. Tennessine has only two known synthetic radioisotopes, tennessine-293 and tennessine-294. Production Approximately six million metric tons of the fluorine mineral fluorite are produced each year. Four hundred-thousand metric tons of hydrofluoric acid are made each year. Fluorine gas is made from hydrofluoric acid produced as a by-product in phosphoric acid manufacture. Approximately 15,000 metric tons of fluorine gas are made per year. The mineral halite is the mineral that is most commonly mined for chlorine, but the minerals carnallite and sylvite are also mined for chlorine. Forty million metric tons of chlorine are produced each year by the electrolysis of brine. Approximately 450,000 metric tons of bromine are produced each year. Fifty percent of all bromine produced is produced in the United States, 35% in Israel, and most of the remainder in China. Historically, bromine was produced by adding sulfuric acid and bleaching powder to natural brine. However, in modern times, bromine is produced by electrolysis, a method invented by Herbert Dow. It is also possible to produce bromine by passing chlorine through seawater and then passing air through the seawater. In 2003, 22,000 metric tons of iodine were produced. Chile produces 40% of all iodine produced, Japan produces 30%, and smaller amounts are produced in Russia and the United States. Until the 1950s, iodine was extracted from kelp. However, in modern times, iodine is produced in other ways. One way that iodine is produced is by mixing sulfur dioxide with nitrate ores, which contain some iodates. Iodine is also extracted from natural gas fields. Even though astatine is naturally occurring, it is usually produced by bombarding bismuth with alpha particles. 
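The bismuth bombardment mentioned above can be checked with simple nucleon bookkeeping. The sketch below assumes the commonly cited route, bismuth-209 plus an alpha particle giving astatine-211 and two neutrons; that specific reaction is an assumption, since the text only says that bismuth is bombarded with alpha particles.

```python
# Nucleon bookkeeping for the assumed reaction 209Bi(alpha, 2n)211At.

mass_numbers = {"Bi-209": 209, "alpha": 4, "At-211": 211, "n": 1}

lhs = mass_numbers["Bi-209"] + mass_numbers["alpha"]
rhs = mass_numbers["At-211"] + 2 * mass_numbers["n"]
print(lhs, rhs, lhs == rhs)   # 213 213 True: mass number is conserved

# Atomic-number check: 83 (bismuth) + 2 (helium) = 85, the element number assigned to astatine.
print(83 + 2 == 85)           # True
```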
Tennessine is made by using a cyclotron, fusing berkelium-249 and calcium-48 to make tennessine-293 and tennessine-294. Applications Disinfectants Both chlorine and bromine are used as disinfectants for drinking water, swimming pools, fresh wounds, spas, dishes, and surfaces. They kill bacteria and other potentially harmful microorganisms through a process known as sterilization. Their reactivity is also put to use in bleaching. Sodium hypochlorite, which is produced from chlorine, is the active ingredient of most fabric bleaches, and chlorine-derived bleaches are used in the production of some paper products. Lighting Halogen lamps are a type of incandescent lamp using a tungsten filament in bulbs that have small amounts of a halogen, such as iodine or bromine added. This enables the production of lamps that are much smaller than non-halogen incandescent lightbulbs at the same wattage. The gas reduces the thinning of the filament and blackening of the inside of the bulb resulting in a bulb that has a much greater life. Halogen lamps glow at a higher temperature (2800 to 3400 kelvin) with a whiter colour than other incandescent bulbs. However, this requires bulbs to be manufactured from fused quartz rather than silica glass to reduce breakage. Drug components In drug discovery, the incorporation of halogen atoms into a lead drug candidate results in analogues that are usually more lipophilic and less water-soluble. As a consequence, halogen atoms are used to improve penetration through lipid membranes and tissues. It follows that there is a tendency for some halogenated drugs to accumulate in adipose tissue. The chemical reactivity of halogen atoms depends on both their point of attachment to the lead and the nature of the halogen. Aromatic halogen groups are far less reactive than aliphatic halogen groups, which can exhibit considerable chemical reactivity. For aliphatic carbon-halogen bonds, the C-F bond is the strongest and usually less chemically reactive than aliphatic C-H bonds. The other aliphatic-halogen bonds are weaker, their reactivity increasing down the periodic table. They are usually more chemically reactive than aliphatic C-H bonds. As a consequence, the most common halogen substitutions are the less reactive aromatic fluorine and chlorine groups. Biological role Fluoride anions are found in ivory, bones, teeth, blood, eggs, urine, and hair of organisms. Fluoride anions in very small amounts may be essential for humans. There are 0.5 milligrams of fluorine per liter of human blood. Human bones contain 0.2 to 1.2% fluorine. Human tissue contains approximately 50 parts per billion of fluorine. A typical 70-kilogram human contains 3 to 6 grams of fluorine. Chloride anions are essential to a large number of species, humans included. The concentration of chlorine in the dry weight of cereals is 10 to 20 parts per million, while in potatoes the concentration of chloride is 0.5%. Plant growth is adversely affected by chloride levels in the soil falling below 2 parts per million. Human blood contains an average of 0.3% chlorine. Human bone typically contains 900 parts per million of chlorine. Human tissue contains approximately 0.2 to 0.5% chlorine. There is a total of 95 grams of chlorine in a typical 70-kilogram human. Some bromine in the form of the bromide anion is present in all organisms. A biological role for bromine in humans has not been proven, but some organisms contain organobromine compounds. Humans typically consume 1 to 20 milligrams of bromine per day. 
There are typically 5 parts per million of bromine in human blood, 7 parts per million of bromine in human bones, and 7 parts per million of bromine in human tissue. A typical 70-kilogram human contains 260 milligrams of bromine. Humans typically consume less than 100 micrograms of iodine per day. Iodine deficiency can cause intellectual disability. Organoiodine compounds occur in humans in some of the glands, especially the thyroid gland, as well as the stomach, epidermis, and immune system. Foods containing iodine include cod, oysters, shrimp, herring, lobsters, sunflower seeds, seaweed, and mushrooms. However, iodine is not known to have a biological role in plants. There are typically 0.06 milligrams per liter of iodine in human blood, 300 parts per billion of iodine in human bones, and 50 to 700 parts per billion of iodine in human tissue. There are 10 to 20 milligrams of iodine in a typical 70-kilogram human. Astatine, although very scarce, has been found in micrograms in the earth. It has no known biological role because of its high radioactivity, extreme rarity, and has a half-life of just about 8 hours for the most stable isotope. Tennessine is purely man-made and has no other roles in nature. Toxicity The halogens tend to decrease in toxicity towards the heavier halogens. Fluorine gas is extremely toxic; breathing in fluorine at a concentration of 25 parts per million is potentially lethal. Hydrofluoric acid is also toxic, being able to penetrate skin and cause highly painful burns. In addition, fluoride anions are toxic, but not as toxic as pure fluorine. Fluoride can be lethal in amounts of 5 to 10 grams. Prolonged consumption of fluoride above concentrations of 1.5 mg/L is associated with a risk of dental fluorosis, an aesthetic condition of the teeth. At concentrations above 4 mg/L, there is an increased risk of developing skeletal fluorosis, a condition in which bone fractures become more common due to the hardening of bones. Current recommended levels in water fluoridation, a way to prevent dental caries, range from 0.7 to 1.2 mg/L to avoid the detrimental effects of fluoride while at the same time reaping the benefits. People with levels between normal levels and those required for skeletal fluorosis tend to have symptoms similar to arthritis. Chlorine gas is highly toxic. Breathing in chlorine at a concentration of 3 parts per million can rapidly cause a toxic reaction. Breathing in chlorine at a concentration of 50 parts per million is highly dangerous. Breathing in chlorine at a concentration of 500 parts per million for a few minutes is lethal. In addition, breathing in chlorine gas is highly painful because of its corrosive properties. Hydrochloric acid is the acid of chlorine, while relatively nontoxic, it is highly corrosive and releases very irritating and toxic hydrogen chloride gas in open air. Pure bromine is somewhat toxic but less toxic than fluorine and chlorine. One hundred milligrams of bromine is lethal. Bromide anions are also toxic, but less so than bromine. Bromide has a lethal dose of 30 grams. Iodine is somewhat toxic, being able to irritate the lungs and eyes, with a safety limit of 1 milligram per cubic meter. When taken orally, 3 grams of iodine can be lethal. Iodide anions are mostly nontoxic, but these can also be deadly if ingested in large amounts. 
Astatine is radioactive and thus highly dangerous, but it has not been produced in macroscopic quantities and hence it is most unlikely that its toxicity will be of much relevance to the average individual. Tennessine cannot be chemically investigated because its half-life is so short, although its radioactivity would make it very dangerous. Superhalogen Certain aluminium clusters have superatom properties. These aluminium clusters are generated as anions (Alₙ⁻ with n = 1, 2, 3, ...) in helium gas and reacted with a gas containing iodine. When analyzed by mass spectrometry, one main reaction product turns out to be Al₁₃I⁻. These clusters of 13 aluminium atoms with an extra electron added do not appear to react with oxygen when it is introduced in the same gas stream. Assuming each atom liberates its 3 valence electrons, this means 40 electrons are present, which is one of the magic numbers for sodium clusters and implies that these magic numbers are a reflection of the noble gases. Calculations show that the additional electron is located in the aluminium cluster at the location directly opposite from the iodine atom. The cluster must therefore have a higher electron affinity for the electron than iodine, and therefore the aluminium cluster is called a superhalogen (i.e., the vertical electron detachment energies of the moieties that make up the negative ions are larger than those of any halogen atom). The Al₁₃⁻ cluster component in the Al₁₃I⁻ ion is similar to an iodide ion or a bromide ion. The related Al₁₃I₂⁻ cluster is expected to behave chemically like the triiodide ion. See also Halogen bond Halogen addition reaction Halogen lamp Halogenation Interhalogen Pseudohalogen Notes References Bibliography Groups (periodic table)
Halogen
[ "Chemistry" ]
5,934
[ "Periodic table", "Groups (periodic table)" ]
13,259
https://en.wikipedia.org/wiki/Home%20page
A home page (or homepage) is the main web page of a website. Usually, the home page is located at the root of the website's domain or subdomain. For example, if the domain is example.com, the home page is likely located at the URL www.example.com/. The term may also refer to the start page shown in a web browser when the application first opens. Function A home page is the main web page that a visitor will view when they navigate to a website via a search engine, and it may also function as a landing page to attract visitors. In some cases, the home page is a site directory, particularly when a website has multiple home pages. Good home page design is usually a high priority for a website; for example, a news website may curate headlines and first paragraphs of top stories, with links to full articles. According to Homepage Usability, the home page is the "most important page on any website" and receives the most views of any page. A poorly designed home page can overwhelm and deter visitors from the site. One important use of home pages is communicating the identity and value of a company. Browser start page When a web browser is launched, it will automatically open at least one web page. This is the browser's start page, which is also called its home page. Start pages can be a website or a special browser page, such as thumbnails of frequently visited websites. Moreover, there is a niche market of websites intended to be used solely as start pages. See also Contact page Site map References Bibliography Web design
Home page
[ "Engineering" ]
331
[ "Design", "Web design" ]
13,263
https://en.wikipedia.org/wiki/Hexadecimal
Hexadecimal (also known as base-16 or simply hex) is a positional numeral system that represents numbers using a radix (base) of sixteen. Unlike the decimal system representing numbers using ten symbols, hexadecimal uses sixteen distinct symbols, most often the symbols "0"–"9" to represent values 0 to 9 and "A"–"F" to represent values from ten to fifteen. Software developers and system designers widely use hexadecimal numbers because they provide a convenient representation of binary-coded values. Each hexadecimal digit represents four bits (binary digits), also known as a nibble (or nybble). For example, an 8-bit byte is two hexadecimal digits and its value can be written as 00 to FF in hexadecimal. In mathematics, a subscript is typically used to specify the base. For example, the decimal value 711 would be expressed in hexadecimal as 2C7₁₆. In programming, several notations denote hexadecimal numbers, usually involving a prefix. The prefix 0x is used in C, which would denote this value as 0x2C7. Hexadecimal is used in the transfer encoding Base 16, in which each byte of the plain text is broken into two 4-bit values and represented by two hexadecimal digits. Representation Written representation In most current use cases, the letters A–F or a–f represent the values 10–15, while the numerals 0–9 are used to represent their decimal values. There is no universal convention to use lowercase or uppercase, so each is prevalent or preferred in particular environments by community standards or convention; even mixed case is used. Some seven-segment displays use mixed-case 'A b C d E F' to distinguish the digits A–F from one another and from 0–9. There is some standardization of using spaces (rather than commas or another punctuation mark) to separate hex values in a long list. For instance, in the following hex dump, each 8-bit byte is a 2-digit hex number, with spaces between them, while the 32-bit offset at the start is an 8-digit hex number.
00000000 57 69 6b 69 70 65 64 69 61 2c 20 74 68 65 20 66
00000010 72 65 65 20 65 6e 63 79 63 6c 6f 70 65 64 69 61
00000020 20 74 68 61 74 20 61 6e 79 6f 6e 65 20 63 61 6e
00000030 20 65 64 69 74 0a
Distinguishing from decimal In contexts where the base is not clear, hexadecimal numbers can be ambiguous and confused with numbers expressed in other bases. There are several conventions for expressing values unambiguously. A numerical subscript (itself written in decimal) can give the base explicitly: 159₁₀ is decimal 159; 159₁₆ is hexadecimal 159, which equals 345₁₀. Some authors prefer a text subscript, such as 159decimal and 159hex, or 159d and 159h. Donald Knuth introduced the use of a particular typeface to represent a particular radix in his book The TeXbook, in which hexadecimal representations are written in a typewriter typeface. In linear text systems, such as those used in most computer programming environments, a variety of methods have arisen: Although best known from the C programming language (and the many languages influenced by C), the prefix 0x to indicate a hex constant may have had origins in the IBM Stretch systems. It is derived from the 0 prefix already in use for octal constants. Byte values can be expressed in hexadecimal with the prefix \x followed by two hex digits: '\x1B' represents the Esc control character; "\x1B[0m\x1B[25;1H" is a string containing 11 characters with two embedded Esc characters. To output an integer as hexadecimal with the printf function family, the format conversion code %X or %x is used.
In XML and XHTML, characters can be expressed as hexadecimal numeric character references using the notation &#xcode;, for instance &#x0054; represents the character U+0054 (the uppercase letter "T"). If there is no x, the number is decimal (thus &#0084; is the same character). In Intel-derived assembly languages and Modula-2, hexadecimal is denoted with a suffixed H or h: FFh or 05A3H. Some implementations require a leading zero when the first hexadecimal digit character is not a decimal digit, so one would write 0FFh instead of FFh. Some other implementations (such as NASM) allow C-style numbers (0x42). Other assembly languages (6502, Motorola), Pascal, Delphi, some versions of BASIC (Commodore), GameMaker Language, Godot and Forth use $ as a prefix: $5A3, $C1F27ED. Some assembly languages (Microchip) use the notation H'ABCD' (for ABCD₁₆). Similarly, Fortran 95 uses Z'ABCD'. Ada and VHDL enclose hexadecimal numerals in based "numeric quotes": 16#5A3#, 16#C1F27ED#. For bit vector constants VHDL uses the notation x"5A3", x"C1F27ED". Verilog represents hexadecimal constants in the form 8'hFF, where 8 is the number of bits in the value and FF is the hexadecimal constant. The Icon and Smalltalk languages use the prefix 16r: 16r5A3. PostScript and the Bourne shell and its derivatives denote hex with prefix 16#: 16#5A3, 16#C1F27ED. Common Lisp uses the prefixes #x and #16r. Setting the variables *read-base* and *print-base* to 16 can also be used to switch the reader and printer of a Common Lisp system to hexadecimal number representation for reading and printing numbers. Thus hexadecimal numbers can be represented without the #x or #16r prefix code, when the input or output base has been changed to 16. MSX BASIC, QuickBASIC, FreeBASIC and Visual Basic prefix hexadecimal numbers with &H: &H5A3. BBC BASIC and Locomotive BASIC use & for hex. TI-89 and 92 series uses a 0h prefix: 0h5A3, 0hC1F27ED. ALGOL 68 uses the prefix 16r to denote hexadecimal numbers: 16r5a3, 16rC1F27ED. Binary, quaternary (base-4), and octal numbers can be specified similarly. The most common format for hexadecimal on IBM mainframes (zSeries) and midrange computers (IBM i) running the traditional OSes (zOS, zVSE, zVM, TPF, IBM i) is X'5A3' or X'C1F27ED', and is used in Assembler, PL/I, COBOL, JCL, scripts, commands and other places. This format was common on other (and now obsolete) IBM systems as well. Occasionally quotation marks were used instead of apostrophes. Syntax that is always Hex Sometimes the numbers are known to be Hex. In URIs (including URLs), character codes are written as hexadecimal pairs prefixed with %: %20 is the code for the space (blank) character, ASCII code point 20 in hex, 32 in decimal. In the Unicode standard, a character value is represented with U+ followed by the hex value, e.g. U+00A1 is the inverted exclamation point (¡). Color references in HTML, CSS and X Window can be expressed with six hexadecimal digits (two each for the red, green and blue components, in that order) prefixed with #: magenta, for example, is represented as #FF00FF. CSS also allows 3-hexdigit abbreviations with one hexdigit per component: #FA3 abbreviates #FFAA33 (a golden orange). In MIME (e-mail extensions) quoted-printable encoding, character codes are written as hexadecimal pairs prefixed with =: Espa=F1a is "España" (F1 is the code for ñ in the ISO/IEC 8859-1 character set). PostScript binary data (such as image pixels) can be expressed as unprefixed consecutive hexadecimal pairs.
Any IPv6 address can be written as eight groups of four hexadecimal digits (sometimes called hextets), where each group is separated by a colon (). This, for example, is a valid IPv6 address: or abbreviated by removing leading zeros as (IPv4 addresses are usually written in decimal). Globally unique identifiers are written as thirty-two hexadecimal digits, often in unequal hyphen-separated groupings, for example . Other symbols for 10–15 and mostly different symbol sets The use of the letters A through F to represent the digits above 9 was not universal in the early history of computers. During the 1950s, some installations, such as Bendix-14, favored using the digits 0 through 5 with an overline to denote the values as , , , , and . The SWAC (1950) and Bendix G-15 (1956) computers used the lowercase letters u, v, w, x, y and z for the values 10 to 15. The ORDVAC and ILLIAC I (1952) computers (and some derived designs, e.g. BRLESC) used the uppercase letters K, S, N, J, F and L for the values 10 to 15. The Librascope LGP-30 (1956) used the letters F, G, J, K, Q and W for the values 10 to 15. On the PERM (1956) computer, hexadecimal numbers were written as letters O for zero, A to N and P for 1 to 15. Many machine instructions had mnemonic hex-codes (A=add, M=multiply, L=load, F=fixed-point etc.); programs were written without instruction names. The Honeywell Datamatic D-1000 (1957) used the lowercase letters b, c, d, e, f, and g whereas the Elbit 100 (1967) used the uppercase letters B, C, D, E, F and G for the values 10 to 15. The Monrobot XI (1960) used the letters S, T, U, V, W and X for the values 10 to 15. The NEC parametron computer NEAC 1103 (1960) used the letters D, G, H, J, K (and possibly V) for values 10–15. The Pacific Data Systems 1020 (1964) used the letters L, C, A, S, M and D for the values 10 to 15. New numeric symbols and names were introduced in the Bibi-binary notation by Boby Lapointe in 1968. Bruce Alan Martin of Brookhaven National Laboratory considered the choice of A–F "ridiculous". In a 1968 letter to the editor of the CACM, he proposed an entirely new set of symbols based on the bit locations. In 1972, Ronald O. Whitaker of Rowco Engineering Co. proposed a triangular font that allows "direct binary reading" to "permit both input and output from computers without respect to encoding matrices." Some seven-segment display decoder chips (i.e., 74LS47) show unexpected output due to logic designed only to produce 0–9 correctly. Verbal and digital representations Since there were no traditional numerals to represent the quantities from ten to fifteen, alphabetic letters were re-employed as a substitute. Most European languages lack non-decimal-based words for some of the numerals eleven to fifteen. Some people read hexadecimal numbers digit by digit, like a phone number, or using the NATO phonetic alphabet, the Joint Army/Navy Phonetic Alphabet, or a similar ad-hoc system. In the wake of the adoption of hexadecimal among IBM System/360 programmers, Magnuson (1968) suggested a pronunciation guide that gave short names to the letters of hexadecimal – for instance, "A" was pronounced "ann", B "bet", C "chris", etc. Another naming-system was published online by Rogers (2007) that tries to make the verbal representation distinguishable in any case, even when the actual number does not contain numbers A–F. Examples are listed in the tables below. Yet another naming system was elaborated by Babb (2015), based on a joke in Silicon Valley. 
The system proposed by Babb was further improved by Atkins-Bittner in 2015-2016. Others have proposed using the verbal Morse Code conventions to express four-bit hexadecimal digits, with "dit" and "dah" representing zero and one, respectively, so that "0000" is voiced as "dit-dit-dit-dit" (....), dah-dit-dit-dah (-..-) voices the digit with a value of nine, and "dah-dah-dah-dah" (----) voices the hexadecimal digit for decimal 15. Systems of counting on digits have been devised for both binary and hexadecimal. Arthur C. Clarke suggested using each finger as an on/off bit, allowing finger counting from zero to 102310 on ten fingers. Another system for counting up to FF16 (25510) is illustrated on the right. Signs The hexadecimal system can express negative numbers the same way as in decimal: −2A to represent −4210, −B01D9 to represent −72136910 and so on. Hexadecimal can also be used to express the exact bit patterns used in the processor, so a sequence of hexadecimal digits may represent a signed or even a floating-point value. This way, the negative number −4210 can be written as FFFF FFD6 in a 32-bit CPU register (in two's complement), as C228 0000 in a 32-bit FPU register or C045 0000 0000 0000 in a 64-bit FPU register (in the IEEE floating-point standard). Hexadecimal exponential notation Just as decimal numbers can be represented in exponential notation, so too can hexadecimal numbers. P notation uses the letter P (or p, for "power"), whereas E (or e) serves a similar purpose in decimal E notation. The number after the P is decimal and represents the binary exponent. Increasing the exponent by 1 multiplies by 2, not 16: . Usually, the number is normalized so that the hexadecimal digits start with (zero is usually with no P). Example: represents . P notation is required by the IEEE 754-2008 binary floating-point standard and can be used for floating-point literals in the C99 edition of the C programming language. Using the %a or %A conversion specifiers, this notation can be produced by implementations of the printf family of functions following the C99 specification and Single Unix Specification (IEEE Std 1003.1) POSIX standard. Conversion Binary conversion Most computers manipulate binary data, but it is difficult for humans to work with a large number of digits for even a relatively small binary number. Although most humans are familiar with the base 10 system, it is much easier to map binary to hexadecimal than to decimal because each hexadecimal digit maps to a whole number of bits (410). This example converts 11112 to base ten. Since each position in a binary numeral can contain either a 1 or a 0, its value may be easily determined by its position from the right: 00012 = 110 00102 = 210 01002 = 410 10002 = 810 Therefore: With little practice, mapping 11112 to F16 in one step becomes easy (see table in written representation). The advantage of using hexadecimal rather than decimal increases rapidly with the size of the number. When the number becomes large, conversion to decimal is very tedious. However, when mapping to hexadecimal, it is trivial to regard the binary string as 4-digit groups and map each to a single hexadecimal digit. This example shows the conversion of a binary number to decimal, mapping each digit to the decimal value, and adding the results. Compare this to the conversion to hexadecimal, where each group of four digits can be considered independently and converted directly: The conversion from hexadecimal to binary is equally direct. 
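To make the four-bit grouping concrete, here is a small JavaScript sketch in the same spirit as the conversion function shown later in this article. It is illustrative only: the names binaryToHex and hexToBinary are not from any standard library, and no input validation is attempted.
// Convert a binary string to hexadecimal by grouping bits in fours, as described above.
// The string is padded on the left so its length is a multiple of 4.
function binaryToHex(bits) {
  var padded = bits.padStart(Math.ceil(bits.length / 4) * 4, "0");
  var hex = "";
  for (var i = 0; i < padded.length; i += 4) {
    var nibble = padded.slice(i, i + 4);                   // e.g. "1111"
    hex += parseInt(nibble, 2).toString(16).toUpperCase(); // "1111" -> "F"
  }
  return hex;
}
// Hexadecimal to binary is equally direct: each hex digit expands to exactly four bits.
function hexToBinary(hex) {
  var bits = "";
  for (var i = 0; i < hex.length; i++) {
    bits += parseInt(hex[i], 16).toString(2).padStart(4, "0");
  }
  return bits;
}
// binaryToHex("10100101100") returns "52C"; hexToBinary("F") returns "1111".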
Other simple conversions Although quaternary (base 4) is little used, it can easily be converted to and from hexadecimal or binary. Each hexadecimal digit corresponds to a pair of quaternary digits, and each quaternary digit corresponds to a pair of binary digits. In the above example 2 5 C16 = 02 11 304. The octal (base 8) system can also be converted with relative ease, although not quite as trivially as with bases 2 and 4. Each octal digit corresponds to three binary digits, rather than four. Therefore, we can convert between octal and hexadecimal via an intermediate conversion to binary followed by regrouping the binary digits in groups of either three or four. Division-remainder in source base As with all bases there is a simple algorithm for converting a representation of a number to hexadecimal by doing integer division and remainder operations in the source base. In theory, this is possible from any base, but for most humans, only decimal and for most computers, only binary (which can be converted by far more efficient methods) can be easily handled with this method. Let d be the number to represent in hexadecimal, and the series hihi−1...h2h1 be the hexadecimal digits representing the number. i ← 1 hi ← d mod 16 d ← (d − hi) / 16 If d = 0 (return series hi) else increment i and go to step 2 "16" may be replaced with any other base that may be desired. The following is a JavaScript implementation of the above algorithm for converting any number to a hexadecimal in String representation. Its purpose is to illustrate the above algorithm. To work with data seriously, however, it is much more advisable to work with bitwise operators. function toHex(d) { var r = d % 16; if (d - r == 0) { return toChar(r); } return toHex((d - r) / 16) + toChar(r); } function toChar(n) { const alpha = "0123456789ABCDEF"; return alpha.charAt(n); } Conversion through addition and multiplication It is also possible to make the conversion by assigning each place in the source base the hexadecimal representation of its place value — before carrying out multiplication and addition to get the final representation. For example, to convert the number B3AD to decimal, one can split the hexadecimal number into its digits: B (1110), 3 (310), A (1010) and D (1310), and then get the final result by multiplying each decimal representation by 16p (p being the corresponding hex digit position, counting from right to left, beginning with 0). In this case, we have that: which is 45997 in base 10. Tools for conversion Many computer systems provide a calculator utility capable of performing conversions between the various radices frequently including hexadecimal. In Microsoft Windows, the Calculator utility can be set to Programmer mode, which allows conversions between radix 16 (hexadecimal), 10 (decimal), 8 (octal), and 2 (binary), the bases most commonly used by programmers. In Programmer Mode, the on-screen numeric keypad includes the hexadecimal digits A through F, which are active when "Hex" is selected. In hex mode, however, the Windows Calculator supports only integers. Elementary arithmetic Elementary operations such as division can be carried out indirectly through conversion to an alternate numeral system, such as the commonly used decimal system or the binary system where each hex digit corresponds to four binary digits. 
Alternatively, one can also perform elementary operations directly within the hex system itself — by relying on its addition/multiplication tables and its corresponding standard algorithms such as long division and the traditional subtraction algorithm. Real numbers Rational numbers As with other numeral systems, the hexadecimal system can be used to represent rational numbers, although repeating expansions are common since sixteen (1016) has only a single prime factor: two. For any base, 0.1 (or "1/10") is always equivalent to one divided by the representation of that base value in its own number system. Thus, whether dividing one by two for binary or dividing one by sixteen for hexadecimal, both of these fractions are written as 0.1. Because the radix 16 is a perfect square (42), fractions expressed in hexadecimal have an odd period much more often than decimal ones, and there are no cyclic numbers (other than trivial single digits). Recurring digits are exhibited when the denominator in lowest terms has a prime factor not found in the radix; thus, when using hexadecimal notation, all fractions with denominators that are not a power of two result in an infinite string of recurring digits (such as thirds and fifths). This makes hexadecimal (and binary) less convenient than decimal for representing rational numbers since a larger proportion lies outside its range of finite representation. All rational numbers finitely representable in hexadecimal are also finitely representable in decimal, duodecimal and sexagesimal: that is, any hexadecimal number with a finite number of digits also has a finite number of digits when expressed in those other bases. Conversely, only a fraction of those finitely representable in the latter bases are finitely representable in hexadecimal. For example, decimal 0.1 corresponds to the infinite recurring representation 0.1 in hexadecimal. However, hexadecimal is more efficient than duodecimal and sexagesimal for representing fractions with powers of two in the denominator. For example, 0.062510 (one-sixteenth) is equivalent to 0.116, 0.0912, and 0;3,4560. Irrational numbers The table below gives the expansions of some common irrational numbers in decimal and hexadecimal. Powers Powers of two have very simple expansions in hexadecimal. The first sixteen powers of two are shown below. Cultural history The traditional Chinese units of measurement were base-16. For example, one jīn (斤) in the old system equals sixteen taels. The suanpan (Chinese abacus) can be used to perform hexadecimal calculations such as additions and subtractions. As with the duodecimal system, there have been occasional attempts to promote hexadecimal as the preferred numeral system. These attempts often propose specific pronunciation and symbols for the individual numerals. Some proposals unify standard measures so that they are multiples of 16. An early such proposal was put forward by John W. Nystrom in Project of a New System of Arithmetic, Weight, Measure and Coins: Proposed to be called the Tonal System, with Sixteen to the Base, published in 1862. Nystrom among other things suggested hexadecimal time, which subdivides a day by 16, so that there are 16 "hours" (or "10 tims", pronounced tontim) in a day. The word hexadecimal is first recorded in 1952. It is macaronic in the sense that it combines Greek ἕξ (hex) "six" with Latinate -decimal. 
The all-Latin alternative sexadecimal (compare the word sexagesimal for base 60) is older, and sees at least occasional use from the late 19th century. It is still in use in the 1950s in Bendix documentation. Schwartzman (1994) argues that use of sexadecimal may have been avoided because of its suggestive abbreviation to sex. Many western languages since the 1960s have adopted terms equivalent in formation to hexadecimal (e.g. French hexadécimal, Italian esadecimale, Romanian hexazecimal, Serbian хексадецимални, etc.) but others have introduced terms which substitute native words for "sixteen" (e.g. Greek δεκαεξαδικός, Icelandic sextándakerfi, Russian шестнадцатеричной etc.) Terminology and notation did not become settled until the end of the 1960s. In 1969, Donald Knuth argued that the etymologically correct term would be senidenary, or possibly sedenary, a Latinate term intended to convey "grouped by 16" modelled on binary, ternary, quaternary, etc. According to Knuth's argument, the correct terms for decimal and octal arithmetic would be denary and octonary, respectively. Alfred B. Taylor used senidenary in his mid-1800s work on alternative number bases, although he rejected base 16 because of its "incommodious number of digits". The now-current notation using the letters A to F establishes itself as the de facto standard beginning in 1966, in the wake of the publication of the Fortran IV manual for IBM System/360, which (unlike earlier variants of Fortran) recognizes a standard for entering hexadecimal constants. As noted above, alternative notations were used by NEC (1960) and The Pacific Data Systems 1020 (1964). The standard adopted by IBM seems to have become widely adopted by 1968, when Bruce Alan Martin in his letter to the editor of the CACM complains that Martin's argument was that use of numerals 0 to 9 in nondecimal numbers "imply to us a base-ten place-value scheme": "Why not use entirely new symbols (and names) for the seven or fifteen nonzero digits needed in octal or hex. Even use of the letters A through P would be an improvement, but entirely new symbols could reflect the binary nature of the system". He also argued that "re-using alphabetic letters for numerical digits represents a gigantic backward step from the invention of distinct, non-alphabetic glyphs for numerals sixteen centuries ago" (as Brahmi numerals, and later in a Hindu–Arabic numeral system), and that the recent ASCII standards (ASA X3.4-1963 and USAS X3.4-1968) "should have preserved six code table positions following the ten decimal digits -- rather than needlessly filling these with punctuation characters" (":;<=>?") that might have been placed elsewhere among the 128 available positions. Base16 (transfer encoding) Base16 (as a proper name without a space) can also refer to a binary to text encoding belonging to the same family as Base32, Base58, and Base64. In this case, data is broken into 4-bit sequences, and each value (between 0 and 15 inclusively) is encoded using one of 16 symbols from the ASCII character set. Although any 16 symbols from the ASCII character set can be used, in practice, the ASCII digits "0"–"9" and the letters "A"–"F" (or the lowercase "a"–"f") are always chosen in order to align with standard written notation for hexadecimal numbers. 
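As a rough sketch of Base16 as a transfer encoding, the following JavaScript encodes an array of byte values into a hex string and decodes it back, splitting each byte into two 4-bit halves as described above. The helper names base16Encode and base16Decode are illustrative assumptions rather than any standard API; the example bytes happen to be the same ones that begin the hex dump shown earlier in this article.
// Encode an array of byte values (0-255) as a Base16 string: two hex digits per byte, high nibble first.
function base16Encode(bytes) {
  var out = "";
  for (var i = 0; i < bytes.length; i++) {
    out += bytes[i].toString(16).padStart(2, "0").toUpperCase();
  }
  return out;
}
// Decode a Base16 string back into an array of byte values.
function base16Decode(text) {
  var bytes = [];
  for (var i = 0; i < text.length; i += 2) {
    bytes.push(parseInt(text.slice(i, i + 2), 16));
  }
  return bytes;
}
// base16Encode([87, 105, 107, 105]) returns "57696B69" (the bytes of "Wiki");
// base16Decode("57696B69") returns [87, 105, 107, 105].
Note the 50% space efficiency mentioned below: four bytes of input become eight characters of output.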
There are several advantages of Base16 encoding: Most programming languages already have facilities to parse ASCII-encoded hexadecimal Being exactly half a byte, 4-bits is easier to process than the 5 or 6 bits of Base32 and Base64 respectively The symbols 0–9 and A–F are universal in hexadecimal notation, so it is easily understood at a glance without needing to rely on a symbol lookup table. Many CPU architectures have dedicated instructions that allow access to a half-byte (otherwise known as a "nibble"), making it more efficient in hardware than Base32 and Base64 The main disadvantages of Base16 encoding are: Space efficiency is only 50%, since each 4-bit value from the original data will be encoded as an 8-bit byte. In contrast, Base32 and Base64 encodings have a space efficiency of 63% and 75% respectively. Possible added complexity of having to accept both uppercase and lowercase letters Support for Base16 encoding is ubiquitous in modern computing. It is the basis for the W3C standard for URL percent encoding, where a character is replaced with a percent sign "%" and its Base16-encoded form. Most modern programming languages directly include support for formatting and parsing Base16-encoded numbers. See also Base32, Base64 (content encoding schemes) Hexadecimal time IBM hexadecimal floating-point Hex editor Hex dump Bailey–Borwein–Plouffe formula (BBP) Hexspeak P notation References Binary arithmetic Hexadecimal numeral system Power-of-two numeral systems Positional numeral systems
Hexadecimal
[ "Mathematics" ]
6,386
[ "Numeral systems", "Arithmetic", "Binary arithmetic", "Positional numeral systems" ]
13,266
https://en.wikipedia.org/wiki/Histogram
A histogram is a visual representation of the distribution of quantitative data. To construct a histogram, the first step is to "bin" (or "bucket") the range of values— divide the entire range of values into a series of intervals—and then count how many values fall into each interval. The bins are usually specified as consecutive, non-overlapping intervals of a variable. The bins (intervals) are adjacent and are typically (but not required to be) of equal size. Histograms give a rough sense of the density of the underlying distribution of the data and are often used for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the lengths of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot. Histograms are sometimes confused with bar charts. In a histogram, each bin is for a different range of values, so altogether the histogram illustrates the distribution of values. But in a bar chart, each bar is for a different category of observations (e.g., each bar might be for a different population), so altogether the bar chart can be used to compare different categories. Some authors recommend that bar charts always have gaps between the bars to clarify that they are not histograms. Etymology The term "histogram" was first introduced by Karl Pearson, the founder of mathematical statistics, in lectures delivered in 1892 at University College London. Pearson's term is sometimes incorrectly said to combine the Greek root γραμμα (gramma) = "figure" or "drawing" with the root ἱστορία (historia) = "inquiry" or "history". Alternatively the root ἱστίον (histion) is also proposed, meaning "web" or "tissue" (as in histology, the study of biological tissue). Both of these etymologies are incorrect, and in fact Pearson, who knew Ancient Greek well, derived the term from a different if homophonous Greek root, ἱστός = "something set upright", referring to the vertical bars in the graph. Pearson's new term was embedded in a series of other analogous neologisms, such as "stigmogram" and "radiogram". Pearson himself noted in 1895 that although the term "histogram" was new, the type of graph it designates was "a common form of graphical representation". In fact the technique of using a bar graph to represent statistical measurements was devised by the Scottish economist, William Playfair, in his Commercial and political atlas (1786). Examples The words used to describe the patterns in a histogram are: "symmetric", "skewed left" or "right", "unimodal", "bimodal" or "multimodal". It is a good idea to plot the data using several different bin widths to learn more about it. The U.S. Census Bureau found that there were 124 million people who work outside of their homes. Using their data on the time occupied by travel to work, the table below shows that the absolute number of people who reported travel times of "at least 30 but less than 35 minutes" is higher than the numbers for the categories above and below it. This is likely due to people rounding their reported journey time. The problem of reporting values as somewhat arbitrarily rounded numbers is a common phenomenon when collecting data from people. {| class="wikitable" style="text-align:center" |+Data by absolute numbers |- ! Interval !! Width !! Quantity !!
Quantity/width |- | 0 || 5 || 4180 || 836 |- | 5 || 5 || 13687 || 2737 |- | 10 || 5 || 18618 || 3723 |- | 15 || 5 || 19634 || 3926 |- | 20 || 5 || 17981 || 3596 |- | 25 || 5 || 7190 || 1438 |- | 30 || 5 || 16369 || 3273 |- | 35 || 5 || 3212 || 642 |- | 40 || 5 || 4122 || 824 |- | 45 || 15 || 9200 || 613 |- | 60 || 30 || 6461 || 215 |- | 90 || 60 || 3435 || 57 |} This histogram shows the number of cases per unit interval as the height of each block, so that the area of each block is equal to the number of people in the survey who fall into its category. The area under the curve represents the total number of cases (124 million). This type of histogram shows absolute numbers, with Q in thousands. {| class="wikitable" style="text-align:center" |+Data by proportion |- ! Interval !! Width !! Quantity (Q) !! Q/total/width |- | 0 || 5 || 4180 || 0.0067 |- | 5 || 5 || 13687 || 0.0221 |- | 10 || 5 || 18618 || 0.0300 |- | 15 || 5 || 19634 || 0.0316 |- | 20 || 5 || 17981 || 0.0290 |- | 25 || 5 || 7190 || 0.0116 |- | 30 || 5 || 16369 || 0.0264 |- | 35 || 5 || 3212 || 0.0052 |- | 40 || 5 || 4122 || 0.0066 |- | 45 || 15 || 9200 || 0.0049 |- | 60 || 30 || 6461 || 0.0017 |- | 90 || 60 || 3435 || 0.0005 |} This histogram differs from the first only in the vertical scale. The area of each block is the fraction of the total that each category represents, and the total area of all the bars is equal to 1 (the fraction meaning "all"). The curve displayed is a simple density estimate. This version shows proportions, and is also known as a unit area histogram. In other words, a histogram represents a frequency distribution by means of rectangles whose widths represent class intervals and whose areas are proportional to the corresponding frequencies: the height of each is the average frequency density for the interval. The intervals are placed together in order to show that the data represented by the histogram, while exclusive, is also contiguous. (E.g., in a histogram it is possible to have two connecting intervals of 10.5–20.5 and 20.5–33.5, but not two connecting intervals of 10.5–20.5 and 22.5–32.5. Empty intervals are represented as empty and not skipped.) Mathematical definitions The data used to construct a histogram are generated via a function mi that counts the number of observations that fall into each of the disjoint categories (known as bins). Thus, if we let n be the total number of observations and k be the total number of bins, the histogram data mi meet the following conditions: A histogram can be thought of as a simplistic kernel density estimation, which uses a kernel to smooth frequencies over the bins. This yields a smoother probability density function, which will in general more accurately reflect distribution of the underlying variable. The density estimate could be plotted as an alternative to the histogram, and is usually drawn as a curve rather than a set of boxes. Histograms are nevertheless preferred in applications, when their statistical properties need to be modeled. The correlated variation of a kernel density estimate is very difficult to describe mathematically, while it is simple for a histogram where each bin varies independently. An alternative to kernel density estimation is the average shifted histogram, which is fast to compute and gives a smooth curve estimate of the density without using kernels. Cumulative histogram A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. 
That is, the cumulative histogram Mᵢ of a histogram mⱼ is defined as: Mᵢ = m₁ + m₂ + ⋯ + mᵢ (the sum of the counts in bins 1 through i). Number of bins and width There is no "best" number of bins, and different bin sizes can reveal different features of the data. Grouping data is at least as old as Graunt's work in the 17th century, but no systematic guidelines were given until Sturges's work in 1926. Using wider bins where the density of the underlying data points is low reduces noise due to sampling randomness; using narrower bins where the density is high (so the signal drowns the noise) gives greater precision to the density estimation. Thus varying the bin-width within a histogram can be beneficial. Nonetheless, equal-width bins are widely used. Some theoreticians have attempted to determine an optimal number of bins, but these methods generally make strong assumptions about the shape of the distribution. Depending on the actual data distribution and the goals of the analysis, different bin widths may be appropriate, so experimentation is usually needed to determine an appropriate width. There are, however, various useful guidelines and rules of thumb. The number of bins k can be assigned directly or can be calculated from a suggested bin width h as: k = ⌈(max(x) − min(x)) / h⌉. The braces indicate the ceiling function. Square-root choice: k = ⌈√n⌉, which takes the square root of the number of data points in the sample and rounds to the next integer. This rule is suggested by a number of elementary statistics textbooks and widely implemented in many software packages. Sturges's formula: k = ⌈log₂ n⌉ + 1. Sturges's rule is derived from a binomial distribution and implicitly assumes an approximately normal distribution. Sturges's formula implicitly bases bin sizes on the range of the data, and can perform poorly if n < 30, because the number of bins will be small—less than seven—and unlikely to show trends in the data well. On the other extreme, Sturges's formula may overestimate bin width for very large datasets, resulting in oversmoothed histograms. It may also perform poorly if the data are not normally distributed. When compared to Scott's rule and the Terrell-Scott rule, two other widely accepted formulas for histogram bins, the output of Sturges's formula is closest when n ≈ 100. Rice rule: k = ⌈2·n^(1/3)⌉. The Rice rule is presented as a simple alternative to Sturges's rule. Doane's formula: k = 1 + log₂ n + log₂(1 + |g₁| / σ_g₁), a modification of Sturges's formula which attempts to improve its performance with non-normal data, where g₁ is the estimated 3rd-moment-skewness of the distribution and σ_g₁ = √(6(n − 2) / ((n + 1)(n + 3))). Scott's normal reference rule: Bin width is given by h = 3.5·σ̂·n^(−1/3), where σ̂ is the sample standard deviation. Scott's normal reference rule is optimal for random samples of normally distributed data, in the sense that it minimizes the integrated mean squared error of the density estimate. This is the default rule used in Microsoft Excel. Terrell–Scott rule: k = ⌈(2n)^(1/3)⌉. The Terrell–Scott rule is not a normal reference rule. It gives the minimum number of bins required for an asymptotically optimal histogram, where optimality is measured by the integrated mean squared error. The bound is derived by finding the 'smoothest' possible density, which turns out to be f(x) = (3/4)(1 − x²) on [−1, 1] and zero elsewhere. Any other density will require more bins, hence the above estimate is also referred to as the 'oversmoothed' rule. The similarity of the formulas and the fact that Terrell and Scott were at Rice University when they proposed it suggests that this is also the origin of the Rice rule. Freedman–Diaconis rule: The Freedman–Diaconis rule gives bin width as h = 2·IQR·n^(−1/3), which is based on the interquartile range, denoted by IQR. A short code sketch of several of these rules follows.
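As a rough illustration of how some of these rules are applied, the following JavaScript sketch computes suggested bin counts by the square-root, Sturges, Rice and Freedman–Diaconis choices, using the formulas given above. The function name suggestedBins and the simple nearest-rank quartile computation are illustrative assumptions, not part of any standard library.
// Suggested number of bins for a numeric sample, by several of the rules described above.
// data: an array of numbers. Returns an object mapping rule name to bin count.
function suggestedBins(data) {
  var n = data.length;
  var sorted = data.slice().sort(function (a, b) { return a - b; });
  var range = sorted[n - 1] - sorted[0];
  // Interquartile range from the 25th and 75th percentiles (simple nearest-rank form);
  // assumes the IQR is non-zero.
  var q1 = sorted[Math.floor(n * 0.25)];
  var q3 = sorted[Math.floor(n * 0.75)];
  var fdWidth = 2 * (q3 - q1) * Math.pow(n, -1 / 3);   // Freedman-Diaconis bin width h
  return {
    squareRoot: Math.ceil(Math.sqrt(n)),               // k = ceil(sqrt(n))
    sturges: Math.ceil(Math.log2(n)) + 1,              // k = ceil(log2 n) + 1
    rice: Math.ceil(2 * Math.cbrt(n)),                 // k = ceil(2 n^(1/3))
    freedmanDiaconis: Math.ceil(range / fdWidth)       // k = ceil(range / h)
  };
}
For 500 data points, for example, the Sturges and Rice choices give 10 and 16 bins respectively; they depend only on the sample size, while the Freedman–Diaconis count also depends on the spread of the data.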
The Freedman–Diaconis rule replaces the 3.5σ of Scott's rule with 2 IQR, which is less sensitive than the standard deviation to outliers in data. Minimizing cross-validation estimated squared error This approach of minimizing integrated mean squared error from Scott's rule can be generalized beyond normal distributions, by using leave-one-out cross-validation: J(h) = 2 / ((n − 1)h) − ((n + 1) / (n²(n − 1)h)) · Σₖ Nₖ². Here, Nₖ is the number of datapoints in the kth bin, and choosing the value of h that minimizes J will minimize integrated mean squared error. Shimazaki and Shinomoto's choice The choice is based on minimization of an estimated L2 risk function C(h) = (2·m̄ − v) / h², where m̄ and v are the mean and biased variance of a histogram with bin-width h: m̄ = (1/k)·Σᵢ mᵢ and v = (1/k)·Σᵢ (mᵢ − m̄)². Variable bin widths Rather than choosing evenly spaced bins, for some applications it is preferable to vary the bin width. This avoids bins with low counts. A common case is to choose equiprobable bins, where the number of samples in each bin is expected to be approximately equal. The bins may be chosen according to some known distribution or may be chosen based on the data so that each bin has approximately n/k samples. When plotting the histogram, the frequency density is used for the dependent axis. While all bins have approximately equal area, the heights of the histogram approximate the density distribution. For equiprobable bins, the following rule for the number of bins is suggested: k = 2·n^(2/5). This choice of bins is motivated by maximizing the power of a Pearson chi-squared test testing whether the bins do contain equal numbers of samples. More specifically, for a given confidence interval it is recommended to choose between 1/2 and 1 times an optimal value that depends on the probit function; the coefficient of 2 in the rule above is chosen as an easy-to-remember value from this broad optimum. Remark A good reason why the number of bins should be proportional to the cube root of n is the following: suppose that the data are obtained as independent realizations of a bounded probability distribution with smooth density. Then the histogram remains equally "rugged" as n tends to infinity. If s is the "width" of the distribution (e.g., the standard deviation or the inter-quartile range), then the number of units in a bin (the frequency) is of order n·h/s and the relative standard error is of order √(s / (n·h)). Compared to the next bin, the relative change of the frequency is of order h/s provided that the derivative of the density is non-zero. These two are of the same order if h is of order s/n^(1/3), so that k is of order n^(1/3). This simple cubic root choice can also be applied to bins with non-constant widths. Applications In hydrology the histogram and estimated density function of rainfall and river discharge data, analysed with a probability distribution, are used to gain insight into their behaviour and frequency of occurrence. Many digital image processing programs include a histogram tool, which shows the distribution of the contrast and brightness of the pixels. See also Data and information visualization Data binning Density estimation Kernel density estimation, a smoother but more complex method of density estimation Entropy estimation Freedman–Diaconis rule Image histogram Pareto chart Seven basic tools of quality V-optimal histograms References Further reading Lancaster, H.O. An Introduction to Medical Statistics. John Wiley and Sons. 1974.
External links Exploring Histograms, an essay by Aran Lunzer and Amelia McNamara Journey To Work and Place Of Work (location of census document cited in example) Smooth histogram for signals and images from a few samples Histograms: Construction, Analysis and Understanding with external links and an application to particle physics. A Method for Selecting the Bin Size of a Histogram Histograms: Theory and Practice, some great illustrations of some of the bin width concepts derived above. Matlab function to plot nice histograms Dynamic Histogram in MS Excel Histogram construction and manipulation using Java applets, and charts on SOCR Toolbox for constructing the best histograms Statistical charts and diagrams Quality control tools Estimation of densities Nonparametric statistics Frequency distribution
Histogram
[ "Mathematics" ]
3,443
[ "Mathematical objects", "Functions and mappings", "Mathematical relations", "Frequency distribution" ]
13,311
https://en.wikipedia.org/wiki/Hormone
A hormone (from the Greek participle , "setting in motion") is a class of signaling molecules in multicellular organisms that are sent to distant organs or tissues by complex biological processes to regulate physiology and behavior. Hormones are required for the correct development of animals, plants and fungi. Due to the broad definition of a hormone (as a signaling molecule that exerts its effects far from its site of production), numerous kinds of molecules can be classified as hormones. Among the substances that can be considered hormones, are eicosanoids (e.g. prostaglandins and thromboxanes), steroids (e.g. oestrogen and brassinosteroid), amino acid derivatives (e.g. epinephrine and auxin), protein or peptides (e.g. insulin and CLE peptides), and gases (e.g. ethylene and nitric oxide). Hormones are used to communicate between organs and tissues. In vertebrates, hormones are responsible for regulating a wide range of processes including both physiological processes and behavioral activities such as digestion, metabolism, respiration, sensory perception, sleep, excretion, lactation, stress induction, growth and development, movement, reproduction, and mood manipulation. In plants, hormones modulate almost all aspects of development, from germination to senescence. Hormones affect distant cells by binding to specific receptor proteins in the target cell, resulting in a change in cell function. When a hormone binds to the receptor, it results in the activation of a signal transduction pathway that typically activates gene transcription, resulting in increased expression of target proteins. Hormones can also act in non-genomic pathways that synergize with genomic effects. Water-soluble hormones (such as peptides and amines) generally act on the surface of target cells via second messengers. Lipid soluble hormones, (such as steroids) generally pass through the plasma membranes of target cells (both cytoplasmic and nuclear) to act within their nuclei. Brassinosteroids, a type of polyhydroxysteroids, are a sixth class of plant hormones and may be useful as an anticancer drug for endocrine-responsive tumors to cause apoptosis and limit plant growth. Despite being lipid soluble, they nevertheless attach to their receptor at the cell surface. In vertebrates, endocrine glands are specialized organs that secrete hormones into the endocrine signaling system. Hormone secretion occurs in response to specific biochemical signals and is often subject to negative feedback regulation. For instance, high blood sugar (serum glucose concentration) promotes insulin synthesis. Insulin then acts to reduce glucose levels and maintain homeostasis, leading to reduced insulin levels. Upon secretion, water-soluble hormones are readily transported through the circulatory system. Lipid-soluble hormones must bond to carrier plasma glycoproteins (e.g., thyroxine-binding globulin (TBG)) to form ligand-protein complexes. Some hormones, such as insulin and growth hormones, can be released into the bloodstream already fully active. Other hormones, called prohormones, must be activated in certain cells through a series of steps that are usually tightly controlled. The endocrine system secretes hormones directly into the bloodstream, typically via fenestrated capillaries, whereas the exocrine system secretes its hormones indirectly using ducts. Hormones with paracrine function diffuse through the interstitial spaces to nearby target tissue. 
Plants lack specialized organs for the secretion of hormones, although there is spatial distribution of hormone production. For example, the hormone auxin is produced mainly at the tips of young leaves and in the shoot apical meristem. The lack of specialised glands means that the main site of hormone production can change throughout the life of a plant, and the site of production is dependent on the plant's age and environment. Introduction and overview Hormone producing cells are found in the endocrine glands, such as the thyroid gland, ovaries, and testes. Hormonal signaling involves the following steps: Biosynthesis of a particular hormone in a particular tissue. Storage and secretion of the hormone. Transport of the hormone to the target cell(s). Recognition of the hormone by an associated cell membrane or intracellular receptor protein. Relay and amplification of the received hormonal signal via a signal transduction process: This then leads to a cellular response. The reaction of the target cells may then be recognized by the original hormone-producing cells, leading to a downregulation in hormone production. This is an example of a homeostatic negative feedback loop. Breakdown of the hormone. Exocytosis and other methods of membrane transport are used to secrete hormones when the endocrine glands are signaled. The hierarchical model is an oversimplification of the hormonal signaling process. Cellular recipients of a particular hormonal signal may be one of several cell types that reside within a number of different tissues, as is the case for insulin, which triggers a diverse range of systemic physiological effects. Different tissue types may also respond differently to the same hormonal signal. Discovery Arnold Adolph Berthold (1849) Arnold Adolph Berthold was a German physiologist and zoologist, who, in 1849, had a question about the function of the testes. He noticed in castrated roosters that they did not have the same sexual behaviors as roosters with their testes intact. He decided to run an experiment on male roosters to examine this phenomenon. He kept a group of roosters with their testes intact, and saw that they had normal sized wattles and combs (secondary sexual organs), a normal crow, and normal sexual and aggressive behaviors. He also had a group with their testes surgically removed, and noticed that their secondary sexual organs were decreased in size, had a weak crow, did not have sexual attraction towards females, and were not aggressive. He realized that this organ was essential for these behaviors, but he did not know how. To test this further, he removed one testis and placed it in the abdominal cavity. The roosters acted and had normal physical anatomy. He was able to see that location of the testes does not matter. He then wanted to see if it was a genetic factor that was involved in the testes that provided these functions. He transplanted a testis from another rooster to a rooster with one testis removed, and saw that they had normal behavior and physical anatomy as well. Berthold determined that the location or genetic factors of the testes do not matter in relation to sexual organs and behaviors, but that some chemical in the testes being secreted is causing this phenomenon. It was later identified that this factor was the hormone testosterone. Charles and Francis Darwin (1880) Although known primarily for his work on the Theory of Evolution, Charles Darwin was also keenly interested in plants. 
Through the 1870s, he and his son Francis studied the movement of plants towards light. They were able to show that light is perceived at the tip of a young stem (the coleoptile), whereas the bending occurs lower down the stem. They proposed that a 'transmissible substance' communicated the direction of light from the tip down to the stem. The idea of a 'transmissible substance' was initially dismissed by other plant biologists, but their work later led to the discovery of the first plant hormone. In the 1920s Dutch scientist Frits Warmolt Went and Russian scientist Nikolai Cholodny (working independently of each other) conclusively showed that asymmetric accumulation of a growth hormone was responsible for this bending. In 1933 this hormone was finally isolated by Kögl, Haagen-Smit and Erxleben and given the name 'auxin'. Oliver and Schäfer (1894) British physician George Oliver and physiologist Edward Albert Schäfer, professor at University College London, collaborated on the physiological effects of adrenal extracts. They first published their findings in two reports in 1894, a full publication followed in 1895. Though frequently falsely attributed to secretin, found in 1902 by Bayliss and Starling, Oliver and Schäfer's adrenal extract containing adrenaline, the substance causing the physiological changes, was the first hormone to be discovered. The term hormone would later be coined by Starling. Bayliss and Starling (1902) William Bayliss and Ernest Starling, a physiologist and biologist, respectively, wanted to see if the nervous system had an impact on the digestive system. They knew that the pancreas was involved in the secretion of digestive fluids after the passage of food from the stomach to the intestines, which they believed to be due to the nervous system. They cut the nerves to the pancreas in an animal model and discovered that it was not nerve impulses that controlled secretion from the pancreas. It was determined that a factor secreted from the intestines into the bloodstream was stimulating the pancreas to secrete digestive fluids. This was named secretin: a hormone. Types of signaling Hormonal effects are dependent on where they are released, as they can be released in different manners. Not all hormones are released from a cell and into the blood until it binds to a receptor on a target. The major types of hormone signaling are: Chemical classes As hormones are defined functionally, not structurally, they may have diverse chemical structures. Hormones occur in multicellular organisms (plants, animals, fungi, brown algae, and red algae). These compounds occur also in unicellular organisms, and may act as signaling molecules however there is no agreement that these molecules can be called hormones. Vertebrates Invertebrates Compared with vertebrates, insects and crustaceans possess a number of structurally unusual hormones such as the juvenile hormone, a sesquiterpenoid. Plants Examples include abscisic acid, auxin, cytokinin, ethylene, and gibberellin. Receptors Most hormones initiate a cellular response by initially binding to either cell surface receptors or intracellular receptors. A cell may have several different receptors that recognize the same hormone but activate different signal transduction pathways, or a cell may have several different receptors that recognize different hormones and activate the same biochemical pathway. 
Receptors for most peptide as well as many eicosanoid hormones are embedded in the cell membrane as cell surface receptors, and the majority of these belong to the G protein-coupled receptor (GPCR) class of seven alpha helix transmembrane proteins. The interaction of hormone and receptor typically triggers a cascade of secondary effects within the cytoplasm of the cell, described as signal transduction, often involving phosphorylation or dephosphorylation of various other cytoplasmic proteins, changes in ion channel permeability, or increased concentrations of intracellular molecules that may act as secondary messengers (e.g., cyclic AMP). Some protein hormones also interact with intracellular receptors located in the cytoplasm or nucleus by an intracrine mechanism. For steroid or thyroid hormones, their receptors are located inside the cell within the cytoplasm of the target cell. These receptors belong to the nuclear receptor family of ligand-activated transcription factors. To bind their receptors, these hormones must first cross the cell membrane. They can do so because they are lipid-soluble. The combined hormone-receptor complex then moves across the nuclear membrane into the nucleus of the cell, where it binds to specific DNA sequences, regulating the expression of certain genes, and thereby increasing the levels of the proteins encoded by these genes. However, it has been shown that not all steroid receptors are located inside the cell. Some are associated with the plasma membrane. Effects in humans Hormones have the following effects on the body: stimulation or inhibition of growth wake-sleep cycle and other circadian rhythms mood swings induction or suppression of apoptosis (programmed cell death) activation or inhibition of the immune system regulation of metabolism preparation of the body for mating, fighting, fleeing, and other activity preparation of the body for a new phase of life, such as puberty, parenting, and menopause control of the reproductive cycle hunger cravings A hormone may also regulate the production and release of other hormones. Hormone signals control the internal environment of the body through homeostasis. Regulation The rate of hormone biosynthesis and secretion is often regulated by a homeostatic negative feedback control mechanism. Such a mechanism depends on factors that influence the metabolism and excretion of hormones. Thus, higher hormone concentration alone cannot trigger the negative feedback mechanism. Negative feedback must be triggered by overproduction of an "effect" of the hormone. Hormone secretion can be stimulated and inhibited by: Other hormones (stimulating- or releasing -hormones) Plasma concentrations of ions or nutrients, as well as binding globulins Neurons and mental activity Environmental changes, e.g., of light or temperature One special group of hormones is the tropic hormones that stimulate the hormone production of other endocrine glands. For example, thyroid-stimulating hormone (TSH) causes growth and increased activity of another endocrine gland, the thyroid, which increases output of thyroid hormones. To release active hormones quickly into the circulation, hormone biosynthetic cells may produce and store biologically inactive hormones in the form of pre- or prohormones. These can then be quickly converted into their active hormone form in response to a particular stimulus. Eicosanoids are considered to act as local hormones. 
They are considered to be "local" because they possess specific effects on target cells close to their site of formation. They also have a rapid degradation cycle, ensuring that they do not reach distant sites within the body. Hormones are also regulated by receptor agonists. Hormones are ligands: molecules that produce a signal by binding to a receptor site on a protein. Hormone effects can be inhibited, and thus regulated, by competing ligands that bind to the same target receptor as the hormone in question. When a competing ligand is bound to the receptor site, the hormone is unable to bind to that site and is unable to elicit a response from the target cell. These competing ligands are called antagonists of the hormone; a minimal numerical illustration of this competition for a receptor is sketched below, after the discussion of hormone-behavior interactions. Therapeutic use Many hormones and their structural and functional analogs are used as medication. The most commonly prescribed hormones are estrogens and progestogens (as methods of hormonal contraception and as HRT), thyroxine (as levothyroxine, for hypothyroidism) and steroids (for autoimmune diseases and several respiratory disorders). Insulin is used by many diabetics. Local preparations for use in otolaryngology often contain pharmacologic equivalents of adrenaline, while steroid and vitamin D creams are used extensively in dermatological practice. A "pharmacologic dose" or "supraphysiological dose" of a hormone is a medical term for an amount of a hormone far greater than naturally occurs in a healthy body. The effects of pharmacologic doses of hormones may be different from responses to naturally occurring amounts and may be therapeutically useful, though not without potentially adverse side effects. An example is the ability of pharmacologic doses of glucocorticoids to suppress inflammation. Hormone-behavior interactions At the neurological level, behavior can be inferred based on hormone concentration, which in turn is influenced by hormone-release patterns; the numbers and locations of hormone receptors; and the efficiency of hormone receptors for those involved in gene transcription. Hormone concentration alone does not incite behavior, since behavior also depends on other external stimuli; rather, it influences the system by increasing the probability that a certain behavior will occur. Not only can hormones influence behavior, but also behavior and the environment can influence hormone concentration. Thus, a feedback loop is formed, meaning behavior can affect hormone concentration, which in turn can affect behavior, which in turn can affect hormone concentration, and so on. For example, hormone-behavior feedback loops are essential in providing constancy to episodic hormone secretion, as the behaviors affected by episodically secreted hormones directly prevent the continuous release of said hormones. Three broad stages of reasoning may be used to determine if a specific hormone-behavior interaction is present within a system: First, the frequency of occurrence of a hormonally dependent behavior should correspond to that of its hormonal source. Second, a hormonally dependent behavior is not expected if the hormonal source (or its types of action) is non-existent. Third, the reintroduction of the missing hormonal source (or its types of action) is expected to restore the absent behavior.
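To make the idea of competitive inhibition at a receptor concrete, the following sketch computes the fraction of receptors occupied by a hormone using the standard Gaddum relation for competitive binding. It is a minimal illustration only: the function name and every concentration and dissociation-constant value are assumptions chosen for the example, not values taken from the text.

```python
# Fractional receptor occupancy by a hormone in the presence of a competing
# antagonist, using the standard Gaddum relation for competitive binding.
# All names and numbers are illustrative assumptions.

def fractional_occupancy(hormone_conc, hormone_kd, antagonist_conc=0.0, antagonist_kd=1.0):
    """Return the fraction of receptors bound by the hormone.

    hormone_conc, antagonist_conc : free ligand concentrations (same units as the Kd values)
    hormone_kd, antagonist_kd     : equilibrium dissociation constants
    """
    h = hormone_conc / hormone_kd
    a = antagonist_conc / antagonist_kd
    return h / (1.0 + h + a)

# Hormone alone at a concentration equal to its Kd: half the receptors are occupied.
print(fractional_occupancy(hormone_conc=1.0, hormone_kd=1.0))                   # 0.5
# Adding a competing antagonist at nine times its own Kd displaces most of the hormone.
print(fractional_occupancy(1.0, 1.0, antagonist_conc=9.0, antagonist_kd=1.0))   # ~0.09
```

In this toy calculation the hormone alone occupies half the receptor pool, while the added antagonist pushes hormone occupancy down to roughly a tenth of the pool, which is why antagonists blunt hormonal responses unless the hormone concentration rises.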
Comparison with neurotransmitters Though colloquially oftentimes used interchangeably, there are various clear distinctions between hormones and neurotransmitters: A hormone can perform functions over a larger spatial and temporal scale than can a neurotransmitter, which often acts in micrometer-scale distances. Hormonal signals can travel virtually anywhere in the circulatory system, whereas neural signals are restricted to pre-existing nerve tracts. Assuming the travel distance is equivalent, neural signals can be transmitted much more quickly (in the range of milliseconds) than can hormonal signals (in the range of seconds, minutes, or hours). Neural signals can be sent at speeds up to 100 meters per second. Neural signalling is an all-or-nothing (digital) action, whereas hormonal signalling is an action that can be continuously variable as it is dependent upon hormone concentration. Neurohormones are a type of hormone that share a commonality with neurotransmitters. They are produced by endocrine cells that receive input from neurons, or neuroendocrine cells. Both classic hormones and neurohormones are secreted by endocrine tissue; however, neurohormones are the result of a combination between endocrine reflexes and neural reflexes, creating a neuroendocrine pathway. While endocrine pathways produce chemical signals in the form of hormones, the neuroendocrine pathway involves the electrical signals of neurons. In this pathway, the result of the electrical signal produced by a neuron is the release of a chemical, which is the neurohormone. Finally, like a classic hormone, the neurohormone is released into the bloodstream to reach its target. Binding proteins Hormone transport and the involvement of binding proteins is an essential aspect when considering the function of hormones. The formation of a complex with a binding protein has several benefits: the effective half-life of the bound hormone is increased, and a reservoir of bound hormones is created, which evens the variations in concentration of unbound hormones (bound hormones will replace the unbound hormones when these are eliminated). An example of the usage of hormone-binding proteins is in the thyroxine-binding protein which carries up to 80% of all thyroxine in the body, a crucial element in regulating the metabolic rate. See also Autocrine signaling Adipokine Cytokine Hepatokine Endocrine disease Endocrine system Endocrinology Environmental hormones Growth factor Intracrine List of investigational sex-hormonal agents Metabolomics Myokine Neohormone Neuroendocrinology Paracrine signaling Plant hormones, a.k.a. plant growth regulators Semiochemical Sex-hormonal agent Sexual motivation and hormones Xenohormone List of human hormones References External links HMRbase: A database of hormones and their receptors Physiology Endocrinology Cell signaling Signal transduction Human female endocrine system
Hormone
[ "Chemistry", "Biology" ]
4,080
[ "Biochemistry", "Neurochemistry", "Physiology", "Signal transduction" ]
13,372
https://en.wikipedia.org/wiki/Human%20geography
Human geography or anthropogeography is the branch of geography which studies spatial relationships between human communities, cultures, economies, and their interactions with the environment, examples of which include urban sprawl and urban redevelopment. It analyzes spatial interdependencies between social interactions and the environment through qualitative and quantitative methods. This multidisciplinary approach draws from sociology, anthropology, economics, and environmental science, contributing to a comprehensive understanding of the intricate connections that shape lived spaces. History The Royal Geographical Society was founded in England in 1830. The first professor of geography in the United Kingdom was appointed in 1883, and the first major geographical intellect to emerge in the UK was Halford John Mackinder, appointed professor of geography at the London School of Economics in 1922. The National Geographic Society was founded in the United States in 1888 and began publication of the National Geographic magazine which became, and continues to be, a great popularizer of geographic information. The society has long supported geographic research and education on geographical topics. The Association of American Geographers was founded in 1904 and was renamed the American Association of Geographers in 2016 to better reflect the increasingly international character of its membership. One of the first examples of geographic methods being used for purposes other than to describe and theorize the physical properties of the earth is John Snow's map of the 1854 Broad Street cholera outbreak. Though Snow was primarily a physician and a pioneer of epidemiology rather than a geographer, his map is probably one of the earliest examples of health geography. The now fairly distinct differences between the subfields of physical and human geography developed at a later date. The connection between both physical and human properties of geography is most apparent in the theory of environmental determinism, made popular in the 19th century by Carl Ritter and others, which had close links to the evolutionary biology of the time. Environmental determinism is the theory that people's physical, mental and moral habits are directly due to the influence of their natural environment. However, by the mid-19th century, environmental determinism was under attack for lacking methodological rigor associated with modern science, and later as a means to justify racism and imperialism. A similar concern with both human and physical aspects is apparent in regional geography, which dominated the discipline during the later 19th century and the first half of the 20th century. The goal of regional geography, through something known as regionalisation, was to delineate space into regions and then understand and describe the unique characteristics of each region through both human and physical aspects. Through its links to possibilism and cultural ecology, regional geography retained some of environmental determinism's notions of a causal effect of the environment on society and culture. By the 1960s, however, the quantitative revolution led to strong criticism of regional geography. Due to a perceived lack of scientific rigor, the overly descriptive nature of the discipline, and the continued separation of geography into its subfields of physical and human geography and its separation from geology, geographers in the mid-20th century began to apply statistical and mathematical models in order to solve spatial problems.
Much of the development during the quantitative revolution is now apparent in the use of geographic information systems; the use of statistics, spatial modeling, and positivist approaches remains important to many branches of human geography. Well-known geographers from this period are Fred K. Schaefer, Waldo Tobler, William Garrison, Peter Haggett, Richard J. Chorley, William Bunge, and Torsten Hägerstrand. From the 1970s, a number of critiques of the positivism now associated with geography emerged. Known under the term 'critical geography,' these critiques signaled another turning point in the discipline. Behavioral geography emerged for some time as a means to understand how people perceived spaces and places and made locational decisions. The more influential 'radical geography' emerged in the 1970s and 1980s. It draws heavily on Marxist theory and techniques and is associated with geographers such as David Harvey and Richard Peet. Radical geographers seek to say meaningful things about problems recognized through quantitative methods, provide explanations rather than descriptions, put forward alternatives and solutions, and be politically engaged, rather than using the detachment associated with positivists. (The detachment and objectivity of the quantitative revolution was itself critiqued by radical geographers as being a tool of capital). Radical geography and the links to Marxism and related theories remain an important part of contemporary human geography (See: Antipode). Critical geography also saw the introduction of 'humanistic geography', associated with the work of Yi-Fu Tuan, which pushed for a much more qualitative approach in methodology. The changes under critical geography have led to contemporary approaches in the discipline such as feminist geography, new cultural geography, settlement geography, and the engagement with postmodern and post-structural theories and philosophies. Fields The primary fields of study in human geography focus on the following core fields. Cultures Cultural geography is the study of cultural products and norms – their variation across spaces and places, as well as their relations. It focuses on describing and analyzing the ways language, religion, economy, government, and other cultural phenomena vary or remain constant from one place to another and on explaining how humans function spatially. Subfields include: Social geography, Animal geographies, Language geography, Sexuality and space, Children's geographies, and Religion and geography. Development Development geography is the study of the Earth's geography with reference to the standard of living and the quality of life of its human inhabitants, and of the location, distribution and spatial organization of economic activities across the Earth. The subject matter investigated is strongly influenced by the researcher's methodological approach. Economies Economic geography examines relationships between human economic systems, states, and other factors, and the biophysical environment. Subfields include: Marketing geography and Transportation geography. Emotion Food Health Further core fields are emotion, food, and health. Medical or health geography is the application of geographical information, perspectives, and methods to the study of health, disease, and health care. Health geography deals with the spatial relations and patterns between people and the environment. This is a sub-discipline of human geography, researching how and why diseases are spread and contained; a toy example of this kind of spatial reasoning is sketched below.
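As a toy illustration of the spatial reasoning used in health geography, in the spirit of John Snow's Broad Street map mentioned earlier, the sketch below assigns each disease case to its nearest candidate water source and tallies cases per source. All coordinates, names, and counts are invented for the example; it is not an account of Snow's actual analysis.

```python
# Toy nearest-source assignment of disease cases, John Snow style.
# Coordinates and names are invented purely for illustration.
from math import hypot

pumps = {"pump_A": (0.0, 0.0), "pump_B": (4.0, 1.0), "pump_C": (1.0, 5.0)}
cases = [(0.5, 0.4), (1.0, 0.2), (3.8, 1.5), (0.2, 1.1), (1.2, 4.4), (0.8, 0.9)]

counts = {name: 0 for name in pumps}
for cx, cy in cases:
    # Find the pump with the smallest straight-line distance to this case.
    nearest = min(pumps, key=lambda name: hypot(cx - pumps[name][0], cy - pumps[name][1]))
    counts[nearest] += 1

print(counts)  # {'pump_A': 4, 'pump_B': 1, 'pump_C': 1}
```

A marked concentration of cases around one source is the kind of spatial pattern that prompts further investigation, which is exactly how mapped data can generate hypotheses about disease transmission.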
Histories Historical geography is the study of the human, physical, fictional, theoretical, and "real" geographies of the past. Historical geography studies a wide variety of issues and topics. A common theme is the study of the geographies of the past and how a place or region changes through time. Many historical geographers study geographical patterns through time, including how people have interacted with their environment, and created the cultural landscape. Politics Political geography is concerned with the study of both the spatially uneven outcomes of political processes and the ways in which political processes are themselves affected by spatial structures. Subfields include: Electoral geography, Geopolitics, Strategic geography and Military geography. Population Population geography is the study of ways in which spatial variations in the distribution, composition, migration, and growth of populations are related to their environment or location. Settlement Settlement geography, including urban geography, is the study of urban and rural areas with specific regards to spatial, relational and theoretical aspects of settlement. That is the study of areas which have a concentration of buildings and infrastructure. These are areas where the majority of economic activities are in the secondary sector and tertiary sectors. Urbanism Urban geography is the study of cities, towns, and other areas of relatively dense settlement. Two main interests are site (how a settlement is positioned relative to the physical environment) and situation (how a settlement is positioned relative to other settlements). Another area of interest is the internal organization of urban areas with regard to different demographic groups and the layout of infrastructure. This subdiscipline also draws on ideas from other branches of Human Geography to see their involvement in the processes and patterns evident in an urban area. Subfields include: Economic geography, Population geography, and Settlement geography. These are clearly not the only subfields that could be used to assist in the study of Urban geography, but they are some major players. Philosophical and theoretical approaches Within each of the subfields, various philosophical approaches can be used in research; therefore, an urban geographer could be a Feminist or Marxist geographer, etc. Such approaches are: Animal geographies Behavioral geography Cognitive geography Critical geography Feminist geography Marxist geography Non-representational theory Positivism Postcolonialism Poststructuralist geography Psychoanalytic geography Psychogeography Spatial analysis Time geography List of notable human geographers Journals As with all social sciences, human geographers publish research and other written work in a variety of academic journals. Whilst human geography is interdisciplinary, there are a number of journals that focus on human geography. These include: ACME: An International E-Journal for Critical Geographies Antipode Area Dialogues in Human Geography Economic geography Environment and Planning Geoforum Geografiska Annaler GeoHumanities Global Environmental Change: Human and Policy Dimensions Human Geography Migration Letters Progress in Human Geography Southeastern Geographer Social & Cultural Geography Tijdschrift voor economische en sociale geografie Transactions of the Institute of British Geographers See also References Further reading External links Worldmapper – Mapping project using social data sets Anthropology Environmental social science
Human geography
[ "Environmental_science" ]
1,887
[ "Environmental social science", "Human geography" ]
13,435
https://en.wikipedia.org/wiki/Hydrology
Hydrology is the scientific study of the movement, distribution, and management of water on Earth and other planets, including the water cycle, water resources, and drainage basin sustainability. A practitioner of hydrology is called a hydrologist. Hydrologists are scientists studying earth or environmental science, civil or environmental engineering, and physical geography. Using various analytical methods and scientific techniques, they collect and analyze data to help solve water-related problems such as environmental preservation, natural disasters, and water management. Hydrology subdivides into surface water hydrology, groundwater hydrology (hydrogeology), and marine hydrology. Domains of hydrology include hydrometeorology, surface hydrology, hydrogeology, drainage-basin management, and water quality. Oceanography and meteorology are not included because water is only one of many important aspects within those fields. Hydrological research can inform environmental engineering, policy, and planning. Branches Chemical hydrology is the study of the chemical characteristics of water. Ecohydrology is the study of interactions between organisms and the hydrologic cycle. Hydrogeology is the study of the presence and movement of groundwater. Hydrogeochemistry is the study of how terrestrial water dissolves minerals through weathering and of the effect of this on water chemistry. Hydroinformatics is the adaptation of information technology to hydrology and water resources applications. Hydrometeorology is the study of the transfer of water and energy between land and water body surfaces and the lower atmosphere. Isotope hydrology is the study of the isotopic signatures of water. Surface hydrology is the study of hydrologic processes that operate at or near Earth's surface. Drainage basin management covers water storage, in the form of reservoirs, and flood protection. Water quality includes the chemistry of water in rivers and lakes, both of pollutants and natural solutes. Applications Calculation of rainfall. Calculation of evapotranspiration. Calculation of surface runoff and precipitation. Determining the water balance of a region. Determining the agricultural water balance. Designing riparian-zone restoration projects. Mitigating and predicting flood, landslide and drought risk. Real-time flood forecasting, flood warning, and flood frequency analysis. Designing irrigation schemes and managing agricultural productivity. Part of the hazard module in catastrophe modeling. Providing drinking water. Designing dams for water supply or hydroelectric power generation. Designing bridges. Designing sewers and urban drainage systems. Analyzing the impacts of antecedent moisture on sanitary sewer systems. Predicting geomorphologic changes, such as erosion or sedimentation. Assessing the impacts of natural and anthropogenic environmental change on water resources. Assessing contaminant transport risk and establishing environmental policy guidelines. Estimating the water resource potential of river basins. Water resources management. Water resources engineering - the application of hydrological and hydraulic principles to the planning, development, and management of water resources for beneficial human use. It involves assessing water availability, quality, and demand; designing and operating water infrastructure; and implementing strategies for sustainable water management. History Hydrology has been subject to investigation and engineering for millennia.
Ancient Egyptians were one of the first to employ hydrology in their engineering and agriculture, inventing a form of water management known as basin irrigation. Mesopotamian towns were protected from flooding with high earthen walls. Aqueducts were built by the Greeks and Romans, while history shows that the Chinese built irrigation and flood control works. The ancient Sinhalese used hydrology to build complex irrigation works in Sri Lanka, also known for the invention of the Valve Pit which allowed construction of large reservoirs, anicuts and canals which still function. Marcus Vitruvius, in the first century BC, described a philosophical theory of the hydrologic cycle, in which precipitation falling in the mountains infiltrated the Earth's surface and led to streams and springs in the lowlands. With the adoption of a more scientific approach, Leonardo da Vinci and Bernard Palissy independently reached an accurate representation of the hydrologic cycle. It was not until the 17th century that hydrologic variables began to be quantified. Pioneers of the modern science of hydrology include Pierre Perrault, Edme Mariotte and Edmund Halley. By measuring rainfall, runoff, and drainage area, Perrault showed that rainfall was sufficient to account for the flow of the Seine. Mariotte combined velocity and river cross-section measurements to obtain a discharge value, again in the Seine. Halley showed that the evaporation from the Mediterranean Sea was sufficient to account for the outflow of rivers flowing into the sea. Advances in the 18th century included the Bernoulli piezometer and Bernoulli's equation, by Daniel Bernoulli, and the Pitot tube, by Henri Pitot. The 19th century saw development in groundwater hydrology, including Darcy's law, the Dupuit-Thiem well formula, and Hagen-Poiseuille's capillary flow equation. Rational analyses began to replace empiricism in the 20th century, while governmental agencies began their own hydrological research programs. Of particular importance were Leroy Sherman's unit hydrograph, the infiltration theory of Robert E. Horton, and C.V. Theis' aquifer test/equation describing well hydraulics. Since the 1950s, hydrology has been approached with a more theoretical basis than in the past, facilitated by advances in the physical understanding of hydrological processes and by the advent of computers and especially geographic information systems (GIS). (See also GIS and hydrology) Themes The central theme of hydrology is that water circulates throughout the Earth through different pathways and at different rates. The most vivid image of this is in the evaporation of water from the ocean, which forms clouds. These clouds drift over the land and produce rain. The rainwater flows into lakes, rivers, or aquifers. The water in lakes, rivers, and aquifers then either evaporates back to the atmosphere or eventually flows back to the ocean, completing a cycle. Water changes its state of being several times throughout this cycle. The areas of research within hydrology concern the movement of water between its various states, or within a given state, or simply quantifying the amounts in these states in a given region. Parts of hydrology concern developing methods for directly measuring these flows or amounts of water, while others concern modeling these processes either for scientific knowledge or for making a prediction in practical applications. Groundwater Ground water is water beneath Earth's surface, often pumped for drinking water. 
Groundwater hydrology (hydrogeology) considers quantifying groundwater flow and solute transport. Problems in describing the saturated zone include the characterization of aquifers in terms of flow direction, groundwater pressure and, by inference, groundwater depth (see: aquifer test). Measurements here can be made using a piezometer. Aquifers are also described in terms of hydraulic conductivity, storativity and transmissivity. There are a number of geophysical methods for characterizing aquifers. There are also problems in characterizing the vadose zone (unsaturated zone). Infiltration Infiltration is the process by which water enters the soil. Some of the water is absorbed, and the rest percolates down to the water table. The infiltration capacity, the maximum rate at which the soil can absorb water, depends on several factors. The layer that is already saturated provides a resistance that is proportional to its thickness, while that plus the depth of water above the soil provides the driving force (hydraulic head). Dry soil can allow rapid infiltration by capillary action; this force diminishes as the soil becomes wet. Compaction reduces the porosity and the pore sizes. Surface cover increases capacity by retarding runoff, reducing compaction and other processes. Higher temperatures reduce viscosity, increasing infiltration. Soil moisture Soil moisture can be measured in various ways; by capacitance probe, time domain reflectometer or tensiometer. Other methods include solute sampling and geophysical methods. Surface water flow Hydrology considers quantifying surface water flow and solute transport, although the treatment of flows in large rivers is sometimes considered as a distinct topic of hydraulics or hydrodynamics. Surface water flow can include flow both in recognizable river channels and otherwise. Methods for measuring flow once the water has reached a river include the stream gauge (see: discharge), and tracer techniques. Other topics include chemical transport as part of surface water, sediment transport and erosion. One of the important areas of hydrology is the interchange between rivers and aquifers. Groundwater/surface water interactions in streams and aquifers can be complex and the direction of net water flux (into surface water or into the aquifer) may vary spatially along a stream channel and over time at any particular location, depending on the relationship between stream stage and groundwater levels. Precipitation and evaporation In some considerations, hydrology is thought of as starting at the land-atmosphere boundary and so it is important to have adequate knowledge of both precipitation and evaporation. Precipitation can be measured in various ways: disdrometer for precipitation characteristics at a fine time scale; radar for cloud properties, rain rate estimation, hail and snow detection; rain gauge for routine accurate measurements of rain and snowfall; satellite for rainy area identification, rain rate estimation, land-cover/land-use, and soil moisture, snow cover or snow water equivalent for example. Evaporation is an important part of the water cycle. It is partly affected by humidity, which can be measured by a sling psychrometer. It is also affected by the presence of snow, hail, and ice and can relate to dew, mist and fog. Hydrology considers evaporation of various forms: from water surfaces; as transpiration from plant surfaces in natural and agronomic ecosystems. Direct measurement of evaporation can be obtained using Simon's evaporation pan. 
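To make the stream-gauge and water-balance ideas above concrete, the sketch below first estimates river discharge with the velocity-area method that underlies stream gauging (and that Mariotte applied to the Seine), then performs a Perrault-style check of whether annual rainfall over a basin can supply that flow. Every number in it is an illustrative assumption, not a measurement.

```python
# Two small illustrations of ideas discussed above; all values are invented.

# 1. Velocity-area estimate of river discharge: Q = sum(v_i * A_i) over the
#    vertical panels of a measured cross-section (the principle of stream gauging).
panel_areas_m2 = [2.1, 3.4, 3.9, 3.2, 1.8]        # area of each cross-section panel
panel_vel_m_s  = [0.30, 0.55, 0.70, 0.50, 0.25]   # mean velocity in each panel
discharge_m3_s = sum(a * v for a, v in zip(panel_areas_m2, panel_vel_m_s))
print(f"discharge ~ {discharge_m3_s:.2f} m^3/s")

# 2. Perrault-style water-balance check: is annual rainfall over the basin
#    enough to supply the observed annual river flow?
basin_area_km2 = 1200.0
annual_rain_mm = 650.0
rain_volume_m3   = basin_area_km2 * 1e6 * annual_rain_mm / 1000.0  # depth x area
runoff_volume_m3 = discharge_m3_s * 365 * 24 * 3600                # mean flow x one year
print(f"rainfall volume ~ {rain_volume_m3:.3e} m^3")
print(f"runoff volume   ~ {runoff_volume_m3:.3e} m^3")
print(f"runoff/rainfall ~ {runoff_volume_m3 / rain_volume_m3:.2f}")
```

With these made-up figures roughly 30% of the rainfall appears as river flow, a plausible runoff ratio and, as Perrault argued for the Seine, comfortably within what precipitation can supply.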
Detailed studies of evaporation involve boundary layer considerations as well as momentum, heat flux, and energy budgets. Remote sensing Remote sensing of hydrologic processes can provide information on locations where in situ sensors may be unavailable or sparse. It also enables observations over large spatial extents. Many of the variables constituting the terrestrial water balance, for example surface water storage, soil moisture, precipitation, evapotranspiration, and snow and ice, are measurable using remote sensing at various spatial-temporal resolutions and accuracies. Sources of remote sensing include land-based sensors, airborne sensors and satellite sensors which can capture microwave, thermal and near-infrared data or use lidar, for example. Water quality In hydrology, studies of water quality concern organic and inorganic compounds, and both dissolved and sediment material. In addition, water quality is affected by the interaction of dissolved oxygen with organic material and various chemical transformations that may take place. Measurements of water quality may involve in-situ methods, in which analyses take place on-site (often automatically), or laboratory-based analyses, and may include microbiological analysis. Integrating measurement and modelling Combining measurements with models involves budget analyses, parameter estimation, scaling in time and space, data assimilation, and quality control of data (see, for example, double mass analysis). Prediction Observations of hydrologic processes are used to make predictions of the future behavior of hydrologic systems (water flow, water quality). One of the major current concerns in hydrologic research is "Prediction in Ungauged Basins" (PUB), i.e., prediction in basins where no or only very few data exist. Statistical hydrology The aim of statistical hydrology is to provide appropriate statistical methods for analyzing and modeling various parts of the hydrological cycle. By analyzing the statistical properties of hydrologic records, such as rainfall or river flow, hydrologists can estimate future hydrologic phenomena. When making assessments of how often relatively rare events will occur, analyses are made in terms of the return period of such events. Other quantities of interest include the average flow in a river, in a year or by season. These estimates are important for engineers and economists so that proper risk analysis can be performed to influence investment decisions in future infrastructure and to determine the yield reliability characteristics of water supply systems. Statistical information is utilized to formulate operating rules for large dams forming part of systems which include agricultural, industrial and residential demands. Modeling Hydrological models are simplified, conceptual representations of a part of the hydrologic cycle. They are primarily used for hydrological prediction and for understanding hydrological processes, within the general field of scientific modeling. Two major types of hydrological models can be distinguished: Models based on data. These models are black box systems, using mathematical and statistical concepts to link a certain input (for instance rainfall) to the model output (for instance runoff). Commonly used techniques are regression, transfer functions, and system identification. The simplest of these models may be linear models, but it is common to deploy non-linear components to represent some general aspects of a catchment's response without going deeply into the real physical processes involved; a minimal sketch of such a data-based model follows.
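The following is a minimal sketch of such a data-based model, assuming nothing more than an ordinary least-squares regression between event rainfall and observed runoff on synthetic data. The values are invented for illustration and the fit has no physical meaning beyond the example.

```python
# A minimal "black box" rainfall-runoff model of the kind described above:
# ordinary least-squares regression linking observed rainfall to observed runoff.
# The data are synthetic and purely illustrative.
import numpy as np

rainfall_mm = np.array([10.0, 25.0, 40.0, 5.0, 60.0, 30.0, 80.0, 15.0])   # event rainfall
runoff_mm   = np.array([ 2.0,  7.0, 14.0, 1.0, 24.0,  9.0, 33.0,  3.5])   # observed runoff

# Fit runoff ~ a * rainfall + b by least squares.
a, b = np.polyfit(rainfall_mm, runoff_mm, deg=1)
print(f"fitted slope ~ {a:.2f}, intercept ~ {b:.2f} mm")

# Use the fitted relation to predict runoff for a new rainfall event.
new_rain = 50.0
print(f"predicted runoff for {new_rain} mm of rain ~ {a * new_rain + b:.1f} mm")
```

Real data-driven models add non-linear terms or antecedent-wetness predictors, but the structure is the same: the input-output behavior is fitted without resolving the underlying physics.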
An example of such an aspect is the well-known behavior that a catchment will respond much more quickly and strongly when it is already wet than when it is dry. Models based on process descriptions. These models try to represent the physical processes observed in the real world. Typically, such models contain representations of surface runoff, subsurface flow, evapotranspiration, and channel flow, but they can be far more complicated. Within this category, models can be divided into conceptual and deterministic. Conceptual models link simplified representations of the hydrological processes in an area, whereas deterministic models seek to resolve as much of the physics of a system as possible. These models can be subdivided into single-event models and continuous simulation models. Recent research in hydrological modeling tries to have a more global approach to the understanding of the behavior of hydrologic systems to make better predictions and to face the major challenges in water resources management. Transport Water movement is a significant means by which other materials, such as soil, gravel, boulders or pollutants, are transported from place to place. Initial input to receiving waters may arise from a point source discharge or a line source or area source, such as surface runoff. Since the 1960s rather complex mathematical models have been developed, facilitated by the availability of high-speed computers. The most common pollutant classes analyzed are nutrients, pesticides, total dissolved solids and sediment. Organizations Intergovernmental organizations International Hydrological Programme (IHP) International research bodies International Water Management Institute (IWMI) UN-IHE Delft Institute for Water Education National research bodies Centre for Ecology and Hydrology – UK Centre for Water Science, Cranfield University, UK eawag – aquatic research, ETH Zürich, Switzerland Institute of Hydrology, Albert-Ludwigs-University of Freiburg, Germany United States Geological Survey – Water Resources of the United States NOAA's National Weather Service – Office of Hydrologic Development, US US Army Corps of Engineers Hydrologic Engineering Center, US Hydrologic Research Center, US NOAA Economics and Social Sciences, United States University of Oklahoma Center for Natural Hazards and Disasters Research, US National Hydrology Research Centre, Canada National Institute of Hydrology, India National and international societies American Institute of Hydrology (AIH) Geological Society of America (GSA) – Hydrogeology Division American Geophysical Union (AGU) – Hydrology Section National Ground Water Association (NGWA) American Water Resources Association Consortium of Universities for the Advancement of Hydrologic Science, Inc. 
(CUAHSI) International Association of Hydrological Sciences (IAHS) Statistics in Hydrology Working Group (subgroup of IAHS) German Hydrological Society (DHG: Deutsche Hydrologische Gesellschaft) Italian Hydrological Society (SII-IHS) – Società Idrologica Italiana Nordic Association for Hydrology British Hydrological Society Russian Geographical Society (Moscow Center) – Hydrology Commission International Association for Environmental Hydrology International Association of Hydrogeologists Society of Hydrologists and Meteorologists – Nepal Basin- and catchment-wide overviews Connected Waters Initiative, University of New South Wales – Investigating and raising awareness of groundwater and water resource issues in Australia Murray Darling Basin Initiative, Department of Environment and Heritage, Australia Research journals International Journal of Hydrology Science and Technology Hydrological Processes, (electronic) 0885-6087 (paper), John Wiley & Sons Hydrology Research, , IWA Publishing (formerly Nordic Hydrology) Journal of Hydroinformatics, , IWA Publishing Journal of Hydrologic Engineering, , ASCE Publication Journal of Hydrology Water Research Water Resources Research Hydrological Sciences Journal - Journal of the International Association of Hydrological Sciences (IAHS) (Print), (Online) Hydrology and Earth System Sciences Journal of Hydrometeorology See also Aqueous solution Climatology Environmental engineering science Geological Engineering Green Kenue – a software tool for hydrologic modellers Hydraulics HydroCAD – hydrology and hydraulics modeling software Hydrography Hydrology (agriculture) International Hydrological Programme Nash–Sutcliffe model efficiency coefficient Outline of hydrology Potamal Socio-hydrology Soil science Water distribution on Earth WEAP (Water Evaluation And Planning) software to model catchment hydrology from climate and land use data Catchment hydrology Other water-related fields Oceanography is the more general study of water in the oceans and estuaries. Meteorology is the more general study of the atmosphere and of weather, including precipitation as snow and rainfall. Limnology is the study of lakes, rivers and wetlands ecosystems. It covers the biological, chemical, physical, geological, and other attributes of all inland waters (running and standing waters, both fresh and saline, natural or man-made). Water resources are sources of water that are useful or potentially useful. Hydrology studies the availability of those resources, but usually not their uses. References Further reading Eslamian, S., 2014, (ed.) Handbook of Engineering Hydrology, Vol. 1: Fundamentals and Applications, Francis and Taylor, CRC Group, 636 Pages, USA. Eslamian, S., 2014, (ed.) Handbook of Engineering Hydrology, Vol. 2: Modeling, Climate Change and Variability, Francis and Taylor, CRC Group, 646 Pages, USA. Eslamian, S, 2014, (ed.) Handbook of Engineering Hydrology, Vol. 3: Environmental Hydrology and Water Management, Francis and Taylor, CRC Group, 606 Pages, USA. External links Hydrology.nl – Portal to international hydrology and water resources Decision tree to choose an uncertainty method for hydrological and hydraulic modelling (archived 1 June 2013) Experimental Hydrology Wiki Hydraulic engineering Environmental engineering Environmental science Physical geography
Hydrology
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
3,812
[ "Hydrology", "Chemical engineering", "Physical systems", "Hydraulics", "Civil engineering", "nan", "Environmental engineering", "Hydraulic engineering" ]