5,670,575
https://en.wikipedia.org/wiki/De%20novo%20synthesis
In chemistry, de novo synthesis is the synthesis of complex molecules from simple molecules such as sugars or amino acids, as opposed to recycling after partial degradation. For example, nucleotides are not needed in the diet as they can be constructed from small precursor molecules such as formate and aspartate. Methionine, on the other hand, is needed in the diet because while it can be degraded to and then regenerated from homocysteine, it cannot be synthesized de novo. Nucleotide De novo pathways of nucleotides do not use free bases: adenine (abbreviated as A), guanine (G), cytosine (C), thymine (T), or uracil (U). The purine ring is built up one atom or a few atoms at a time and attached to ribose throughout the process. The pyrimidine ring is synthesized as orotate, attached to ribose phosphate, and later converted to the common pyrimidine nucleotides. Cholesterol Cholesterol is an essential structural component of animal cell membranes. Cholesterol also serves as a precursor for the biosynthesis of steroid hormones, bile acids and vitamin D. In mammals, cholesterol is either absorbed from dietary sources or synthesized de novo. Up to 70–80% of de novo cholesterol synthesis occurs in the liver, and about 10% occurs in the small intestine. Cancer cells require cholesterol for cell membranes, so cancer cells contain many enzymes for de novo cholesterol synthesis from acetyl-CoA. Fatty-acid (de novo lipogenesis) De novo lipogenesis (DNL) is the process by which excess carbohydrates from the circulation are converted into fatty acids, which can be further converted into triglycerides or other lipids. Acetate and some amino acids (notably leucine and isoleucine) can also be carbon sources for DNL. Normally, de novo lipogenesis occurs primarily in adipose tissue. 
But in conditions of obesity, insulin resistance, or type 2 diabetes, de novo lipogenesis is reduced in adipose tissue (where carbohydrate-responsive element-binding protein (ChREBP) is the major transcription factor) and is increased in the liver (where sterol regulatory element-binding protein 1 (SREBP-1c) is the major transcription factor). ChREBP is normally activated in the liver by glucose (independently of insulin). Obesity and high-fat diets cause levels of ChREBP in adipose tissue to be reduced. By contrast, high blood levels of insulin, due to a high-carbohydrate meal or insulin resistance, strongly induce SREBP-1c expression in the liver. The reduction of adipose tissue de novo lipogenesis and the increase in liver de novo lipogenesis due to obesity and insulin resistance lead to fatty liver disease. Fructose consumption (in contrast to glucose) activates both SREBP-1c and ChREBP in an insulin-independent manner. Although glucose can be converted into glycogen in the liver, fructose invariably increases de novo lipogenesis in the liver, elevating plasma triglycerides more than glucose does. Moreover, when equal amounts of glucose- or fructose-sweetened beverages are consumed, the fructose beverage not only causes a greater increase in plasma triglycerides but also a greater increase in abdominal fat. DNL is elevated in non-alcoholic fatty liver disease (NAFLD) and is a hallmark of the disease. Compared with healthy controls, patients with NAFLD have an average 3.5-fold increase in DNL. De novo fatty-acid synthesis is regulated by two important enzymes, namely acetyl-CoA carboxylase and fatty acid synthase. Acetyl-CoA carboxylase is responsible for introducing a carboxyl group to acetyl-CoA, yielding malonyl-CoA. The enzyme fatty acid synthase is then responsible for turning malonyl-CoA into a fatty-acid chain. 
De novo fatty-acid synthesis is largely inactive in human cells, since the diet is the major source of fatty acids. It is thus considered a minor contributor to serum lipid homeostasis. In mice, fatty acid (FA) de novo synthesis increases in white adipose tissue (WAT) with exposure to cold temperatures, which might be important for the maintenance of circulating triacylglycerol (TAG) levels in the bloodstream and for supplying FA for thermogenesis during prolonged cold exposure. DNA De novo DNA synthesis refers to the synthetic creation of DNA rather than the assembly or modification of natural precursor template DNA sequences. Initial oligonucleotide synthesis is followed by artificial gene synthesis, and finally by a process of cloning, error correction, and verification, which often involves cloning the genes into plasmids in Escherichia coli or yeast. Primase is an RNA polymerase, and it can add a primer to an existing strand awaiting replication. DNA polymerase cannot add primers and therefore needs primase to add the primer de novo. References Further reading Harper's Illustrated Biochemistry, 26th ed. – Robert K. Murray, Darryl K. Granner, Peter A. Mayes, Victor W. Rodwell. Lehninger Principles of Biochemistry, 4th ed. – David L. Nelson, Michael M. Cox. Biochemistry, 5th ed. – Jeremy M. Berg, John L. Tymoczko, Lubert Stryer. Biochemistry, 2nd ed. – Reginald Garrett and Charles Grisham. Biochemistry for Dummies – John T. Moore, EdD, and Richard Langley, PhD. Stryer L (2007). Biochemistry, 6th ed. WH Freeman and Company, New York, USA. External links Purine and pyrimidine metabolism De novo synthesis of purine nucleotides Cell biology Latin biological phrases
De novo synthesis
[ "Biology" ]
1,253
[ "Latin biological phrases", "Cell biology" ]
5,670,581
https://en.wikipedia.org/wiki/System%20of%20systems%20engineering
System of systems engineering (SoSE) is a set of developing processes, tools, and methods for designing, re-designing and deploying solutions to system-of-systems challenges. Overview System of systems engineering (SoSE) methodology is heavily used in U.S. Department of Defense applications, but is increasingly being applied to non-defense problems such as architectural design problems in air and auto transportation, healthcare, global communication networks, search and rescue, space exploration, Industry 4.0 and many other system-of-systems application domains. SoSE is more than the systems engineering of monolithic, complex systems, because design for system-of-systems problems is performed under some level of uncertainty in the requirements and the constituent systems, and it involves considerations at multiple levels and in multiple domains. Whereas systems engineering focuses on building the system right, SoSE focuses on choosing the right system(s) and their interactions to satisfy the requirements. System-of-systems engineering and systems engineering are related but distinct fields of study. Whereas systems engineering addresses the development and operations of monolithic products, SoSE addresses the development and operations of evolving programs. In other words, traditional systems engineering seeks to optimize an individual system (i.e., the product), while SoSE seeks to optimize a network of various interacting legacy and new systems brought together to satisfy multiple objectives of the program. SoSE should enable decision-makers to understand the implications of various choices on technical performance, costs, extensibility and flexibility over time; thus, an effective SoSE methodology should prepare decision-makers to design informed architectural solutions for system-of-systems problems. 
Due to the varied methodologies and domains of application in the existing literature, there is no single unified consensus on the processes involved in system-of-systems engineering. One proposed SoSE framework, by Dr. Daniel A. DeLaurentis, recommends a three-phase method in which a SoS problem is defined (understood), abstracted, modeled, and analyzed for behavioral patterns. More information on this method and other proposed methods can be found in the SoSE-focused organizations and the SoSE literature listed in the subsequent sections. See also Enterprise systems engineering System of systems Enterprise architecture References Further reading Kenneth Cureton, F. Stan Settlers, "System-of-Systems Architecting: Educational Findings and Implications," 2005 IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, Hawaii, October 10–12, 2005. pp. 2726–2731. Mo Jamshidi, "System-of-Systems Engineering — A Definition," IEEE SMC 2005, Big Island, Hawaii, URL: http://ieeesmc2005.unm.edu/SoSE_Defn.htm Saurabh Mittal, Jose L. Risco Martin, "Netcentric System of Systems Engineering with DEVS Unified Process", CRC Press, Boca Raton, Florida, 2013. URL: http://www.crcpress.com/product/isbn/9781439827062 Charles Keating, Ralph Rogers, Resit Unal, David Dryer, et al., "System of Systems Engineering," Engineering Management Journal, Vol. 15, no. 3, p. 36. Charles Keating, "Research Foundations for System of Systems Engineering," 2005 IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, Hawaii, October 10–12, 2005. pp. 2720–2725. Jack Ring, Azad Madni, "Key Challenges and Opportunities in 'System of Systems' Engineering," 2005 IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, Hawaii, October 10–12, 2005. pp. 973–978. R.E. Raygan, "Configuration management in a system-of-systems environment delivering IT services," 2007 IEEE International Engineering Management Conference, Austin, Texas, July 29 – August 1, 2007. pp. 
330 – 335. D. Luzeaux & JR Ruault, "Systems of Systems", ISTE Ltd and John Wiley & Sons Inc, 2010 D. Luzeaux, JR Ruault & JL Wippler, "Complex System and Systems of Systems Engineering", ISTE Ltd and John Wiley & Sons Inc, 2011 External links System of Systems Signature Area at Purdue University's College of Engineering (Apr 2015 - content no longer specific to System of Systems) National Centers for System of Systems Engineering at Old Dominion University (Apr 2015 - content blocked) Center for Intelligent Networked Systems at Stevens Institute of Technology (Apr 2015 - page timed out, presumed to no longer exist) System of Systems Engineering Center of Excellence (Apr 2015 - no SOSE content) Systems engineering Complex systems theory
System of systems engineering
[ "Engineering" ]
978
[ "Systems engineering" ]
5,670,694
https://en.wikipedia.org/wiki/Laboratory%20automation
Laboratory automation is a multi-disciplinary strategy to research, develop, optimize and capitalize on technologies in the laboratory that enable new and improved processes. Laboratory automation professionals are academic, commercial and government researchers, scientists and engineers who conduct research and develop new technologies to increase productivity, elevate experimental data quality, reduce lab process cycle times, or enable experimentation that otherwise would be impossible. The most widely known application of laboratory automation technology is laboratory robotics. More generally, the field of laboratory automation comprises many different automated laboratory instruments, devices (the most common being autosamplers), software algorithms, and methodologies used to enable, expedite and increase the efficiency and effectiveness of scientific research in laboratories. The application of technology in today's laboratories is required to achieve timely progress and remain competitive. Laboratories devoted to activities such as high-throughput screening, combinatorial chemistry, automated clinical and analytical testing, diagnostics, large-scale biorepositories, and many others, would not exist without advancements in laboratory automation. Some universities offer entire programs that focus on lab technologies. For example, Indiana University-Purdue University at Indianapolis offers a graduate program devoted to Laboratory Informatics. Also, the Keck Graduate Institute in California offers a graduate degree with an emphasis on development of assays, instrumentation and data analysis tools required for clinical diagnostics, high-throughput screening, genotyping, microarray technologies, proteomics, imaging and other applications. History At least since 1875 there have been reports of automated devices for scientific investigation. These first devices were mostly built by scientists themselves in order to solve problems in the laboratory. 
After the Second World War, companies started to provide automated equipment of greater and greater complexity. Automation steadily spread in laboratories through the 20th century, but then a revolution took place: in the early 1980s, the first fully automated laboratory was opened by Dr. Masahide Sasaki. In 1993, Dr. Rod Markin at the University of Nebraska Medical Center created one of the world's first clinical automated laboratory management systems. In the mid-1990s, he chaired a standards group called the Clinical Testing Automation Standards Steering Committee (CTASSC) of the American Association for Clinical Chemistry, which later evolved into an area committee of the Clinical and Laboratory Standards Institute. In 2004, the National Institutes of Health (NIH) and more than 300 nationally recognized leaders in academia, industry, government, and the public completed the NIH Roadmap to accelerate medical discovery to improve health. The NIH Roadmap clearly identifies technology development as a mission-critical factor in the Molecular Libraries and Imaging Implementation Group (see the first theme – New Pathways to Discovery – at https://web.archive.org/web/20100611171315/http://nihroadmap.nih.gov/). Despite the success of Dr. Sasaki's laboratory and others of its kind, the multi-million-dollar cost of such laboratories has prevented adoption by smaller groups. This is made all the more difficult because devices made by different manufacturers often cannot communicate with each other. However, recent advances based on the use of scripting languages like AutoIt have made possible the integration of equipment from different manufacturers. Using this approach, many low-cost electronic devices, including open-source devices, become compatible with common laboratory instruments. Some startups such as Emerald Cloud Lab and Strateos provide on-demand and remote laboratory access on a commercial scale. 
A 2017 study indicates that these commercial-scale, fully integrated automated laboratories can improve reproducibility and transparency in basic biomedical experiments, and that over nine in ten biomedical papers use methods currently available through these groups. Low-cost laboratory automation A large obstacle to the implementation of automation in laboratories has been its high cost. Many laboratory instruments are very expensive. This is justifiable in many cases, as such equipment can perform very specific tasks employing cutting-edge technology. However, there are devices employed in the laboratory that are not highly technological but are still very expensive. This is the case for many automated devices, which perform tasks that could easily be done by simple and low-cost devices like simple robotic arms, universal (open-source) electronic modules, Lego Mindstorms, or 3D printers. Until recently, using such low-cost devices together with laboratory equipment was considered very difficult. However, it has been demonstrated that such low-cost devices can substitute for the standard machines used in the laboratory without problems. It can be anticipated that more laboratories will take advantage of this new reality, as low-cost automation is very attractive to laboratories. A technology that enables the integration of machines regardless of brand is scripting, more specifically scripting that controls mouse clicks and keyboard entries, as with AutoIt. By timing clicks and keyboard inputs, different software interfaces controlling different devices can be precisely synchronized. 
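The timing-based synchronization described above can be sketched as a small scheduler that replays GUI actions for several instruments against one shared clock. This is an illustrative sketch only: the `SyncScheduler` class, device names, and actions are hypothetical, and in a real setup each action would be dispatched as an actual mouse click or keystroke to a vendor's control software (for instance via a GUI-automation tool in the AutoIt family).

```python
import time

class SyncScheduler:
    """Illustrative sketch: fire GUI actions for several instruments on one clock."""

    def __init__(self):
        self.events = []  # list of (offset_seconds, device, action)

    def at(self, offset_s, device, action):
        self.events.append((offset_s, device, action))
        return self

    def run(self, perform, sleep=time.sleep, clock=time.monotonic):
        # Replay all events in time order against a single shared start time,
        # so actions sent to different device GUIs stay synchronized.
        start = clock()
        for offset_s, device, action in sorted(self.events):
            remaining = offset_s - (clock() - start)
            if remaining > 0:
                sleep(remaining)
            perform(f"{device}: {action}")  # real code would send the click/keystroke here

# Hypothetical run: start the autosampler, then trigger the detector 2 s later.
log = []
(SyncScheduler()
    .at(2.0, "detector", "press F5 to start acquisition")
    .at(0.0, "autosampler", "click Start")
    .run(perform=log.append, sleep=lambda s: None))
print(log)
```

Because the per-event delay is computed from one shared start time rather than chained sleeps, timing drift does not accumulate across devices, which is the property that makes this style of scripting usable for synchronizing independent vendor interfaces.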
Laboratory automation
[ "Chemistry", "Technology", "Engineering" ]
1,001
[ "Computer engineering", "Robotics engineering", "Automation", "Robotics", "nan", "Laboratory automation" ]
5,671,050
https://en.wikipedia.org/wiki/Jacobsen%20rearrangement
The Jacobsen rearrangement is a chemical reaction, commonly described as the migration of an alkyl group in a sulfonic acid derived from a polyalkyl- or polyhalobenzene: The exact reaction mechanism is not completely clear, but evidence indicates that the rearrangement occurs intermolecularly and that the migrating group is transferred to a polyalkylbenzene, not to the sulfonic acid (sulfonation only takes place after migration). The intermolecular mechanism is partially illustrated by the side products found in the following example: Furthermore, the reaction is limited to benzene rings with at least four substituents (alkyl and/or halogen groups). The sulfo group is easily removed, so the Jacobsen rearrangement can also be considered a rearrangement of polyalkylbenzenes. It was Herzig who described this type of rearrangement for the first time in 1881, using polyhalogenated benzenesulfonic acids, but the reaction took the name of the German chemist Oscar Jacobsen, who described the rearrangement of polyalkylbenzene derivatives in 1886. References J. Herzig (1881) "Ueber die Einwirkung von Schwefelsäure auf Mono-, Di- und Tribromobenzol" (On the effect of sulfuric acid on mono-, di- and tribromobenzene), Monatshefte für Chemie, 2 (1) : 192–99. A condensed version of this article appeared in: J. Herzig (1881) "Ueber die Einwirkung von Schwefelsäure auf Mono-, Di- und Tribromobenzol", Berichte der deutschen chemischen Gesellschaft, 14 (1) : 1205–06. Oscar Jacobsen (1886) "Ueber die Einwirkung von Schwefelsäure auf Durol und über das dritte Tetramethylbenzol" (On the effect of sulfuric acid on durene [1,2,4,5-tetramethylbenzene] and about the third tetramethylbenzene), Berichte der deutschen chemischen Gesellschaft, 19 : 1209–17. L I Smith. Organic Reactions I: The Jacobsen Reaction (Wiley, 1942) M B Smith, J March. March's Advanced Organic Chemistry (Wiley, 2001) W Pötsch. 
Lexikon bedeutender Chemiker (VEB Bibliographisches Institut Leipzig, 1989) () Rearrangement reactions Name reactions
Jacobsen rearrangement
[ "Chemistry" ]
562
[ "Name reactions", "Rearrangement reactions", "Organic reactions" ]
5,671,092
https://en.wikipedia.org/wiki/Gliese%20876%20d
Gliese 876 d is an exoplanet about 15 light-years away in the constellation of Aquarius. The planet was the third planet discovered orbiting the red dwarf Gliese 876, and is the innermost planet in the system. It was the lowest-mass known exoplanet apart from the pulsar planets orbiting PSR B1257+12 at the time of its discovery. Due to its low mass, it can be categorized as a super-Earth. Characteristics Mass, radius, and temperature Measuring the mass of an exoplanet from radial velocity has one limitation: only a lower limit on the mass can be obtained. This is because the measured mass value also depends on the orbital inclination, which in general is unknown. However, in the case of Gliese 876, models incorporating the gravitational interactions between the resonant outer planets enable the inclination of the orbits to be determined. This reveals that the outer planets are nearly coplanar with an inclination of around 59° with respect to the plane of the sky. Assuming that Gliese 876 d orbits in the same plane as the other planets, the true mass of the planet is revealed to be 6.83 times the mass of Earth. The low mass of the planet has led to suggestions that it may be a terrestrial planet. This type of massive terrestrial planet could be formed in the inner part of the Gliese 876 system from material pushed towards the star by the inward migration of the gas giants. Alternatively, the planet could have formed further from Gliese 876 as a gas giant and migrated inwards with the other gas giants. This would result in a composition richer in volatile substances such as water. As it arrived in range, the star would have blown off the planet's hydrogen layer via coronal mass ejections. In this model, the planet would have a pressurised ocean of water (in the form of a supercritical fluid) separated from the silicate core by a layer of ice kept frozen by the high pressures in the planetary interior. 
Such a planet would have an atmosphere containing water vapor and free oxygen produced by the breakdown of water by ultraviolet radiation. Distinguishing between these two models would require more information about the planet's radius or composition. The planet does not transit its star, which makes obtaining this information impossible with current observational capabilities. The equilibrium temperature of Gliese 876 d is estimated to be around . Host star The planet orbits an M-type star named Gliese 876. The star has a mass of 0.33 solar masses and a radius of around 0.36 solar radii. It has a surface temperature of 3350 K and is 2.55 billion years old. In comparison, the Sun is about 4.6 billion years old and has a surface temperature of 5778 K. Orbit Gliese 876 d is located in an orbit with a semimajor axis of only 0.0208 AU (3.11 million km). At this distance from the star, tidal interactions should in theory circularize the orbit; however, measurements reveal that it has a high eccentricity of 0.207, comparable to that of Mercury in the Solar System. Models predict that, if its non-Keplerian orbit could be averaged to a Keplerian eccentricity of 0.28, then tidal heating would play a significant role in the planet's geology, to the point of keeping it completely molten. The predicted total heat flux is approximately 10⁴–10⁵ W/m² at the planet's surface; for comparison, the surface heat flux for Io is around 3 W/m². This is similar to the radiative energy it receives from its parent star, about 40,000 W/m². Discovery Gliese 876 d was discovered by analysing changes in its star's radial velocity as a result of the planet's gravity. The radial velocity measurements were made by observing the Doppler shift in the star's spectral lines. At the time of discovery, Gliese 876 was known to host two extrasolar planets, designated Gliese 876 b and c, in a 2:1 orbital resonance. After the two planets were taken into account, the radial velocity still showed another period, at around two days. 
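The inclination correction behind the 6.83 Earth-mass figure can be made concrete. The 59° inclination and the 6.83 Earth-mass true mass are taken from the text above; the minimum (m sin i) mass below is derived from them for illustration rather than quoted from any source.

```python
import math

def true_mass(min_mass, inclination_deg):
    """Convert a radial-velocity minimum mass (m sin i) into a true mass."""
    return min_mass / math.sin(math.radians(inclination_deg))

inclination = 59.0    # degrees, from the dynamical models described above
m_true_quoted = 6.83  # Earth masses, from the text
# Implied minimum mass (m sin i), about 5.85 Earth masses; derived, not quoted.
m_min = m_true_quoted * math.sin(math.radians(inclination))

print(round(true_mass(m_min, inclination), 2))  # recovers 6.83
```

The same division by sin(i) is why radial-velocity masses are always lower limits: as the orbit tilts toward face-on (i → 0°), sin(i) shrinks and the true mass implied by a given velocity signal grows without bound.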
The planet, designated Gliese 876 d, was announced on June 13, 2005, by a team led by Eugenio Rivera and was estimated to have a mass approximately 7.5 times that of Earth. Notes References External links GJ 876 d Catalog Gliese 876 Aquarius (constellation) Exoplanets discovered in 2005 Super-Earths Terrestrial planets Exoplanets detected by radial velocity
Gliese 876 d
[ "Astronomy" ]
961
[ "Constellations", "Aquarius (constellation)" ]
5,671,098
https://en.wikipedia.org/wiki/Gliese%20876%20c
Gliese 876 c is an exoplanet orbiting the red dwarf Gliese 876, taking about 30 days to complete an orbit. The planet was discovered in April 2001 and is the second planet in order of increasing distance from its star. Discovery At the time of discovery, Gliese 876 was already known to host an extrasolar planet designated Gliese 876 b. On January 9, 2001, it was announced that further analysis of the star's radial velocity had revealed the existence of a second planet in the system, which was designated Gliese 876 c. The orbital period of Gliese 876 c was found to be exactly half that of the outer planet, which meant that the radial velocity signature of the second planet had initially been interpreted as a higher eccentricity of the orbit of Gliese 876 b. Host star The planet orbits an M-type star named Gliese 876. The star has a mass of 0.33 solar masses and a radius of around 0.36 solar radii. It has a surface temperature of 3350 K and is 2.55 billion years old. In comparison, the Sun is about 4.6 billion years old and has a surface temperature of 5778 K. Orbit and mass Gliese 876 c is in a 1:2:4 Laplace resonance with the outer planets Gliese 876 b and Gliese 876 e: for every orbit of planet e, planet b completes two orbits and planet c completes four. This leads to strong gravitational interactions between the planets, causing the orbital elements to change rapidly as the orbits precess. This is the second known example of a Laplace resonance, the first being Jupiter's moons Io, Europa and Ganymede. The orbital semimajor axis is only 0.13 AU, around a third of the average distance between Mercury and the Sun, and the orbit is more eccentric than that of any of the major planets of the Solar System. Despite this, it is located in the inner regions of the system's habitable zone, since Gliese 876 is such an intrinsically faint star. A limitation of the radial velocity method used to detect Gliese 876 c is that only a lower limit on the planet's mass can be obtained. 
This is because the measured mass value depends on the inclination of the orbit, which is not determined by the radial velocity measurements. However, in a resonant system such as Gliese 876, gravitational interactions between the planets can be used to determine the true masses. Using this method, the inclination of the orbit can be determined, revealing the planet's true mass to be 0.72 times that of Jupiter. Characteristics Based on its high mass, Gliese 876 c is likely to be a gas giant with no solid surface. Since it was detected indirectly through its gravitational effects on the star, properties such as its radius, composition, and temperature are unknown. Assuming a composition similar to Jupiter's and an environment close to chemical equilibrium, the planet is predicted to have a cloudless upper atmosphere. Gliese 876 c lies at the inner edge of the system's habitable zone. While the prospects for life on gas giants are unknown, it might be possible for a large moon of the planet to provide a habitable environment. Unfortunately, tidal interactions between a hypothetical moon, the planet, and the star could destroy moons massive enough to be habitable over the lifetime of the system. In addition, it is unclear whether such moons could form in the first place. This planet, like b and e, has likely migrated inward. See also Appearance of extrasolar planets Eccentric Jupiter Gliese 581 List of nearest stars Notes References External links Gliese 876 Aquarius (constellation) Exoplanets discovered in 2001 Exoplanets detected by radial velocity Giant planets in the habitable zone
Gliese 876 c
[ "Astronomy" ]
799
[ "Constellations", "Aquarius (constellation)" ]
5,671,102
https://en.wikipedia.org/wiki/Gliese%20876%20b
Gliese 876 b is an exoplanet orbiting the red dwarf Gliese 876. It completes one orbit in approximately 61 days. Discovered in June 1998, Gliese 876 b was the first planet to be discovered orbiting a red dwarf. Discovery Gliese 876 b was initially announced by Geoffrey Marcy on June 22, 1998, at a symposium of the International Astronomical Union in Victoria, British Columbia, Canada. The discovery was made using data from the Keck and Lick observatories. Only two hours after his announcement, he was shown an e-mail from the Geneva Extrasolar Planet Search team confirming the planet. The Geneva team used telescopes at the Haute-Provence Observatory in France and the European Southern Observatory in La Serena, Chile. Like the majority of early extrasolar planets, it was discovered by detecting variations in its star's radial velocity as a result of the planet's gravity. This was done by making sensitive measurements of the Doppler shift of the spectral lines of Gliese 876. It was the first discovered of four known planets in the Gliese 876 system. Characteristics Mass, radius, and temperature Given the planet's high mass, it is likely that Gliese 876 b is a gas giant with no solid surface. Since the planet has only been detected indirectly through its gravitational effects on the star, properties such as its radius, composition, and temperature are unknown. Assuming a composition similar to Jupiter's and an environment close to chemical equilibrium, it is predicted that the atmosphere of Gliese 876 b is cloudless, though cooler regions of the planet may be able to form water clouds. A limitation of the radial velocity method used to detect Gliese 876 b is that only a lower limit on the planet's mass can be obtained. This lower limit is around 1.93 times the mass of Jupiter. The true mass depends on the inclination of the orbit, which in general is unknown. However, because Gliese 876 is only 15 light years from Earth, Benedict et al. 
(2002) were able to use one of the Fine Guidance Sensors on the Hubble Space Telescope to detect the astrometric wobble created by Gliese 876 b. This constituted the first unambiguous astrometric detection of an extrasolar planet. Their analysis suggested that the orbital inclination is 84°±6° (close to edge-on). In the case of Gliese 876 b, modelling the planet-planet interactions from the Laplace resonance shows that the actual inclination of the orbit is 59°, resulting in a true mass of 2.2756 times the mass of Jupiter. The equilibrium temperature of Gliese 876 b is estimated to be around . This planet, like c and e, has likely migrated inward. Host star The planet orbits an M-type star named Gliese 876. The star has a mass of 0.33 solar masses and a radius of around 0.36 solar radii. It has a surface temperature of 3350 K and is 2.55 billion years old. In comparison, the Sun is about 4.6 billion years old and has a surface temperature of 5778 K. Orbit Gliese 876 b is in a 1:2:4 Laplace resonance with the inner planet Gliese 876 c and the outer planet Gliese 876 e: in the time it takes planet e to complete one orbit, planet b completes two and planet c completes four. This is the second known example of a Laplace resonance, the first being Jupiter's moons Io, Europa and Ganymede. As a result, the orbital elements of the planets change fairly rapidly as they dynamically interact with one another. The planet's orbit has a low eccentricity, similar to the planets in the Solar System. The semimajor axis of the orbit is only 0.208 AU, less than that of Mercury in the Solar System. However, Gliese 876 is such a faint star that this puts the planet in the outer part of the habitable zone. Future habitability Gliese 876 b currently lies beyond the outer edge of the habitable zone, but because Gliese 876 is a slowly evolving main-sequence red dwarf, its habitable zone is very slowly moving outwards and will continue to do so for trillions of years. 
Therefore, Gliese 876 b will, in trillions of years' time, lie inside Gliese 876's habitable zone, as defined by the ability of an Earth-mass planet to retain liquid water at its surface, and will remain there for at least 4.6 billion years. While the prospects for life on a gas giant are unknown, large moons may be able to support a habitable environment. Models of tidal interactions between a hypothetical moon, the planet and the star suggest that large moons should be able to survive in orbit around Gliese 876 b for the lifetime of the system. On the other hand, it is unclear whether such moons could form in the first place. However, the large mass of the gas giant may make it more likely for larger moons to form. For a stable orbit, the ratio between the moon's orbital period Ps around its primary and that of the primary around its star Pp must be less than 1/9; e.g., if a planet takes 90 days to orbit its star, the maximum stable orbit for a moon of that planet is less than 10 days. Simulations suggest that a moon with an orbital period less than about 45 to 60 days will remain safely bound to a massive giant planet or brown dwarf that orbits 1 AU from a Sun-like star. In the case of Gliese 876 b, the orbital period would have to be no greater than a week (7 days) in order for the moon to have a stable orbit. Tidal effects could also allow the moon to sustain plate tectonics, which would cause volcanic activity to regulate the moon's temperature and create a geodynamo effect giving the satellite a strong magnetic field. To support an Earth-like atmosphere for about 4.6 billion years (the age of the Earth), the moon would have to have a Mars-like density and a mass of at least 0.07 Earth masses. One way to decrease loss from sputtering is for the moon to have a strong magnetic field that can deflect stellar wind and radiation belts. 
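The Ps/Pp < 1/9 stability rule quoted above can be applied directly to Gliese 876 b's roughly 61-day orbit. The helper function is an illustrative sketch of that rule of thumb, not a dynamical simulation; the 61-day period is taken from the text.

```python
def max_stable_moon_period(planet_period_days, ratio_limit=1/9):
    """Longest moon orbital period Ps satisfying the Ps/Pp < ratio_limit rule."""
    return planet_period_days * ratio_limit

# Gliese 876 b orbits its star in about 61 days, so a stable moon's
# period comes out just under 7 days, matching the "no greater than
# a week" figure in the text.
limit = max_stable_moon_period(61)
print(round(limit, 1))  # 6.8 days
```

The worked example in the text checks out the same way: a 90-day planetary orbit gives 90/9 = 10 days as the moon's upper bound.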
Measurements by NASA's Galileo spacecraft hint that large moons can have magnetic fields; it found that Jupiter's moon Ganymede has its own magnetosphere, even though its mass is only 0.025 . See also Appearance of extrasolar planets List of exoplanets discovered before 2000 Notes References External links Gliese 876 Aquarius (constellation) Exoplanets discovered in 1998 Exoplanets detected by radial velocity Exoplanets detected by astrometry Giant planets in the habitable zone
Gliese 876 b
[ "Astronomy" ]
1,413
[ "Constellations", "Aquarius (constellation)" ]
5,671,185
https://en.wikipedia.org/wiki/National%20School%20of%20Glass
The National School of Glass in Orrefors (Swedish: Riksglasskolan) is an educational center focused on glass arts, design and entrepreneurship in the field of glass. It was located next to the Orrefors glassworks in the locality of the same name, at the center of what is known as the Kingdom of Crystal in Småland in southern Sweden. The glassworks in Orrefors closed in 2012. The school then moved to Pukeberg in Nybro, which has become one of the main remaining glassmaking centres in Sweden. While primarily focused on Swedish and Nordic students, the school also welcomes international students to its programs. History Around 1960, Orrefors glassworks started a formalized glass education. The glass-related, practical parts were taught in the factory by special personnel, while the theoretical subjects were taught one day a week at Nybro Vocational School. In 1969, the municipality of Nybro took over all responsibility for the school of glass. Until 1979, the school of glass was housed in the buildings of the Orrefors glassworks. In the fall of 1979, the municipality of Nybro inaugurated the new premises of the National School of Glass near the Orrefors glassworks. References External links The National School of Glass in Orrefors Orrefors Glass Student's Home Glassmaking schools Schools in Sweden Design schools
National School of Glass
[ "Materials_science", "Engineering" ]
278
[ "Glass engineering and science", "Glassmaking schools" ]
5,671,511
https://en.wikipedia.org/wiki/MS%200735.6%2B7421
MS 0735.6+7421 is a galaxy cluster located in the constellation Camelopardalis, approximately 2.6 billion light-years away. It is notable as the location of one of the largest central galactic black holes in the known universe, which has also apparently produced one of the most powerful active galactic nucleus eruptions discovered. In February 2020, it was reported that another similar but much more energetic AGN outburst, the Ophiuchus Supercluster eruption in the NeVe 1 galaxy, released five times the energy of MS 0735.6+7421. Black hole eruption Using data from the Chandra X-ray Observatory, scientists have deduced that an eruption has been occurring for the last 100 million years at the heart of the galaxy cluster, releasing as much energy over this time as hundreds of millions of gamma ray bursts. (The amount of energy released in a year is thus equivalent to several GRBs.) The remnants of the eruption are seen as two cavities on either side of a large central galaxy. If this outburst, with a total energy budget of more than 10^55 J, was caused by a black hole accretion event, it must have consumed nearly 600 million solar masses. Work by Brian McNamara et al. (2008) points to the striking possibility that the outburst was not the result of an accretion event, but was instead powered by the rotation of the black hole. Moreover, the scientists mentioned the possibility that the central black hole in MS 0735.6+7421 could be one of the biggest black holes inhabiting the visible universe. This speculation is supported by the fact that the central cD galaxy inside MS 0735.6+7421 possesses the largest break radius known to date. With a calculated light deficit of more than 20 billion solar luminosities and an assumed mass-to-light ratio of 3, this yields a central black hole mass well above 10 billion solar masses, provided that the break radius was caused by the merger of several black holes in the past. 
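The "nearly 600 million solar masses" figure follows from E = εMc². A back-of-envelope sketch; the ~10% radiative efficiency is a standard assumption of accretion physics, not a value stated in this article:

```python
# How much mass must a black hole accrete to release ~1e55 J,
# assuming the canonical ~10% radiative efficiency (an assumption)?

C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def accreted_mass_solar(energy_j: float, efficiency: float = 0.1) -> float:
    """Mass (in solar masses) accreted to radiate `energy_j` joules."""
    return energy_j / (efficiency * C**2) / M_SUN

print(f"{accreted_mass_solar(1e55):.2e}")  # ~5.6e8, i.e. roughly 600 million
```

With these assumed constants the result lands close to the quoted 600 million solar masses.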
In combination with the gargantuan energy outburst, it is therefore very likely that MS 0735.6+7421 hosts a supermassive black hole in its core. The cluster has a recession velocity of 64,800 ± 900 km/s (a redshift of roughly 0.216) and an apparent size of 25. Newer calculations using the spheroidal luminosity of the central galaxy and the estimation of its break radius yielded black hole masses of 15.85 billion and 51.3 billion solar masses, respectively. Brightest cluster galaxy The brightest cluster galaxy in MS 0735.6+7421 is the elliptical galaxy 4C +74.13. Also catalogued as LEDA 2760958, it is classified as a radio galaxy. With a diameter of around 400 kpc, the galaxy shows a steep-spectrum radio source. The core of 4C +74.13 has a spectral index between 325 and 1400 MHz of α = −1.54, while its outer radio lobes were found to have α < −3.1 over the same frequency range. According to studies, it is evident that the core activity has recently restarted in the form of two inner lobes. It is also known to have ongoing star formation. With its stellar core estimated to be 3.8 kiloparsecs across, 4C +74.13 may well contain an ultramassive black hole at its center. X-ray source Hot X-ray emitting gas pervades MS 0735.6+7421. Two vast cavities, each 600,000 ly in diameter, appear on opposite sides of a large galaxy at the center of the cluster. These cavities are filled with a two-sided, elongated, magnetized bubble of extremely high-energy electrons that emit radio waves. 
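Two of the quoted numbers can be reproduced with simple arithmetic. A sketch only: the non-relativistic z = v/c approximation and the (light deficit × mass-to-light ratio) estimate are simplifications, not the papers' actual methodology:

```python
C_KM_S = 299_792.458  # speed of light, km/s

# 1) Recession velocity -> approximate (non-relativistic) redshift z = v/c.
v = 64_800.0  # km/s, from the text
z = v / C_KM_S
print(round(z, 3))  # 0.216

# 2) Central black hole mass from the stellar light deficit:
#    M ~ (light deficit in solar luminosities) * (mass-to-light ratio).
light_deficit_lsun = 20e9   # >20 billion solar luminosities, from the text
mass_to_light = 3.0         # assumed ratio, as in the text
m_bh_solar = light_deficit_lsun * mass_to_light
print(f"{m_bh_solar:.0e}")  # 6e+10 solar masses, well above 10 billion
```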
See also X-ray astronomy Astrophysical X-ray source AT 2021lwx References External links Most Powerful Eruption In The Universe Discovered NASA/Marshall Space Flight Center (ScienceDaily) January 6, 2005 MS 0735.6+7421: Most Powerful Eruption in the Universe Discovered (CXO at Harvard) Hungry for More (NASA) Super-Super-massive Black Hole (Universetoday) A site for the cluster An Energetic AGN Outburst Powered by a Rapidly Spinning Supermassive Black Hole Scientists Reveal Secrets to Burping Black Hole with the Green Bank Telescope Galaxy clusters Camelopardalis X-ray astronomy Astronomical X-ray sources
MS 0735.6+7421
[ "Astronomy" ]
907
[ "Galaxy clusters", "Constellations", "Camelopardalis", "X-ray astronomy", "Astronomical X-ray sources", "Astronomical objects", "Astronomical sub-disciplines" ]
5,671,648
https://en.wikipedia.org/wiki/List%20of%20causes%20of%20death%20by%20rate
The following is a list of the causes of human deaths worldwide for different years, arranged by their associated mortality rates. In 2002, there were about 57 million deaths. In 2005, according to the World Health Organization (WHO) using the International Classification of Diseases (ICD), about 58 million people died. In 2010, according to the Institute for Health Metrics and Evaluation, 52.8 million people died. In 2016, the WHO recorded 56.7 million deaths, with the leading cause of death being cardiovascular disease, responsible for more than 17 million deaths (about 31% of the total). In 2021, there were approximately 68 million deaths worldwide, according to a WHO report. Some causes listed include deaths also counted under more specific subordinate causes, and some causes are omitted, so the percentages may sum only approximately to 100%. The causes listed are relatively immediate medical causes, but the ultimate cause of death might be described differently. For example, tobacco smoking often causes lung disease or cancer, and alcohol use disorder can cause liver failure or a motor vehicle accident. For statistics on preventable ultimate causes, see preventable causes of death. Besides frequency, other measures used to compare, consider, and monitor trends in causes of death include the disability-adjusted life year (DALY) and years of potential life lost (YPLL). By frequency Age-standardized death rate, per 100,000, by cause, in 2017, and percentage change 2007–2017. Overview table This first table gives a convenient overview of the general categories and broad causes. The leading cause is cardiovascular disease at 31.59% of all deaths. Developed vs. developing economies Top causes of death, according to the World Health Organization report for the calendar year 2001: Detailed table This table gives a more detailed and specific breakdown of the causes for the year 2017. Figures have a margin of error of about 5% on average. 
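Age-standardized rates like those cited above are produced by direct standardization: age-specific death rates are weighted by a fixed standard population so that rates are comparable across populations with different age structures. A minimal sketch; the age bands, rates, and standard-population weights below are made-up illustrative numbers, not WHO data:

```python
# Direct age standardization: weight age-specific death rates
# (per 100,000) by the share of each age band in a standard population.

def age_standardized_rate(age_specific_rates, std_population):
    """Standard-population-weighted mean of age-specific rates."""
    total = sum(std_population)
    return sum(r * p for r, p in zip(age_specific_rates, std_population)) / total

rates = [50.0, 200.0, 1500.0]       # deaths per 100k in three age bands
std_pop = [30_000, 50_000, 20_000]  # standard population in each band

print(round(age_standardized_rate(rates, std_pop), 1))  # 415.0 per 100k
```

Because the weights come from the standard population rather than the observed one, two countries with identical age-specific rates get the same standardized rate even if one has a much older population.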
By lost years Underlying causes Causes of death can be structured into immediate or primary causes of death, conditions leading to the cause of death, underlying causes, and further relevant conditions that may have contributed to the fatal outcome. According to the WHO, underlying causes are "the disease[s] or injury[ies] which initiated the train[s] of morbid events leading directly to death, or the circumstances of the accident or violence which produced the fatal injury". Malnutrition Malnutrition can be identified as an underlying cause of a shortened life. 70% of childhood deaths (age 0–4) are reportedly due to diarrheal illness, acute respiratory infection, malaria and immunizable disease. However, 56% of these childhood deaths can be attributed to the effects of malnutrition as an underlying cause. The effects of malnutrition include increased susceptibility to infection, muscle wasting, skeletal deformities and delays in neurologic development. According to the World Health Organization, malnutrition is the biggest contributor to child mortality, with 36 million deaths in 2005 related to malnutrition. Obesity and unhealthy diets Beyond undernutrition and micronutrient deficiencies, malnutrition also includes obesity, which predisposes towards several chronic diseases, including 13 different types of cancer, cardiovascular diseases, and type 2 diabetes. According to the WHO, overweight and obesity "are among the leading causes of death and disability in Europe", with estimates suggesting they cause more than 1.2 million deaths annually, corresponding to over 13% of total mortality in the region. Various types of health policy could counter the trend and reduce obesity. Diets, not only in terms of obesity but also of food composition, can have a major impact on these underlying factors, with reviews suggesting, among other things, 
that a 20-year-old male in Europe who switches to the "optimal diet" could gain a mean of ~13.7 years of life, and a 60-year-old female in the U.S. who switches to the "optimal diet" could gain a mean of ~8.0 years of life. The review found the largest gains would be made by eating more legumes, whole grains, and nuts, and less red meat and processed meat. The optimal diet also includes no consumption of sugar-sweetened beverages (moving from the "typical Western diet" of 500 g/day to 0 g/day). Pollution A review concluded that pollution was responsible for 9 million premature deaths in 2019 (one in six deaths, ¾ of them from air pollution). It concluded that little real progress against pollution can be identified. Air pollution Overall, air pollution causes the deaths of around 7 million people worldwide each year, and is the world's largest single environmental health risk, according to the WHO (2012) and the IEA (2016). The IEA notes that many of the root causes and cures can be found in the energy industry, and suggests solutions such as retiring polluting coal-fired power plants and establishing stricter standards for motor vehicles. In September 2020 the European Environment Agency reported that environmental factors such as air pollution and heatwaves contributed to around 13% of all human deaths in EU countries in 2012 (~630,000). A 2021 study using a high spatial resolution model and an updated concentration-response function found that 10.2 million global excess deaths in 2012 and 8.7 million in 2018 – or – were due to air pollution generated by fossil fuel combustion, significantly higher than earlier estimates, and with spatially subdivided mortality impacts. A 2020 study indicates that the global mean loss of life expectancy (LLE) from air pollution in 2015 was 2.9 years, substantially more than, for example, the 0.3 years from all forms of direct violence, albeit a significant fraction of the LLE is considered to be unavoidable. 
Uses of nervous system drugs According to the WHO, worldwide, about 0.5 million deaths are attributable to drug use, with more than 70% of these related to opioids and overdose the direct cause of more than 30% of those deaths. Use of various opioids accounts for many deaths worldwide, termed the opioid epidemic. Nearly 75% of the 91,799 drug overdose deaths in 2020 in the United States involved an opioid. Not all nervous system drugs are associated with risks of contributing to deaths as an underlying factor, nor are all uses of them. In some cases, potentially harmful or harmful drugs can be substituted or weaned off with the help of pharmacological alternatives – such as, potentially, NAC and modafinil in the case of cocaine dependence – whose uses are not considered to be underlying causes of death. In some cases, drugs – including caffeine – can help improve general health such as, directly and indirectly, physical fitness and mental health, either in general or in specific ranges of informed administration. Smoking Smoking is the leading cause of preventable death in the United States. It is an underlying cause of many cancers, cardiovascular diseases, stroke, and respiratory diseases. Smoking usually refers to smoking of tobacco products. E-cigarettes also pose large risks to health. The health impacts of tobacco-alternative products such as various herbs and the use of charcoal filters are often investigated less, with existing research suggesting only limited benefits over tobacco smoking. Some smokers may benefit from switching to a vaporizer as a harm reduction measure if they do not quit, though this too has little robust evidence. Frequency of use is a major factor in the level of risks and the permanence and extent of health impacts. A review found smoking and second-hand smoke to be a global underlying cause of death as large as pollution, which in that analysis was the largest major underlying factor. 
Alcohol Globally, alcohol use was the seventh leading risk factor for both deaths and DALYs in 2016. A review found that the "risk of all-cause mortality, and of cancers specifically, rises with increasing levels of consumption, and the level of consumption that minimises health loss is zero". Non-optimal ambient temperatures A study found that 9.4% of global deaths between 2000 and 2019 – ~5 million annually – can be attributed to extreme temperatures, with cold-related deaths making up the larger share and decreasing, and heat-related deaths making up ~0.91% and increasing. Incidences of heart attacks, cardiac arrests and strokes increase under such conditions. Antimicrobial resistance In a global assessment, scientists reported, based on medical records, that antibiotic resistance may have contributed to ~4.95 million (3.62–6.57) deaths in 2019, with 1.3 million directly attributed – the latter being more deaths than from, e.g., AIDS or malaria – and the toll is projected to rise substantially. Comorbidities, general health, social factors and infectious diseases Co-existing diseases can, but do not necessarily, contribute to death to various degrees in various ways. In some cases, comorbidities can be major causes with complex underlying mechanisms, and a range of comorbidities can be present at once. Pandemics and infectious diseases or epidemics can be major underlying causes of deaths. In a small study of 26 decedents, COVID-19 and infection-related disease were "major contributors" to the patients' deaths. Such deaths are sometimes evaluated via excess deaths per capita – the COVID-19 pandemic's deaths between January 1, 2020, and December 31, 2021, are estimated at ~18.2 million. Research could help distinguish the proportion directly caused by COVID-19 from that caused by indirect consequences of the pandemic. 
Mental health issues and related issues such as economic conditions and/or various uses of nervous system drugs can contribute to causes such as suicide or deaths from risky behavior. Loneliness or insufficient social relationships is also a major underlying factor, which may be comparable to smoking and, according to one meta-analysis of 148 studies, "exceeds many well-known risk factors for mortality (e.g., obesity, physical inactivity)". Injuries and violence are "the leading causes of death among children, adolescents, and young adults in the US", with underlying risk factors including "detrimental community, family, or individual circumstances" that increase the likelihood of violence. Types of preventive measures may include support of "healthy development of individuals, families, schools, and communities, and build[ing] capacity for positive relationships and interactions". Lifestyle factors – including physical inactivity, tobacco smoking, excessive alcohol use and unhealthy eating – and/or general health – including fitness beyond healthy diet and non-obesity – can be underlying contributors to death. For example, in a sample of U.S. adults, ~9.9% of deaths of adults aged 40 to 69 years and ~7.8% of deaths of adults aged 70 years or older were attributed to inadequate levels of physical activity. Aging Traditionally, aging is not considered a cause of death. It is believed that there is always a more direct cause, usually one of many age-related diseases. It is estimated that, as a root cause, the aging process underlies two-thirds of all deaths in the world (approximately 100,000 people per day in 2007). In highly developed countries this proportion can reach 90%. There are calls to grant aging the official status of a disease and to treat it directly (such as via dietary changes and senolytics). Economics and policies Economics and policies may be factors underlying deaths at a more fundamental level. 
For example, economics may result in certain therapies or screenings being expensive rather than produced at an affordable price, or in medication costs being too high for an individual to afford even when they are made available at low cost; poverty can affect nutrition; marketing can increase the consumption of unhealthy products; incentives and regulations for health and healthy environments may be weak or missing; and occupational safety and humans' environment can suffer due to economic pressures for low production costs and high consumption. Health policy and health systems can have impacts on deaths and thereby may also be a factor in deaths, as can, for example, education policy (e.g. health illiteracy), climate policy (e.g. future water scarcity impacts) and transportation policy (e.g. motor vehicle accidents, pollution and physical activity), as well as action or inaction on policy-influenceable physical inactivity. 'Recent financial difficulties' appear to be a factor in mortality. One study estimated how many people die from poverty in the U.S. Low socioeconomic status, as determined by economics, appears to reduce life expectancy. The current systemic incentive for maximized profits may inhibit global occupational health and safety. The negative externality of environmental damage can have substantial impacts on global healthcare. Underlying factors by cause Underlying factors can also be analyzed per cause of (or major contributor to) death, and can be distinguished into "preventable" factors and other factors. For example, various Global Burden of Disease Studies investigate such factors and quantify recent developments – one such systematic analysis examined the (non)progress on cancer and its causes during the 2010–19 decade, indicating that in 2019 ~44% of all cancer deaths – ~4.5 million deaths, or ~105 million lost disability-adjusted life years – were due to known, clearly preventable risk factors, led by smoking, alcohol use and high BMI. 
Determination and tracking of underlying factors Electronic health records, death certificates and post-mortem analyses (such as post-mortem computed tomography and other pathology) can be and often are used to investigate underlying causes of deaths, such as for mortality statistics, relevant to . Improvements to this reporting, where e.g. certain diseases are often under-reported or underlying cause-of-death (COD) statements are incorrect, could ultimately improve public health. One reason for this is that, from "a public health point of view, preventing this first disease or injury will result in the greatest health gain". United States By age group (U.S.) By occupation (U.S.) With an average of 123.6 deaths per 100,000 from 2003 through 2010, the most dangerous occupation in the United States is the cell tower construction industry. See also Capital punishment by country Epidemiology of suicide List of countries by intentional homicide rate List of killings by law enforcement officers by country List of sovereign states and dependent territories by mortality rate List of terrorist incidents List of unusual deaths Pollutants Preventable causes of death Medical error References External links Deaths: Leading Causes for 2009 by the Centers for Disease Control and Prevention Leading causes of death in the U.S. for 2015–2020 in the Journal of the American Medical Association Health effects of alcohol Death-related lists Demography Substance-related disorders
List of causes of death by rate
[ "Environmental_science" ]
3,016
[ "Demography", "Environmental social science" ]
5,671,882
https://en.wikipedia.org/wiki/Abell%203266
Abell 3266 is a galaxy cluster in the southern sky. It is part of the Horologium-Reticulum Supercluster. The galaxy cluster is one of the largest in the southern sky, and one of the largest mass concentrations in the nearby universe. The Department of Physics at the University of Maryland, Baltimore County discovered that a large mass of gas is hurtling through the cluster at a speed of 750 km/s (466 miles per second). The mass is billions of solar masses and approximately 3 million light-years in diameter, and is the largest of its kind discovered as of June 2006. See also Abell catalogue List of Abell clusters X-ray astronomy References External links Abell 3266 on SIMBAD Horologium Supercluster Galaxy clusters 3266 Abell richness class 2 Reticulum
Abell 3266
[ "Astronomy" ]
172
[ "Galaxy clusters", "Reticulum", "Astronomical objects", "Constellations" ]
5,671,899
https://en.wikipedia.org/wiki/Linux%20User%20and%20Developer
Linux User & Developer was a monthly magazine about Linux and related free and open-source software published by Future. It was a UK magazine written specifically for Linux professionals and IT decision makers. It was available worldwide in newsagents or via subscription, and it could be downloaded via Zinio or Apple's Newsstand. History and profile Linux User & Developer was first published in September 1999. In August 2014 its sister magazine, RasPi, was launched. The magazine was acquired by Future plc (owner of the competing title Linux Format) as part of its acquisition of Imagine Publishing in 2016. The last issue of Linux User & Developer was published on 20 September 2018 (#196). All previous subscribers received issues of Linux Format for the remaining issues of their subscription. Staff Chris Thornett - Editor References External links Official homepage Future plc Defunct computer magazines published in the United Kingdom Linux magazines Magazines established in 1999 Magazines disestablished in 2018 Monthly magazines published in the United Kingdom 1999 establishments in the United Kingdom
Linux User and Developer
[ "Technology" ]
203
[ "Computing stubs", "Computer magazine stubs" ]
5,671,944
https://en.wikipedia.org/wiki/Arp%20220
Arp 220 is the result of a collision between two galaxies which are now in the process of merging. It is the 220th object in Halton Arp's Atlas of Peculiar Galaxies. Features Arp 220 is the closest ultraluminous infrared galaxy (ULIRG) to Earth, at 250 million light-years away. Its energy output was discovered by IRAS to be dominated by the far-infrared part of the spectrum. It is often regarded as the prototypical ULIRG and has been the subject of much study as a result. Most of its energy output is thought to be the result of a massive burst of star formation, or starburst, probably triggered by the merging of two smaller galaxies. Hubble Space Telescope observations of Arp 220 in 2002 and 1997, taken in visible light with the ACS and in infrared light with NICMOS, revealed more than 200 huge star clusters in the central part of the galaxy. The most massive of these clusters contains enough material to equal about 10 million suns. X-ray observations by the Chandra and XMM-Newton satellites have shown that Arp 220 probably includes an active galactic nucleus (AGN) at its core, which raises interesting questions about the link between galaxy mergers and AGN, since it is believed that galactic mergers often trigger starbursts and may also give rise to the supermassive black holes that appear to power AGN. Luminous far-infrared objects like Arp 220 have been found in surprisingly large numbers by sky surveys at submillimetre wavelengths using instruments such as the Submillimetre Common-User Bolometer Array (SCUBA) at the James Clerk Maxwell Telescope (JCMT). Arp 220 and other relatively local ULIRGs are being studied as equivalents of this kind of object. Astronomers from the Arecibo Observatory have detected organic molecules in the galaxy. Arp 220 contains at least two bright maser sources: an OH megamaser and a water maser. In October 2011, astronomers spotted a record-breaking seven supernovae at the same time in Arp 220. 
The merging of the two galaxies started around 700 million years ago. References External links "Hubble Sees Star Birth Gone Wild" (SpaceDaily) Jun 16, 2006 Spiral galaxies Galaxy mergers Starburst galaxies IC objects 09913 55497 220 18660504 20111002 Luminous infrared galaxies Serpens
Arp 220
[ "Astronomy" ]
493
[ "Constellations", "Serpens" ]
5,672,281
https://en.wikipedia.org/wiki/Calaveras%20Lake%20%28Texas%29
Calaveras Lake is a reservoir on Calaveras Creek, located 20 miles (32 kilometers) southeast of Downtown San Antonio, Texas, US. The reservoir was formed in 1969 by the construction of a dam to provide a cooling pond for a series of power plants, called the Calaveras Power Station, to supply additional electricity to the city of San Antonio. The dam and lake are managed by CPS Energy of San Antonio. Together with the smaller Victor Braunig Lake, Calaveras Lake was one of the first projects in the nation to use treated wastewater for power plant cooling. The reservoir is partly filled with wastewater that has undergone both primary and secondary treatment at a San Antonio Water System treatment plant. Calaveras Lake also serves as a venue for recreation, including fishing and boating. Sailboats are prohibited on the lake. Fish and plant life Calaveras Lake has been stocked with many species of fish for recreational fishing. Fish present in Calaveras Lake include red drum, hybrid striped bass, catfish, and largemouth bass. Recreational uses Thousand Trails Management Services operates the 147-acre (57 ha) public facility under contract with CPS Energy at the lake. The lake features facilities for camping, picnicking, fishing, boating, and hiking. In popular culture Calaveras Lake is the setting of one of the culminating scenes in the 1996 film Courage Under Fire, in which Matt Damon's character reveals his knowledge of the truth regarding the death of Captain Karen Walden, the investigation of which constitutes the subject of the film up to that point. See also List of lakes in Texas References External links Calaveras Lake - Texas Parks & Wildlife Calaveras Lake power plant complex pollution: monthly, annual, and daily (near real-time) reports from the Texas Commission on Environmental Quality (TCEQ) Calaveras Geography of San Antonio Protected areas of Bexar County, Texas Bodies of water of Bexar County, Texas Cooling ponds CPS Energy
Calaveras Lake (Texas)
[ "Chemistry", "Environmental_science" ]
400
[ "Cooling ponds", "Water pollution" ]
5,672,289
https://en.wikipedia.org/wiki/Diethylzinc
Diethylzinc (C2H5)2Zn, or DEZ, is a highly pyrophoric and reactive organozinc compound consisting of a zinc center bound to two ethyl groups. This colourless liquid is an important reagent in organic chemistry. It is available commercially as a solution in hexanes, heptane, or toluene, or as a pure liquid. Synthesis Edward Frankland first reported the compound in 1848, preparing it from zinc and ethyl iodide; it was the first organozinc compound discovered. He later improved the synthesis by using diethyl mercury as the starting material. The contemporary synthesis consists of the reaction of a 1:1 mixture of ethyl iodide and ethyl bromide with a zinc-copper couple, a source of reactive zinc. Structure The compound crystallizes in a tetragonal body-centered unit cell of space group symmetry I41md. In the solid state, diethylzinc shows nearly linear Zn centres. The Zn-C bonds measure 194.8(5) pm, while the C-Zn-C angle is slightly bent at 176.2(4)°. The gas-phase structure shows a very similar Zn-C distance (195.0(2) pm). Uses Despite its highly pyrophoric nature, diethylzinc is an important chemical reagent. It is used in organic synthesis as a source of the ethyl carbanion in addition reactions to carbonyl groups, for example in the asymmetric addition of an ethyl group to benzaldehyde and imines. Additionally, it is commonly used in combination with diiodomethane as a Simmons-Smith reagent to convert alkenes into cyclopropyl groups. It is less nucleophilic than related alkyllithium and Grignard reagents, so it may be used when a "softer" nucleophile is needed. It is also used extensively in materials science as a zinc source in the synthesis of nanoparticles, particularly in the formation of the zinc sulfide shell of core/shell-type quantum dots. In polymer chemistry, it can be used as part of the catalyst for a chain shuttling polymerization reaction, whereby it participates in living polymerization. Diethylzinc's uses are not limited to chemistry. 
Because of its high reactivity toward air, it was used in small quantities as a hypergolic or "self-igniting" liquid rocket fuel: it ignites on contact with the oxidizer, so the rocket motor need only contain a pump, without a spark source for ignition. Diethylzinc was also investigated by the United States Library of Congress as a potential means of mass deacidification of books printed on wood pulp paper. Diethylzinc vapour would, in theory, neutralize acid residues in the paper, leaving slightly alkaline zinc oxide residues. Although initial results were promising, the project was abandoned. A variety of adverse results prevented the method's adoption. Most infamously, the final prototype suffered damage in a series of diethylzinc explosions caused by trace amounts of water vapor in the chamber. This led the authors of the study to humorously comment: In microelectronics, diethylzinc is used as a doping agent. For corrosion protection in nuclear reactors of the light water reactor design, depleted zinc oxide is produced by first passing diethylzinc through an enrichment centrifuge. The pyrophoricity of diethylzinc can be used to test the inert atmosphere inside a glovebox. An oxygen concentration of only a few parts per million will cause a bottle of diethylzinc to fume when opened. Safety Diethylzinc may explode when mixed with water and can spontaneously ignite upon contact with air. It should therefore be handled using air-free techniques. References External links Demonstration of the ignition of diethylzinc in air Video - University of Nottingham Reagents for organic chemistry Organozinc compounds Ethylating agents
Diethylzinc
[ "Chemistry" ]
846
[ "Reagents for organic chemistry" ]
5,672,369
https://en.wikipedia.org/wiki/Global%20Environment%20for%20Network%20Innovations
The Global Environment for Network Innovations (GENI) is a facility concept being explored by the United States computing community with support from the National Science Foundation. The goal of GENI is to enhance experimental research in computer networking and distributed systems, and to accelerate the transition of this research into products and services that will improve the economic competitiveness of the United States. GENI planning efforts are organized around several focus areas, including facility architecture, the backbone network, distributed services, wireless/mobile/sensor subnetworks, and research coordination amongst these. See also Internet2 Future Internet AKARI Project in Japan References External links GENI home page NSF GENI Initiative overview. NSF GENI Project Office solicitation. Foreign, independent presentation on GENI. A news article describing GENI plans. A news article referring to GENI. Another news article regarding GENI. Computer network organizations
Global Environment for Network Innovations
[ "Technology" ]
179
[ "Computing stubs", "Computer network stubs" ]
5,672,442
https://en.wikipedia.org/wiki/International%20Virtual%20Observatory%20Alliance
The International Virtual Observatory Alliance (IVOA) is a worldwide scientific organisation formed in June 2002. Its mission is to facilitate the international coordination and collaboration necessary for enabling global and integrated access to data gathered by astronomical observatories. An information system allowing such access is called a virtual observatory. The organisation's main task so far has been defining standards to ensure interoperability of the different virtual observatory projects already existing or in development. The IVOA now comprises 19 VO projects from Argentina, Armenia, Australia, Brazil, Canada, China, Europe, France, Germany, Hungary, India, Italy, Japan, Korea, Russia, Spain, Ukraine, the United Kingdom, and the United States. Membership is open to other national and international projects according to the IVOA Guidelines for Participation. Senior representatives from each national VObs project form the IVOA Executive Committee. A chair is chosen from among the representatives and serves a 1.5-year term, preceded by a 1.5-year term as deputy chair. The Executive Committee meets 3-4 times a year to discuss goals, priorities, and strategies. 
Members IVOA currently brings together nineteen member organisations, both national and international: Argentina Virtual Observatory Armenian Virtual Observatory AstroGrid UK Australian Virtual Observatory Brazilian Virtual Observatory Chinese Virtual Observatory Chilean Virtual Observatory - ChiVO Canadian Virtual Observatory European Space Agency European Virtual Observatory - Euro-VO German Astrophysical Virtual Observatory Hungarian Virtual Observatory Italian Virtual Observatory Japanese Virtual Observatory National Virtual Observatory (USA) Observatoire Virtuel France Russian Virtual Observatory Spanish Virtual Observatory Ukrainian Virtual Observatory Virtual Observatory India Working Groups The tasks of the IVOA are distributed over different working groups: Applications The IVOA Applications Working Group is concerned primarily with the software tools that astronomers use to access VO data and services for doing astronomy. The role of the Applications Working Group is to: Provide a forum for announcement and discussion of VO Applications Provide feedback to IVOA on the implementation of interoperability standards in VO applications Identify missing or desirable technical capabilities for VO applications Identify missing or desirable components in terms of scientific usability Propose and develop standards specific to VO Astronomy-user-Applications Data Access Layer The task of the Data Access Layer (DAL) working group is to define and formulate VO standards for remote data access. Client data analysis software will use these services to access data via the VO framework; data providers will implement these services to publish data to the VO. The DAL working group has defined various standards for accessing data sets, in particular images (Simple Image Access Protocol, SIAP), spectra (Simple Spectral Access Protocol, SSAP) and source catalogues (Simple Cone Search, SCS). 
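The Simple Cone Search protocol described above is simple enough to sketch: a client issues an HTTP GET with three required parameters, RA, DEC and SR (all in decimal degrees), and the service replies with a VOTable document. A minimal Python sketch of building such a request (the service endpoint below is hypothetical; real endpoints are listed in VO registries):

```python
from urllib.parse import urlencode

def cone_search_url(base_url: str, ra: float, dec: float, radius: float) -> str:
    """Build a Simple Cone Search (SCS) request URL.

    RA, DEC and SR are the three required SCS query parameters, all in
    decimal degrees; the service responds with a VOTable document.
    """
    return base_url + "?" + urlencode({"RA": ra, "DEC": dec, "SR": radius})

# Hypothetical endpoint for illustration only.
url = cone_search_url("https://example.org/scs", ra=10.68, dec=41.27, radius=0.5)
```

Fetching `url` with any HTTP client would then return the VOTable of sources within the cone.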
Data Modelling The role of the Data Modelling Working Group is to provide a framework for the description of metadata attached to observed or simulated data. The activity of the Data Model WG focuses on logical relationships between these metadata, examines how an astronomer wants to retrieve, process and interpret astronomical data, and provides an architecture to handle them. What is defined in this WG can then be re-used in the protocols defined by the DAL WG or in VO-aware applications. Grid and Web Services The aim of the Grid and Web Services (GWS) Working Group is to define the use of Grid technologies and web services within the VO context and to investigate, specify, and implement required standards in this area. This group was formed from a merger of the Web Services group and the Grid group, as decided at the IVOA Executive meeting held during the IAU General Assembly in 2003. Resource Registry The Resource Registry Working Group defines the structure and interface to an IVOA Registry. Such a registry “… will allow an astronomer to be able to locate, get details of, and make use of, any resource located anywhere in the IVO space, i.e. in any Virtual Observatory. The IVOA will define the protocols and standards whereby different registry services are able to interoperate and thereby realise this goal.” Semantics The Semantics Working Group will explore technology in the area of semantics with the aim of producing new standards that aid the interoperability of VO systems. The Semantics Working Group is concerned with the meaning or the interpretation of words, sentences, or other language forms in the context of astronomy. This includes standard descriptions of astrophysical objects, data types, concepts, events, or of any other phenomena in astronomy. The WG covers the study of relationships between words, symbols and concepts, as well as the meaning of such representations (ontology). 
The WG covers use of natural language in astronomy, including queries, translations, and internationalization of interfaces. VO Query Language The VO Query Language (VOQL) Working Group will be in charge of defining a universal Query Language to be used by applications accessing distributed data within the Virtual Observatory framework. VOTable The VOTable Working Group is in charge of the VOTable format, which is an XML standard for the interchange of data represented as a set of tables. In this context, a table is an unordered set of rows, each of a uniform format, as specified in the table metadata. Each row in a table is a sequence of table cells, and each of these contains either a primitive data type, or an array of such primitives. VOTable is derived from the Astrores format, itself modelled on the FITS Table format; VOTable was designed to be closer to the FITS Binary Table format. Theory Interest Group During the IVOA executive meeting of January 2004 in Garching, Germany, the IVOA Theory Interest Group (TIG) was formed with the goal of ensuring that theoretical data and services are taken into account in the IVOA standards process. By its charter, the IVOA Theory Interest Group intends to: Provide a forum for discussing theory specific issues in a VO context. Contribute to other IVOA working groups to ensure that theory specific requirements are included. Incorporate standard approaches defined in these groups when designing and implementing services on theoretical archives. Define standard services relevant for theoretical archives. Promote development of services for comparing theoretical results to observations and vice versa. Define relevant milestones and assign specific tasks to interested parties. References External links The International Virtual Observatory Alliance Scientific organizations established in 2002 2002 establishments in Germany Astronomical observatories
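The VOTable layout described above, FIELD metadata followed by rows of table cells, can be illustrated with a short sketch. The document below omits the XML namespace and most of the metadata that real VOTables carry, keeping only the structural skeleton:

```python
import xml.etree.ElementTree as ET

# A minimal VOTable-style document: FIELD elements describe the columns,
# and each TR row holds one TD cell per column.  Real VOTables carry an
# XML namespace and much richer metadata; this sketch omits them.
votable = """<VOTABLE><RESOURCE><TABLE>
  <FIELD name="ra" datatype="double"/>
  <FIELD name="dec" datatype="double"/>
  <DATA><TABLEDATA>
    <TR><TD>10.68</TD><TD>41.27</TD></TR>
  </TABLEDATA></DATA>
</TABLE></RESOURCE></VOTABLE>"""

root = ET.fromstring(votable)
names = [f.get("name") for f in root.iter("FIELD")]          # column names
rows = [[td.text for td in tr] for tr in root.iter("TR")]    # cell values
# names == ['ra', 'dec'], rows == [['10.68', '41.27']]
```

The separation of column metadata (FIELD) from the data block is what lets clients interpret each cell without any out-of-band information.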
International Virtual Observatory Alliance
[ "Astronomy" ]
1,257
[ "Astronomical observatories", "Astronomy organizations" ]
5,672,534
https://en.wikipedia.org/wiki/Turboexpander
A turboexpander, also referred to as a turbo-expander or an expansion turbine, is a centrifugal or axial-flow turbine, through which a high-pressure gas is expanded to produce work that is often used to drive a compressor or generator. Because work is extracted from the expanding high-pressure gas, the expansion is approximated by an isentropic process (i.e., a constant-entropy process), and the low-pressure exhaust gas from the turbine is at a very low temperature, −150 °C or less, depending upon the operating pressure and gas properties. Partial liquefaction of the expanded gas is not uncommon. Turboexpanders are widely used as sources of refrigeration in industrial processes such as the extraction of ethane and natural-gas liquids (NGLs) from natural gas, the liquefaction of gases (such as oxygen, nitrogen, helium, argon and krypton) and other low-temperature processes. Turboexpanders currently in operation range in size from about 750 W to about 7.5 MW (1 hp to about 10,000 hp). Applications Although turboexpanders are commonly used in low-temperature processes, they are used in many other applications. This section discusses one of the low-temperature processes, as well as some of the other applications. Extracting hydrocarbon liquids from natural gas Raw natural gas consists primarily of methane (CH4), the shortest and lightest hydrocarbon molecule, along with various amounts of heavier hydrocarbon gases such as ethane (C2H6), propane (C3H8), normal butane (n-C4H10), isobutane (i-C4H10), pentanes and even higher-molecular-mass hydrocarbons. The raw gas also contains various amounts of acid gases such as carbon dioxide (CO2), hydrogen sulfide (H2S) and mercaptans such as methanethiol (CH3SH) and ethanethiol (C2H5SH). When processed into finished by-products (see Natural-gas processing), these heavier hydrocarbons are collectively referred to as NGL (natural-gas liquids). 
The extraction of the NGL often involves a turboexpander and a low-temperature distillation column (called a demethanizer) as shown in the figure. The inlet gas to the demethanizer is first cooled to about −51 °C in a heat exchanger (referred to as a cold box), which partially condenses the inlet gas. The resultant gas–liquid mixture is then separated into a gas stream and a liquid stream. The liquid stream from the gas–liquid separator flows through a valve and undergoes a throttling expansion from an absolute pressure of 62 bar to 21 bar (6.2 to 2.1 MPa), which is an isenthalpic process (i.e., a constant-enthalpy process) that results in lowering the temperature of the stream from about −51 °C to about −81 °C as the stream enters the demethanizer. The gas stream from the gas–liquid separator enters the turboexpander, where it undergoes an isentropic expansion from an absolute pressure of 62 bar to 21 bar (6.2 to 2.1 MPa) that lowers the gas stream temperature from about −51 °C to about −91 °C as it enters the demethanizer to serve as distillation reflux. Liquid from the top tray of the demethanizer (at about −90 °C) is routed through the cold box, where it is warmed to about 0 °C as it cools the inlet gas, and is then returned to the lower section of the demethanizer. Another liquid stream from the lower section of the demethanizer (at about 2 °C) is routed through the cold box and returned to the demethanizer at about 12 °C. In effect, the inlet gas provides the heat required to "reboil" the bottom of the demethanizer, and the turboexpander removes the heat required to provide reflux in the top of the demethanizer. The overhead gas product from the demethanizer at about −90 °C is processed natural gas that is of suitable quality for distribution to end-use consumers by pipeline. It is routed through the cold box, where it is warmed as it cools the inlet gas. 
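The temperature drop across the turboexpander quoted above (62 bar and −51 °C in, 21 bar out) can be estimated from the ideal-gas isentropic relation T2/T1 = (P2/P1)^((γ−1)/γ), scaled by an isentropic efficiency because real machines recover only part of the ideal enthalpy drop. A sketch, assuming γ ≈ 1.3 for a methane-rich gas and an illustrative efficiency of 0.82 (both are assumptions, not figures from the text):

```python
def expander_outlet_temp(t_in_k: float, p_in: float, p_out: float,
                         gamma: float = 1.3, eta: float = 0.82) -> float:
    """Estimate the turboexpander outlet temperature in kelvin.

    Uses the ideal-gas isentropic relation T2s/T1 = (P2/P1)**((gamma-1)/gamma),
    then scales the ideal temperature drop by an isentropic efficiency eta.
    gamma = 1.3 and eta = 0.82 are illustrative assumptions for a
    methane-rich gas, not data from the article.
    """
    t_isentropic = t_in_k * (p_out / p_in) ** ((gamma - 1.0) / gamma)
    return t_in_k - eta * (t_in_k - t_isentropic)

# Article conditions: 62 bar and -51 C (about 222 K) expanded to 21 bar.
t_out = expander_outlet_temp(222.0, 62.0, 21.0)  # roughly 182 K (about -91 C)
```

With these assumed values the estimate reproduces the roughly 40 °C drop described in the text; the isenthalpic throttling of the liquid stream, by contrast, gives a much smaller temperature drop for the same pressure ratio.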
It is then compressed in the gas compressor driven by the turboexpander and further compressed in a second-stage gas compressor driven by an electric motor before entering the distribution pipeline. The bottom product from the demethanizer is also warmed in the cold box, as it cools the inlet gas, before it leaves the system as NGL. The operating conditions of an offshore gas conditioning turbo-expander/recompressor are as follows: Power generation The figure depicts an electric power-generation system that uses a heat source, a cooling medium (air, water or other), a circulating working fluid and a turboexpander. The system can accommodate a wide variety of heat sources, such as geothermal hot water, exhaust gas from internal combustion engines burning a variety of fuels (natural gas, landfill gas, diesel oil, or fuel oil), and a variety of waste heat sources (in the form of either gas or liquid). The circulating working fluid (usually an organic compound such as R-134a) is pumped to a high pressure and then vaporized in the evaporator by heat exchange with the available heat source. The resulting high-pressure vapor flows to the turboexpander, where it undergoes an isentropic expansion and exits as a vapor–liquid mixture, which is then condensed into a liquid by heat exchange with the available cooling medium. The condensed liquid is pumped back to the evaporator to complete the cycle. The system in the figure implements a Rankine cycle as it is used in fossil-fuel power plants, where water is the working fluid and the heat source is derived from the combustion of natural gas, fuel oil or coal used to generate high-pressure steam. The high-pressure steam then undergoes an isentropic expansion in a conventional steam turbine. The steam turbine exhaust steam is next condensed into liquid water, which is then pumped back to the steam generator to complete the cycle. 
When an organic working fluid such as R-134a is used in the Rankine cycle, the cycle is sometimes referred to as an organic Rankine cycle (ORC). Refrigeration system A refrigeration system utilizes a compressor, a turboexpander and an electric motor. Depending on the operating conditions, the turboexpander reduces the load on the electric motor by 6–15% compared to a conventional vapor-compression refrigeration system that uses a throttling expansion valve rather than a turboexpander. Basically, this can be seen as a form of turbo compounding. The system employs a high-pressure refrigerant (i.e., one with a low normal boiling point) such as: chlorodifluoromethane (CHClF2) known as R-22, with a normal boiling point of −47 °C; 1,1,1,2-tetrafluoroethane (C2H2F4) known as R-134a, with a normal boiling point of −26 °C. As shown in the figure, refrigerant vapor is compressed to a higher pressure, resulting in a higher temperature as well. The hot, compressed vapor is then condensed into a liquid. The condenser is where heat is expelled from the circulating refrigerant and is carried away by whatever cooling medium is used in the condenser (air, water, etc.). The refrigerant liquid flows through the turboexpander, where it is vaporized, and the vapor undergoes an isentropic expansion, which results in a low-temperature mixture of vapor and liquid. The vapor–liquid mixture is then routed through the evaporator, where it is vaporized by heat absorbed from the space being cooled. The vaporized refrigerant flows to the compressor inlet to complete the cycle. In the case where the working fluid remains gaseous into the heat exchangers without undergoing phase changes, this cycle is also referred to as reverse Brayton cycle or "refrigerating Brayton cycle". Power recovery in fluid catalytic cracker The combustion flue gas from the catalyst regenerator of a fluid catalytic cracker is at a temperature of about 715 °C and at a pressure of about 2.4 barg (240 kPa gauge). 
Its gaseous components are mostly carbon monoxide (CO), carbon dioxide (CO2) and nitrogen (N2). Although the flue gas has been through two stages of cyclones (located within the regenerator) to remove entrained catalyst fines, it still contains some residual catalyst fines. The figure depicts how power is recovered and utilized by routing the regenerator flue gas through a turboexpander. After the flue gas exits the regenerator, it is routed through a secondary catalyst separator containing swirl tubes designed to remove 70–90% of the residual catalyst fines. This is required to prevent erosion damage to the turboexpander. As shown in the figure, expansion of the flue gas through a turboexpander provides sufficient power to drive the regenerator's combustion air compressor. The electrical motor-generator in the power-recovery system can consume or produce electrical power. If the expansion of the flue gas does not provide enough power to drive the air compressor, the electric motor-generator provides the needed additional power. If the flue gas expansion provides more power than needed to drive the air compressor, then the electric motor-generator converts the excess power into electric power and exports it to the refinery's electrical system. The steam turbine is used to drive the regenerator's combustion air compressor during start-ups of the fluid catalytic cracker until there is sufficient combustion flue gas to take over that task. The expanded flue gas is then routed through a steam-generating boiler (referred to as a CO boiler), where the carbon monoxide in the flue gas is burned as fuel to provide steam for use in the refinery. The flue gas from the CO boiler is processed through an electrostatic precipitator (ESP) to remove residual particulate matter. The ESP removes particulates in the size range of 2 to 20 micrometers from the flue gas. 
History The possible use of an expansion machine for isentropically creating low temperatures was suggested in 1857 by Carl Wilhelm Siemens (Siemens cycle), a German-born engineer. About three decades later, in 1885, Ernest Solvay of Belgium attempted to use a reciprocating expander machine, but could not attain any temperatures lower than −98 °C because of problems with lubrication of the machine at such temperatures. In 1902, Georges Claude, a French engineer, successfully used a reciprocating expansion machine to liquefy air. He used a degreased, burnt leather packing as a piston seal without any lubrication. With an air pressure of only 40 bar (4 MPa), Claude achieved an almost isentropic expansion resulting in a lower temperature than had previously been possible. The first turboexpanders seem to have been designed in about 1934 or 1935 by Guido Zerkowitz, an Italian engineer working for the German firm of Linde AG. In 1939, the Russian physicist Pyotr Kapitsa perfected the design of centrifugal turboexpanders. His first practical prototype was made of Monel metal, had an outside diameter of only 8 cm (3.1 in), operated at 40,000 revolutions per minute and expanded 1,000 cubic metres of air per hour. It used a water pump as a brake and had an efficiency of 79–83%. Most turboexpanders in industrial use since then have been based on Kapitsa's design, and centrifugal turboexpanders have taken over almost 100% of the industrial gas liquefaction and low-temperature process requirements. The availability of liquid oxygen revolutionized the production of steel using the basic oxygen steelmaking process. In 1978, Pyotr Kapitsa was awarded the Nobel Prize in Physics for his body of work in the area of low-temperature physics. In 1983, San Diego Gas and Electric was among the first to install a turboexpander in a natural-gas letdown station for energy recovery. Types Turboexpanders can be classified by loading device or bearings. 
Three main loading devices used in turboexpanders are centrifugal compressors, electrical generators or hydraulic brakes. With centrifugal compressors and electrical generators the shaft power from the turboexpander is recouped either to recompress the process gas or to generate electrical energy, lowering utility bills. Hydraulic brakes are used when the turboexpander is very small and harvesting the shaft power is not economically justifiable. Bearings used are either oil bearings or magnetic bearings. See also Air separation Dry gas seal Flash evaporation Gas compressor Joule-Thomson effect Liquefaction of gases Rankine cycle Steam turbine Vapor-compression refrigeration Hydrogen turboexpander-generator References External links Use of Expansion Turbines in Natural Gas Pressure Reduction Stations Turbo Lab’s Turbomachinery & Pump Symposia Mechanical engineering Turbines Industrial gases Gas technologies Turbo generators
Turboexpander
[ "Physics", "Chemistry", "Engineering" ]
2,820
[ "Applied and interdisciplinary physics", "Turbomachinery", "Turbines", "Industrial gases", "Mechanical engineering", "Chemical process engineering" ]
5,672,610
https://en.wikipedia.org/wiki/Victor%20Braunig%20Lake
Victor Braunig Lake, formerly known as East Lake, is a reservoir on Calaveras Creek and Chupaderas Creek 17 miles (27 kilometres) south of Downtown San Antonio, Texas, USA. The reservoir was formed in 1962 by the construction of a dam to provide a cooling pond for a power plant supplying additional electricity to the city of San Antonio. Victor Braunig (1890-1982) joined the San Antonio City Public Service Board, the predecessor of CPS Energy, in 1910 and became its general manager in 1949. The dam and lake are managed by CPS Energy of San Antonio. Together with Calaveras Lake, Braunig Lake was one of the first projects in the nation to use treated wastewater for power plant cooling. The reservoir is partly filled with wastewater that has undergone both primary and secondary treatment at a San Antonio Water System treatment plant. Braunig Lake also serves as a venue for recreation, including fishing and boating. Fish and plant life Braunig Lake has been stocked with species of fish intended to improve the utility of the reservoir for recreational fishing. Fish present in Braunig Lake include red drum, hybrid striped bass, catfish, and largemouth bass. Recreational uses Thousand Trails Management Services operates a public park facility at the lake. The park features facilities for camping, picnicking, fishing, boating, and hiking. Swimming is prohibited. See also List of lakes in Texas References Powering a City: How Energy and Big Dreams Transformed San Antonio, Catherine Nixon Cooke, Trinity University Press, Nov 30, 2017 External links Braunig Lake - Texas Parks & Wildlife Braunig Lake Park Braunig, Victor Geography of San Antonio Protected areas of Bexar County, Texas Tourist attractions in San Antonio Bodies of water of Bexar County, Texas Cooling ponds
Victor Braunig Lake
[ "Chemistry", "Environmental_science" ]
358
[ "Cooling ponds", "Water pollution" ]
5,672,923
https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic%20homolog%201
Mothers against decapentaplegic homolog 1, also known as SMAD family member 1 or SMAD1, is a protein that in humans is encoded by the SMAD1 gene. Nomenclature SMAD1 belongs to the SMAD family of proteins, which are similar to the gene products of the Drosophila gene 'mothers against decapentaplegic' (Mad) and the C. elegans gene Sma. The name is a combination of the two, and follows a tradition of such unusual naming within the gene research community. It was found that a mutation in the Drosophila gene Mad in the mother repressed the gene decapentaplegic in the embryo. Mad mutations can be placed in an allelic series based on the relative severity of the maternal effect enhancement of weak dpp alleles, thus explaining the name Mothers against dpp. Function SMAD proteins are signal transducers and transcriptional modulators that mediate multiple signaling pathways. This protein mediates the signals of the bone morphogenetic proteins (BMPs), which are involved in a range of biological activities including cell growth, apoptosis, morphogenesis, development and immune responses. In response to BMP ligands, this protein can be phosphorylated and activated by the BMP receptor kinase. The phosphorylated form of this protein forms a complex with SMAD4, which is important for its function in transcription regulation. This protein is a target for SMAD-specific E3 ubiquitin ligases, such as SMURF1 and SMURF2, and undergoes ubiquitination and proteasome-mediated degradation. Alternatively spliced transcript variants encoding the same protein have been observed. SMAD1 is a receptor-regulated SMAD (R-SMAD) and is activated by bone morphogenetic protein type 1 receptor kinase. References External links Drosophila Mothers against dpp - The Interactive Fly Transcription factors Developmental genes and proteins MH1 domain MH2 domain R-SMAD Human proteins
Mothers against decapentaplegic homolog 1
[ "Chemistry", "Biology" ]
432
[ "Transcription factors", "Gene expression", "Signal transduction", "Developmental genes and proteins", "Induced stem cells" ]
5,673,664
https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic%20homolog%202
Mothers against decapentaplegic homolog 2, also known as SMAD family member 2 or SMAD2, is a protein that in humans is encoded by the SMAD2 gene. MAD homolog 2 belongs to the SMAD family of proteins, which are similar to the gene products of the Drosophila gene 'mothers against decapentaplegic' (Mad) and the C. elegans gene Sma. SMAD proteins are signal transducers and transcriptional modulators that mediate multiple signaling pathways. Function SMAD2 mediates the signal of transforming growth factor (TGF)-beta, and thus regulates multiple cellular processes, such as cell proliferation, apoptosis, and differentiation. This protein is recruited to the TGF-beta receptors through its interaction with the SMAD anchor for receptor activation (SARA) protein. In response to the TGF-beta signal, this protein is phosphorylated by the TGF-beta receptors. The phosphorylation induces the dissociation of this protein from SARA and its association with the family member SMAD4. The association with SMAD4 is important for the translocation of this protein into the cell nucleus, where it binds to target promoters and forms a transcription repressor complex with other cofactors. This protein can also be phosphorylated by the activin type 1 receptor kinase, and mediates the signal from activin. Alternatively spliced transcript variants encoding the same protein have been observed. Like other SMADs, SMAD2 plays a role in the transmission of extracellular signals from ligands of the transforming growth factor beta (TGFβ) superfamily of growth factors into the cell nucleus. Binding of a subgroup of TGFβ superfamily ligands to extracellular receptors triggers phosphorylation of SMAD2 at a Serine-Serine-Methionine-Serine (SSMS) motif at its extreme C-terminus. Phosphorylated SMAD2 is then able to form a complex with SMAD4. These complexes accumulate in the cell nucleus, where they directly participate in the regulation of gene expression. 
Nomenclature The SMAD proteins are homologs of both the Drosophila protein mothers against decapentaplegic (MAD) and the C. elegans protein SMA. The name is a combination of the two. During Drosophila research, it was found that a mutation in the gene MAD in the mother repressed the gene decapentaplegic in the embryo. The phrase "Mothers against" was added, since mothers often form organizations opposing various issues, e.g., Mothers Against Drunk Driving (MADD). The nomenclature for this protein is based on a tradition of such unusual naming within the gene research community. Interactions Mothers against decapentaplegic homolog 2 has been shown to interact with: ANAPC10, DAB2, EP300, FOXH1, HDAC1, TGIF1, Insulin receptor, LEF1, Myc, MEF2A, PIAS3, PIN1, SKI protein, SKIL, SMAD3, SMURF2, SNW1, STRAP, WWTR1 References Further reading Developmental genes and proteins MH1 domain MH2 domain R-SMAD Transcription factors Human proteins
Mothers against decapentaplegic homolog 2
[ "Chemistry", "Biology" ]
695
[ "Transcription factors", "Gene expression", "Signal transduction", "Developmental genes and proteins", "Induced stem cells" ]
1,610,476
https://en.wikipedia.org/wiki/Carcinoembryonic%20antigen
Carcinoembryonic antigen (CEA) describes a set of highly related glycoproteins involved in cell adhesion. CEA is normally produced in gastrointestinal tissue during fetal development, but the production stops before birth. Consequently, CEA is usually present at very low levels in the blood of healthy adults (about 2–4 ng/mL). However, the serum levels are raised in some types of cancer, which means that it can be used as a tumor marker in clinical tests. Serum levels can also be elevated in heavy smokers. CEA are glycosyl phosphatidyl inositol (GPI) cell-surface-anchored glycoproteins whose specialized sialofucosylated glycoforms serve as functional colon carcinoma L-selectin and E-selectin ligands, which may be critical to the metastatic dissemination of colon carcinoma cells. Immunologically they are characterized as members of the CD66 cluster of differentiation. The proteins include CD66a, CD66b, CD66c, CD66d, CD66e, CD66f. History CEA was first identified in 1965 in human colon cancer tissue extracts by Phil Gold, a Canadian physician, scientist and professor, and Samuel O. Freedman, also a Canadian professor of immunology. Diagnostic significance The CEA blood test is not reliable for diagnosing cancer or as a screening test for early detection of cancer. Most types of cancer do not result in a high CEA level. Serum from individuals with colorectal carcinoma often has higher levels of CEA than healthy individuals (above approximately 2.5 ng/mL). CEA measurement is mainly used as a tumor marker to monitor colorectal carcinoma treatment, to identify recurrences after surgical resection, for staging or to localize cancer spread through measurement of biological fluids. CEA levels may also be raised in gastric carcinoma, pancreatic carcinoma, lung carcinoma, breast carcinoma, and medullary thyroid carcinoma, as well as some non-neoplastic conditions like ulcerative colitis, pancreatitis, cirrhosis, COPD, Crohn's disease, and hypothyroidism, as well as in smokers. 
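As an illustration of the recurrence-monitoring use just described, serial post-resection CEA values can be screened programmatically. The sketch below uses the approximately 2.5 ng/mL figure mentioned above as a cutoff; the "two consecutive rises" rule is purely an illustrative assumption, not a clinical guideline:

```python
def flag_cea_recurrence(series_ng_ml, cutoff=2.5, rises=2):
    """Flag a post-resection CEA series for possible recurrence.

    Illustrative logic only: flag when the latest value exceeds the cutoff
    AND the series has risen over the last `rises` consecutive intervals.
    The 2.5 ng/mL cutoff comes from the text; the consecutive-rise rule
    is an assumption for the sketch, not a clinical guideline.
    """
    if len(series_ng_ml) < rises + 1:
        return False  # not enough measurements to judge a trend
    recent = series_ng_ml[-(rises + 1):]
    rising = all(b > a for a, b in zip(recent, recent[1:]))
    return rising and series_ng_ml[-1] > cutoff

flag_cea_recurrence([1.8, 1.6, 2.1, 3.4])  # rising and above cutoff: flagged
flag_cea_recurrence([1.8, 1.9, 1.7, 2.0])  # fluctuating within range: not flagged
```

Trend-based rules of this kind are only a screening aid; any real follow-up decision rests on clinical assessment and imaging.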
Elevated CEA levels should return to normal after successful surgical removal of the tumor and can be used in follow up, especially of colorectal cancers. CEA elevation is known to be affected by multiple factors. It varies inversely with tumor grade; well-differentiated tumors secrete more CEA. CEA is elevated more in tumors with lymph node and distant metastasis than in organ-confined tumors and, thus, varies directly with tumor stage. Left-sided tumors generally tend to have higher CEA levels than right-sided tumors. Tumors causing bowel obstruction produce higher CEA levels. Aneuploid tumors produce more CEA than diploid tumors. Liver dysfunction increases CEA levels as the liver is the primary site of CEA metabolism. Antibodies An anti-CEA antibody is an antibody against CEA. Such antibodies to CEA are commonly used in immunohistochemistry to identify cells expressing the glycoprotein in tissue samples. In adults, CEA is primarily expressed in cells of tumors (some malignant, some benign) but they are particularly associated with the adenocarcinomas, such as those arising in the colon, lung, breast, stomach, or pancreas. It can therefore be used to distinguish between these and other similar cancers. For example, it can help to distinguish between adenocarcinoma of the lung and mesothelioma, a different type of lung cancer which is not normally CEA positive. Because even monoclonal antibodies to CEA tend to have some degree of cross-reactivity, occasionally giving false positive results, it is commonly employed in combination with other immunohistochemistry tests, such as those for BerEp4, WT1, and calretinin. For cancers that highly express CEA, targeting CEA through radioimmunotherapy is one of the therapy approaches. 
Engineered antibodies such as single-chain Fv antibodies or bispecific antibodies have been used for targeting and therapy of CEA-expressing tumors both in vitro and in vivo with promising results. Regions of high CEA levels in the body can be detected with the monoclonal antibody arcitumomab. Genetics CEA and related genes make up the CEA family belonging to the immunoglobulin superfamily. In humans, the carcinoembryonic antigen family consists of 29 genes, 18 of which are normally expressed. The following is a list of human genes which encode carcinoembryonic antigen-related cell adhesion proteins: CEACAM1, CEACAM3, CEACAM4, CEACAM5, CEACAM6, CEACAM7, CEACAM8, CEACAM16, CEACAM18, CEACAM19, CEACAM20, CEACAM21 Clinical trials CEA is expressed in many different types of cancer, such as lung, gastric, pancreatic, and colorectal cancer, and many clinical trials have targeted it. CEA serves as a tumor biomarker that can be exploited for targeted radionuclide therapy. cT84.66 is a chimeric antibody of murine origin that has been tested in phase I clinical trials with 111-In and 90-Y, radionuclides used clinically for imaging and therapy, respectively. The results were promising, but a number of patients developed immune responses and had to withdraw from the trial. The cT84.66 antibody was humanized and, in 2020, a phase I clinical trial was performed in which 18 cancer patients received an injection of 90Y-DOTA-M5A. This trial demonstrated stable disease in 10 of 18 patients (56%) with no immunogenic response. M5A-DOTA was also coupled with 225-Ac, an alpha emitter, in an in vivo study combining cytokine therapy with alpha therapy; the results revealed the benefit of combining the two treatments. Based on these results, a phase I clinical study is currently underway (NCT05204147). 
The goal of this study is to establish the safety level and the possible benefit of administering M5A-DOTA-225-Ac. See also List of histologic stains that aid in diagnosis of cutaneous conditions References External links CEA at Lab Tests Online CEA: analyte monograph from The Association for Clinical Biochemistry and Laboratory Medicine National Cancer Institute Definition of anti-CEA antibody Tumor markers Immunoglobulin superfamily
Carcinoembryonic antigen
[ "Chemistry", "Biology" ]
1,449
[ "Chemical pathology", "Tumor markers", "Biomarkers" ]
1,610,581
https://en.wikipedia.org/wiki/Voltage-gated%20calcium%20channel
Voltage-gated calcium channels (VGCCs), also known as voltage-dependent calcium channels (VDCCs), are a group of voltage-gated ion channels found in the membrane of excitable cells (e.g. muscle, glial cells, neurons) with a permeability to the calcium ion Ca2+. These channels are slightly permeable to sodium ions, so they are also called Ca2+–Na+ channels, but their permeability to calcium is about 1000-fold greater than to sodium under normal physiological conditions. At physiologic or resting membrane potential, VGCCs are normally closed. They are activated (i.e., opened) at depolarized membrane potentials, and this is the source of the "voltage-gated" epithet. The concentration of calcium (Ca2+ ions) is normally several thousand times higher outside the cell than inside. Activation of particular VGCCs allows a Ca2+ influx into the cell, which, depending on the cell type, results in activation of calcium-sensitive potassium channels, muscular contraction, excitation of neurons, up-regulation of gene expression, or release of hormones or neurotransmitters. VGCCs have been immunolocalized in the zona glomerulosa of normal and hyperplastic human adrenal, as well as in aldosterone-producing adenomas (APA), and in the latter T-type VGCCs correlated with plasma aldosterone levels of patients. Excessive activation of VGCCs is a major component of excitotoxicity, as severely elevated levels of intracellular calcium activate enzymes which, at high enough levels, can degrade essential cellular structures. Structure Voltage-gated calcium channels are formed as a complex of several different subunits: α1, α2δ, β1-4, and γ. The α1 subunit forms the ion-conducting pore while the associated subunits have several functions including modulation of gating. Channel subunits There are several different kinds of high-voltage-gated calcium channels (HVGCCs). They are structurally homologous among varying types; they are all similar, but not structurally identical. 
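The transmembrane gradient noted above (extracellular Ca2+ several thousand times higher than intracellular) implies a strongly positive equilibrium potential for Ca2+, which the Nernst equation quantifies. A minimal sketch at body temperature, using a commonly cited ratio of roughly 2 mM outside versus roughly 100 nM free Ca2+ inside as an illustrative assumption:

```python
import math

def nernst_mV(z, conc_out, conc_in, temp_K=310.0):
    """Nernst equilibrium potential in millivolts:
    E = (R*T / (z*F)) * ln([out]/[in])
    """
    R = 8.314    # gas constant, J/(mol*K)
    F = 96485.0  # Faraday constant, C/mol
    return 1000.0 * (R * temp_K) / (z * F) * math.log(conc_out / conc_in)

# Ca2+ (z = +2): ~2 mM outside vs ~100 nM free inside (illustrative values)
e_ca = nernst_mV(z=2, conc_out=2e-3, conc_in=100e-9)
print(round(e_ca))  # strongly positive equilibrium potential, in mV
```

The result, on the order of +130 mV, means an open channel at resting potential experiences a very large inward driving force, which is why even brief openings of these channels produce physiologically potent Ca2+ signals.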
In the laboratory, it is possible to tell them apart by studying their physiological roles and/or inhibition by specific toxins. High-voltage-gated calcium channels include the neural N-type channel blocked by ω-conotoxin GVIA, the R-type channel (R stands for Resistant to the other blockers and toxins, except SNX-482) involved in poorly defined processes in the brain, the closely related P/Q-type channel blocked by ω-agatoxins, and the dihydropyridine-sensitive L-type channels responsible for excitation-contraction coupling of skeletal, smooth, and cardiac muscle and for hormone secretion in endocrine cells. Reference for the table can be found at Dunlap, Luebke and Turner (1995). α1 Subunit The α1 subunit pore (~190 kDa in molecular mass) is the primary subunit necessary for channel functioning in the HVGCC, and consists of the characteristic four homologous I–IV domains containing six transmembrane α-helices each. The α1 subunit forms the Ca2+ selective pore, which contains voltage-sensing machinery and the drug/toxin-binding sites. A total of ten α1 subunits have been identified in humans. The α1 subunit contains 4 homologous domains (labeled I–IV), each containing 6 transmembrane helices (S1–S6). This arrangement is analogous to a homo-tetramer formed by single-domain subunits of voltage-gated potassium channels (that also each contain 6 TM helices). The 4-domain architecture (and several key regulatory sites, such as the EF hand and IQ domain at the C-terminus) is also shared by the voltage-gated sodium channels, which are thought to be evolutionarily related to VGCCs. The transmembrane helices from the 4 domains line up to form the channel proper; S5 and S6 helices are thought to line the inner pore surface, while S1–S4 helices have roles in gating and voltage sensing (S4 in particular). VGCCs are subject to rapid inactivation, which is thought to consist of 2 components: voltage-gated (VGI) and calcium-gated (CGI). 
These are distinguished by using either Ba2+ or Ca2+ as the charge carrier in the external recording solution (in vitro). The CGI component is attributed to the binding of the Ca2+-binding signaling protein calmodulin (CaM) to at least 1 site on the channel, as Ca2+-null CaM mutants abolish CGI in L-type channels. Not all channels exhibit the same regulatory properties and the specific details of these mechanisms are still largely unknown. α2δ Subunit The α2δ gene forms two subunits, α2 and δ, which are both the product of the same gene. They are linked to each other via a disulfide bond and have a combined molecular weight of 170 kDa. The α2 is the extracellular glycosylated subunit that interacts the most with the α1 subunit. The δ subunit has a single transmembrane region with a short intracellular portion, which serves to anchor the protein in the plasma membrane. There are 4 α2δ genes: CACNA2D1, CACNA2D2, CACNA2D3, and CACNA2D4. Co-expression of the α2δ enhances the level of expression of the α1 subunit and causes an increase in current amplitude, faster activation and inactivation kinetics and a hyperpolarizing shift in the voltage dependence of inactivation. Some of these effects are observed in the absence of the beta subunit, whereas, in other cases, the co-expression of beta is required. The α2δ-1 and α2δ-2 subunits are the binding site for gabapentinoids. This drug class includes two anticonvulsant drugs, gabapentin (Neurontin) and pregabalin (Lyrica), that also find use in treating chronic neuropathic pain. The α2δ subunit is also a binding site of the central depressant and anxiolytic phenibut, in addition to actions at other targets. β Subunit The intracellular β subunit (55 kDa) is an intracellular MAGUK-like protein (Membrane-Associated Guanylate Kinase) containing a guanylate kinase (GK) domain and an SH3 (src homology 3) domain. The guanylate kinase domain of the β subunit binds to the α1 subunit I-II cytoplasmic loop and regulates HVGCC activity. 
There are four known genes for the β subunit: CACNB1, CACNB2, CACNB3, and CACNB4. It is hypothesized that the cytosolic β subunit has a major role in stabilizing the final α1 subunit conformation and delivering it to the cell membrane by its ability to mask an endoplasmic reticulum retention signal in the α1 subunit. The endoplasmic retention brake is contained in the I–II loop in the α1 subunit that becomes masked when the β subunit binds. Therefore, the β subunit functions initially to regulate the current density by controlling the amount of α1 subunit expressed at the cell membrane. In addition to this trafficking role, the β subunit has the added important functions of regulating the activation and inactivation kinetics, and hyperpolarizing the voltage-dependence for activation of the α1 subunit pore, so that more current passes for smaller depolarizations. The β subunit also affects the kinetics of the cardiac α1C subunit when co-expressed with it in Xenopus laevis oocytes, acting as an important modulator of channel electrophysiological properties. Until very recently, the interaction between a highly conserved 18-amino acid region on the α1 subunit intracellular linker between domains I and II (the Alpha Interaction Domain, AID) and a region on the GK domain of the β subunit (Alpha Interaction Domain Binding Pocket) was thought to be solely responsible for the regulatory effects of the β subunit. Recently, it has been discovered that the SH3 domain of the β subunit also gives added regulatory effects on channel function, opening the possibility of the β subunit having multiple regulatory interactions with the α1 subunit pore. Furthermore, the AID sequence does not appear to contain an endoplasmic reticulum retention signal, and this may be located in other regions of the I–II α1 subunit linker. 
γ Subunit The γ1 subunit is known to be associated with skeletal muscle VGCC complexes, but the evidence is inconclusive regarding other subtypes of calcium channel. The γ1 subunit glycoprotein (33 kDa) is composed of four transmembrane spanning helices. The γ1 subunit does not affect trafficking, and, for the most part, is not required to regulate the channel complex. However, γ2, γ3, γ4 and γ8 are also associated with AMPA glutamate receptors. There are eight genes for gamma subunits, γ1 through γ8. Muscle physiology When a smooth muscle cell is depolarized, it causes opening of the voltage-gated (L-type) calcium channels. Depolarization may be brought about by stretching of the cell, agonist binding to its G protein-coupled receptor (GPCR), or autonomic nervous system stimulation. Opening of the L-type calcium channel causes influx of extracellular Ca2+, which then binds calmodulin. The activated calmodulin molecule activates myosin light-chain kinase (MLCK), which phosphorylates the myosin in thick filaments. Phosphorylated myosin is able to form crossbridges with actin thin filaments, and the smooth muscle fiber (i.e., cell) contracts via the sliding filament mechanism. (See reference for an illustration of the signaling cascade involving L-type calcium channels in smooth muscle). L-type calcium channels are also enriched in the t-tubules of striated muscle cells, i.e., skeletal and cardiac myofibers. When these cells are depolarized, the L-type calcium channels open as in smooth muscle. In skeletal muscle, the channel is mechanically coupled to a calcium-release channel (a.k.a. ryanodine receptor, or RYR) in the sarcoplasmic reticulum (SR), so its opening directly causes opening of the RYR. In cardiac muscle, opening of the L-type calcium channel permits influx of calcium into the cell. 
The calcium binds to the calcium release channels (RYRs) in the SR, opening them; this phenomenon is called "calcium-induced calcium release", or CICR. However the RYRs are opened, whether through mechanical gating or CICR, Ca2+ is released from the SR and is able to bind to troponin C on the actin filaments. The muscles then contract through the sliding filament mechanism, causing shortening of sarcomeres and muscle contraction. Changes in expression during development Early in development, there is a high amount of expression of T-type calcium channels. During maturation of the nervous system, the expression of N- or L-type currents becomes more prominent. As a result, mature neurons express more calcium channels that will only be activated when the cell is significantly depolarized. The different expression levels of low-voltage activated (LVA) and high-voltage activated (HVA) channels can also play an important role in neuronal differentiation. In developing Xenopus spinal neurons, LVA calcium channels carry a spontaneous calcium transient that may be necessary for the neuron to adopt a GABAergic phenotype as well as process outgrowth. Clinical significance Antibodies against voltage-gated calcium channels are associated with Lambert-Eaton myasthenic syndrome and have also been implicated in paraneoplastic cerebellar degeneration. Voltage-gated calcium channels are also associated with malignant hyperthermia and Timothy syndrome. Mutations of the CACNA1C gene, with a single-nucleotide polymorphism in the third intron of the Cav1.2 gene, are associated with a variant of long QT syndrome called Timothy syndrome and also with Brugada syndrome. Large-scale genetic analyses have shown the possibility that CACNA1C is associated with bipolar disorder and subsequently also with schizophrenia. 
Also, a CACNA1C risk allele has been associated with a disruption in brain connectivity in patients with bipolar disorder, but not, or only to a minor degree, in their unaffected relatives or healthy controls. See also Glutamate receptors Inositol triphosphate receptor Ion channels NMDA receptors References External links Electrophysiology Membrane biology Integral membrane proteins Voltage-gated ion channels Calcium channels
Voltage-gated calcium channel
[ "Chemistry" ]
2,814
[ "Membrane biology", "Molecular biology" ]
1,610,676
https://en.wikipedia.org/wiki/Xfire
Xfire was a proprietary freeware instant messaging service for gamers that also served as a game server browser with various other features. It was available for Microsoft Windows. Xfire was originally developed by Ultimate Arena, based in Menlo Park, California. Xfire's partnership with Livestream allowed users to broadcast live video streams of their current game to an audience. The Xfire website also maintained a "Top Ten" games list, ranking games by the number of hours Xfire users spent playing each game every day. World of Warcraft had been the most played game for many years, but was surpassed by League of Legends on June 20, 2011. Social.xfire.com was a community site for Xfire users, allowing them to upload screenshots, photos and videos and to make contacts. Xfire hosted events every month, which included debates, game tournaments, machinima contests, and chat sessions with Xfire or game developers. As of January 3, 2014, it had over 24 million registered users. Xfire's web-based social media was discontinued on June 12, 2015, and the messaging function was shut down on June 27, 2015. The last of Xfire's services were shut down on April 30, 2016. History Xfire, Inc. was founded in 2002 by Dennis "Thresh" Fong, Mike Cassidy, Max Woon, and David Lawee. The company was formerly known as Ultimate Arena, but changed its name to Xfire when its desktop client became more popular and successful than its gaming website. The first version of the Xfire desktop client was code-named Scoville, which was first developed in 2003 by Garrett Blythe, Chris Kirmse, Mike Judge, and others. The service's ability to track gameplay hours and quickly launch games, which set it apart from other services at the time, quickly gained it popularity. On April 25, 2006, Xfire was acquired by Viacom in a US$102 million deal. In September 2006, Sony was misinterpreted to have announced that Xfire would be used for the PlayStation 3. 
The confusion came when one PlayStation 3 game, Untold Legends: Dark Kingdom, was to use some of Xfire's features, with more game support planned for the future. On May 7, 2007, Xfire announced that it had reached 7 million registered users. Shortly after, on June 13, 2007, co-founder and former CEO Mike Cassidy departed the company to work for venture capital firm Benchmark Capital. Adam Boyden, Vice President of Business Development & Marketing, was assigned to take his place and manage the company for a temporary period. On August 2, 2010, Xfire was acquired by Titan Gaming, a skill-based matchmaking service for game developers. Titan Gaming had raised only US$1 million prior to the acquisition, so Viacom likely sold Xfire for significantly less than they had paid for it. On the day of the acquisition, the Xfire team broadcast a message to all users stating that most of the original employees would be leaving. The message was later put on Xfire's website. In October 2011, a little over a year after it was acquired, Xfire was spun off from Titan Gaming and raised US$4 million in funding. Xfire's president estimated that US$44 million had been invested into the company up to that point. After regaining independence, Xfire pivoted to focus on the Asian market. On April 10, 2012, it hired Malcom CasSelle, a former Tencent executive, as CEO. On the same day, it announced a joint venture with a Chinese Communist Youth League-affiliated company to localize and distribute its service in mainland China. A month later, it raised US$3 million in a funding round led by IDM Venture Capital, a Singapore-based firm. The financing was aimed at expanding Xfire's market share in Asia, and the company said it would likely be part of a larger round of funding. However, this was the last round of funding the company received before its demise. On June 10, 2015, Xfire announced that its social services would be shut down on Friday, June 12, giving only two days' notice. 
The home page for the social part of Xfire at that time linked to an export page where users could download all their previously uploaded screenshots and videos. The export function ceased to be available on or around June 27, 2015. On July 6, 2015, the site was shut down and the contents of the service were deleted. Video game and pop culture news In 2020, a video game, movie, and TV news website was launched on the Xfire domain (https://www.xfire.com). Lawsuits Yahoo! filed a lawsuit against Xfire, Inc. on January 28, 2005, claiming that Xfire had infringed Yahoo!'s U.S. patent No. 6,699,125 for a "Game server for use in connection with a messenger server". Xfire, Inc. filed a countersuit against Yahoo! on March 10, 2005, which was eventually disqualified by the judge. The companies reached a settlement on January 31, 2006. More details were posted to Xfire's forums. Features Xfire had many features, the majority of which could only be used while in-game. Xfire featured the ability to detect the video game a particular user was running. By analyzing running processes, Xfire could detect active games and then send that information to the Xfire servers. Other users' clients would then be updated with this information. For many games, it could also detect which server users were playing on, which level was running, and ping times. Using these features, users were also able to see what games their friends were playing, and to join any friends who were currently in-game by having Xfire launch the game and join the friend's server automatically. Xfire logged what games users were playing, how many hours they had played them, and saved other information (such as scores) from game servers. This information could be converted into a PNG image by the server via PHP for every user to use as a signature. 
Xfire allowed players to take screenshots in-game and save them to a specified folder, though this only worked with games that had Xfire in-game support. Users could select and caption any screenshots they wished to upload and share on their Xfire profile page. Xfire also had the ability to record video in-game, though this often had a significant impact on game performance and recording quality on low-performance systems, causing the frame rate to drop dramatically. However, this is typically true of all video recording during gaming, and was not unique to Xfire. The client's main function was as an instant messenger. Similar to other such online services, any user who had been added as a 'friend' could be immediately contacted through text chat. To communicate with other users in-game, Xfire users could send and receive instant messages from inside any game that was running in full-screen mode, regardless of the game the sender or recipient was in. This eliminated the need to minimize the game window. Users were also able to directly send files to one another via the chat window. In August 2005, Xfire updated to version 1.43, which added a beta voice chat feature using Voice over Internet Protocol (VoIP) technology to the application, called "Xfire Pro-Voice". Until early 2009, if voice chat was being used in a chat room, users had to host the voice chat, causing quality problems and lag because some users had better system capabilities than others. Xfire later hosted the voice chat sessions itself to resolve these quality problems. On May 4, 2009, a built-in alpha AOL Instant Messenger and Windows Live Messenger plugin was released in version 1.108. As of May 4, 2009, it only supported chatting, and none of AIM's other features. From December 1, 2009, users could access their Twitter accounts through Xfire, allowing players to view updates posted by other users, as well as post their own. Google Talk was also subsequently added. 
In December 2011, Xfire added support for Facebook chatting, enabling users to chat with their Facebook friends from within the game. Xfire installed itself as the system-wide handler for the xfire: URI scheme, which enabled users to add friends, join game servers and perform other functions in the client by clicking links on websites. The scheme was provisionally registered with IANA in 2012. On December 16, 2011, Xfire added a feature to allow its users to capture in-game video and upload it to YouTube. This feature was similar to other popular in-game video recording software products, but allowed users to record videos up to 10 minutes in length for free. Xfire added a video streaming feature in version 1.97. To view a broadcast, a web browser plugin was required, supporting only Internet Explorer and Mozilla Firefox. In version 1.113, released on August 17, 2009, the broadcast system was changed to allow a plugin-less, Flash-based view compatible with any Flash-enabled browser. This feature let anyone watch a live feed of a user's screen while they were playing a game. Live streams had accompanying chat rooms that let anyone who was watching a live feed communicate. In-game internet browsing capabilities were added to Xfire in version 1.103. Its homepage was set as a statistics page of the game currently being played by the user, including listing other players and any clans and guilds based around the game being played. Support As of December 1, 2012, Xfire provided support for more than 3,000 games, of many different genres. Support for Windows 98 and Windows Me was discontinued as of January 2007. Third-party modifications and software forking There were many third party modifications for Xfire's client and services, including skins, infoview templates, plugins, and protocol implementations. Some of these may or may not violate Xfire's terms of service. 
Skins could be used to provide a new look to the Xfire client and chat windows, while Infoview skins could be used to provide extra functionality in the Xfire Infoview pane. Skins were made using XML and image files, while Infoviews were made using HTML, JavaScript, and images. Plugins There were a variety of third-party plugins developed for use with Xfire. OpenFire: An open-source (LGPL-licensed) Java library and suite of tools to access the Xfire instant messaging network. Xfirelib: An open-source library written in C++ which implements the Xfire protocol. Based on it is an Extensible Messaging and Presence Protocol (XMPP) gateway to Xfire which also implements Gamers Own Instant Messenger (GOIM) extensions to the XMPP protocol. The following plugins let users chat on Xfire with other instant messaging clients: Gfire: A Pidgin plugin for Linux and Windows that lets users chat and see what games friends are playing. It has most of the major Xfire features: group chat, clan chat, file transfer, avatars, and server and game detection. Kopete plugin: A plugin that lets users chat and see the status of friends. Miranda NG plugin: A plugin that allows users to chat with others on Xfire, detect games, and more. Xblaze: An open-source plugin for Adium that allows communication over the Xfire protocol, using the MacFire implementation. It is the first Xfire client for Mac OS X. Clients Several Xfire clients were available for different platforms: MacFire: An open-source implementation of the Xfire network protocol for Mac OS X. It was made possible, in part, by prior work done for Xblaze, XfireLib, and OpenFire. BlackFire: A client for Mac OS X Snow Leopard. Reception The editors of Computer Games Magazine presented Xfire with their 2006 "Best Utility" award. 
See also Comparison of screencasting software References External links Chat websites Discontinued software Game server browsers Instant messaging clients Online video game services Screencasting software Screenshot software Windows-only freeware Defunct websites Defunct social networking services Former Viacom subsidiaries Defunct instant messaging clients
Xfire
[ "Technology" ]
2,537
[ "Instant messaging", "Instant messaging clients" ]
1,610,806
https://en.wikipedia.org/wiki/Hypermetabolism
Hypermetabolism is defined as an elevated resting energy expenditure (REE) > 110% of predicted REE. Hypermetabolism is accompanied by a variety of internal and external symptoms, most notably extreme weight loss, and can also be a symptom in itself. This state of increased metabolic activity can signal underlying issues, especially hyperthyroidism. Patients with fatal familial insomnia can also present with hypermetabolism; however, this universally fatal disorder is exceedingly rare, with only a few known cases worldwide. The drastic impact of the hypermetabolic state on patient nutritional requirements is often understated or overlooked as well. Signs and symptoms Symptoms may last for days, weeks, or months until the disorder is healed. The most apparent sign of hypermetabolism is an abnormally high intake of calories followed by continuous weight loss. Internal symptoms of hypermetabolism include: peripheral insulin resistance, elevated catabolism of protein, carbohydrates and triglycerides, and a negative nitrogen balance in the body. Outward symptoms of hypermetabolism may include weight loss, anemia, fatigue, elevated heart rate, irregular heartbeat, insomnia, dysautonomia, shortness of breath, muscle weakness, and excessive sweating. Pathophysiology During the acute phase, the liver redirects protein synthesis, causing up-regulation of certain proteins and down-regulation of others. Measuring the serum level of proteins that are up- and down-regulated during the acute phase can reveal extremely important information about the patient's nutritional state. The most important up-regulated protein is C-reactive protein, which can rapidly increase 20- to 1,000-fold during the acute phase. Hypermetabolism also causes expedited catabolism of carbohydrates, proteins, and triglycerides in order to meet the increased metabolic demands. 
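The defining criterion above (measured REE greater than 110% of predicted REE) can be made concrete against a predictive equation. A minimal sketch using the commonly cited original Harris-Benedict coefficients; these equations are population estimates, so treat the coefficients and example values as illustrative:

```python
def predicted_ree_kcal(sex, weight_kg, height_cm, age_yr):
    """Predicted resting energy expenditure (kcal/day) via the
    Harris-Benedict equations (commonly cited original coefficients)."""
    if sex == "male":
        return 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

def is_hypermetabolic(measured_ree, predicted_ree, threshold=1.10):
    """Apply the definition above: measured REE > 110% of predicted REE."""
    return measured_ree > threshold * predicted_ree

pred = predicted_ree_kcal("male", weight_kg=70, height_cm=175, age_yr=40)
print(round(pred))                    # predicted REE in kcal/day
print(is_hypermetabolic(2000, pred))  # a measured REE of 2000 kcal/day
```

For a 70 kg, 175 cm, 40-year-old male this predicts roughly 1,600–1,650 kcal/day, so a measured REE of 2,000 kcal/day would exceed the 110% threshold.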
Diagnosis Quantitation by indirect calorimetry, as opposed to the Harris-Benedict equation, is needed to accurately measure REE in cancer patients. Differential diagnosis Many different illnesses can cause an increase in metabolic activity as the body combats illness and disease in order to heal itself. Hypermetabolism is a common symptom of various pathologies. Some of the most prevalent diseases characterized by hypermetabolism are listed below. Hyperthyroidism: an overactive thyroid often causes a state of increased metabolic activity. Friedreich's ataxia: local cerebral metabolic activity increases extensively as the disease progresses. Fatal familial insomnia: hypermetabolism occurs in the thalamus and disrupts the sleep spindle formation that takes place there. Graves' disease: excess hypermetabolically induced thyroid hormone activates sympathetic pathways, causing the eyelids to retract and remain constantly elevated. Anorexia and bulimia: the prolonged stress these eating disorders put on the body forces it into starvation mode; some patients recovering from these disorders experience hypermetabolism until they resume normal diets. Astrocytoma: causes hypermetabolic lesions in the brain. Treatment Ibuprofen, polyunsaturated fatty acids, and beta-blockers have been reported in some preliminary studies to decrease REE, which may allow patients to meet their caloric needs and gain weight. References Metabolism
Hypermetabolism
[ "Chemistry", "Biology" ]
704
[ "Biochemistry", "Metabolism", "Cellular processes" ]
1,610,862
https://en.wikipedia.org/wiki/Uniform%20information%20representation
Uniform information representation allows information from several realms or disciplines to be displayed and worked with as if it came from the same realm or discipline. It takes information from a number of sources, which may have used different methodologies and metrics in their data collection, and builds a single large collection of information, where some records may be more complete than others across all fields of data. Uniform information representation is particularly important in the fields of Enterprise Information Integration (EII) and Electronic Data Interchange (EDI), where different departments of a large organization may have collected information for different purposes, with different labels and units, until one department realized that data already collected by those other departments could be re-purposed for their own needs—saving the enterprise the effort and cost of re-collecting the same information. Data management
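The merging described above, where sources with different labels and units are folded into one collection, can be sketched as a normalization step applied per source. The field names, source schemas, and pound-to-kilogram conversion below are invented for illustration:

```python
# Sketch: merge records from two departments that labeled and measured
# the same quantities differently into one uniform representation.
# The schemas and the lb -> kg conversion are illustrative assumptions.

FIELD_MAP = {
    "hr":    {"hr": "customer_name", "wt_lb": "weight_lb"},
    "sales": {"customer": "customer_name", "kg": "weight_kg"},
}

def to_uniform(source, record):
    """Rename source-specific fields and convert units to a single schema."""
    out = {}
    for src_field, canonical in FIELD_MAP[source].items():
        if src_field in record:
            out[canonical] = record[src_field]
    # unit harmonization: represent all weights in kilograms
    if "weight_lb" in out:
        out["weight_kg"] = round(out.pop("weight_lb") * 0.45359237, 2)
    return out

merged = [
    to_uniform("hr",    {"hr": "Ada", "wt_lb": 150}),
    to_uniform("sales", {"customer": "Ada", "kg": 68.0}),
]
print(merged)  # both records now share one schema and one unit
```

Records from either department can now be queried against the same field names, even though some merged records may carry fewer fields than others, exactly as described above.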
Uniform information representation
[ "Technology" ]
163
[ "Computer science stubs", "Data management", "Computer science", "Data", "Computing stubs" ]
1,610,890
https://en.wikipedia.org/wiki/Uniform%20data%20access
Uniform data access is a computational concept describing an evenness of connectivity and controllability across numerous target data sources. Necessary to fields such as Enterprise Information Integration (EII) and Electronic Data Interchange (EDI), it is most often used regarding the analysis of disparate data types and data sources, which must be rendered into a uniform information representation and must generally appear homogeneous to the analysis tools, even though the data being analyzed is typically heterogeneous and widely varying in size, type, and original representation. Data management
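In practice, this evenness is often achieved with a thin adapter layer: each heterogeneous source is wrapped in an object exposing the same interface, so analysis code sees only uniform rows. A minimal sketch with two invented source formats (CSV text and JSON text):

```python
import csv
import io
import json

# Sketch: give a CSV source and a JSON source one uniform rows()
# interface so downstream analysis code never sees the difference.
# The sample payloads and field names are invented for illustration.

class CsvSource:
    def __init__(self, text):
        self._text = text

    def rows(self):
        return list(csv.DictReader(io.StringIO(self._text)))

class JsonSource:
    def __init__(self, text):
        self._text = text

    def rows(self):
        return json.loads(self._text)

def total(sources, field):
    """Analysis code written once against the uniform rows() interface."""
    return sum(float(row[field]) for src in sources for row in src.rows())

sources = [
    CsvSource("amount\n10\n20\n"),
    JsonSource('[{"amount": "5"}, {"amount": "15"}]'),
]
print(total(sources, "amount"))  # 50.0
```

Adding a new data source then only requires a new wrapper class with a rows() method; the analysis code itself never changes.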
Uniform data access
[ "Technology" ]
111
[ "Computer science stubs", "Data management", "Computer science", "Data", "Computing stubs" ]
1,610,950
https://en.wikipedia.org/wiki/Ajax%20%28programming%29
Ajax (also AJAX ; short for "asynchronous JavaScript and XML") is a set of web development techniques that uses various web technologies on the client-side to create asynchronous web applications. With Ajax, web applications can send and retrieve data from a server asynchronously (in the background) without interfering with the display and behaviour of the existing page. By decoupling the data interchange layer from the presentation layer, Ajax allows web pages and, by extension, web applications, to change content dynamically without the need to reload the entire page. In practice, modern implementations commonly utilize JSON instead of XML. Ajax is not a technology, but rather a programming pattern. HTML and CSS can be used in combination to mark up and style information. The webpage can be modified by JavaScript to dynamically display (and allow the user to interact with) the new information. The built-in XMLHttpRequest object is used to execute Ajax on webpages, allowing websites to load content onto the screen without refreshing the page. Ajax is not a new technology, nor is it a new language. Instead, it is existing technologies used in a new way. History In the early-to-mid 1990s, most Websites were based on complete HTML pages. Each user action required a complete new page to be loaded from the server. This process was inefficient, as reflected by the user experience: all page content disappeared, then the new page appeared. Each time the browser reloaded a page because of a partial change, all the content had to be re-sent, even though only some of the information had changed. This placed additional load on the server and made bandwidth a limiting factor in performance. In 1996, the iframe tag was introduced by Internet Explorer; it can load a part of the web page asynchronously. In 1998, the Microsoft Outlook Web Access team developed the concept behind the XMLHttpRequest scripting object. 
It appeared as XMLHTTP in the second version of the MSXML library, which shipped with Internet Explorer 5.0 in March 1999. The functionality of the Windows XMLHTTP ActiveX control in IE 5 was later implemented by Mozilla Firefox, Safari, Opera, Google Chrome, and other browsers as the XMLHttpRequest JavaScript object. Microsoft adopted the native XMLHttpRequest model as of Internet Explorer 7. The ActiveX version is still supported in Internet Explorer and on "Internet Explorer mode" in Microsoft Edge. The utility of these background HTTP requests and asynchronous Web technologies remained fairly obscure until it started appearing in large scale online applications such as Outlook Web Access (2000) and Oddpost (2002). Google made a wide deployment of standards-compliant, cross browser Ajax with Gmail (2004) and Google Maps (2005). In October 2004 Kayak.com's public beta release was among the first large-scale e-commerce uses of what their developers at that time called "the xml http thing". This increased interest in Ajax among web program developers. The term AJAX was publicly used on 18 February 2005 by Jesse James Garrett in an article titled Ajax: A New Approach to Web Applications, based on techniques used on Google pages. On 5 April 2006, the World Wide Web Consortium (W3C) released the first draft specification for the XMLHttpRequest object in an attempt to create an official Web standard. The latest draft of the XMLHttpRequest object was published on 6 October 2016, and the XMLHttpRequest specification is now a living standard. Technologies The term Ajax has come to represent a broad group of Web technologies that can be used to implement a Web application that communicates with a server in the background, without interfering with the current state of the page. 
In the article that coined the term Ajax, Jesse James Garrett explained that the following technologies are incorporated: HTML (or XHTML) and CSS for presentation; the Document Object Model (DOM) for dynamic display of and interaction with data; JSON or XML for the interchange of data, and XSLT for XML manipulation; the XMLHttpRequest object for asynchronous communication; and JavaScript to bring these technologies together. Since then, however, there have been a number of developments in the technologies used in an Ajax application, and in the definition of the term Ajax itself. XML is no longer required for data interchange and, therefore, XSLT is no longer required for the manipulation of data. JavaScript Object Notation (JSON) is often used as an alternative format for data interchange, although other formats such as preformatted HTML or plain text can also be used. A variety of popular JavaScript libraries, including jQuery, include abstractions to assist in executing Ajax requests. Examples JavaScript example An example of a simple Ajax request using the GET method, written in JavaScript.

get-ajax-data.js:

// This is the client-side script.
// Initialize the HTTP request.
let xhr = new XMLHttpRequest();

// Define the request.
xhr.open('GET', 'send-ajax-data.php');

// Track the state changes of the request.
xhr.onreadystatechange = function () {
    const DONE = 4;  // readyState 4 means the request is done.
    const OK = 200;  // status 200 is a successful return.
    if (xhr.readyState === DONE) {
        if (xhr.status === OK) {
            console.log(xhr.responseText);  // 'This is the output.'
        } else {
            console.log('Error: ' + xhr.status);  // An error occurred during the request.
        }
    }
};

// Send the request to send-ajax-data.php.
xhr.send(null);

send-ajax-data.php:

<?php
// This is the server-side script.
// Set the content type.
header('Content-Type: text/plain');

// Send the data back.
echo "This is the output.";
?>

Fetch example Fetch is a native JavaScript API. 
According to Google Developers Documentation, "Fetch makes it easier to make web requests and handle responses than with the older XMLHttpRequest."

fetch('send-ajax-data.php')
    .then(data => console.log(data))
    .catch(error => console.log('Error: ' + error));

ES7 async/await example:

async function doAjax1() {
    try {
        const res = await fetch('send-ajax-data.php');
        const data = await res.text();
        console.log(data);
    } catch (error) {
        console.log('Error: ' + error);
    }
}

doAjax1();

Fetch relies on JavaScript promises. The fetch specification differs from Ajax in the following significant ways: The Promise returned from fetch() won't reject on HTTP error status even if the response is an HTTP 404 or 500. Instead, as soon as the server responds with headers, the Promise will resolve normally (with the ok property of the response set to false if the response isn't in the range 200–299), and it will only reject on network failure or if anything prevented the request from completing. fetch() won't send cross-origin cookies unless you set the credentials init option. (Since April 2018 the spec has changed the default credentials policy to same-origin; Firefox changed this in version 61.0b13.) Benefits Ajax offers several benefits that can significantly enhance web application performance and user experience. By reducing server traffic and improving speed, Ajax plays a crucial role in modern web development. One key advantage of Ajax is its capacity to update a web application without retrieving the full page again, resulting in reduced server traffic. This optimization minimizes response times on both the server and client sides, eliminating the need for users to endure loading screens. Furthermore, Ajax facilitates asynchronous processing by simplifying the utilization of XmlHttpRequest, which enables efficient handling of requests for asynchronous data retrieval. Additionally, the dynamic loading of content enhances the application's performance significantly. 
Besides, Ajax enjoys broad support across all major web browsers, including Microsoft Internet Explorer versions 5 and above, Mozilla Firefox versions 1.0 and beyond, Opera versions 7.6 and above, and Apple Safari versions 1.2 and higher. See also ActionScript Comet (programming) (also known as Reverse Ajax) Google Instant HTTP/2 List of Ajax frameworks Node.js Remote scripting Rich web application WebSocket HTML5 Web framework JavaScript library References External links Ajax: A New Approach to Web applications - Article that coined the Ajax term and Q&A Ajax Tutorial with GET, POST, text and XML examples. Cloud standards Inter-process communication Web 2.0 neologisms Web development Articles with example JavaScript code Articles with example PHP code
Ajax (programming)
[ "Technology", "Engineering" ]
1,905
[ "Software engineering", "Computer standards", "Cloud standards", "Web development" ]
1,610,978
https://en.wikipedia.org/wiki/Eosin%20methylene%20blue
Eosin methylene blue (EMB, also known as "Levine's formulation") is a selective and differential medium used for the identification of Gram-negative bacteria, specifically the Enterobacteriaceae. EMB inhibits the growth of most Gram-positive bacteria. EMB is often used to confirm the presence of coliforms in a sample. It contains two dyes, eosin and methylene blue, in the ratio of 6:1. EMB is a differential microbiological medium, which inhibits the growth of Gram-positive bacteria and differentiates bacteria that ferment lactose (e.g., E. coli) from those that do not (e.g., Salmonella, Shigella). Organisms that ferment lactose appear dark/black or green often with "nucleated colonies"—colonies with dark centers. Organisms that do not ferment lactose will appear pink and often mucoid. This culture medium is important in medical laboratories because it allows the identification of enteric bacteria in a short period of time. Rapid lactose fermentation produces acids, which lower the pH. This encourages dye absorption by the colonies, which are now colored purple-black. Lactose non-fermenters may increase the pH by deamination of proteins. This ensures that the dye is not absorbed. The colonies will be colorless. On EMB if E. coli is grown it will give a distinctive metallic green sheen (due to the metachromatic properties of the dyes, E. coli movement using flagella, and strong acid end-products of fermentation). Some species of Citrobacter and Enterobacter will also react this way to EMB. This medium has been specifically designed to discourage the growth of Gram-positive bacteria. EMB contains the following ingredients: peptone, lactose, dipotassium phosphate, eosin Y (dye), methylene blue (dye), and agar. There are also EMB agars that do not contain lactose. 
References External links Uses of EMB Agar History of EMB Agar Acumedia - EMB Agar The Scientist - EMB Explanation Microbugz - EMB Agar Biochemistry detection reactions Microbiological media ingredients Staining Microbiological media
Eosin methylene blue
[ "Chemistry", "Biology" ]
481
[ "Staining", "Biochemistry detection reactions", "Biochemical reactions", "Microbiology equipment", "Microbiology techniques", "Microscopy", "Microbiological media", "Cell imaging" ]
1,610,997
https://en.wikipedia.org/wiki/Public%20sex
Public sex is sexual activity that takes place in a public context. It refers to one or more persons performing a sex act in a public place, or in a private place that can be viewed from a public place. Such a private place may be a back yard, a barn, balcony or a bedroom with the curtains open. Public sex also includes sexual acts in semi-public places where the general public is free to enter, such as shopping malls. Public sex acts can be performed in a car (colloquially called "parking"), on a beach, in a forest, theatre, bus, aeroplane, street, cubicle, elevator, bathroom, cemetery, and other locations. According to a large study in 2008, having sex in a public place is a common fantasy and a significant number of couples or individuals have done so. The fantasy is at times depicted in art or film. Incidence In ancient Greece, it was recorded that Crates of Thebes, the Cynic philosopher, had sexual intercourse with his wife Hipparchia of Maroneia, another Cynic philosopher, in public. In England, some Ranters engaged in public sex. In September 2003, the BBC reported on a "dogging" craze fueled by Internet publicity. Dogging is a British English slang term for engaging in public sex, usually in a car park or country park, while others watch. Dogging has aspects of exhibitionism and voyeurism. There is some evidence on the Internet that "dogging" has begun to spread to other countries, such as the United States, Canada, Ireland, Australia, Barbados, Brazil, the Netherlands, Denmark, Norway, Poland, and Sweden. Outdoor public sex purportedly takes place often in parks and beaches in Vancouver and San Francisco. According to the New York magazine, public sex occurs frequently in New York City, and is a fantasy common to many people. Perception Social views related to public sex and sexuality vary greatly between cultures and different times. 
There are many and varied laws that apply to sex in public, which use a variety of terms such as indecent exposure, public lewdness, gross indecency, and others. In some jurisdictions, an offense is committed only if the participants are seen by others, so that a sex act may occur in a closed toilet cubicle without an offense being committed. In the United Kingdom, there has been a rise in public sex venues, possibly due to a more relaxed approach to the enforcement of laws relating to public sex since the early 1990s. Legality In the United Kingdom, the legal status of public sexual acts was considered as part of the Sexual Offences Act 2003. Section 71 of the Act makes it an offence to engage in sexual activity in a public lavatory. In the United Kingdom public sex comes under laws related to voyeurism, exhibitionism or public displays of sexual behaviour, but public sex law enforcement remains ambiguous. Prosecution is possible for a number of offences under section 5 of the Public Order Act 1986, exposure under section 66 of the Sexual Offences Act 2003, or under the common law offence of outraging public decency. The policy of the Association of Chief Police Officers (ACPO) is that arrests are a last resort and a more gradual approach should be taken in such circumstances. See also Cottaging Dogging (sexual slang) Exhibitionism Girls Gone Wild (franchise) Nudity and sexuality Public display of affection Public nudity Public Sex (film) Sexualization References Further reading External links Sexual acts Nudity Casual sex
Public sex
[ "Biology" ]
713
[ "Sexual acts", "Behavior", "Sexuality", "Mating" ]
1,610,998
https://en.wikipedia.org/wiki/Lauryl%20tryptose%20broth
Lauryl tryptose broth (LTB) is a selective growth medium (broth) for coliforms. Lauryl tryptose broth is used for the most probable number test of coliforms in waters, effluent or sewage. It acts as a confirmation test for lactose fermentation with gas production. Sodium lauryl sulfate inhibits organisms other than coliforms. Formula in grams/litre (g/L): Tryptose: 20.0, Lactose: 5.0, Sodium chloride: 5.0, Dipotassium phosphate: 2.75, Potassium dihydrogen phosphate: 2.75, Sodium dodecyl sulfate: 0.1; pH 6.8 ± 0.2. Samples positive for gas production are transferred to brilliant green lactose bile broth (BGLB) to detect the ability to grow in the presence of bile and produce gas at 95 °F (35 °C) for 48 hours. The absence of gas production in 48 hours is considered a negative test for coliforms. Gas production in LTB serves as the presumptive test; BGLB serves as the confirmatory medium. Fecal coliforms are distinguished from coliforms by growth in EC broth at 113.9 °F (45.5 °C) for 24 hours. References Microbiological media
Lauryl tryptose broth
[ "Biology" ]
265
[ "Microbiological media", "Microbiology equipment" ]
1,611,358
https://en.wikipedia.org/wiki/Third-party%20reproduction
Third-party reproduction or donor-assisted reproduction is any human reproduction in which DNA or gestation is provided by a third party or donor other than the one or two parents who will raise the resulting child. This goes beyond the traditional father–mother model, and the third party's involvement is limited to the reproductive process and does not extend into the raising of the child. Third-party reproduction is used by couples unable to reproduce by traditional means, by same-sex couples, and by men and women without a partner. Where donor gametes are provided by a donor, the donor will be a biological parent of the resulting child, but in third party reproduction, he or she will not be the caring parent. Categories One can distinguish several categories, some of which may be combined: Sperm donation. A donor provides sperm in order to father a child for a third-party female. Egg donation. A donor provides ova to a woman or couple in order for the egg to be fertilized and implanted in the recipient woman. Spindle transfer. A third party's mitochondrial DNA is transferred to the future mother's ovum. This is used to prevent mitochondrial disease. Embryo donation with embryos which were originally created for a genetic mother's assisted pregnancy. Once the genetic mother has completed her own treatment, she may donate unused embryos for use by a third party. or where embryos are specifically created for donation using donor eggs and donor sperm. Embryo adoption. Embryos created during a donor's assisted pregnancy are adopted to be implanted in a third party recipient. Surrogacy. An embryo is gestated in a third party's uterus (traditional surrogacy) or a woman is inseminated in order to gestate a child for a third party (straight surrogacy). Gestation Gestation is typically initiated by artificial insemination in the case of sperm donation and by embryo transfer after in vitro fertilisation (IVF) in the case of egg donation, embryo donation, and surrogacy. 
Thus a child can have a genetic and social (non-genetic, non-biological) father, and a genetic, gestational, and social (non-biological) mother, and any combinations thereof. Theoretically a child thus could have 5 parents. Donor treatment A donor treatment is where gametes, i.e. sperm, ova or embryos are provided, or 'donated' by a third party for the purpose of third-party reproduction. Combinations Surrogacy includes, in its wider sense, all situations where a surrogate carries a pregnancy for another person. Recently, there has been a tendency to separate the gestational carrier situation from the "true" surrogate restricting the term for a woman who provides a combination of ovum donation and gestational carrier services. In a 'conventional surrogacy', a surrogate agrees to be inseminated with the sperm of the male partner of the 'commissioning' couple, or with the sperm of one of the male partners in a same-sex relationship, or with sperm provided by a sperm donor. The surrogate is inseminated, conceives, and hands over the baby at the completion of the pregnancy. In conventional surrogacy, the egg which is fertilized is therefore that of the surrogate. A famous case involving paternity rights and surrogacy is the Baby M case. In a 'gestational surrogacy', a surrogate agrees to the implantation in her of an embryo which may be created either by using an egg provided by another woman who may be part of a 'commissioning' couple, or she may be a single woman. Alternatively, an egg provided by a donor may be used to create the embryo. The embryo implanted in the surrogate may be fertilised using sperm from the male partner of the 'commissioning couple', or by using sperm provided by a sperm donor. Embryo donation is where extra embryos from a successful IVF of a couple are given to other couples or women for transfer with the goal of producing a successful pregnancy. 
Embryos for embryo donation may also be created specifically for embryo transfer using donor eggs and sperm, or in some cases donor eggs and donor sperm. It may thus be seen as a combination of sperm donation and egg donation, since what is donated is a combination of these. Such embryos may also be donated to a 'commissioning' woman or a 'commissioning' couple and gestated by a surrogate where, for example, the 'commissioning' woman or the woman of the 'commissioning' couple is infertile and is unable to bring a pregnancy to full term on medical grounds, or is unwilling for social, medical or other reasons, to do so. See also Egg donation Sexual surrogate Surrogacy References External links Annotated Bibliography for Children and Parents of Third Party Reproduction Compiled by the Education Committee, Spring 2007 UK Donorlink (UK Voluntary Information Exchange and Contact Register for donors and donor-conceived people) THIRD PARTY REPRODUCTION (Sperm, egg, and embryo donation and surrogacy) A Guide for Patients Different treatments and methods available for Third Party Reproduction Various articles on Third Party Reproduction Third-Party Reproduction, A Comprehensive Guide. Editor. Goldfarb, James M Fertility medicine Obstetrics Family Human pregnancy Human reproduction Assisted reproductive technology Assistive technology Medical technology Kinship and descent
Third-party reproduction
[ "Biology" ]
1,113
[ "Behavior", "Kinship and descent", "Human behavior", "Assisted reproductive technology", "Medical technology" ]
1,611,766
https://en.wikipedia.org/wiki/Law%20of%20total%20cumulance
In probability theory and mathematical statistics, the law of total cumulance is a generalization to cumulants of the law of total probability, the law of total expectation, and the law of total variance. It has applications in the analysis of time series. It was introduced by David Brillinger. It is most transparent when stated in its most general form, for joint cumulants, rather than for cumulants of a specified order for just one random variable. In general, we have κ(X1, ..., Xn) = Σπ κ( κ(Xi : i ∈ B | Y) : B ∈ π ), where κ(X1, ..., Xn) is the joint cumulant of n random variables X1, ..., Xn, the outer sum is over all partitions π of the set { 1, ..., n } of indices, "B ∈ π" means B runs through the whole list of "blocks" of the partition π, and κ(Xi : i ∈ B | Y) is a conditional cumulant given the value of the random variable Y. It is therefore a random variable in its own right—a function of the random variable Y. Examples The special case of just one random variable and n = 2 or 3 Only for n = 2 or 3 is the nth cumulant the same as the nth central moment. The case n = 2 is well-known (see law of total variance). Below is the case n = 3, where the notation μ3 means the third central moment: μ3(X) = E(μ3(X | Y)) + μ3(E(X | Y)) + 3 cov(E(X | Y), var(X | Y)). General 4th-order joint cumulants For general 4th-order cumulants, the rule gives a sum of 15 terms, one for each of the 15 partitions of a set of four elements. Cumulants of compound Poisson random variables Suppose Y has a Poisson distribution with expected value λ, and X is the sum of Y copies of W that are independent of each other and of Y. All of the cumulants of the Poisson distribution are equal to each other, and so in this case are equal to λ. Also recall that if random variables W1, ..., Wm are independent, then the nth cumulant is additive: κn(W1 + ... + Wm) = κn(W1) + ... + κn(Wm). We will find the 4th cumulant of X. Conditioning on Y and applying the law of total cumulance together with this additivity yields a sum over all partitions of the set { 1, 2, 3, 4 }, of the product over all blocks of the partition, of cumulants of W of order equal to the size of the block. 
That is precisely the 4th raw moment of W (see cumulant for a more leisurely discussion of this fact). Hence the cumulants of X are the moments of W multiplied by λ. In this way we see that every moment sequence is also a cumulant sequence (the converse cannot be true, since cumulants of even order ≥ 4 are in some cases negative, and also because the cumulant sequence of the normal distribution is not a moment sequence of any probability distribution). Conditioning on a Bernoulli random variable Suppose Y = 1 with probability p and Y = 0 with probability q = 1 − p. Suppose the conditional probability distribution of X given Y is F if Y = 1 and G if Y = 0. Then the law of total cumulance expresses κ(X1, ..., Xn) as a sum over all partitions π of the set { 1, ..., n } that are finer than the coarsest partition – that is, the sum runs over all partitions except the one-block partition. References Algebra of random variables Theory of probability distributions Theorems in statistics Statistical laws
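Two facts used in the compound Poisson example – that every cumulant of a Poisson(λ) distribution equals λ, and that cumulants can be recovered from raw moments – can be checked numerically with a short script. The moment recursion E[Y^(k+1)] = λ Σ C(k, j) E[Y^j] for the Poisson distribution and the standard moment-to-cumulant recursion are both well known; the variable names are ours.

```python
from math import comb

def poisson_raw_moments(lam, n):
    # Raw moments m[0..n] of Poisson(lam), via
    # E[Y^(k+1)] = lam * sum_{j=0}^{k} C(k, j) * E[Y^j].
    m = [1.0]
    for k in range(n):
        m.append(lam * sum(comb(k, j) * m[j] for j in range(k + 1)))
    return m

def cumulants_from_moments(m):
    # kappa_i = m_i - sum_{k=1}^{i-1} C(i-1, k-1) * kappa_k * m_{i-k}.
    kap = [0.0] * len(m)
    for i in range(1, len(m)):
        kap[i] = m[i] - sum(comb(i - 1, k - 1) * kap[k] * m[i - k]
                            for k in range(1, i))
    return kap

lam = 3.0
kap = cumulants_from_moments(poisson_raw_moments(lam, 6))
print(kap[1:])  # all equal to lam = 3

# Sanity check of the compound Poisson result for a degenerate W = 2:
# then X = 2Y, so kappa_n(X) = 2^n * kappa_n(Y) = 2^n * lam,
# which matches lam * E[W^n] = lam * 2^n, as derived above.
```

With W degenerate the identity "cumulants of X are λ times the moments of W" reduces to the homogeneity of cumulants, which is why the check in the closing comment needs no extra computation.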
Law of total cumulance
[ "Mathematics" ]
706
[ "Mathematical theorems", "Mathematical problems", "Theorems in statistics" ]
1,611,956
https://en.wikipedia.org/wiki/Gimenez%20stain
The Gimenez staining technique uses biological stains to detect and identify bacterial infections in tissue samples. Although largely superseded by techniques like Giemsa staining, the Gimenez technique may be valuable for detecting certain slow-growing or fastidious bacteria. Basic fuchsin stain in aqueous solution with phenol and ethanol colours many bacteria (both gram positive and Gram negative) red, magenta, or pink. A malachite green counterstain gives a blue-green background cast to the surrounding tissue. See also Histology List of common staining protocols Microscopy References P. Bruneval et al.. "Detection of fastidious bacteria in cardiac valves in cases of blood culture negative endocarditis." Journal of Clinical Pathology. 54:238-240 (2001). D.F. Gimenez. "Staining Rickettsiae in yolksack cultures". Stain Technol 39:135–40 (1964). Staining
Gimenez stain
[ "Chemistry", "Biology" ]
201
[ "Bacteria stubs", "Staining", "Microbiology techniques", "Bacteria", "Microscopy", "Cell imaging" ]
1,612,114
https://en.wikipedia.org/wiki/Reinhold%20Baer
Reinhold Baer (22 July 1902 – 22 October 1979) was a German mathematician, known for his work in algebra. He introduced injective modules in 1940. He is the eponym of Baer rings, Baer groups, and Baer subplanes. Biography Baer studied mechanical engineering for a year at Leibniz University Hannover. He then went to study philosophy at Freiburg in 1921. While he was at Göttingen in 1922 he was influenced by Emmy Noether and Hellmuth Kneser. In 1924 he won a scholarship for specially gifted students. Baer wrote up his doctoral dissertation and it was published in Crelle's Journal in 1927. Baer accepted a post at Halle in 1928. There, he published Ernst Steinitz's "Algebraische Theorie der Körper" with Helmut Hasse, first published in Crelle's Journal in 1910. While Baer was with his wife in Austria, Adolf Hitler and the Nazis came into power. Both of Baer's parents were Jewish, and he was for this reason informed that his services at Halle were no longer required. Louis Mordell invited him to go to Manchester and Baer accepted. Baer stayed at Princeton University and was a visiting scholar at the nearby Institute for Advanced Study from 1935 to 1937. For a short while he lived in North Carolina. From 1938 to 1956 he worked at the University of Illinois at Urbana-Champaign. He returned to Germany in 1956. According to biographer K. W. Gruenberg, The rapid development of lattice theory in the mid-thirties suggested that projective geometry should be viewed as a special kind of lattice, the lattice of all subspaces of a vector space... [Linear Algebra and Projective Geometry (1952)] is an account of the representation of vector spaces over division rings, of projectivities by semi-linear transformations and of dualities by semi-bilinear forms. He died of heart failure on 22 October in 1979. In 2016 the Reinhold Baer Prize for the best Ph.D. thesis in group theory was set up in his honour. 
Bibliography 1934: "Erweiterung von Gruppen und ihren Isomorphismen", Mathematische Zeitschrift 38(1): 375–416 (German) 1940: "Nilpotent groups and their generalizations", Transactions of the American Mathematical Society 47: 393–434 1944: "The higher commutator subgroups of a group", Bulletin of the American Mathematical Society 50: 143–160 1945: "Representations of groups as quotient groups. II. Minimal central chains of a group", Transactions of the American Mathematical Society 58: 348–389 1945: "Representations of groups as quotient groups. III. Invariants of classes of related representations", Transactions of the American Mathematical Society 58: 390–419 See also Capable group Dedekind group Retract (group theory) Radical of a ring Semiprime ring Nielsen-Schreier theorem References O. H. Kegel (1979) "Reinhold Baer (1902 — 1979)", Mathematical Intelligencer 2:181,2. External links K.W. Gruenberg & Derek Robinson (2003) The Mathematical Legacy of Reinhold Baer, Illinois Journal of Mathematics'' 47(1-2) from Project Euclid. Author profile in the database zbMATH Baer Family's Schedule of 1940 US Census. Reproduction of a talk given by Baer on his last lecture in 1967, before his retirement from the University of Frankfurt - here is a translation. 1902 births 1979 deaths Scientists from Berlin Jewish emigrants from Nazi Germany to the United States 20th-century German mathematicians Algebraists University of Freiburg alumni University of Göttingen alumni Academic staff of the Martin Luther University of Halle-Wittenberg Princeton University faculty Institute for Advanced Study visiting scholars University of Illinois Urbana-Champaign faculty Academic staff of Goethe University Frankfurt
Reinhold Baer
[ "Mathematics" ]
810
[ "Algebra", "Algebraists" ]
1,612,567
https://en.wikipedia.org/wiki/Modal%20testing
Modal testing is the form of vibration testing of an object whereby the natural (modal) frequencies, modal masses, modal damping ratios and mode shapes of the object under test are determined. A modal test consists of an acquisition phase and an analysis phase. The complete process is often referred to as a Modal Analysis or Experimental Modal Analysis. There are several ways to do modal testing but impact hammer testing and shaker (vibration tester) testing are commonplace. In both cases energy is supplied to the system with a known frequency content. Where structural resonances occur there will be an amplification of the response, clearly seen in the response spectra. Using the response spectra and force spectra, a transfer function can be obtained. The transfer function (or frequency response function (FRF)) is often curve fitted to estimate the modal parameters; however, there are many methods of modal parameter estimation and it is the topic of much research. Impact Hammer Modal Testing An ideal impact to a structure is a perfect impulse, which has an infinitely small duration, causing a constant amplitude in the frequency domain; this would result in all modes of vibration being excited with equal energy. The impact hammer test is designed to replicate this; however, in reality a hammer strike cannot last for an infinitely small duration, but has a known contact time. The duration of the contact time directly influences the frequency content of the force, with a larger contact time causing a smaller range of bandwidth. A load cell is attached to the end of the hammer to record the force. Impact hammer testing is ideal for small lightweight structures. However, as the size of the structure increases, issues can occur due to a poor signal-to-noise ratio, which is common on large civil engineering structures. Shaker Modal Testing A shaker is a device that excites the object or structure according to its amplified input signal. 
Several input signals are available for modal testing, but the sine sweep and random frequency vibration profiles are by far the most commonly used signals. Small objects or structures can be attached directly to the shaker table. With some types of shakers, an armature is often attached to the body to be tested by way of piano wire (pulling force) or stinger (pushing force). When the signal is transmitted through the piano wire or the stinger, the object responds the same way as in impact testing, by attenuating some frequencies and amplifying others. These frequencies are measured as modal frequencies. Usually a load cell is placed between the shaker and the structure to obtain the excitation force. For large civil engineering structures much larger shakers are used, which can have a mass of 100 kg and above, and are able to apply a force of many hundreds of newtons. Several types of shakers are common: rotating mass shakers, electrodynamic shakers, and electrohydraulic shakers. For rotating mass shakers, the force can be calculated by knowing the mass and the speed of rotation, while for electrodynamic shakers, the force can be obtained through a load cell or an accelerometer placed on the moving mass of the shaker. Shakers have an advantage over the impact hammer as they can supply more energy to a structure over a longer period of time. However, problems can also be introduced; shakers can influence the dynamic properties of the structure and can also increase the complexity of analysis due to windowing errors. See also Modal Analysis Vibration Cushioning Shock absorber Shock (mechanics) Shock response spectrum Shaker (testing device) References Wave mechanics Tests
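The FRF estimation described above – divide the response spectrum by the force spectrum and read off the resonance – can be sketched numerically. The following is an illustrative sketch only, not a substitute for a real test: it simulates an idealized impact test on a single-degree-of-freedom system with hypothetical mass, damping, and natural-frequency values, and recovers the resonance from the transfer function.

```python
import numpy as np

# Hypothetical single-degree-of-freedom system: mass m (kg),
# damping ratio zeta, natural frequency fn (Hz).
m, zeta, fn = 1.0, 0.02, 10.0
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta**2)  # damped natural frequency

fs, T = 200.0, 20.0             # sampling rate (Hz), record length (s)
t = np.arange(0, T, 1 / fs)

# Analytical impulse response of the SDOF system (response to a unit impulse).
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)

# A unit impulse at the first sample approximates an ideal hammer strike,
# whose spectrum is flat across all frequencies.
force = np.zeros_like(t)
force[0] = 1.0

# FRF estimate: response spectrum divided by force spectrum.
H = np.fft.rfft(h) / np.fft.rfft(force)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# The structural resonance shows up as the peak of |H|.
f_peak = freqs[np.argmax(np.abs(H))]
print(f"resonance peak near {f_peak:.2f} Hz")  # close to fn = 10 Hz
```

Because the simulated force is a perfect impulse, its spectrum is constant and the division is trivial here; with a real hammer strike of finite contact time, the same division compensates for the reduced force bandwidth at higher frequencies.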
Modal testing
[ "Physics" ]
745
[ "Wave mechanics", "Waves", "Physical phenomena", "Classical mechanics" ]
1,612,715
https://en.wikipedia.org/wiki/L%C3%A9on%20Brillouin
Léon Nicolas Brillouin (August 7, 1889 – October 4, 1969) was a French physicist. He made contributions to quantum mechanics, radio wave propagation in the atmosphere, solid-state physics, and information theory. Early life Brillouin was born in Sèvres, near Paris, France. His father, Marcel Brillouin, grandfather, Éleuthère Mascart, and great-grandfather, Charles Briot, were physicists as well. Education From 1908 to 1912, Brillouin studied physics at the École Normale Supérieure, in Paris. From 1911 he studied under Jean Perrin until he left for the Ludwig Maximilian University of Munich (LMU), in 1912. At LMU, he studied theoretical physics with Arnold Sommerfeld. Just a few months before Brillouin's arrival at LMU, Max von Laue had conducted his experiment showing X-ray diffraction in a crystal lattice. In 1913, he went back to France to study at the University of Paris and it was in this year that Niels Bohr submitted his first paper on the Bohr model of the hydrogen atom. From 1914 until 1919, during World War I, he served in the military, developing the valve amplifier with G. A. Beauvais. At the conclusion of the war, he returned to the University of Paris to continue his studies with Paul Langevin, and was awarded his Docteur ès science in 1920. Brillouin's thesis jury was composed of Langevin, Marie Curie, and Jean Perrin, and his thesis topic was on the quantum theory of solids. In his thesis, he proposed an equation of state based on the atomic vibrations (phonons) that propagate through the solid. He also studied the propagation of monochromatic light waves and their interaction with acoustic waves, i.e., scattering of light with a frequency change, which became known as Brillouin scattering. Career After receipt of his doctorate, Brillouin became the scientific secretary of the reorganized Journal de Physique et le Radium. In 1932, he became associate director of the physics laboratories at the Collège de France.
In 1926, Gregor Wentzel, Hendrik Kramers, and Brillouin independently developed what is known as the Wentzel–Kramers–Brillouin approximation, also known as the WKB method, classical approach, and phase integral method. In 1928, after the Institut Henri Poincaré was established, he was appointed as professor to the Chair for Theoretical Physics. During his work on the propagation of electron waves in a crystal lattice, he introduced the concept of Brillouin zones in 1930. Quantum mechanical perturbation techniques developed by Brillouin and by Eugene Wigner resulted in what is known as the Brillouin–Wigner formula. Ever since his study with Sommerfeld, Brillouin had been interested in, and did pioneering work on, the diffraction of electromagnetic radiation in dispersive media. As a specialist in radio wave propagation, Brillouin was appointed director general of the French state-run agency Radiodiffusion Nationale in August 1939, about a month before the outbreak of war with Germany. In May 1940, upon the collapse of France, as part of the government, he retired to Vichy. Six months later, he resigned and went to the United States. Until 1942, Brillouin was a visiting professor at the University of Wisconsin–Madison, and then he was a professor at Brown University, in Providence, Rhode Island, until 1943. For the next two years, he was a research scientist with the National Defense Research Committee at Columbia University, working in the field of radar. From 1947 to 1949, he was professor of applied mathematics at Harvard University. During the period 1952 to 1954, he was with IBM Corporation in Poughkeepsie, New York, as well as a staff member of the IBM Watson Laboratory at Columbia University. In 1954, he became an adjunct professor at Columbia University. From 1957, he was founding editor of Information and Control, and served as one of its three, later four, editors until 1966. He lived in New York City until he died in 1969. His wife Marcelle died in 1986.
Brillouin was a founder of modern solid state physics for which he discovered, among other things, Brillouin zones. He applied information theory to physics and the design of computers and coined the concept of negentropy to demonstrate the similarity between entropy and information. Brillouin offered a solution to the problem of Maxwell's demon. In his book, Relativity Reexamined, he called for a "painful and complete re-appraisal" of relativity theory which "is now absolutely necessary." Contributions Brillouin function Brillouin limit Brillouin scattering Brillouin zone Brillouin theorem Brillouin doublet Brillouin flow Brillouin–Wigner formula Einstein–Brillouin–Keller method WKB approximation Acoustoelastic effect Negentropy Honors 1953 – Elected to the US National Academy of Sciences Books Les mesures en haute fréquence, with H. Armagnat (Chiron, 1924) Les Statistiques Quantiques Et Leurs Applications. 2 Vols. (Presse Universitaires de France, 1930) La Théorie des Quanta et l'Atome de Bohr (Presse Universitaires de France, 1922, 1931) Conductibilité électrique et thermique des métaux (Hermann, 1934) Notions Elementaires de Mathématiques pour les Sciences Expérimentales (Libraires de l'Academie de Médecine, 1939) The Mathematics of Ultra-High Frequencies Radio (Brown University, 1943) Wave Propagation in Periodic Structures: Electric Filters and Crystal Lattices (McGraw–Hill, 1946) (Dover, 1953, 2003) Les Tenseurs en mécanique et en élasticité: Cours de physique théorique (Dover, 1946) Mathématiques (Masson, 1947) Notions élémentaires de mathématiques pour les sciences expérimentales (Masson, 1947) Propagation des ondes dans les milieux périodiques, with Maurice Parodi (Masson – Dunod, 1956) La science et la théorie de l'information (Masson, 1959) Vie Matière et Observation (Albin Michel, 1959) Wave Propagation and Group Velocity (Academic Press, 1960) Science and Information Theory (Academic Press, 1956; second edition 1962, reprinted Dover, 2004) Scientific 
Uncertainty and Information (Academic Press, 1964) Tensors in Mechanics and Elasticity. Translated from the French By Robert O. Brennan. (Engineering Physics: An International Series of Monographs, Vol. 2) (Academic Press, 1964) Relativity Reexamined (Academic Press, 1970) Tres Vidas Ejemplares en la Física (Madrid, Marzo, 1970) References Further reading Mehra, Jagdish, and Helmut Rechenberg, The Historical Development of Quantum Theory. Volume 1 Part 2 The Quantum Theory of Planck, Einstein, Bohr and Sommerfeld 1900–1925: Its Foundation and the Rise of Its Difficulties. (Springer, 2001) Mehra, Jagdish, and Helmut Rechenberg, The Historical Development of Quantum Theory. Volume 5 Erwin Schrödinger and the Rise of Wave Mechanics. Part 2 Schrödinger in Vienna and Zurich 1887–1925. (Springer, 2001) Schiff, Leonard I, Quantum Mechanics (McGraw–Hill, 3rd edition, 1968) Mosseri, Rémy, Léon Brillouin à la croisée des ondes (Belin, Paris, 1999) External links Léon Brillouin – Biography Oral History interview transcript for Leon Brillouin on 29 March 1962, American Institute of Physics, Niels Bohr Library and Archives - Session I Oral History interview transcript for Leon Brillouin on 5 April 1962, American Institute of Physics, Niels Bohr Library and Archives - Session II National Academy of Sciences Biographical Memoir Archival collections Léon Brillouin papers, 1877-1972, Niels Bohr Library & Archives 1889 births 1969 deaths 20th-century French physicists 20th-century American physicists Optical physicists French emigrants to the United States Ludwig Maximilian University of Munich alumni University of Paris alumni École Normale Supérieure alumni University of Wisconsin–Madison faculty Brown University faculty Harvard University faculty Columbia University faculty Academic staff of the University of Paris Academic staff of the Collège de France People from Sèvres Members of the United States National Academy of Sciences Fellows of the American Physical Society 
Relativity critics
Léon Brillouin
[ "Physics" ]
1,760
[ "Relativity critics", "Theory of relativity" ]
1,612,722
https://en.wikipedia.org/wiki/Index%20of%20genetics%20articles
Genetics (from Ancient Greek genetikos, “genitive”, and that from genesis, “origin”), a discipline of biology, is the science of heredity and variation in living organisms. Articles (arranged alphabetically) related to genetics include: # A B C D E F G H I J K L M N O P Q R S T U V W X Y Z References See also List of genetics research organizations List of geneticists & biochemists Articles Genetics-related topics Biotechnology
Index of genetics articles
[ "Biology" ]
98
[ "nan", "Biotechnology" ]
1,612,742
https://en.wikipedia.org/wiki/Planetary%20protection
Planetary protection is a guiding principle in the design of an interplanetary mission, aiming to prevent biological contamination of both the target celestial body and the Earth in the case of sample-return missions. Planetary protection reflects both the unknown nature of the space environment and the desire of the scientific community to preserve the pristine nature of celestial bodies until they can be studied in detail. There are two types of interplanetary contamination. Forward contamination is the transfer of viable organisms from Earth to another celestial body. Back contamination is the transfer of extraterrestrial organisms, if they exist, back to the Earth's biosphere. History The potential problem of lunar and planetary contamination was first raised at the International Astronautical Federation VIIth Congress in Rome in 1956. In 1958 the U.S. National Academy of Sciences (NAS) passed a resolution stating, “The National Academy of Sciences of the United States of America urges that scientists plan lunar and planetary studies with great care and deep concern so that initial operations do not compromise and make impossible forever after critical scientific experiments.” This led to creation of the ad hoc Committee on Contamination by Extraterrestrial Exploration (CETEX), which met for a year and recommended that interplanetary spacecraft be sterilized, and stated, “The need for sterilization is only temporary. Mars and possibly Venus need to remain uncontaminated only until study by manned ships becomes possible”. In 1959, planetary protection was transferred to the newly formed Committee on Space Research (COSPAR). COSPAR in 1964 issued Resolution 26 affirming that: In 1967, the US, USSR, and UK ratified the United Nations Outer Space Treaty. The legal basis for planetary protection lies in Article IX of this treaty: This treaty has since been signed and ratified by 104 nation-states. Another 24 have signed but not ratified. 
All the current space-faring nation-states, along with all current aspiring space-faring nation-states, have both signed and ratified the treaty. The Outer Space Treaty has consistent and widespread international support, and as a result of this, together with the fact that it is based on the 1963 declaration which was adopted by consensus in the UN General Assembly, it has taken on the status of customary international law. The provisions of the Outer Space Treaty are therefore binding on all states, even those who have neither signed nor ratified it. For forward contamination, the phrase to be interpreted is "harmful contamination". Two legal reviews came to differing interpretations of this clause (both reviews were unofficial). However, the currently accepted interpretation is that “any contamination which would result in harm to a state’s experiments or programs is to be avoided”. NASA policy states explicitly that “the conduct of scientific investigations of possible extraterrestrial life forms, precursors, and remnants must not be jeopardized”. COSPAR recommendations and categories The Committee on Space Research (COSPAR) meets every two years, in a gathering of 2000 to 3000 scientists, and one of its tasks is to develop recommendations for avoiding interplanetary contamination. Its legal basis is Article IX of the Outer Space Treaty (see history above for details). Its recommendations depend on the type of space mission and the celestial body explored. COSPAR categorizes the missions into 5 groups: Category I: Any mission to locations not of direct interest for chemical evolution or the origin of life, such as the Sun or Mercury. No planetary protection requirements. Category II: Any mission to locations of significant interest for chemical evolution and the origin of life, but only a remote chance that spacecraft-borne contamination could compromise investigations. Examples include the Moon, Venus, and comets.
Requires simple documentation only, primarily to outline intended or potential impact targets, and an end of mission report of any inadvertent impact site if such occurred. Category III: Flyby and orbiter missions to locations of significant interest for chemical evolution or the origin of life, and with a significant chance that contamination could compromise investigations, e.g., Mars, Europa, Enceladus. Requires more involved documentation than Category II. Other requirements, depending on the mission, may include trajectory biasing, clean room assembly, bioburden reduction, and if impact is a possibility, inventory of organics. Category IV: Lander or probe missions to the same locations as Category III. Measures to be applied depend on the target body and the planned operations. "Sterilization of the entire spacecraft may be required for landers and rovers with life-detection experiments, and for those landing in or moving to a region where terrestrial microorganisms may survive and grow, or where indigenous life may be present. For other landers and rovers, the requirements would be for decontamination and partial sterilization of the landed hardware." Missions to Mars in category IV are subclassified further: Category IVa. Landers that do not search for Martian life - uses the Viking lander pre-sterilization requirements, a maximum of 300,000 spores per spacecraft and 300 spores per square meter. Category IVb. Landers that search for Martian life. Adds stringent extra requirements to prevent contamination of samples. Category IVc. Any component that accesses a Martian special region (see below) must be sterilized at least to the Viking post-sterilization biological burden levels of 30 spores total per spacecraft. Category V: This is further divided into unrestricted and restricted sample return. Unrestricted Category V: samples from locations judged by scientific opinion to have no indigenous lifeforms. No special requirements.
Restricted Category V: (where scientific opinion is unsure) the requirements include: absolute prohibition of destructive impact upon return, containment of all returned hardware which directly contacted the target body, and containment of any unsterilized sample returned to Earth. For Category IV missions, a certain level of biological burden is allowed for the mission. In general this is expressed as a 'probability of contamination', required to be less than one chance in 10,000 of forward contamination per mission, but in the case of Mars Category IV missions (above) the requirement has been translated into a count of Bacillus spores per surface area, as an easy to use assay method. More extensive documentation is also required for Category IV. Other procedures required, depending on the mission, may include trajectory biasing, the use of clean rooms during spacecraft assembly and testing, bioload reduction, partial sterilization of the hardware having direct contact with the target body, a bioshield for that hardware, and, in rare cases, complete sterilization of the entire spacecraft. For restricted Category V missions, the current recommendation is that no uncontained samples should be returned unless sterilized. Since sterilization of the returned samples would destroy much of their science value, current proposals involve containment and quarantine procedures. For details, see Containment and quarantine below. Category V missions also have to fulfill the requirements of Category IV to protect the target body from forward contamination. Mars special regions A special region is a region classified by COSPAR where terrestrial organisms could readily propagate, or thought to have a high potential for existence of Martian life forms. This is understood to apply to any region on Mars where liquid water occurs, or can occasionally occur, based on the current understanding of requirements for life. 
If a hard landing risks biological contamination of a special region, then the whole lander system must be sterilized to COSPAR category IVc. Target categories Some targets are easily categorized. Others are assigned provisional categories by COSPAR, pending future discoveries and research. The 2009 COSPAR Workshop on Planetary Protection for Outer Planet Satellites and Small Solar System Bodies covered this in some detail. Most of these assessments are from that report, with some future refinements. This workshop also gave more precise definitions for some of the categories: Category I Io, Sun, Mercury, undifferentiated metamorphosed asteroids Category II Callisto, comets, asteroids of category P, D, and C, Venus, Kuiper belt objects (KBO) < 1/2 size of Pluto. Provisional Category II Ganymede, Titan, Triton, the Pluto–Charon system, and other large KBOs (> 1/2 size of Pluto), Ceres Provisionally, they assigned these objects to Category II. However, they state that more research is needed, because there is a remote possibility that the tidal interactions of Pluto and Charon could maintain some water reservoir below the surface. Similar considerations apply to the other larger KBOs. Triton is insufficiently well understood at present to say it is definitely devoid of liquid water. The only close up observations to date are those of Voyager 2. In a detailed discussion of Titan, scientists concluded that there was no danger of contamination of its surface, except short term adding of negligible amounts of organics, but Titan could have a below surface water reservoir that communicates with the surface, and if so, this could be contaminated. In the case of Ganymede, the question is, given that its surface shows pervasive signs of resurfacing, is there any communication with its subsurface ocean? They found no known mechanism by which this could happen, and the Galileo spacecraft found no evidence of cryovolcanism. 
Initially, they assigned it as Priority B minus, meaning that precursor missions are needed to assess its category before any surface missions. However, after further discussion they provisionally assigned it to Category II, so no precursor missions are required, depending on future research. If there is cryovolcanism on Ganymede or Titan, the undersurface reservoir is thought to be 50 – 150 km below the surface. They were unable to find a process that could transfer the surface melted water back down through 50 km of ice to the under surface sea. This is why both Ganymede and Titan were assigned a reasonably firm provisional Category II, but pending results of future research. Icy bodies that show signs of recent resurfacing need further discussion and might need to be assigned to a new category depending on future research. This approach has been applied, for instance, to missions to Ceres. The planetary protection Category is subject for review during the mission of the Ceres orbiter (Dawn) depending on the results found. Category III / IV Mars because of possible subsurface habitats. Europa because of its subsurface ocean. Enceladus because of evidence of water plumes. Category V In the category V for sample return the conclusions so far are: Unrestricted Category V: Venus, the Moon. Restricted Category V: Mars, Europa, Enceladus. The Coleman–Sagan equation The aim of the current regulations is to keep the number of microorganisms low enough so that the probability of contamination of Mars (and other targets) is acceptable. It is not an objective to make the probability of contamination zero. The aim is to keep the probability of contamination of 1 chance in 10,000 of contamination per mission flown. This figure is obtained typically by multiplying together the number of microorganisms on the spacecraft, the probability of growth on the target body, and a series of bioload reduction factors. In detail the method used is the Coleman–Sagan equation. . 
The expected number of viable organisms released is N = N0 × R × PS × Pt × PR × Pg, where N0 = the number of microorganisms on the spacecraft initially, R = the reduction due to conditions on the spacecraft before and after launch, PS = the probability that microorganisms on the spacecraft reach the surface of the planet, Pt = the probability that the spacecraft will hit the planet (this is 1 for a lander), PR = the probability of a microorganism being released into the environment when on the ground (usually set to 1 for a crash landing), and Pg = the probability of growth (for targets with liquid water this is set to 1 for the sake of the calculation). The requirement is then that N be less than 1 in 10,000 (10⁻⁴), a number chosen by Sagan et al. somewhat arbitrarily. Sagan and Coleman assumed that about 60 missions to the Mars surface would occur before the exobiology of Mars is thoroughly understood, 54 of those successful, along with 30 flybys or orbiters, and the number was chosen to ensure a probability of at least 99.9% of keeping the planet free from contamination over the duration of the exploration period. Critiques The Coleman–Sagan equation has been criticised because the individual parameters are often not known to better than an order of magnitude or so. For example, the thickness of the surface ice of Europa is unknown, and may be thin in places, which can give rise to a high level of uncertainty in the equation. It has also been criticised because of the inherent assumption it makes of an end to the protection period and future human exploration. In the case of Europa, this would only protect it with reasonable probability for the duration of the period of exploration. Greenberg has suggested an alternative: to use the natural contamination standard, that our missions to Europa should not have a higher chance of contaminating it than the chance of contamination by meteorites from Earth. Another approach for Europa is the use of binary decision trees, which is favoured by the Committee on Planetary Protection Standards for Icy Bodies in the Outer Solar System under the auspices of the Space Studies Board.
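The bookkeeping of the Coleman–Sagan equation can be illustrated with a short calculation. The parameter values below are hypothetical, chosen only to show how the bioload reduction factors multiply against the 1-in-10,000 requirement; they are not taken from any real mission.

```python
# Worked example of the Coleman-Sagan bookkeeping described above.
# All parameter values are hypothetical and purely illustrative.

def coleman_sagan(n0, r, p_s, p_t, p_r, p_g):
    """Expected number of viable organisms released and growing on the target."""
    return n0 * r * p_s * p_t * p_r * p_g

REQUIREMENT = 1e-4   # Sagan et al.'s (somewhat arbitrary) per-mission limit

# Hypothetical lander: 300,000 spores after cleaning, one-in-a-million
# survival through sterilization and cruise, 1-in-10,000 chance of reaching
# the surface viable; P_t = P_R = P_g = 1 (lander, crash-landing assumption,
# liquid water present).
n = coleman_sagan(n0=3e5, r=1e-6, p_s=1e-4, p_t=1.0, p_r=1.0, p_g=1.0)

print(f"N = {n:.1e}, meets requirement: {n <= REQUIREMENT}")
```

Because the factors simply multiply, an order-of-magnitude error in any single parameter (the thrust of the critiques above) shifts the final probability by the same order of magnitude.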
This goes through a series of seven steps, leading to a final decision on whether to go ahead with the mission or not. Containment and quarantine for restricted Category V sample return In the case of restricted Category V missions, Earth would be protected through quarantine of samples and astronauts in a yet-to-be-built Biosafety level 4 facility. In the case of a Mars sample return, missions would be designed so that no part of the capsule that encounters the Mars surface is exposed to the Earth environment. One way to do that is to enclose the sample container within a larger outer container from Earth, in the vacuum of space. The integrity of any seals is essential and the system must also be monitored to check for the possibility of micro-meteorite damage during return to Earth. The ESF report makes a specific recommendation to this effect. No restricted category V returns have been carried out. During the Apollo program, the sample-returns were regulated through the Extra-Terrestrial Exposure Law. This was rescinded in 1991, so new regulations would need to be enacted. The Apollo era quarantine procedures are of interest as the only attempt to date of a return to Earth of a sample that, at the time, was thought to have a remote possibility of including extraterrestrial life. Samples and astronauts were quarantined in the Lunar Receiving Laboratory. The methods used would be considered inadequate for containment by modern standards. Also, the Lunar Receiving Laboratory would be judged a failure by its own design criteria, as it failed to contain the lunar material, with two containment failure points during the Apollo 11 return mission: at the splashdown and at the facility itself. However the Lunar Receiving Laboratory was built quickly, with only two years from start to finish, a time period now considered inadequate. Lessons learned from it can help with design of any Mars sample return receiving facility.
Design criteria for a proposed Mars Sample Return Facility, and for the return mission, have been developed by the American National Research Council and the European Science Foundation. They concluded that it could be based on biohazard 4 containment but with more stringent requirements to contain unknown microorganisms possibly as small as or smaller than the smallest Earth microorganisms known, the ultramicrobacteria. The ESF study also recommended that it should be designed to contain the smaller gene transfer agents if possible, as these could potentially transfer DNA from Martian microorganisms to terrestrial microorganisms if they have a shared evolutionary ancestry. It also needs to double as a clean room facility to protect the samples from terrestrial contamination that could confuse the sensitive life detection tests that would be used on the samples. Before a sample return, new quarantine laws would be required. Environmental assessment would also be required, and various other domestic and international laws not present during the Apollo era would need to be negotiated. Decontamination procedures For all spacecraft missions requiring decontamination, the starting point is clean room assembly in US federal standard class 100 cleanrooms. These are rooms with fewer than 100 particles of size 0.5 μm or larger per cubic foot. Engineers wear cleanroom suits with only their eyes exposed. Components are sterilized individually before assembly, as far as possible, and they clean surfaces frequently with alcohol wipes during assembly. Bacillus subtilis was chosen not only for its ability to readily generate spores, but also for its well-established use as a model species. It is a useful tracker of UV irradiation effects because of its high resilience to a variety of extreme conditions. As such it is an important indicator species for forward contamination in the context of planetary protection.
For Category IVa missions (Mars landers that do not search for Martian life), the aim is to reduce the bioburden to 300,000 bacterial spores on any surface from which the spores could get into the Martian environment. Any heat tolerant components are heat sterilized to 114 °C. Sensitive electronics, such as the core box of the rover including the computer, are sealed and vented through high-efficiency filters to keep any microbes inside. For more sensitive missions such as Category IVc (to Mars special regions), a far higher level of sterilization is required. These need to be similar to levels implemented on the Viking landers, which were sterilized for a surface which at the time was thought to be potentially hospitable to life, similar to special regions on Mars today. In microbiology, it is usually impossible to prove that there are no microorganisms left viable, since many microorganisms are either not yet studied, or not cultivable. Instead, sterilization is done using a series of tenfold reductions of the numbers of microorganisms present. After a sufficient number of tenfold reductions, the chance that any viable microorganisms are left will be extremely low. The two Viking Mars landers were sterilized using dry heat sterilization. After preliminary cleaning to reduce the bioburden to levels similar to present day Category IVa spacecraft, the Viking spacecraft were heat-treated for 30 hours at 112 °C, nominal 125 °C (five hours at 112 °C was considered enough to reduce the population tenfold even for enclosed parts of the spacecraft, so this was enough for a million-fold reduction of the originally low population). Modern materials, however, are often not designed to handle such temperatures, especially since modern spacecraft often use "commercial off the shelf" components. Problems encountered include nanoscale features only a few atoms thick, plastic packaging, and conductive epoxy attachment methods.
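The tenfold ("decimal") reduction arithmetic described above can be made concrete. The sketch below uses the figures given in the text (five hours at 112 °C per tenfold reduction, 30 hours of treatment, and the 300,000-spore Category IVa bioburden limit as a starting point) and is illustrative only.

```python
# Sketch of the decimal-reduction arithmetic described above.
# The "D-value" is the treatment time at a given temperature needed
# for one tenfold reduction; the text gives D = 5 h at 112 degrees C.

def surviving_population(n0, hours, d_value_hours):
    """Expected population left after `hours` of heat treatment."""
    log_reductions = hours / d_value_hours
    return n0 * 10 ** (-log_reductions)

# Viking-style treatment: 30 h at ~112 degrees C, starting from the
# Category IVa limit of 300,000 spores per spacecraft.
n_final = surviving_population(300_000, hours=30, d_value_hours=5)
print(f"{30 / 5:.0f} tenfold reductions -> {n_final:.1e} expected surviving spores")
```

Six tenfold reductions give the million-fold reduction mentioned in the text, taking an initial 300,000-spore bioburden down to an expected 0.3 viable spores per spacecraft.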
Also many instrument sensors cannot be exposed to high temperature, and high temperature can interfere with critical alignments of instruments. As a result, new methods are needed to sterilize a modern spacecraft to the higher categories such as Category IVc for Mars, similar to Viking. Methods under evaluation, or already approved, include: Vapour phase hydrogen peroxide - effective, but can affect finishes, lubricants and materials that use aromatic rings and sulfur bonds. This has been established, reviewed, and a NASA/ESA specification for use of VHP has been approved by the Planetary Protection Officer, but it has not yet been formally published. Ethylene oxide - this is widely used in the medical industry, and can be used for materials not compatible with hydrogen peroxide. It is under consideration for missions such as ExoMars. Gamma radiation and electron beams have been suggested as a method of sterilization, as they are used extensively in the medical industry. They need to be tested for compatibility with spacecraft materials and hardware geometries, and are not yet ready for review. Some other methods are of interest as they can sterilize the spacecraft after arrival on the planet. Supercritical carbon dioxide snow (Mars) - is most effective against traces of organic compounds rather than whole microorganisms. Has the advantage though that it eliminates the organic traces - while other methods kill the microorganisms, they leave organic traces that can confuse life detection instruments. Is under study by JPL and ESA. Passive sterilization through UV radiation (Mars). Highly effective against many microorganisms, but not all, as a Bacillus strain found in spacecraft assembly facilities is particularly resistant to UV radiation. Is also complicated by possible shadowing by dust and spacecraft hardware. Passive sterilization through particle fluxes (Europa). Plans for missions to Europa take credit for reductions due to this. 
Bioburden detection and assessment The spore count is used as an indirect measure of the number of microorganisms present. Typically 99% of microorganisms by species will be non-spore forming and able to survive in dormant states, and so the actual number of viable dormant microorganisms remaining on the sterilized spacecraft is expected to be many times the number of spore-forming microorganisms. One new spore method approved is the "Rapid Spore Assay". This is based on commercial rapid assay systems, detects spores directly, not just viable microorganisms, and gives results in 5 hours instead of 72 hours. Challenges It has also long been recognized that spacecraft clean rooms harbour polyextremophiles as the only microbes able to survive in them. For example, in a recent study, microbes from swabs of the Curiosity rover were subjected to desiccation, UV exposure, cold and pH extremes. Nearly 11% of the 377 strains survived more than one of these severe conditions. The genomes of resistant spore-producing Bacillus sp. have been studied and genome level traits potentially linked to the resistance have been reported. This does not mean that these microbes have contaminated Mars. This is just the first stage of the process of bioburden reduction. To contaminate Mars they also have to survive the low temperature, vacuum, UV and ionizing radiation during the months-long journey to Mars, and then have to encounter a habitat on Mars and start reproducing there. Whether this has happened or not is a matter of probability. The aim of planetary protection is to make this probability as low as possible. The currently accepted target is to reduce the probability of contamination to less than 0.01% per mission, though in the special case of Mars, scientists also rely on the hostile conditions on Mars to take the place of the final stage of heat treatment decimal reduction used for Viking. But with current technology scientists cannot reduce probabilities to zero.
New methods Two recent molecular methods have been approved for assessment of microbial contamination on spacecraft surfaces. Adenosine triphosphate (ATP) detection - ATP is a key element in cellular metabolism. This method is able to detect non-cultivable organisms. It can also be triggered by non-viable biological material, so it can give a "false positive". Limulus Amebocyte Lysate assay - detects lipopolysaccharides (LPS). This compound is only present in Gram-negative bacteria. The standard assay analyses spores from microbes that are primarily Gram-positive, making it difficult to relate the two methods. Impact prevention This particularly applies to orbital missions, Category III, as they are sterilized to a lower standard than missions to the surface. It is also relevant to landers, as an impact gives more opportunity for forward contamination, and impact could be on an unplanned target, such as a special region on Mars. The requirement for an orbital mission is that it needs to remain in orbit for at least 20 years after arrival at Mars with probability of at least 99%, and for 50 years with probability of at least 95%. This requirement can be dropped if the mission is sterilized to Viking sterilization standard. In the Viking era (1970s), the requirement was given as a single figure: any orbital mission should have less than a 0.003% probability of impact during the current exploratory phase of the exploration of Mars. For both landers and orbiters, the technique of trajectory biasing is used during approach to the target. The spacecraft trajectory is designed so that if communications are lost, it will miss the target. Issues with impact prevention Despite the above measures, there has been one notable failure of impact prevention. The Mars Climate Orbiter, which was sterilized only to Category III, crashed on Mars in 1999 due to a mix-up of imperial and metric units. 
The Office of Planetary Protection stated that it is likely that it burnt up in the atmosphere, but if it survived to the ground, then it could cause forward contamination. Mars Observer is another Category III mission with potential for planetary contamination. Communications were lost three days before its orbital insertion maneuver in 1993. It seems most likely that it did not succeed in entering orbit around Mars and simply continued past on a heliocentric orbit. If, however, it did follow its automatic programming and attempted the maneuver, there is a chance it crashed on Mars. Three landers have had hard landings on Mars: the Schiaparelli EDM lander, the Mars Polar Lander, and Deep Space 2. These were all sterilized for surface missions but not for special regions (Viking pre-sterilization only). Mars Polar Lander and Deep Space 2 crashed into the polar regions, which are now treated as special regions because of the possibility of forming liquid brines. Controversies Meteorite argument Alberto G. Fairén and Dirk Schulze-Makuch published an article in Nature recommending that planetary protection measures be scaled down. They gave as their main reason that the exchange of meteorites between Earth and Mars means that any life on Earth able to survive on Mars has already got there, and vice versa. Robert Zubrin used similar arguments in favour of his view that the back contamination risk has no scientific validity. Rebuttal by NRC The meteorite argument was examined by the NRC in the context of back contamination. It is thought that all the Martian meteorites originate in relatively few impacts every few million years on Mars. The impactors would be kilometers in diameter and the craters they form on Mars tens of kilometers in diameter. Models of impacts on Mars are consistent with these findings. 
Earth receives a steady stream of meteorites from Mars, but they come from relatively few original impactors, and transfer was more likely in the early Solar System. Also, some life forms viable on both Mars and Earth might be unable to survive transfer on a meteorite, and there is so far no direct evidence of any transfer of life from Mars to Earth in this way. The NRC concluded that though transfer is possible, the evidence from meteorite exchange does not eliminate the need for back contamination protection methods. Impacts on Earth able to send microorganisms to Mars are also infrequent. Impactors of 10 km across or larger can send debris to Mars through the Earth's atmosphere, but these occur rarely, and were more common in the early Solar System. Proposal to end planetary protection for Mars In their 2013 paper "The Over Protection of Mars", Alberto Fairén and Dirk Schulze-Makuch suggested that we no longer need to protect Mars, essentially using Zubrin's meteorite transfer argument. This was rebutted in a follow-up article, "Appropriate Protection of Mars", in Nature by the current and previous planetary protection officers, Catharine Conley and John Rummel. Critique of Category V containment measures The scientific consensus is that the potential for large-scale effects, either through pathogenesis or ecological disruption, is extremely small. Nevertheless, returned samples from Mars will be treated as potentially biohazardous until scientists can determine that the returned samples are safe. The goal is to reduce the probability of release of a Mars particle to less than one in a million. Policy proposals Non-biological contamination A COSPAR workshop in 2010 looked at issues of protecting areas from non-biological contamination. They recommended that COSPAR expand its remit to include such issues. 
Ideas proposed at the workshop include protected special regions, or "Planetary Parks", to keep regions of the Solar System pristine for future scientific investigation, and also for ethical reasons. Proposed extensions Astrobiologist Christopher McKay has argued that until we have a better understanding of Mars, our explorations should be biologically reversible. For instance, if all the microorganisms introduced to Mars so far remain dormant within the spacecraft, they could in principle be removed in the future, leaving Mars completely free of contamination from modern Earth lifeforms. In the 2010 workshop, one of the recommendations for future consideration was to extend the period for contamination prevention to the maximum viable lifetime of dormant microorganisms introduced to the planet. In the case of Europa, a similar idea has been suggested: that it is not enough to keep it free from contamination during our current exploration period. It might be that Europa is of sufficient scientific interest that the human race has a duty to keep it pristine for future generations to study as well. This was the majority view of the 2000 task force examining Europa, though there was a minority view of the same task force that such strong protection measures are not required. In July 2018, the National Academies of Sciences, Engineering, and Medicine issued a Review and Assessment of Planetary Protection Policy Development Processes. In part, the report urges NASA to create a broad strategic plan that covers both forward and back contamination. The report also expresses concern about private industry missions, for which there is no governmental regulatory authority. 
Protecting objects beyond the Solar System The proposal by the German physicist Claudius Gros that the technology of the Breakthrough Starshot project may be utilized to establish a biosphere of unicellular organisms on otherwise only transiently habitable exoplanets has sparked a discussion of the extent to which planetary protection should be extended to exoplanets. Gros argues that the extended timescales of interstellar missions imply that planetary and exoplanetary protection have different ethical groundings. See also References General references External links No bugs please, this is a clean planet! (ESA article) (COSPAR article) NASA Planetary Protection Website JPL Develops High-Speed Test to Improve Pathogen Decontamination at JPL. Geoethics in Planetary and Space Exploration Catharine Conley: NASA & international planetary protection policy, methodology & applications, The Space Show, October 2012 Astrobiology Biological contamination Space colonization Asteroid mining Space debris Astronomy projects
Planetary protection
[ "Astronomy", "Technology", "Biology" ]
6,239
[ "Origin of life", "Speculative evolution", "Astrobiology", "Space debris", "Astronomy projects", "Astronomical sub-disciplines", "Biological hypotheses" ]
1,612,888
https://en.wikipedia.org/wiki/Light%20beam
A light beam or beam of light is a directional projection of light energy radiating from a light source. Sunlight forms a light beam (a sunbeam) when filtered through media such as clouds, foliage, or windows. To artificially produce a light beam, a lamp and a parabolic reflector are used in many lighting devices such as spotlights, car headlights, PAR Cans, and LED housings. Light from certain types of laser has the smallest possible beam divergence. Visible light beams From the side, a beam of light is only visible if part of the light is scattered by objects: tiny particles like dust, water droplets (mist, fog, rain), hail, snow, or smoke, or larger objects such as birds. If there are many objects in the light path, then it appears as a continuous beam, but if there are only a few objects, then the light is visible as a few individual bright points. In any case, this scattering of light from a beam, and the resultant visibility of a light beam from the side, is known as the Tyndall effect. Visibility from the side as side effect Flashlight (UK 'torch'), beam directed by hand Headlight, forward beam; the lamp is mounted in a vehicle, or on the forehead of a person, e.g. built into a helmet Lighthouse, beam sweeping around horizontally Searchlight, beam directed at something Visibility from the side as purpose For the purpose of visibility of light beams from the side, sometimes a haze machine or fog machine is used. The difference between the two is that the fog itself is also a visual effect. Laser lighting display - laser beams are often used for visual effects, often in combination with music. Searchlights are often used in advertising, for instance by automobile dealers; the beam of light is visible over a large area, and (at least in theory) interested persons can find the dealer or store by following the beam to its source. 
This also used to be done for movie premieres; the waving searchlight beams are still to be seen as a design element in the logo of the 20th Century Fox movie studio. Other applications Optical communication Infrared Data Association (IrDA) standards Infrared remote control Security alarms Fibre optics Laser pointer Laser sight List of applications for lasers See also Beam diameter Collimated beam Crepuscular rays Light pillar, atmospheric optical phenomena Pencil beam Ray (optics) Relativistic beaming References External links A short video showing how you can put sound on a light beam and transmit it by the Vega Science Trust Light Geometrical optics
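The claim that laser light has the smallest possible beam divergence can be made quantitative: a diffraction-limited Gaussian beam of waist radius w0 spreads in the far field with half-angle θ ≈ λ/(π·w0). A short sketch (the wavelength and waist values are illustrative):

```python
import math

def gaussian_half_angle(wavelength_m: float, waist_radius_m: float) -> float:
    """Far-field half-angle divergence (radians) of a diffraction-limited
    Gaussian beam: theta = wavelength / (pi * w0)."""
    return wavelength_m / (math.pi * waist_radius_m)

# Illustrative values: a 532 nm green laser with a 1 mm waist radius.
theta = gaussian_half_angle(532e-9, 1e-3)
print(f"divergence: {theta * 1e3:.3f} mrad")           # ~0.169 mrad
# Extra beam radius gained after 100 m of travel, in centimetres:
print(f"spread at 100 m: {theta * 100 * 100:.1f} cm")  # ~1.7 cm
```

Larger waists and shorter wavelengths give narrower beams, which is why even a hand-held laser pointer stays visibly collimated over hundreds of metres while a reflector-based searchlight beam spreads far more quickly.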
Light beam
[ "Physics" ]
517
[ "Physical phenomena", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Waves", "Light" ]
1,612,976
https://en.wikipedia.org/wiki/138%20%28number%29
138 (one hundred [and] thirty-eight) is the natural number following 137 and preceding 139. Mathematics 138 is a sphenic number, an Ulam number, an abundant number, and a square-free congruent number. References Integers
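These divisor-based properties can be checked directly; a minimal Python sketch (sphenic = product of exactly three distinct primes, square-free = no repeated prime factor, abundant = proper divisors sum to more than the number):

```python
def prime_factors(n: int) -> list:
    """Prime factorization with multiplicity, by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

f = prime_factors(138)
print(f)                                  # [2, 3, 23]
print(len(f) == 3 == len(set(f)))         # sphenic: True
print(len(f) == len(set(f)))              # square-free: True
print(sum(d for d in range(1, 138) if 138 % d == 0) > 138)  # abundant: True
```

The proper divisors of 138 are 1, 2, 3, 6, 23, 46 and 69, which sum to 150 > 138, confirming abundance.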
138 (number)
[ "Mathematics" ]
53
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
1,612,992
https://en.wikipedia.org/wiki/Scooba%20%28brand%29
Scooba was a floor-scrubbing robot made by iRobot. It was released in limited numbers in December 2005 for the Christmas season, with full production starting in early 2006. The company introduced a lower-priced version, the Scooba 5800, in the second half of 2006. It introduced a new Scooba 450 at CES 2014 in January 2014. By 2016, the Scooba line of floor-scrubbers were phased out in favor of the Braava line of floor-mopping robots. Operation The Scooba used either a special non-bleach cleaning solution named "Scooba juice" (made by the Clorox Company) formulated to clean the floors while discouraging rust or wheel slippage, or the newer Scooba Natural Enzyme cleaning solution. The robot prepared the floor by vacuuming loose debris, squirted clean solution on the floor, scrubbed the floor, and then sucked up the dirty solution leaving a nearly dry floor behind. The robot was safe to use on sealed hardwood floors and most other hard household surfaces, but could not be used on rugs. Scooba avoided cleaning rugs and stairs, and could clean about on a single tank-load of solution. Some models of the Scooba included an iRobot Virtual Wall accessory, which projected a beam of infrared light, setting a boundary which the robot would not cross. The Scooba was the second major commercial product made by iRobot, which popularized vacuum robots with the Roomba. It was available in over 40 countries. Systems The Scooba used approximately of cleaning solution per cycle, mixed with of water to fill the cleaning solution tank. The Scooba came with four packets of the new "Natural Enzyme" cleaning solution, enough for about four washes. Additional Clorox cleaning solution comes in five- and nine-packs of bottles, which provided enough solution for about 16 washings per bottle. Polysorbate 20 and tetrapotassium EDTA were the primary ingredients. Some Scooba models could also use white vinegar or plain water in place of the proprietary solution. 
Recharge times were typically 3 hours. Models The original Scooba The Scooba 5900 was the first Scooba; it could be used with the Scooba cleaning solution, or other suitably conductive solutions, but was discontinued in favor of the Scooba 5800 version (basic floor washing model), which could also use plain water in its cleaning tank. iRobot shed several of the 5900's premium features to produce the lower-priced 5800 model. There were no changes to the basic floor cleaning machinery. The Scooba 5800 could clean about per battery charge. Scooba 230 Introduced in 2011, the Scooba 230 was a smaller model, less than half the diameter but taller than the previous Scooba models. The reduced diameter allowed the robot to clean more areas in small bathrooms, kitchens, and other tight spaces. In order to reduce the size, the clean water and dirty water tanks were replaced with an internal clean water bladder in a sealed compartment that holds the dirty water, allowing the dirty water storage to expand as the clean water was used up. It worked the same way as the other Scoobas, except that it used bristles rather than scrubbing brushes. Only Scooba cleaning solution and water were recommended, as vinegar and the original cleaning solution would damage the bladder. The initial vacuuming stage present in other Scooba models was also removed, requiring users to sweep up or vacuum loose debris to start. The Scooba 230 could clean up to in one charge. The Scooba 230's main downfalls were that it had to be charged manually and that it was not held together with screws, making repairs nearly impossible. Scooba 450 The fourth-generation Scooba 450 was introduced at the Consumer Electronics Show 2014. It could mop tile, wood, linoleum, and more. It used a three-stage cleaning process: first, it swept and pre-soaked the floor with cleaning solution, and then it scrubbed the floor and squeegeed up the dirty solution. 
Discontinuation iRobot launched the Braava line of floor mopping robots in 2013, which eventually replaced the Scooba brand by 2016. References External links Scooba homepage Scooba Product Page Scooba manual PC Magazine review Time gadget of the week USA Today review Home appliance brands Domestic robots IRobot 2005 robots
Scooba (brand)
[ "Technology" ]
931
[ "Home automation", "Domestic robots" ]
1,612,994
https://en.wikipedia.org/wiki/Committee%20on%20Space%20Research
The Committee on Space Research (COSPAR) was established on October 3, 1958 by the International Council for Scientific Unions (ICSU) and its first chair was Hildegard Korf Kallmann-Bijl. Among COSPAR's objectives are the promotion of scientific research in space on an international level, with emphasis on the free exchange of results, information, and opinions, and providing a forum, open to all scientists, for the discussion of problems that may affect space research. These objectives are achieved through the organization of symposia, publication, and other means. COSPAR has created a number of research programmes on different topics, a few in cooperation with other scientific Unions. The long-term project COSPAR international reference atmosphere started in 1960; since then it has produced several editions of the high-atmosphere code CIRA. The code "IRI" of the URSI-COSPAR working group on the International Reference Ionosphere was first edited in 1978 and is updated yearly. General Assembly Every second year, COSPAR calls for a General Assembly (also called Scientific Assembly). These are conferences currently gathering almost three thousand participating space researchers. The most recent assemblies are listed in the table below; in two previous leap years, General Assemblies were cancelled. The 41st General Assembly in Istanbul was cancelled due to the 2016 Turkish coup d'état attempt, while the 43rd General Assembly in Sydney was also cancelled due to the COVID-19 pandemic. 
Scientific Structure Scientific Commissions Scientific Commission A Space Studies of the Earth's Surface, Meteorology and Climate Task Group on GEO Subcommission A1 on Atmosphere, Meteorology and Climate Subcommission A2 on Ocean Dynamics, Productivity and the Cryosphere Subcommission A3 on Land Processes and Morphology Scientific Commission B Space Studies of the Earth-Moon System, Planets, and Small Bodies of the Solar System Sub-Commission B1 on Small Bodies Sub-Commission B2 on International Coordination of Space Techniques for Geodesy (a joint Sub-Commission with IUGG/IAG Commission I on Reference Frames) Sub-Commission B3 on The Moon Sub-Commission B4 on Terrestrial Planets Sub-Commission B5 on Outer Planets and Satellites Sub-Commission B6/E4 on Exoplanets Detection, Characterization and Modelling Scientific Commission C Space Studies of the Upper Atmospheres of the Earth and Planets Including Reference Atmospheres Sub-Commission C1 on The Earth's Upper Atmosphere and Ionosphere Sub-Commission C2 on The Earth's Middle Atmosphere and Lower Ionosphere Sub-Commission C3 on Planetary Atmospheres and Aeronomy Task Group on Reference Atmospheres of Planets and Satellites (RAPS) URSI/COSPAR Task Group on the International Reference Ionosphere (IRI) COSPAR/URSI Task Group on Reference Atmospheres, including ISO WG4 (CIRA) Sub-Commission C5/D4 on Theory and Observations of Active Experiments Scientific Commission D Space Plasmas in the Solar System, Including Planetary Magnetospheres Sub-Commission D1 on The Heliosphere Sub-Commission D2/E3 on The Transition from the Sun to the Heliosphere Sub-Commission D3 on Magnetospheres Sub-Commission C5/D4 on Theory and Observations of Active Experiments Scientific Commission E Research in Astrophysics from Space Sub-Commission E1 on Galactic and Extragalactic Astrophysics Sub-Commission E2 on The Sun as a Star Sub-Commission D2/E3 on The Transition from the Sun to the Heliosphere Sub-Commission B6/E4 on Exoplanets Detection, 
Characterization and Modelling Scientific Commission F Life Sciences as Related to Space Sub-Commission F1 on Gravitational and Space Biology Sub-Commission F2 on Radiation Environment, Biology and Health Sub-Commission F3 on Astrobiology Sub-Commission F4 on Natural and Artificial Ecosystems Sub-Commission F5 on Gravitational Physiology in Space Scientific Commission G Materials Sciences in Space Scientific Commission H Fundamental Physics in Space Panels Technical Panel on Satellite Dynamics (PSD) Panel on Technical Problems Related to Scientific Ballooning (PSB) Panel on Potentially Environmentally Detrimental Activities in Space (PEDAS) Panel on Radiation Belt Environment Modelling (PRBEM) Panel on Space Weather (PSW) Panel on Planetary Protection (PPP) Panel on Capacity Building (PCB) Panel on Capacity Building Fellowship Program and Alumni (PCB FP) Panel on Education (PE) Panel on Exploration (PEX) Panel on Interstellar Research (PIR) Task Group on Establishing an international Constellation of Small Satellites (TGCSS) Sub-Group on Radiation Belts (TGCSS-SGRB) Panel on Social Sciences and Humanities (PSSH) Panel on Innovative Solutions (PoIS) Task Group on Establishing an International Geospace Systems Program (TGIGSP) Planetary Protection Policy Responding to concerns raised in the scientific community that spaceflight missions to the Moon and other celestial bodies might compromise their future scientific exploration, in 1958 the International Council of Scientific Unions (ICSU) established an ad-hoc Committee on Contamination by Extraterrestrial Exploration (CETEX) to provide advice on these issues. In the next year, this mandate was transferred to the newly founded Committee on Space Research (COSPAR), which as an interdisciplinary scientific committee of the ICSU (now the International Science Council - ISC) was considered to be the appropriate place to continue the work of CETEX. 
Since that time, COSPAR has provided an international forum to discuss such matters under the terms “planetary quarantine” and later “planetary protection”, and has formulated a COSPAR planetary protection policy with associated implementation requirements as an international standard to protect against interplanetary biological and organic contamination, and after 1967 as a guide to compliance with Article IX of the United Nations Outer Space Treaty in that area. The COSPAR Planetary Protection Policy, and its associated requirements, is not legally binding under international law, but it is an internationally agreed standard with implementation guidelines for compliance with Article IX of the Outer Space Treaty. States Parties to the Outer Space Treaty are responsible for national space activities under Article VI of this Treaty, including the activities of governmental and non-governmental entities. It is the State that ultimately will be held responsible for wrongful acts committed by its jurisdictional subjects. Updating the COSPAR Planetary Protection Policy, either as a response to new discoveries or based on specific requests, is a process that involves appointed members of the COSPAR Panel on Planetary Protection who represent, on the one hand, their national or international authority responsible for compliance with the United Nations Outer Space Treaty of 1967, and, on the other hand, COSPAR Scientific Commissions B - Space Studies of the Earth-Moon System, Planets and Small Bodies of the Solar Systems, and F - Life Sciences as Related to Space. After reaching a consensus among the involved parties, the proposed recommendation for updating the Policy is formulated by the COSPAR Panel on Planetary Protection and submitted to the COSPAR Bureau for review and approval. The new structure of the Panel and its work was described in recent publications. 
The recently updated COSPAR Policy on Planetary Protection was published in the August 2020 issue of COSPAR's journal Space Research Today. It contains some updates with respect to the previously approved version, based on recommendations formulated by the Panel and approved by the COSPAR Bureau. Participating member countries The table contains the list of countries participating in the Committee on Space Research: See also Space research Planetary protection, for other bodies and Earth International Planetary Data Alliance List of government space agencies References External links Scientific organizations based in France Astronomy organizations Space research International organizations based in France International scientific organizations
Committee on Space Research
[ "Astronomy" ]
1,546
[ "Astronomy organizations" ]
1,612,997
https://en.wikipedia.org/wiki/Screen-door%20effect
The screen-door effect (SDE) is a visual artifact of displays, where the fine lines separating pixels (or subpixels) become visible in the displayed image. This can be seen in digital projector images and regular displays under magnification or at close range, but increases in display resolution have made this much less significant. More recently, the screen-door effect has been an issue with virtual reality headsets and other head-mounted displays, because these are viewed at a much closer distance, and stretch a single display across a much wider field of view. SDE in projectors In LCD and DLP projectors, SDE can be seen because the optics greatly magnify the display panel onto the screen, enlarging the fine lines between pixels, which are much smaller than the pixels themselves, enough to be seen. This results in an image that appears as if viewed through a fine screen or mesh such as those used on anti-insect screen doors. The screen-door effect was noticed on the first digital projector: an LCD projector made in 1984 by Gene Dolgoff. To eliminate this artifact, Dolgoff invented depixelization, which used various optical methods to eliminate the visibility of the spaces between the pixels. The dominant method made use of a microlens array, wherein each micro-lens created a slightly magnified image of the pixel behind it, filling in the previously visible spaces between pixels. In addition, when making a projector with a single, full-color LCD panel, additional pixelation was visible due to the noticeability of green pixels (appearing bright) adjacent to red and blue pixels (appearing dark), forming a repeating light and dark pattern. Use of a micro-lens array at a slightly greater distance created new pixel images, with each "new" pixel being a summation of six neighboring sub-pixels (made up of two full color pixels, one above the other). 
Since there were as many micro-lenses as there were original pixels, no resolution was lost, which was confirmed with modulation transfer function (MTF) measurements. The screen-door effect on Digital Light Processing (DLP) projectors can be mitigated by deliberately setting the projected image slightly out of focus, which blurs the boundaries of each pixel to its neighbor. This minimizes the effect by filling the black pixel perimeters with adjacent light. Some older LCD projectors have a more noticeable screen-door effect than first generation DLP projectors. Newer DLP chip designs promise closer spacing of the mirror elements which would reduce this effect; however, some space is still required along one edge of the mirror to provide a control circuit pathway. Use of Dolgoff's depixelization method could also produce a DLP projector without noticeable pixelation. See also Pancake lens Rainbow effect, an artifact associated with single-chip DLP projectors Scan line Silk screen effect Subpixel rendering Uncanny valley Virtual reality sickness References External links Hi Fi Writer, "What is the 'screen door effect'?" Display technology Visual artifacts Virtual reality
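Whether the mesh of inter-pixel gaps is visible ultimately depends on the angle each gap subtends at the eye relative to normal visual acuity (about one arcminute). A rough back-of-the-envelope sketch (the gap size and distances are illustrative assumptions; head-mounted displays additionally magnify the panel through their lenses, enlarging the effective angle):

```python
import math

ACUITY_ARCMIN = 1.0  # typical limit of human visual acuity

def gap_angle_arcmin(gap_m: float, distance_m: float) -> float:
    """Angle (in arcminutes) subtended by an inter-pixel gap at the eye."""
    return math.degrees(math.atan2(gap_m, distance_m)) * 60

# Illustrative: a 10 micrometre gap between pixels, viewed close up vs far.
for distance in (0.05, 0.5):
    angle = gap_angle_arcmin(10e-6, distance)
    visible = "visible" if angle >= ACUITY_ARCMIN else "below acuity"
    print(f"{distance} m -> {angle:.3f} arcmin ({visible})")
```

Moving the eye ten times closer makes the gap subtend roughly ten times the angle, which is why artifacts invisible on a desktop monitor reappear in a headset that both sits centimetres from the eye and stretches the panel across a wide field of view.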
Screen-door effect
[ "Engineering" ]
644
[ "Electronic engineering", "Display technology" ]
1,613,043
https://en.wikipedia.org/wiki/HyperZ
HyperZ is the brand for a set of processing techniques developed by ATI Technologies and later Advanced Micro Devices and implemented in their Radeon GPUs. HyperZ was announced in November 2000 and was still available in the TeraScale-based Radeon HD 2000 Series and in current Graphics Core Next-based graphics products. On the Radeon R100-based cores, Radeon DDR through 7500, where HyperZ debuted, ATI claimed a 20% improvement in overall rendering efficiency. They stated that with HyperZ, Radeon could be said to offer 1.5 gigatexels per second fillrate performance instead of the card's apparent theoretical rate of 1.2 gigatexels. In testing it was shown that HyperZ did indeed offer a tangible performance improvement that allowed the less endowed Radeon to keep up with the less efficient GeForce 2 GTS. Functionality HyperZ consists of three mechanisms: Z compression The Z-buffer is stored in a lossless compressed format to minimize Z-buffer bandwidth as Z reads or writes take place. The compression scheme ATI used on the Radeon 8500 operated 20% more effectively than on the original Radeon and Radeon 7500. Fast Z clear Rather than writing zeros throughout the entire Z-buffer, and thus using the bandwidth of another full Z-buffer write, a fast Z clear technique tags entire blocks of the Z-buffer as cleared, so that only the block flags need to be written. On the Radeon 8500, ATI claimed that this process could clear the Z-buffer up to approximately 64 times faster than on a card without fast Z clear. Hierarchical Z-buffer This feature allows the pixel being rendered to be checked against the Z-buffer before the pixel actually arrives in the rendering pipelines. This allows useless pixels to be thrown out early (early Z reject), before the Radeon has to render them. Versions of HyperZ With each new microarchitecture, ATI has revised and improved the technology. 
HyperZ – R100 HyperZ II – R200 (8500-9250) HyperZ III – R300 in Radeon 9700 HyperZ III+ – R350 used in Radeon 9800, Radeon 9800 XL, Radeon 9800 Pro and Radeon 9800 SE HyperZ HD – R420 used in Radeon X700 to Radeon X850 XT PE See also Rasterization Z-buffering Irregular Z-buffer Depth map References External links Anandtech's Preview of Radeon 256 AMD press release about HyperZ AMD technologies ATI Technologies Graphics cards
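The hierarchical Z early-reject described under Functionality can be illustrated with a toy software model: the screen is split into tiles, each caching the farthest (maximum) depth it contains, so a fragment farther than its tile's cached maximum can be rejected before any per-pixel Z-buffer access. This is a simplified sketch, not ATI's actual hardware design; the tile size and interface are invented for illustration:

```python
class ToyHierZ:
    """Toy hierarchical Z-buffer. Depth convention: smaller = closer,
    and a fragment passes the depth test only when strictly closer."""

    def __init__(self, width: int, height: int, tile: int = 8) -> None:
        self.width, self.height, self.tile = width, height, tile
        self.zbuf = [[1.0] * width for _ in range(height)]  # cleared to far plane
        self.tile_max = {}  # (ty, tx) -> max depth currently in that tile

    def test_and_write(self, x: int, y: int, z: float) -> bool:
        key = (y // self.tile, x // self.tile)
        # Early Z reject: fragment is at least as far as everything already
        # stored in this tile, so it cannot pass anywhere inside it.
        if z >= self.tile_max.get(key, 1.0):
            return False
        # Full per-pixel depth test (only reached if the coarse test passed).
        if z >= self.zbuf[y][x]:
            return False
        self.zbuf[y][x] = z
        self._refresh_tile_max(key)
        return True

    def _refresh_tile_max(self, key) -> None:
        ty, tx = key
        rows = range(ty * self.tile, min((ty + 1) * self.tile, self.height))
        cols = range(tx * self.tile, min((tx + 1) * self.tile, self.width))
        self.tile_max[key] = max(self.zbuf[r][c] for r in rows for c in cols)

hz = ToyHierZ(16, 16, tile=8)
for yy in range(8):            # fill one whole tile with a near surface
    for xx in range(8):
        hz.test_and_write(xx, yy, 0.4)
print(hz.test_and_write(3, 3, 0.9))  # False: rejected by the coarse tile test
print(hz.test_and_write(3, 3, 0.2))  # True: closer than everything in the tile
```

The payoff is bandwidth: the rejected fragment never reads or writes the per-pixel buffer, which is exactly the saving the hardware scheme targets (real implementations update the tile cache incrementally rather than rescanning the tile).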
HyperZ
[ "Technology" ]
565
[ "Computing stubs", "Computer hardware stubs" ]
1,613,052
https://en.wikipedia.org/wiki/Hackenbush
Hackenbush is a two-player game invented by mathematician John Horton Conway. It may be played on any configuration of colored line segments connected to one another by their endpoints and to a "ground" line. Gameplay The game starts with the players drawing a "ground" line (conventionally, but not necessarily, a horizontal line at the bottom of the paper or other playing area) and several line segments such that each line segment is connected to the ground, either directly at an endpoint, or indirectly, via a chain of other segments connected by endpoints. Any number of segments may meet at a point and thus there may be multiple paths to ground. On their turn, a player "cuts" (erases) any line segment of their choice. Every line segment no longer connected to the ground by any path "falls" (i.e., gets erased). According to the normal play convention of combinatorial game theory, the first player who is unable to move loses. Hackenbush boards can consist of finitely many (in the case of a "finite board") or infinitely many (in the case of an "infinite board") line segments. The existence of an infinite number of line segments does not violate the game theory assumption that the game can be finished in a finite amount of time, provided that there are only finitely many line segments directly "touching" the ground. On an infinite board, depending on its layout, the game can continue forever if there are infinitely many points touching the ground. Variants In the original folklore version of Hackenbush, any player is allowed to cut any edge: as this is an impartial game it is comparatively straightforward to give a complete analysis using the Sprague–Grundy theorem. Thus the versions of Hackenbush of interest in combinatorial game theory are more complex partisan games, meaning that the options (moves) available to one player would not necessarily be the ones available to the other player if it were their turn to move given the same position. 
The standard versions are:
Original Hackenbush: All line segments are the same color and may be cut by either player. Since both players always have exactly the same moves available in any position, the game is impartial. This is also called Green Hackenbush.
Blue-Red Hackenbush: Each line segment is colored either red or blue. One player (usually the first, or left, player) is only allowed to cut blue line segments, while the other player (usually the second, or right, player) is only allowed to cut red line segments.
Blue-Red-Green Hackenbush: Each line segment is colored red, blue, or green. The rules are the same as for Blue-Red Hackenbush, with the additional stipulation that green line segments can be cut by either player.
Blue-Red Hackenbush is merely a special case of Blue-Red-Green Hackenbush, but it is worth noting separately, as its analysis is often much simpler. This is because Blue-Red Hackenbush is a so-called cold game, which means, essentially, that it can never be an advantage to have the first move. Analysis Hackenbush has often been used as an example game for demonstrating the definitions and concepts in combinatorial game theory, beginning with its use in the books On Numbers and Games and Winning Ways for Your Mathematical Plays by some of the founders of the field. In particular, Blue-Red Hackenbush can be used to construct surreal numbers: finite Blue-Red Hackenbush boards can construct dyadic rational numbers, while the values of infinite Blue-Red Hackenbush boards account for real numbers, ordinals, and many more general values that are neither. Blue-Red-Green Hackenbush allows for the construction of additional games whose values are not real numbers, such as star and all other nimbers.
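The correspondence between finite Blue-Red boards and dyadic rationals can be made concrete for the simplest boards: single stalks of edges rising from the ground. The sketch below assumes a stalk is written as a string of 'B'/'R' colors from the ground up, and computes its value by recursively applying the simplicity rule for surreal numbers; `simplest` and `stalk_value` are names chosen here, not standard functions:

```python
import math
from fractions import Fraction

def simplest(lo, hi):
    """Simplest (earliest-born) number strictly between lo and hi;
    None stands for 'no option on that side'.  Assumes lo < hi when
    both are given."""
    if lo is None and hi is None:
        return Fraction(0)
    if lo is None:                                  # { | hi }
        return Fraction(0) if hi > 0 else Fraction(math.ceil(hi) - 1)
    if hi is None:                                  # { lo | }
        return Fraction(0) if lo < 0 else Fraction(math.floor(lo) + 1)
    if lo < 0 < hi:
        return Fraction(0)
    # Try the simplest integer strictly inside (lo, hi).
    n = Fraction(math.floor(lo) + 1) if lo >= 0 else Fraction(math.ceil(hi) - 1)
    if lo < n < hi:
        return n
    # Otherwise: the dyadic rational with the smallest denominator.
    d = 2
    while True:
        k = math.floor(lo * d) + 1
        if Fraction(k, d) < hi:
            return Fraction(k, d)
        d *= 2

def stalk_value(stalk):
    """Value of a Blue-Red Hackenbush stalk written from the ground up,
    e.g. 'BR' = a blue edge with a red edge on top.  Cutting edge i
    leaves the prefix below it; Left cuts blue, Right cuts red."""
    lefts  = [stalk_value(stalk[:i]) for i, c in enumerate(stalk) if c == 'B']
    rights = [stalk_value(stalk[:i]) for i, c in enumerate(stalk) if c == 'R']
    return simplest(max(lefts) if lefts else None,
                    min(rights) if rights else None)
```

For example, a blue edge with a red edge on top ('BR') evaluates to 1/2, and adding a second red edge ('BRR') halves it again to 1/4, illustrating how finite stalks produce exactly the dyadic rationals.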
Further analysis of the game can be made using graph theory by considering the board as a collection of vertices and edges and examining the paths to each vertex that lies on the ground (which should be considered as a distinguished vertex — it does no harm to identify all the ground points together — rather than as a line on the graph). In the impartial version of Hackenbush (the one without player-specified colors), the game can be analyzed using nim heaps by breaking the board up into three cases: vertical, divergent, and convergent. Played exclusively with vertical stacks of line segments, also referred to as bamboo stalks, the game becomes Nim directly and can be analyzed as such. Divergent segments, or trees, add an additional wrinkle to the game and require use of the colon principle, which states that when branches come together at a vertex, one may replace the branches by a non-branching stalk of length equal to their nim sum. This principle reduces any tree to an equivalent bamboo stalk. The last possible set of graphs are convergent ones, also known as arbitrarily rooted graphs. By the fusion principle, all vertices on any cycle may be fused together without changing the value of the graph. Therefore, any convergent graph can also be interpreted as a simple bamboo stalk graph. Combining all three types of graphs adds complexity to the game without ever changing its nim sum, so the strategies of Nim still apply. Proof of Colon Principle The colon principle states that when branches come together at a vertex, one may replace the branches by a non-branching stalk of length equal to their nim sum. Consider a fixed but arbitrary graph, G, and select an arbitrary vertex, x, in G. Let H1 and H2 be arbitrary trees (or graphs) that have the same Sprague–Grundy value.
Consider the two graphs G1 = Gx : H1 and G2 = Gx : H2, where Gx : Hi represents the graph constructed by attaching the tree Hi to the vertex x of the graph G. The colon principle states that the two graphs G1 and G2 have the same Sprague–Grundy value. Consider the sum of the two games. The claim that G1 and G2 have the same Sprague–Grundy value is equivalent to the claim that the sum of the two games has Sprague–Grundy value 0. In other words, we are to show that the sum G1 + G2 is a P-position. A player is guaranteed to win if they are the second player to move in G1 + G2. If the first player moves by chopping one of the edges in G in one of the games, then the second player chops the same edge in G in the other game. (Such a pair of moves may delete H1 and H2 from the games, but otherwise H1 and H2 are not disturbed.) If the first player moves by chopping an edge in H1 or H2, then the Sprague–Grundy values of H1 and H2 are no longer equal, so there exists a move in H1 or H2 that restores the equality of the Sprague–Grundy values. In this way the second player always has a reply to every move the first player makes, and therefore makes the last move and wins. References John H. Conway, On Numbers and Games, 2nd edition, A K Peters, 2000.
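The colon principle can also be checked computationally for Green Hackenbush trees. A minimal sketch, assuming a tree is encoded as a nested list of child subtrees (the function name `grundy` is ours):

```python
def grundy(tree):
    """Grundy value of a rooted Green Hackenbush tree.  A tree is a list
    of child subtrees.  Each edge contributes 1 plus the value of the
    subtree sitting on top of it, and branches meeting at a vertex
    combine by nim sum (XOR), as the colon principle prescribes."""
    g = 0
    for child in tree:
        g ^= 1 + grundy(child)
    return g
```

A fork of two single edges atop one ground edge evaluates to the same value as a plain stalk of length 1, and three ground-rooted stalks of lengths 1, 2, and 3 sum to Grundy value 1 ^ 2 ^ 3 = 0, a second-player win, just as in Nim.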
https://en.wikipedia.org/wiki/Service%20provider
A service provider (SP) is an organization that provides services, such as consulting, legal, real estate, communications, storage, and processing services, to other organizations. Although a service provider can be a sub-unit of the organization that it serves, it is usually a third-party or outsourced supplier. Examples include telecommunications service providers (TSPs), application service providers (ASPs), storage service providers (SSPs), and internet service providers (ISPs). A more traditional term is service bureau. IT professionals sometimes differentiate between service providers by categorizing them as type I, II, or III. The three service types are recognized across the IT industry, although they are specifically defined by ITIL and the U.S. Telecommunications Act of 1996.
Type I: internal service provider
Type II: shared service provider
Type III: external service provider
Type III SPs provide IT services to external customers and subsequently can be referred to as external service providers (ESPs), which range from a full IT organization/service outsource via managed services or MSPs (managed service providers) to limited product feature delivery via ASPs (application service providers).
Types
Application service provider (ASP)
Cloud service provider (CSP) – software, platform, and infrastructure service provider in cloud computing
Network service provider (NSP)
Internet service provider (ISP)
Managed service provider (MSP)
Managed security service provider (MSSP)
Storage service provider (SSP)
Telecommunications service provider (TSP)
SAML service provider
Master managed service provider (MMSP)
Managed Internet service provider (MISP)
Online service provider (OSP)
Payment service provider (PSP)
Cleaning service provider
Gardening service provider
Pest control service provider
Oilfield service provider
Application software service provider in a service-oriented architecture (ASSP)
See also
Identity management
Identity provider
IP address
SAML 2.0
Service bureau
Service system
Outline of consulting
Web service
https://en.wikipedia.org/wiki/Biedermeier
The Biedermeier period was an era in Central European art and culture between 1815 and 1848 during which the middle classes grew in number and artists began producing works appealing to their sensibilities. The period began with the end of the Napoleonic Wars in 1815 and ended with the onset of the Revolutions of 1848. The term originated in popular literature before spreading to architecture, interior design, and visual arts. "Biedermeier" derives from the fictional mediocre poet Gottlieb Biedermaier, who featured in the Munich magazine Fliegende Blätter (Flying Leaves). It is used mostly to denote the unchallenging artistic styles that flourished in the fields of literature, music, the visual arts and interior design. As is natural in cultural creative movements, Biedermeier has influenced later styles. Political background The Biedermeier period does not refer to the era as a whole, but to a particular mood and set of trends that grew out of the unique underpinnings of the time in Central Europe. There were two driving forces for the development of the period. One was the growing urbanization and industrialization leading to a new urban middle class, which created a new kind of audience for the arts. The other was the political stability prevalent under Klemens von Metternich following the end of the Napoleonic Wars and the Congress of Vienna. The effect was for artists and society in general to concentrate on the domestic and, at least in public, the non-political. Writers, painters, and musicians began to stay in safer territory, and the emphasis on home life for the growing middle class meant a blossoming of furniture design and interior decorating. Aesthetic standards The affluent middle class values that are associated with Biedermeier include affection, sensibility, moderation, and modesty. Biedermeier Gemütlichkeit denotes a state of cosiness and friendliness.
Family values Biedermeier family values reflected the bourgeois consensus and the housewife was responsible for furnishing and choosing the appropriate design. Middle class women were held responsible for family cohesion and children had to be socialized within the family. Literature The term Biedermeier appeared first in literary circles in the form of a pseudonym, Gottlieb Biedermaier, used by the country doctor Adolf Kussmaul and lawyer Ludwig Eichrodt in poems that the duo had published in the Munich satirical weekly Fliegende Blätter in 1850. The German word bieder translates as "plain", while Maier is a common bourgeois surname. The verses parodied the people of the era, namely Samuel Friedrich Sauter, a primary school teacher and a sort of amateur poet, as depoliticized and petit-bourgeois. The name was constructed from the titles of two poems—"Biedermanns Abendgemütlichkeit" (Biedermann's Evening Comfort) and "Bummelmaiers Klage" (Bummelmaier's Complaint)—which Joseph Victor von Scheffel had published in 1848 in the same magazine. As a label for the epoch, the term has been used since around 1900. Due to the strict control of publication and official censorship, Biedermeier writers primarily concerned themselves with non-political subjects, like historical fiction and country life. Political discussion was usually confined to the home, in the presence of close friends. Typical Biedermeier poets are Annette von Droste-Hülshoff, Friedrich Halm, Adelbert von Chamisso, Eduard Mörike, and Wilhelm Müller, the last three of whom have well-known musical settings by Robert Schumann, Hugo Wolf and Franz Schubert respectively. Adalbert Stifter was a novelist and short story writer whose work also reflected the concerns of the Biedermeier movement, particularly with his novel Der Nachsommer.
As historian Carl Emil Schorske put it, "To illustrate and propagate his concept of Bildung, compounded of Benedictine world piety, German humanism, and Biedermeier conventionality, Stifter gave to the world his novel Der Nachsommer". Jeremias Gotthelf published The Black Spider in 1842 as an allegorical work that uses Gothic themes. It is Gotthelf's best known work. At first little noticed, the story is now considered by many critics to be among the masterworks of Biedermeier era and sensibility. Furniture design and interior decorating Biedermeier furniture is admired for quality craftsmanship and comfort. Original early 19th century Biedermeier furniture was manufactured to be publicly displayed, with less concern for convenience and private enjoyment. Biedermeier upholstery makes extensive use of coil-springs. Biedermeier furniture design was purchased or commissioned by the prosperous middle class to celebrate comfort and leisure. Middle to late Biedermeier furniture design heralded the long-sought historicism and revival eras that followed. Social forces originating in France would change the artisan-patron system that achieved this period of design, first in the German states, and then in Scandinavia. The middle class growth originated in the Industrial Revolution in Britain and many Biedermeier designs owe their simplicity to Georgian lines of the 19th century, as the proliferation of design publications reached the German states and the Austrian Empire. The Biedermeier style was a simplified interpretation of the influential French Empire style of Napoleon, which introduced the romance of ancient Roman Empire styles, adapting these to modern early 19th century households. Biedermeier furniture used locally available materials such as cherry, ash, and oak woods rather than the expensive timbers such as fully imported mahogany. Unique designs were created in Vienna.
Furniture from the earlier period (1815–1830) was the most severe and neoclassical in inspiration. It also supplied the most fantastic forms which the second half of the period (1830–1848) lacked, being influenced by the many style publications from Britain. Biedermeier furniture was the first style in the world that emanated from the growing middle class. It preceded Victoriana and influenced mainly German-speaking countries. In Sweden, Jean-Baptiste Bernadotte, who was adopted by King Charles XIII (who was childless), became Sweden's new king in 1818 as Karl XIV Johan. The Swedish Karl Johan style, similar to Biedermeier, retained its elegant and blatantly Napoleonic style throughout the 19th century. Biedermeier furniture and lifestyle was a focus of exhibitions at the Vienna applied arts museum in 1896. The many visitors to this exhibition were so influenced by this fantasy style and its elegance that a new resurgence or revival period became popular amongst European cabinetmakers. This revival period lasted up until the Art Deco style was taken up. Biedermeier also influenced the various Bauhaus styles through its truth-to-materials philosophy. The original Biedermeier period changed with the political unrests of 1845–1848 (its end date). With the revolutions in European historicism, furniture of the later years of the period took on a distinct Wilhelminian or Victorian style. The term Biedermeier is also used to refer to a style of clocks made in Vienna in the early 19th century. The clean and simple lines included a light and airy aesthetic, especially in Viennese regulators of the Laterndluhr and Dachluhr styles. Architecture The 19th century population growth and urbanization in Europe resulted in Biedermeier architecture marked by functionality and elegance. The Geymüllerschlössel in Vienna, constructed in 1808, today houses the Biedermeier collection of the Museum of Applied Arts.
Architectural legacy In Wilhelmine Germany social reformers regarded Biedermeier architecture as the perfect example for middle class culture and domestic reform. During the Weimar Republic Germany faced another housing crisis. Paul Schultze-Naumburg was among Germany's most respected neo-Biedermeier architects and in his mind, new housing should imitate the German Biedermeier architecture of around 1800. Paul Mebes popularized the neo-Biedermeier style, which was widely endorsed by German architects. A modernist neo-Biedermeier architectural style was contrived by Adolf Behne, Bruno Taut, and Peter Behrens. Schultze-Naumburg and Heinrich Tessenow advocated for interpreting Biedermeier architecture liberally, allowing for little modernization. The Polish architectural style Świdermajer was named as a play on Biedermeier. Visual arts In the visual arts, Biedermeier style is associated with sentimentality and dullness. Biedermeier paintings are known for their preoccupation with the everyday world with few grand gestures. This aesthetic is evident in the portraits (e.g., Portrait of the Arthaber Family, 1837, by Friedrich von Amerling), landscapes (e.g. see Waldmüller or Gauermann landscapes) and contemporary-reporting genre scenes (e.g., Controversy of the Coachmen, 1828, by Michael Neder). Reflecting the moderately conservative and generally apolitical ethos of the movement and its audience, Biedermeier painting actively shunned the radical commentary used in other circles, though later works like The Bookworm left space for some lighthearted satire. Key painters of the Biedermeier movement were Carl Spitzweg (1808–1885), Ferdinand Georg Waldmüller (1795–1865), Henrik Weber (1818–1866), Josip Tominc (1780–1866), Friedrich von Amerling (1803–1887), Friedrich Gauermann (1807–1862), Johann Baptist Reiter (1813–1890), Peter Fendi (1796–1842), Josef Danhauser (1805–1845), and Edmund Wodick (1806–1886) among others.
The biggest collection of Viennese Biedermeier paintings in the world is currently hosted by the Belvedere Palace Museum in Vienna. Music Biedermeier music was most evident in the numerous publications for in-home music making. Published arrangements of operatic excerpts, German Lieder, and some symphonic works that could be performed at the piano without professional musical training illustrated the broadened reach of music in this period. Composers from this period include Beethoven, Schubert, Rossini, Weber, Mendelssohn, Chopin, Schumann and Liszt. The so-called Schubertiads were gatherings of friends around the composer Franz Schubert, which also provided a forum and meeting place for political secret societies. Biedermeier home music making itself, however, was decidedly unpretentious and nonpolitical against this politically explosive backdrop. Even the critical discussion of music itself was avoided. Czech National Revival The Biedermeier period coincided with the Czech National Revival movement in the Czech-speaking areas. The most famous writers of the period were Božena Němcová, Karel Hynek Mácha, František Ladislav Čelakovský, Václav Kliment Klicpera, and Josef Kajetán Tyl. Key painters of the Czech Biedermeier were Josef Navrátil, Antonín Machek, and Antonín Mánes. Landscapes, still lifes, courtyards, family scenes, and portraits were very popular. Václav Tomášek composed lyric piano pieces and songs to the patriotic lyrics of Czech authors. Biedermeier was also reflected in the applied arts: glass and porcelain, fashion, jewellery, and furniture. References Further reading Ilsa Barea (1966, republished 1992), Vienna: Legend and Reality, London: Pimlico. Chapter 111, Biedermeier, pp. 111–188. Jane K. Brown, in The Cambridge Companion to the Lied, James Parsons (ed.), 2004, Cambridge. Martin Swales & Erika Swales, Adalbert Stifter: A Critical Study, Cambridge: Cambridge University Press, 1984.
https://en.wikipedia.org/wiki/Word%20%28computer%20architecture%29
In computing, a word is the natural unit of data used by a particular processor design. A word is a fixed-sized datum handled as a unit by the instruction set or the hardware of the processor. The number of bits or digits in a word (the word size, word width, or word length) is an important characteristic of any specific processor design or computer architecture. The size of a word is reflected in many aspects of a computer's structure and operation; the majority of the registers in a processor are usually word-sized and the largest datum that can be transferred to and from the working memory in a single operation is a word in many (not all) architectures. The largest possible address size, used to designate a location in memory, is typically a hardware word (here, "hardware word" means the full-sized natural word of the processor, as opposed to any other definition used). Documentation for older computers with fixed word size commonly states memory sizes in words rather than bytes or characters. The documentation sometimes uses metric prefixes correctly, sometimes with rounding, e.g., 65 kilowords (kW) meaning 65,536 words, and sometimes uses them incorrectly, with kilowords (kW) meaning 1024 words (2^10) and megawords (MW) meaning 1,048,576 words (2^20). With standardization on 8-bit bytes and byte addressability, stating memory sizes in bytes, kilobytes, and megabytes with powers of 1024 rather than 1000 has become the norm, although there is some use of the IEC binary prefixes. Several of the earliest computers (and a few modern ones as well) use binary-coded decimal rather than plain binary, typically having a word size of 10 or 12 decimal digits, and some early decimal computers have no fixed word length at all. Early binary systems tended to use word lengths that were some multiple of 6 bits, with the 36-bit word being especially common on mainframe computers.
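The two readings of "kilo" mentioned above differ numerically; a quick sketch for a 65,536-word memory:

```python
# The same 65,536-word memory described with the two readings of "kilo":
words = 65536
assert words == 2 ** 16

print(words / 1000)   # SI reading: 65.536, rounded to "65 kilowords"
print(words / 1024)   # binary reading (2**10): exactly 64 "kilowords"
```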
The introduction of ASCII led to the move to systems with word lengths that were a multiple of 8 bits, with 16-bit machines being popular in the 1970s before the move to modern processors with 32 or 64 bits. Special-purpose designs like digital signal processors may have any word length from 4 to 80 bits. The size of a word can sometimes differ from the expected one due to backward compatibility with earlier computers. If multiple compatible variations or a family of processors share a common architecture and instruction set but differ in their word sizes, their documentation and software may become notationally complex to accommodate the difference (see Size families below). Uses of words Depending on how a computer is organized, word-size units may be used for: Fixed-point numbers Holders for fixed point, usually integer, numerical values may be available in one or in several different sizes, but one of the sizes available will almost always be the word. The other sizes, if any, are likely to be multiples or fractions of the word size. The smaller sizes are normally used only for efficient use of memory; when loaded into the processor, their values usually go into a larger, word-sized holder. Floating-point numbers Holders for floating-point numerical values are typically either a word or a multiple of a word. Addresses Holders for memory addresses must be of a size capable of expressing the needed range of values but not be excessively large, so often the size used is the word, though it can also be a multiple or fraction of the word size. Registers Processor registers are designed with a size appropriate for the type of data they hold, e.g. integers, floating-point numbers, or addresses. Many computer architectures use general-purpose registers that are capable of storing data in multiple representations.
Memory–processor transfer When the processor reads from the memory subsystem into a register or writes a register's value to memory, the amount of data transferred is often a word. Historically, this amount of bits which could be transferred in one cycle was also called a catena in some environments (such as the Bull ). In simple memory subsystems, the word is transferred over the memory data bus, which typically has a width of a word or half-word. In memory subsystems that use caches, the word-sized transfer is the one between the processor and the first level of cache; at lower levels of the memory hierarchy larger transfers (which are a multiple of the word size) are normally used. Unit of address resolution In a given architecture, successive address values almost always designate successive units of memory; this unit is the unit of address resolution. In most computers, the unit is either a character (e.g. a byte) or a word. (A few computers have used bit resolution.) If the unit is a word, then a larger amount of memory can be accessed using an address of a given size at the cost of added complexity to access individual characters. On the other hand, if the unit is a byte, then individual characters can be addressed (i.e. selected during the memory operation). Instructions Machine instructions are normally the size of the architecture's word, such as in RISC architectures, or a multiple of the "char" size that is a fraction of it. This is a natural choice since instructions and data usually share the same memory subsystem. In Harvard architectures the word sizes of instructions and data need not be related, as instructions and data are stored in different memories; for example, the processor in the 1ESS electronic telephone switch has 37-bit instructions and 23-bit data words. Word size choice When a computer architecture is designed, the choice of a word size is of substantial importance. 
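The trade-off between word and byte resolution described above can be illustrated with a hypothetical machine using 32-bit words and 16-bit addresses (the numbers are illustrative, not from any particular design):

```python
ADDRESS_BITS = 16
BYTES_PER_WORD = 4        # a hypothetical 32-bit-word machine

# Byte addressing: each address names one byte.
byte_addressable_reach = 2 ** ADDRESS_BITS                   # 65,536 bytes
# Word addressing: each address names one 4-byte word,
# so the same 16-bit address reaches four times as much memory.
word_addressable_reach = 2 ** ADDRESS_BITS * BYTES_PER_WORD  # 262,144 bytes

assert word_addressable_reach == 4 * byte_addressable_reach
```

The cost of the larger reach is that individual characters within a word can no longer be selected by address alone, which is exactly the added complexity the text mentions.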
There are design considerations which encourage particular bit-group sizes for particular uses (e.g. for addresses), and these considerations point to different sizes for different uses. However, considerations of economy in design strongly push for one size, or a very few sizes related by multiples or fractions (submultiples) to a primary size. That preferred size becomes the word size of the architecture. Character size was in the past (pre-variable-sized character encoding) one of the influences on unit of address resolution and the choice of word size. Before the mid-1960s, characters were most often stored in six bits; this allowed no more than 64 characters, so the alphabet was limited to upper case. Since it is efficient in time and space to have the word size be a multiple of the character size, word sizes in this period were usually multiples of 6 bits (in binary machines). A common choice then was the 36-bit word, which is also a good size for the numeric properties of a floating point format. After the introduction of the IBM System/360 design, which uses eight-bit characters and supports lower-case letters, the standard size of a character (or more accurately, a byte) becomes eight bits. Word sizes thereafter are naturally multiples of eight bits, with 16, 32, and 64 bits being commonly used. Variable-word architectures Early machine designs included some that used what is often termed a variable word length. In this type of organization, an operand has no fixed length. Depending on the machine and the instruction, the length might be denoted by a count field, by a delimiting character, or by an additional bit called, e.g., flag, or word mark. Such machines often use binary-coded decimal in 4-bit digits, or in 6-bit characters, for numbers. This class of machines includes the IBM 702, IBM 705, IBM 7080, IBM 7010, UNIVAC 1050, IBM 1401, IBM 1620, and RCA 301. 
Most of these machines work on one unit of memory at a time and since each instruction or datum is several units long, each instruction takes several cycles just to access memory. These machines are often quite slow because of this. For example, instruction fetches on an IBM 1620 Model I take 8 cycles (160 μs) just to read the 12 digits of the instruction (the Model II reduced this to 6 cycles, or 4 cycles if the instruction did not need both address fields). Instruction execution takes a variable number of cycles, depending on the size of the operands. Word, bit and byte addressing The memory model of an architecture is strongly influenced by the word size. In particular, the resolution of a memory address, that is, the smallest unit that can be designated by an address, has often been chosen to be the word. In this approach, the word-addressable machine approach, address values which differ by one designate adjacent memory words. This is natural in machines which deal almost always in word (or multiple-word) units, and has the advantage of allowing instructions to use minimally sized fields to contain addresses, which can permit a smaller instruction size or a larger variety of instructions. When byte processing is to be a significant part of the workload, it is usually more advantageous to use the byte, rather than the word, as the unit of address resolution. Address values which differ by one designate adjacent bytes in memory. This allows an arbitrary character within a character string to be addressed straightforwardly. A word can still be addressed, but the address to be used requires a few more bits than the word-resolution alternative. The word size needs to be an integer multiple of the character size in this organization. This addressing approach was used in the IBM 360, and has been the most common approach in machines designed since then. When the workload involves processing fields of different sizes, it can be advantageous to address to the bit. 
Machines with bit addressing may have some instructions that use a programmer-defined byte size and other instructions that operate on fixed data sizes. As an example, on the IBM 7030 ("Stretch"), a floating point instruction can only address words while an integer arithmetic instruction can specify a field length of 1–64 bits, a byte size of 1–8 bits and an accumulator offset of 0–127 bits. In a byte-addressable machine with storage-to-storage (SS) instructions, there are typically move instructions to copy one or multiple bytes from one arbitrary location to another. In a byte-oriented (byte-addressable) machine without SS instructions, moving a single byte from one arbitrary location to another is typically:
LOAD the source byte
STORE the result back in the target byte
Individual bytes can be accessed on a word-oriented machine in one of two ways. Bytes can be manipulated by a combination of shift and mask operations in registers. Moving a single byte from one arbitrary location to another may require the equivalent of the following:
LOAD the word containing the source byte
SHIFT the source word to align the desired byte to the correct position in the target word
AND the source word with a mask to zero out all but the desired bits
LOAD the word containing the target byte
AND the target word with a mask to zero out the target byte
OR the registers containing the source and target words to insert the source byte
STORE the result back in the target location
Alternatively many word-oriented machines implement byte operations with instructions using special byte pointers in registers or memory. For example, the PDP-10 byte pointer contained the size of the byte in bits (allowing different-sized bytes to be accessed), the bit position of the byte within the word, and the word address of the data. Instructions could automatically adjust the pointer to the next byte on, for example, load and deposit (store) operations.
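The shift-and-mask sequence above can be emulated in Python. A sketch, assuming a hypothetical 32-bit, little-endian, word-addressable memory modeled as a list of word values (`load_byte` and `store_byte` are illustrative names):

```python
WORD_BYTES = 4                    # hypothetical 32-bit, word-addressable machine
WORD_MASK = 0xFFFFFFFF

def load_byte(memory, byte_addr):
    """LOAD the containing word, SHIFT the byte down, AND with a mask."""
    word_addr, offset = divmod(byte_addr, WORD_BYTES)
    word = memory[word_addr]                 # LOAD
    return (word >> (offset * 8)) & 0xFF     # SHIFT, then AND (little-endian)

def store_byte(memory, byte_addr, value):
    """LOAD the target word, AND out the target byte, OR the new byte in,
    then STORE the result back."""
    word_addr, offset = divmod(byte_addr, WORD_BYTES)
    shift = offset * 8
    word = memory[word_addr]                 # LOAD
    word &= WORD_MASK ^ (0xFF << shift)      # AND: zero the target byte
    word |= (value & 0xFF) << shift          # OR in the source byte
    memory[word_addr] = word                 # STORE
```

With memory = [0x11223344], load_byte(memory, 0) returns 0x44, and store_byte(memory, 1, 0xAB) leaves memory[0] equal to 0x1122AB44, mirroring the multi-instruction sequence a word-oriented machine must execute.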
Powers of two Different amounts of memory are used to store data values with different degrees of precision. The commonly used sizes are usually a power of two multiple of the unit of address resolution (byte or word). Converting the index of an item in an array into the memory address offset of the item then requires only a shift operation rather than a multiplication. In some cases this relationship can also avoid the use of division operations. As a result, most modern computer designs have word sizes (and other operand sizes) that are a power of two times the size of a byte. Size families As computer designs have grown more complex, the central importance of a single word size to an architecture has decreased. Although more capable hardware can use a wider variety of sizes of data, market forces exert pressure to maintain backward compatibility while extending processor capability. As a result, what might have been the central word size in a fresh design has to coexist as an alternative size to the original word size in a backward compatible design. The original word size remains available in future designs, forming the basis of a size family. In the mid-1970s, DEC designed the VAX to be a 32-bit successor of the 16-bit PDP-11. They used word for a 16-bit quantity, while longword referred to a 32-bit quantity; this terminology is the same as the terminology used for the PDP-11. This was in contrast to earlier machines, where the natural unit of addressing memory would be called a word, while a quantity that is one half a word would be called a halfword. In fitting with this scheme, a VAX quadword is 64 bits. They continued this 16-bit word/32-bit longword/64-bit quadword terminology with the 64-bit Alpha. Another example is the x86 family, of which processors of three different word lengths (16-bit, later 32- and 64-bit) have been released, while word continues to designate a 16-bit quantity. 
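As a small illustration of the point above, with a power-of-two element size the index-to-offset conversion is a single shift, and the reverse conversion a single shift the other way:

```python
ELEMENT_SIZE = 8          # bytes per element, a power of two (2**3)

index = 5
offset = index << 3       # the shift replaces the multiplication index * 8
assert offset == index * ELEMENT_SIZE

byte_offset = 57
assert byte_offset >> 3 == byte_offset // ELEMENT_SIZE   # shifts replace division too
```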
As software is routinely ported from one word-length to the next, some APIs and documentation define or refer to an older (and thus shorter) word-length than the full word length on the CPU that software may be compiled for. Also, similar to how bytes are used for small numbers in many programs, a shorter word (16 or 32 bits) may be used in contexts where the range of a wider word is not needed (especially where this can save considerable stack space or cache memory space). For example, Microsoft's Windows API maintains the programming language definition of WORD as 16 bits, despite the fact that the API may be used on a 32- or 64-bit x86 processor, where the standard word size would be 32 or 64 bits, respectively. Data structures containing such different-sized words refer to them as:

WORD (16 bits/2 bytes)
DWORD (32 bits/4 bytes)
QWORD (64 bits/8 bytes)

A similar phenomenon has developed in Intel's x86 assembly language – because of the support for various sizes (and backward compatibility) in the instruction set, some instruction mnemonics carry "d" or "q" identifiers denoting "double-", "quad-" or "double-quad-", which are in terms of the architecture's original 16-bit word size.

An example with a different word size is the IBM System/360 family. In the System/360 architecture, System/370 architecture and System/390 architecture, there are 8-bit bytes, 16-bit halfwords, 32-bit words and 64-bit doublewords. The z/Architecture, which is the 64-bit member of that architecture family, continues to refer to 16-bit halfwords, 32-bit words, and 64-bit doublewords, and additionally features 128-bit quadwords.

In general, new processors must use the same data word lengths and virtual address widths as an older processor to have binary compatibility with that older processor.
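The fixed WORD/DWORD/QWORD sizes can be demonstrated with Python's struct module, whose 'H', 'I', and 'Q' format codes are unsigned 16-, 32-, and 64-bit fields. Using struct codes to stand in for the Windows typedefs is an illustrative mapping, not part of the Windows API itself.

```python
# The Windows-style size family expressed as fixed-width struct formats:
# these widths hold regardless of the host processor's native word size.
import struct

WORD, DWORD, QWORD = "<H", "<I", "<Q"   # little-endian, fixed width

sizes = {name: struct.calcsize(fmt)
         for name, fmt in [("WORD", WORD), ("DWORD", DWORD), ("QWORD", QWORD)]}

# Packing a DWORD value always occupies exactly 4 bytes
packed = struct.pack(DWORD, 0xDEADBEEF)
```

This is why a structure declared with WORD fields has the same layout on 16-, 32-, and 64-bit x86: the names denote fixed widths from the original 16-bit word size, not the current processor's word.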
Often carefully written source code – written with source-code compatibility and software portability in mind – can be recompiled to run on a variety of processors, even ones with different data word lengths or different address widths or both.
Word (computer architecture)
https://en.wikipedia.org/wiki/List%20of%20inorganic%20compounds
Although most compounds are referred to by their IUPAC systematic names (following IUPAC nomenclature), traditional names have also been kept where they are in wide use or of significant historical interest. A Ac Actinium(III) chloride – Actinium(III) fluoride – Actinium(III) oxide – Al Aluminium antimonide – AlSb Aluminium arsenate – Aluminium arsenide – AlAs Aluminium diboride – Aluminium bromide – Aluminium carbide – Aluminium iodide – Aluminium nitride – AlN Aluminium oxide – Aluminium phosphide – AlP Aluminium chloride – Aluminium fluoride – Aluminium hydroxide – Aluminium nitrate – Aluminium sulfide – Aluminium sulfate – Aluminium potassium sulfate – Am Americium(II) bromide – Americium(III) bromide – Americium(II) chloride – Americium(III) chloride – Americium(III) fluoride – Americium(IV) fluoride – Americium(II) iodide – Americium(III) iodide – Americium dioxide – Ammonia – Ammonium azide – Ammonium bicarbonate – Ammonium bisulfate – Ammonium bromide – Ammonium chromate – Ammonium cerium(IV) nitrate – Ammonium cerium(IV) sulfate – Ammonium chloride – Ammonium chlorate – Ammonium cyanide – Ammonium dichromate – Ammonium dihydrogen phosphate – Ammonium hexafluoroaluminate – AlF6H12N3 Ammonium hexafluorophosphate – F6H4NP Ammonium hexachloroplatinate – Ammonium hexafluorosilicate Ammonium hexafluorotitanate Ammonium hexafluorozirconate Ammonium hydroxide – Ammonium nitrate – Ammonium orthomolybdate – Ammonium sulfamate – Ammonium sulfide – Ammonium sulfite – Ammonium sulfate – Ammonium perchlorate – Ammonium permanganate – Ammonium persulfate – Ammonium diamminetetrathiocyanatochromate(III) – Ammonium thiocyanate – Ammonium triiodide – Diammonium dioxido(dioxo)molybdenum – Diammonium phosphate – Tetramethylammonium perchlorate – Sb Antimony hydride (stibine) – Antimony pentachloride – Antimony pentafluoride – Antimony potassium tartrate – Antimony sulfate – Antimony trichloride – Antimony trifluoride – Antimony trioxide – Antimony trisulfide – Antimony
pentasulfide – Ar Argon fluorohydride – HArF As Arsenic trifluoride – Arsenic triiodide – AsI3 Arsenic pentafluoride – Arsenic trioxide (Arsenic(III) oxide) – Arsenous acid – Arsenic acid – Arsine – B Ba Barium azide – Barium bromide – Barium carbonate – Barium chlorate – Barium chloride – Barium chromate – Barium ferrate – Barium ferrite – Barium fluoride – Barium hydroxide – Barium iodide – Barium manganate – Barium nitrate – Barium oxalate – Barium oxide – BaO Barium permanganate – Barium peroxide – Barium sulfate – Barium sulfide – BaS Barium titanate – Barium thiocyanate – Be Beryllium borohydride – Beryllium bromide – Beryllium carbonate – Beryllium chloride – Beryllium fluoride – Beryllium hydride – Beryllium hydroxide – Beryllium iodide – Beryllium nitrate – Beryllium nitride – Beryllium oxide – BeO Beryllium sulfate – Beryllium sulfide – BeS Beryllium telluride – BeTe Bi Bismuth chloride – BiCl3 Bismuth ferrite – Bismuth hydroxide – BiH3O3 Bismuth(III) iodide – BiI3 Bismuth(III) nitrate – BiN3O9 Bismuth(III) oxide – Bismuth oxychloride – BiOCl Bismuth pentafluoride – Bismuth(III) sulfide – Bi2S3 Bismuth(III) telluride – Bismuth tribromide – Bismuth tungstate – B Borane – Borax – Borazine – Borazocine ((3Z,5Z,7Z)-azaborocine) – Boric acid – Boron carbide – Boron nitride – BN Boron suboxide – Boron tribromide – Boron trichloride – Boron trifluoride – Boron triiodide – BI3 Boron oxide – Boroxine – Decaborane – Diborane – Diboron tetrafluoride – Pentaborane – Tetraborane – Br Bromine monochloride – BrCl Bromine pentafluoride – Perbromic acid – Aluminium bromide – Ammonium bromide – Boron tribromide – Bromic acid – Bromine monoxide – Bromine trifluoride – Bromine monofluoride – BrF Calcium bromide – Carbon tetrabromide – Copper(I) bromide – CuBr Copper(II) bromide – Hydrobromic acid – HBr(aq) Hydrogen bromide – HBr Hypobromous acid – HOBr Iodine monobromide – IBr Iron(II) bromide – Iron(III) bromide – Lead(II) bromide –
Lithium bromide – LiBr Magnesium bromide – Mercury(I) bromide – Mercury(II) bromide – Nitrosyl bromide – NOBr Phosphorus pentabromide – Phosphorus tribromide – Phosphorus heptabromide – PBr7 Potassium bromide – KBr Potassium bromate – Potassium perbromate – Tribromosilane – Silicon tetrabromide – Silver bromide – AgBr Sodium bromide – NaBr Sodium bromate – Sodium perbromate – Thionyl bromide – Tin(II) bromide – Zinc bromide – C Cd Cadmium arsenide – Cadmium bromide – Cadmium chloride – Cadmium fluoride – Cadmium iodide – Cadmium nitrate – Cadmium oxide – CdO Cadmium phosphide – Cadmium selenide – CdSe Cadmium sulfate – Cadmium sulfide – CdS Cadmium telluride – CdTe Cs Caesium bicarbonate – Caesium carbonate – Caesium chloride – CsCl Caesium chromate – Caesium fluoride – CsF Caesium hydride – CsH Caesium hydrogen sulfate – Caesium iodide – CsI Caesium sulfate – Cf Californium(III) bromide – Californium(III) carbonate – Californium(III) chloride – Californium(III) fluoride – Californium(III) iodide – Californium(II) iodide – Californium(III) nitrate – Californium(III) oxide – Californium(III) phosphate – Californium(III) sulfate – Californium(III) sulfide – Californium oxyfluoride – CfOF Californium oxychloride – CfOCl Ca Calcium bromide – Calcium carbide – Calcium carbonate (Precipitated Chalk) – Calcium chlorate – Calcium chloride – Calcium chromate – Calcium cyanamide – Calcium fluoride – Calcium hydride – Calcium hydroxide – Calcium monosilicide – CaSi Calcium oxalate – Calcium hydroxychloride – Calcium perchlorate – Calcium permanganate – Calcium sulfate (gypsum) – C Carbon dioxide – Carbon disulfide – Carbon monoxide – CO Carbon tetrabromide – Carbon tetrachloride – Carbon tetrafluoride – Carbon tetraiodide – Carbonic acid – Carbonyl chloride – Carbonyl fluoride – Carbonyl sulfide – COS Carboplatin – Ce Cerium(III) bromide – Cerium(III) carbonate – Cerium(III) chloride – Cerium(III) fluoride – Cerium(III) hydroxide – Cerium(III) iodide – Cerium(III) nitrate – 
Cerium(III) oxide – Cerium(III) sulfate – Cerium(III) sulfide – Cerium(IV) hydroxide – Cerium(IV) nitrate – Cerium(IV) oxide – Cerium(IV) sulfate – Cerium(III,IV) oxide – Ceric ammonium nitrate – Cerium hexaboride – Cerium aluminium – CeAl Cerium cadmium – CeCd Cerium magnesium – CeMg Cerium mercury – CeHg Cerium silver – CeAg Cerium thallium – CeTl Cerium zinc – CeZn Cl Actinium(III) chloride – Aluminium chloride – Americium(III) chloride – Ammonium chloride – Antimony(III) chloride – Antimony(V) chloride – Arsenic(III) chloride – Barium chloride – Beryllium chloride – Bismuth(III) chloride – Boron trichloride – Bromine monochloride – BrCl Cadmium chloride – Caesium chloride – CsCl Calcium chloride – Calcium hypochlorite – Carbon tetrachloride – Cerium(III) chloride – Chloramine – Chloric acid – Chlorine azide – Chlorine dioxide – Chlorine monofluoride – ClF Chlorine monoxide – ClO Chlorine pentafluoride – Chlorine perchlorate – Chlorine tetroxide – Chlorine trifluoride – Chlorine trioxide – Chloroplatinic acid – Chlorosulfonic acid – Chlorosulfonyl isocyanate – Chloryl fluoride – Chromium(II) chloride – Chromium(III) chloride – Chromyl chloride – Cisplatin (cis-platinum(II) chloride diamine) – Cobalt(II) chloride – Copper(I) chloride – CuCl Copper(II) chloride – Curium(III) chloride – Cyanogen chloride – ClCN Dichlorine dioxide – Dichlorine heptoxide – Dichlorine hexoxide – Dichlorine monoxide – Dichlorine tetroxide (chlorine perchlorate) – Dichlorine trioxide – Dichlorosilane – Disulfur dichloride – Dysprosium(III) chloride – Erbium(III) chloride – Europium(II) chloride – Europium(III) chloride – Gadolinium(III) chloride – Gallium trichloride – Germanium dichloride – Germanium tetrachloride – Gold(I) chloride – AuCl Gold(III) chloride – Hafnium(IV) chloride – Holmium(III) chloride – Hydrochloric acid – HCl(aq) Hydrogen chloride – HCl Hypochlorous acid –
HOCl Indium(I) chloride – InCl Indium(III) chloride – Iodine monochloride – ICl Iridium(III) chloride – Iron(II) chloride – Iron(III) chloride – Lanthanum chloride – Lead(II) chloride – Lithium chloride – LiCl Lithium perchlorate – Lutetium chloride – Magnesium chloride – Magnesium perchlorate – Manganese(II) chloride – Mercury(I) chloride – Mercury(II) chloride – Mercury(II) perchlorate – Molybdenum(III) chloride – Molybdenum(V) chloride – Neodymium(III) chloride – Neptunium(IV) chloride – Nickel(II) chloride – Niobium oxide trichloride – Niobium(IV) chloride – Niobium(V) chloride – Nitrogen trichloride – Nitrosyl chloride – NOCl Nitryl chloride – Osmium(III) chloride – Palladium(II) chloride – Perchloric acid – Perchloryl fluoride – Phosgene – Phosphonitrilic chloride trimer – Phosphorus oxychloride – Phosphorus pentachloride – Phosphorus trichloride – Platinum(II) chloride – Platinum(IV) chloride – Plutonium(III) chloride – Potassium chlorate – Potassium chloride – KCl Potassium perchlorate – Praseodymium(III) chloride – Protactinium(V) chloride – Radium chloride – Rhenium(III) chloride – Rhenium(V) chloride – Rhodium(III) chloride – Rubidium chloride – RbCl Ruthenium(III) chloride – Samarium(III) chloride – Scandium chloride – Selenium dichloride – Selenium tetrachloride – Silicon tetrachloride – Silver chloride – AgCl Silver perchlorate – Sodium chlorate – Sodium chloride (table salt, rock salt) – NaCl Sodium chlorite – Sodium hypochlorite – NaOCl Sodium perchlorate – Strontium chloride – Sulfur dichloride – Sulfuryl chloride – Tantalum(III) chloride – Tantalum(IV) chloride – Tantalum(V) chloride – Tellurium tetrachloride – Terbium(III) chloride – Tetrachloroauric acid – Thallium(I) chloride – TlCl Thallium(III) chloride – Thionyl chloride – Thiophosgene – Thorium(IV) chloride – Thulium(III) chloride – Tin(II) chloride – Tin(IV) chloride – Titanium tetrachloride – Titanium(III) chloride – Trichlorosilane – Tungsten(IV) chloride –
Tungsten(V) chloride – Tungsten(VI) chloride – Uranium hexachloride – Uranium(III) chloride – Uranium(IV) chloride – Uranium(V) chloride – Uranyl chloride – Vanadium oxytrichloride – Vanadium(II) chloride – Vanadium(III) chloride – Vanadium(IV) chloride – Ytterbium(III) chloride – Yttrium chloride – Zinc chloride – Zirconium(IV) chloride – Cr Chromic acid – Chromium trioxide (Chromic acid) – Chromium(II) chloride (chromous chloride) – Chromium(II) sulfate – Chromium(III) chloride – Chromium(III) nitrate – Chromium(III) oxide – Chromium(III) sulfate – Chromium(III) telluride – Chromium(IV) oxide – Chromium pentafluoride – Chromyl chloride – Chromyl fluoride – Co Cobalt(II) bromide – Cobalt(II) carbonate – Cobalt(II) chloride – Cobalt(II) nitrate – Cobalt(II) sulfate – Cobalt(III) fluoride – Cu Copper(I) acetylide – Copper(I) chloride – CuCl Copper(I) fluoride – CuF Copper(I) oxide – Copper(I) sulfate – Copper(I) sulfide – Copper(II) azide – Copper(II) borate – Cu3(BO3)2 Copper(II) carbonate – Copper(II) chloride – Copper(II) hydroxide – Copper(II) nitrate – Copper(II) oxide – CuO Copper(II) sulfate – Copper(II) sulfide – CuS Copper oxychloride – Tetramminecopper(II) sulfate – Cm Curium(III) chloride – Curium(III) oxide – Curium(IV) oxide – Curium hydroxide – CN Cyanogen bromide – BrCN Cyanogen chloride – ClCN Cyanogen iodide – ICN Cyanogen – Cyanuric chloride – Cyanogen thiocyanate – Cyanogen selenocyanate – Cyanogen azide – D Disilane – Disulfur dichloride – Dy Dysprosium(III) chloride – Dysprosium oxide – Dysprosium titanate – E Es Einsteinium(III) bromide – Einsteinium(III) carbonate – Einsteinium(III) chloride – Einsteinium(III) fluoride – Einsteinium(III) iodide – Einsteinium(III) nitrate – Einsteinium(III) oxide – Einsteinium(III) phosphate – Einsteinium(III) sulfate – Einsteinium(III) sulfide – Er Erbium(III) chloride – Erbium-copper – ErCu Erbium-gold – ErAu Erbium(III) oxide – Erbium-silver – ErAg Erbium-Iridium – ErIr Eu Europium(II) chloride – 
Europium(II) sulfate – Europium(III) bromide – Europium(III) chloride – Europium(III) iodate – Europium(III) iodide – Europium(III) nitrate – Europium(III) oxide – Europium(III) perchlorate – Europium(III) sulfate – Europium(III) vanadate – F F Fluoroantimonic acid – Tetrafluorohydrazine – Trifluoromethylisocyanide – Trifluoromethanesulfonic acid – Other fluorides: AlF3, AmF3, NH4F, NH4HF2, NH4BF4, SbF5, SbF3, AsF5, AsF3, BaF2, BeF2, BiF3, F5SOOSF5, BF3, BrF5, BrF3, BrF, CdF2, CsF, CaF2, CF4, COF2, CeF3, CeF4, ClF5, ClF3, ClF, CrF3, CrF5, CrO2F2, CoF2, CoF3, CuF, CuF2, CmF3, N2F2, N2F4, O2F2, P2F4, S2F2, DyF3, ErF3, EuF3, HBF4, FN3, FOSO2F, FNO3, FSO3H, GdF3, GaF3, GeF4, AuF3, HfF4, H2SbF6, HPF6, H2SiF6, H2TiF6, HF, HF(aq), HFO, InF3, IF7, IF, IF5, IrF3, IrF6, FeF2, FeF3, KrF2, LaF3, PbF2, PbF4, LiF, MgF2, MnF2, MnF3, MnF4, Hg2F2, HgF2, MoF3, MoF5, MoF6, NbF4, NbF5, NdF3, NiF2, NpF4, NpF5, NpF6, ONF3, NF3, NO2BF4, NOBF4, NOF, NO2F, OsF4, OsF6, OsF7, OF2, PdF2, PdF4, FSO2OOSO2F, POF3, PF5, PF3, PtF2, PtF4, PtF6, PuF3, PuF4, PuF6, KF, KPF6, KBF4, PrF3, PaF5, RaF2, RnF2, ReF4, ReF6, ReF7, RhF3, RbF, RuF3, RuF4, RuF6, SmF3, ScF3, SeF6, SeF4, SiF4, AgF, AgF2, AgBF4, NaF, NaFSO3, Na3AlF6, NaSbF6, NaPF6, Na2SiF6, Na2TiF6, NaBF4, SrF2, SF2, SF6, SF4, SO2F2, TaF5, TcF6, TeF6, TeF4, TlF, TlF3, SOF2, ThF4, SnF2, SnF4, TiF3, TiF4, HSiF3, WF6, UF4, UF5, UF6, UO2F2, VF3, VF4, VF5, XeF2, XeO2F2, XeF6, XePtF6, XeF4, YbF3, YF3, ZnF2, ZrF4 Fr Francium oxide – Francium chloride – FrCl Francium bromide – FrBr Francium iodide – FrI Francium carbonate – Francium hydroxide – FrOH Francium sulfate – G Gd Gadolinium(III) chloride – Gadolinium(III) oxide – Gadolinium(III) carbonate – Gadolinium(III) chloride – Gadolinium(III) fluoride – Gadolinium gallium garnet – Gadolinium(III) nitrate – Gadolinium(III) oxide – Gadolinium(III) phosphate – Gadolinium(III) sulfate – Ga Gallium antimonide – GaSb Gallium arsenide – GaAs Gallium(III) fluoride – Gallium trichloride – Gallium nitride – GaN 
Gallium phosphide – GaP Gallium(II) sulfide – GaS Gallium(III) sulfide – Ge Digermane – Germane – Germanium(II) bromide – Germanium(II) chloride – Germanium(II) fluoride – Germanium(II) iodide – Germanium(II) oxide – GeO Germanium(II) selenide – GeSe Germanium(II) sulfide – GeS Germanium(IV) bromide – Germanium(IV) chloride – Germanium(IV) fluoride – Germanium(IV) iodide – Germanium(IV) nitride – Germanium(IV) oxide – Germanium(IV) selenide – Germanium(IV) sulfide – Germanium difluoride – Germanium dioxide – Germanium tetrachloride – Germanium tetrafluoride – Germanium telluride – GeTe Au Gold(I) bromide – AuBr Gold(I) chloride – AuCl Gold(I) cyanide – AuCN Gold(I) hydride – AuH Gold(I) iodide – AuI Gold(I) selenide – Gold(I) sulfide – Gold(III) bromide – Gold(III) chloride – Gold(III) fluoride – Gold(III) iodide – Gold(III) oxide – Gold(III) selenide – Gold(III) sulfide – Gold(III) nitrate – Gold(V) fluoride – Gold(I,III) chloride – Gold ditelluride – Gold heptafluoride – H Hf Hafnium(IV) bromide – Hafnium(IV) carbide – HfC Hafnium(IV) chloride – Hafnium(IV) fluoride – Hafnium(IV) iodide – Hafnium(IV) oxide – Hafnium(IV) silicate – Hafnium(IV) sulfide – Hexadecacarbonylhexarhodium – Hs Hassium tetroxide – Ho Holmium(III) carbonate – Holmium(III) chloride – Holmium(III) fluoride – Holmium(III) nitrate – Holmium(III) oxide – Holmium(III) phosphate – Holmium(III) sulfate – H Hexafluorosilicic acid – Hydrazine – Hydrazoic acid – Hydroiodic acid – HI Hydrogen bromide – HBr Hydrogen chloride – HCl Hydrogen cyanide – HCN Hydrogen fluoride – HF Hydrogen peroxide – Hydrogen selenide – Hydrogen sulfide – Hydrogen telluride – Hydroxylamine – Hypobromous acid – HBrO Hypochlorous acid – HClO Hypophosphorous acid – Metaphosphoric acid – Protonated molecular hydrogen – Trioxidane – Water – H2O He Sodium helide – I In Indium(I) bromide – InBr Indium(III) bromide – Indium(III) chloride – Indium(III) fluoride – Indium(III) oxide – Indium(III) sulfate – Indium antimonide – InSb
Indium arsenide – InAs Indium nitride – InN Indium phosphide – InP Indium(I) iodide – InI Indium(III) nitrate – Indium(I) oxide – Indium(III) selenide – Indium(III) sulfide – Trimethylindium – I Iodic acid – Iodine heptafluoride – Iodine pentafluoride – Iodine monochloride – ICl Iodine trichloride – Periodic acid – Iodine pentachloride - Iodine tribromide - Ir Iridium(IV) chloride – Iridium(V) fluoride – Iridium hexafluoride – Iridium tetrafluoride – Fe Columbite – Iron(II) chloride – Iron(II) oxalate – Iron(II) oxide – FeO Iron(II) selenate – Iron(II) sulfate – Iron(III) chloride – Iron(III) fluoride – Iron(III) oxalate – Iron(III) oxide – Iron(III) nitrate – Iron(III) sulfate – Iron(III) thiocyanate – Iron(II,III) oxide – Iron ferrocyanide – Prussian blue (Iron(III) hexacyanoferrate(II)) – Ammonium iron(II) sulfate – Iron(II) bromide – Iron(III) bromide – Iron(II) chloride – Iron(III) chloride – Iron disulfide – Iron dodecacarbonyl – Iron(III) fluoride – Iron(II) iodide – Iron naphthenate – Iron(III) nitrate – Iron nonacarbonyl – Iron(II) oxalate – Iron(II,III) oxide – Iron(III) oxide – Iron pentacarbonyl – Iron(III) perchlorate – Iron(III) phosphate – Iron(II) sulfamate – Iron(II) sulfate – Iron(III) sulfate – Iron(II) sulfide – FeS K Kr Krypton difluoride – L La Lanthanum aluminium – LaAl Lanthanum cadmium – LaCd Lanthanum carbonate – Lanthanum magnesium – LaMg Lanthanum manganite – Lanthanum mercury – LaHg Lanthanum silver – LaAg Lanthanum thallium – LaTl Lanthanum zinc – LaZn Lanthanum boride – Lanthanum carbonate – Lanthanum(III) chloride – Lanthanum trifluoride – Lanthanum(III) oxide – Lanthanum(III) nitrate – Lanthanum(III) phosphate – Lanthanum(III) sulfate – Pb Lead(II) azide – Lead(II) bromide – Lead(II) carbonate – Lead(II) chloride – Lead(II) fluoride – Lead(II) hydroxide – Lead(II) iodide – Lead(II) nitrate – Lead(II) oxide – PbO Lead(II) phosphate – Lead(II) sulfate – Lead(II) selenide – PbSe Lead(II) sulfide – PbS Lead(II) telluride – PbTe Lead(II) 
thiocyanate – Lead(II,IV) oxide – Lead(IV) oxide – Lead(IV) sulfide – Lead hydrogen arsenate – Lead styphnate – Lead tetrachloride – Lead tetrafluoride – Lead tetroxide – Lead titanate – Lead zirconate titanate – (e.g., x = 0.52 is lead zirconium titanate) Plumbane – Li Lithium tetrachloroaluminate – Lithium aluminium hydride – Lithium bromide – LiBr Lithium borohydride – Lithium carbonate (Lithium salt) – Lithium chloride – LiCl Lithium hypochlorite – LiClO Lithium chlorate – Lithium perchlorate – Lithium cobalt oxide – Lithium oxide – Lithium peroxide – Lithium hydride – LiH Lithium hydroxide – LiOH Lithium iodide – LiI Lithium iron phosphate – Lithium nitrate – Lithium sulfide – Lithium sulfite – Lithium sulfate – Lithium superoxide – Lithium hexafluorophosphate – M Mg Magnesium antimonide – MgSb Magnesium bromide – Magnesium carbonate – Magnesium chloride – Magnesium citrate – Magnesium oxide – MgO Magnesium perchlorate – Magnesium phosphate – Magnesium sulfate – Magnesium bicarbonate – Magnesium boride – Magnesium bromide – Magnesium carbide – Magnesium carbonate – Magnesium chloride – Magnesium cyanamide – Magnesium fluoride – Magnesium fluorophosphate – Magnesium gluconate – Magnesium hydride – Dimagnesium phosphate – Magnesium hydroxide – Magnesium hypochlorite – Magnesium iodide – Magnesium molybdate – Magnesium nitrate – Magnesium oxalate – Magnesium peroxide – Magnesium phosphate – Magnesium silicate – Magnesium sulfate – Magnesium sulfide – MgS Magnesium titanate – Magnesium tungstate – Magnesium zirconate – Mn Manganese(II) bromide – Manganese(II) chloride – Manganese(II) hydroxide – Manganese(II) oxide – MnO Manganese(II) phosphate – Manganese(II) sulfate – Manganese(II) sulfate monohydrate – Manganese(III) chloride – Manganese(III) oxide – Manganese(IV) fluoride – Manganese(IV) oxide (manganese dioxide) – Manganese(II,III) oxide – Manganese dioxide – Manganese heptoxide – Hg Mercury(I) chloride – Mercury(I) sulfate – Mercury(II) chloride – 
Mercury(II) hydride – Mercury(II) selenide – HgSe Mercury(II) sulfate – Mercury(II) sulfide – HgS Mercury(II) telluride – HgTe Mercury(II) thiocyanate – Mercury(IV) fluoride – Mercury fulminate – Mo Molybdenum(II) bromide – Molybdenum(II) chloride – Molybdenum(III) bromide – Molybdenum(III) chloride – Molybdenum(IV) carbide – MoC Molybdenum(IV) chloride – Molybdenum(IV) fluoride – Molybdenum(V) chloride – Molybdenum(V) fluoride – Molybdenum disulfide – Molybdenum hexacarbonyl – Molybdenum hexafluoride – Molybdenum tetrachloride – Molybdenum trioxide – Molybdic acid – N Nd Neodymium acetate – Neodymium(III) arsenate – NdAsO4 Neodymium(II) chloride – Neodymium(III) chloride – Neodymium magnet – Neodymium(II) bromide – Neodymium(III) bromide – Neodymium(III) fluoride – Neodymium(III) hydride – Neodymium(II) iodide – Neodymium(III) iodide – Neodymium molybdate – Neodymium perrhenate – Neodymium(III) sulfide – Neodymium tantalate – Neodymium(III) vanadate – Np Neptunium(III) fluoride – Neptunium(IV) fluoride – Neptunium(IV) oxide – Neptunium(VI) fluoride – Ni Nickel(II) carbonate – Nickel(II) chloride – Nickel(II) fluoride – Nickel(II) hydroxide – Nickel(II) nitrate – Nickel(II) oxide – NiO Nickel(II) sulfamate – Nickel(II) sulfide – NiS Nb Niobium(IV) fluoride – Niobium(V) fluoride – Niobium oxychloride – Niobium pentachloride – N Dinitrogen pentoxide (nitronium nitrate) – Dinitrogen tetrafluoride – Dinitrogen tetroxide – Dinitrogen trioxide – Nitric acid – Nitrous acid – Nitrogen dioxide – Nitrogen monoxide – NO Nitrous oxide (dinitrogen monoxide, laughing gas, NOS) – Nitrogen pentafluoride – Nitrogen triiodide – NI3 Nitrosonium octafluoroxenate(VI) – Nitrosonium tetrafluoroborate – Nitrosylsulfuric acid – O Os Osmium hexafluoride – Osmium tetroxide (osmium(VIII) oxide) – Osmium trioxide (osmium(VI) oxide) – O Tributyltin – Oxygen difluoride – Ozone – Aluminium oxide – Americium(II) oxide – AmO Americium(IV) oxide – Antimony trioxide – Antimony(V) oxide – Arsenic
trioxide – Arsenic(V) oxide – Barium oxide – BaO Beryllium oxide – BeO Bismuth(III) oxide – Bismuth oxychloride – BiOCl Boron trioxide – Bromine monoxide – Carbon dioxide – Carbon monoxide – CO Cerium(IV) oxide – Chlorine dioxide – Chlorine trioxide – Dichlorine heptaoxide – Dichlorine monoxide – Chromium(III) oxide – Chromium(IV) oxide – Chromium(VI) oxide – Cobalt(II) oxide – CoO Copper(I) oxide – Copper(II) oxide – CuO Curium(III) oxide – Curium(IV) oxide – Dysprosium(III) oxide – Erbium(III) oxide – Europium(III) oxide – Oxygen difluoride – Dioxygen difluoride – Francium oxide – Gadolinium(III) oxide – Gallium(III) oxide – Germanium dioxide – Gold(III) oxide – Hafnium dioxide – Holmium(III) oxide – Indium(I) oxide – Indium(III) oxide – Iodine pentoxide – Iridium(IV) oxide – Iron(II) oxide – FeO Iron(II,III) oxide – Iron(III) oxide – Lanthanum(III) oxide – Lead(II) oxide – PbO Lead dioxide – Lithium oxide – Magnesium oxide – MgO Potassium oxide – Rubidium oxide – Sodium oxide – Strontium oxide – SrO Tellurium dioxide – Uranium(IV) oxide – (only simple oxides, oxyhalides, and related compounds, not hydroxides, carbonates, acids, or other compounds listed elsewhere) P Pd Palladium(II) chloride – Palladium(II) nitrate – Palladium(II,IV) fluoride – Palladium sulfate – Palladium tetrafluoride – P Diphosphorus tetrachloride – Diphosphorus tetrafluoride – Diphosphorus tetraiodide – Hexachlorophosphazene – Phosphine – Phosphomolybdic acid – Phosphoric acid – Phosphorous acid (Phosphoric(III) acid) – Phosphoroyl nitride – NPO Phosphorus pentabromide – Phosphorus pentafluoride – Phosphorus pentasulfide – Phosphorus pentoxide – Phosphorus sesquisulfide – Phosphorus tribromide – Phosphorus trichloride – Phosphorus trifluoride – Phosphorus triiodide – Phosphotungstic acid – Poly(dichlorophosphazene) – Pt Platinum(II) chloride – Platinum(IV) chloride – Platinum hexafluoride – Platinum pentafluoride – Platinum tetrafluoride – Pu Plutonium(III) bromide – Plutonium(III) chloride 
– Plutonium(III) fluoride – Plutonium dioxide (Plutonium(IV) oxide) – Plutonium hexafluoride – Plutonium hydride – Plutonium tetrafluoride – Po Polonium hexafluoride – Polonium monoxide – PoO Polonium dioxide – Polonium trioxide – Ps Di-positronium – Positronium hydride – PsH K Potash Alum – Potassium alum – Potassium aluminium fluoride – Potassium amide – Potassium argentocyanide – Potassium arsenite – Potassium azide – Potassium borate – Potassium bromide – KBr Potassium bicarbonate – Potassium bifluoride – Potassium bisulfite – Potassium carbonate – Potassium calcium chloride – Potassium chlorate – Potassium chloride – KCl Potassium chlorite – Potassium chromate – Potassium cyanide – KCN Potassium dichromate – Potassium dithionite – Potassium ferrate – Potassium ferrioxalate – Potassium ferricyanide – Potassium ferrocyanide – Potassium heptafluorotantalate – Potassium hexafluorophosphate – Potassium hydrogen carbonate – Potassium hydrogen fluoride – Potassium hydroxide – KOH Potassium iodide – KI Potassium iodate – Potassium manganate – Potassium monopersulfate – Potassium nitrate – Potassium perbromate – Potassium perchlorate – Potassium periodate – Potassium permanganate – Potassium sodium tartrate – Potassium sulfate – Potassium sulfite – Potassium sulfide – Potassium tartrate – Potassium tetraiodomercurate(II) – Potassium thiocyanate – KSCN Potassium titanyl phosphate – Potassium vanadate – Tripotassium phosphate – Pr Praseodymium(III) chloride – Praseodymium(III) sulfate – Praseodymium(III) bromide – Praseodymium(III) carbonate – Praseodymium(III) chloride – Praseodymium(III) fluoride – Praseodymium(III) iodide – Praseodymium(III) nitrate – Praseodymium(III) oxide – Praseodymium(III) phosphate – Praseodymium(III) sulfate – Praseodymium(III) sulfide – Pm Promethium(III) chloride – Promethium(III) oxide – Promethium(III) bromide – Promethium(III) carbonate – Promethium(III) chloride – Promethium(III) fluoride – Promethium(III) iodide – Promethium(III) nitrate 
– Promethium(III) oxide – Promethium(III) phosphate – Promethium(III) sulfate – Promethium(III) sulfide – R Ra Radium bromide – Radium carbonate – Radium chloride – Radium fluoride – Rn Radon difluoride – Re Rhenium(IV) oxide – Rhenium(VII) oxide – Rhenium heptafluoride – Rhenium hexafluoride – Rh Rhodium hexafluoride – Rhodium pentafluoride – Rhodium(III) chloride – Rhodium(III) hydroxide – Rhodium(III) iodide – Rhodium(III) nitrate – Rhodium(III) oxide – Rhodium(III) sulfate – Rhodium(III) sulfide – Rhodium(IV) fluoride – Rhodium(IV) oxide – Rb Rubidium azide – Rubidium bromide – RbBr Rubidium chloride – RbCl Rubidium fluoride – RbF Rubidium hydrogen sulfate – Rubidium hydroxide – RbOH Rubidium iodide – RbI Rubidium nitrate – Rubidium oxide – Rubidium telluride – Rubidium titanyl phosphate — Ru Ruthenium hexafluoride – Ruthenium pentafluoride – Ruthenium(VIII) oxide – Ruthenium(III) chloride – Ruthenium(IV) oxide – S Sm Samarium(II) iodide – Samarium(III) chloride – Samarium(III) oxide – Samarium(III) bromide – Samarium(III) carbonate – Samarium(III) fluoride – Samarium(III) iodide – Samarium(III) nitrate – Samarium(III) oxide – Samarium(III) phosphate – Samarium(III) sulfate – Samarium(III) sulfide – Sc Scandium(III) fluoride – Scandium(III) nitrate – Scandium(III) oxide – Scandium(III) triflate – Sg Seaborgium hexacarbonyl – Se Selenic acid – Selenious acid – Selenium dibromide – Selenium dioxide – Selenium disulfide – Selenium hexafluoride – Selenium hexasulfide – Selenium oxybromide – Selenium oxydichloride – Selenium tetrachloride – Selenium tetrafluoride – Selenium trioxide – Selenoyl fluoride – Si Silane – Silica gel – Silicic acid – Silicochloroform, trichlorosilane – Silicofluoric acid – Silicon boride – Silicon carbide (carborundum) – SiC Silicon dioxide – Silicon monoxide – SiO Silicon nitride – Silicon tetrabromide – Silicon tetrachloride – Silicon tetrafluoride – Silicon tetraiodide – Thortveitite – Ag Silver(I) fluoride – AgF Silver(II) fluoride – 
Silver acetylide – Silver argentocyanide – Silver azide – Silver bromate – Silver bromide – AgBr Silver chlorate – Silver chloride – AgCl Silver chromate – Silver fluoroborate – Silver fulminate – AgCNO Silver hydroxide – AgOH Silver iodide – AgI Silver nitrate – Silver nitride – Silver oxide – Silver perchlorate – Silver permanganate – Silver phosphate (silver orthophosphate) – Silver subfluoride – Silver sulfate – Silver sulfide – Na Sodamide – Sodium aluminate – Sodium arsenate – Sodium azide – Sodium bicarbonate – Sodium biselenide – NaSeH Sodium bisulfate – Sodium bisulfite – Sodium borate – Sodium borohydride – Sodium bromate – Sodium bromide – NaBr Sodium bromite – Sodium carbide – Sodium carbonate – Sodium chlorate – Sodium chloride – NaCl Sodium chlorite – Sodium cobaltinitrite – Sodium copper tetrachloride – Sodium cyanate – NaCNO Sodium cyanide – NaCN Sodium dichromate – Sodium dioxide – Sodium dithionite – Sodium ferrocyanide – Sodium fluoride – NaF Sodium fluorosilicate – Sodium formate – HCOONa Sodium hydride – NaH Sodium hydrogen carbonate (Sodium bicarbonate) – Sodium hydrosulfide – NaSH Sodium hydroxide – NaOH Sodium hypobromite – NaOBr Sodium hypochlorite – NaOCl Sodium hypoiodite – NaOI Sodium hypophosphite – Sodium iodate – Sodium iodide – NaI Sodium manganate – Sodium molybdate – Sodium monofluorophosphate (MFP) – Sodium nitrate – Sodium nitrite – Sodium nitroprusside – Sodium oxide – Sodium perborate – Sodium perbromate – Sodium percarbonate – Sodium perchlorate – Sodium periodate – Sodium permanganate – Sodium peroxide – Sodium peroxycarbonate – Sodium perrhenate – Sodium persulfate – Sodium phosphate; see trisodium phosphate – Sodium selenate – Sodium selenide – Sodium selenite – Sodium silicate – Sodium sulfate – Sodium sulfide – Sodium sulfite – Sodium tartrate – Sodium tellurite – Sodium tetrachloroaluminate – Sodium tetrafluoroborate – Sodium thioantimoniate – Sodium thiocyanate – NaSCN Sodium thiosulfate – Sodium tungstate – Sodium 
uranate – Sodium zincate – Trisodium phosphate – Sr Strontium bromide – Strontium carbonate – Strontium chloride – Strontium fluoride – Strontium hydroxide – Strontium iodide – Strontium nitrate – Strontium oxide – SrO Strontium titanate – Strontium bicarbonate – Strontium boride – Strontium bromide – Strontium carbide – Strontium carbonate – Strontium chloride – Strontium cyanamide – Strontium fluoride – Strontium fluorophosphate – Strontium gluconate – Strontium hydride – Strontium hydrogen phosphate – Strontium hydroxide – Strontium hypochlorite – Strontium iodide – Strontium molybdate – Strontium nitrate – Strontium oxalate – Strontium oxide – SrO Strontium peroxide – Strontium phosphate – Strontium silicate – Strontium sulfate – Strontium sulfide – SrS Strontium titanate – Strontium tungstate – Strontium zirconate – S Disulfur decafluoride – Hydrogen sulfide (sulfane) – Pyrosulfuric acid – Sulfamic acid – Sulfur dibromide – Sulfur dioxide – Sulfur hexafluoride – Sulfur tetrafluoride – Sulfuric acid – Sulfurous acid – Sulfuryl chloride – Tetrasulfur tetranitride – Persulfuric acid (Caro's acid) – T Ta Tantalum arsenide – TaAs Tantalum carbide – TaC Tantalum pentafluoride – Tantalum(V) oxide – Tc Technetium hexafluoride – Ammonium pertechnetate – Sodium pertechnetate – Te Ditellurium bromide – Telluric acid – Tellurium dioxide – Tellurium hexafluoride – Tellurium tetrabromide – Tellurium tetrachloride – Tellurium tetrafluoride – Tellurium tetraiodide – Tellurous acid – Beryllium telluride – BeTe Bismuth telluride – Cadmium telluride – CdTe Cadmium zinc telluride – Dimethyltelluride – Mercury Cadmium Telluride – Lead telluride – PbTe Mercury telluride – HgTe Mercury zinc telluride – Silver telluride – Tin telluride – SnTe Zinc telluride – ZnTe Teflic acid – Telluric acid – Sodium tellurite – Tellurium dioxide – Tellurium hexafluoride – Tellurium tetrafluoride – Tellurium tetrachloride – Tb Terbium(III) chloride – Terbium(III) bromide – Terbium(III) carbonate – 
Terbium(III) chloride – Terbium(III) fluoride – Terbium(III) iodide – Terbium(III) nitrate – Terbium(III) oxide – Terbium(III) phosphate – Terbium(III) sulfate – Terbium(III) sulfide – Tl Thallium(I) bromide – TlBr Thallium(I) carbonate – Thallium(I) fluoride – TlF Thallium(I) sulfate – Thallium(III) oxide – Thallium(III) sulfate – Thallium triiodide – Thallium antimonide – TlSb Thallium arsenide – TlAs Thallium(III) bromide – Thallium(III) chloride – Thallium(III) fluoride – Thallium(I) iodide – TlI Thallium(III) nitrate – Thallium(I) oxide – Thallium(III) oxide – Thallium phosphide – TlP Thallium(III) selenide – Thallium(III) sulfate – Thallium(III) sulfide – Trimethylthallium – Thallium(I) hydroxide – TlOH Thionyl chloride – Thionyl tetrafluoride – Thiophosgene – Thiophosphoryl chloride – Th Thorium(IV) nitrate – Thorium(IV) sulfate – Thorium dioxide – Thorium tetrafluoride – Tm Thulium(III) bromide – Thulium(III) chloride – Thulium(III) oxide – Sn Stannane – Tin(II) bromide – Tin(II) chloride (stannous chloride) – Tin(II) fluoride – Tin(II) hydroxide – Tin(II) iodide – Tin(II) oxide – SnO Tin(II) sulfate – Tin(II) sulfide – SnS Tin(IV) bromide – Tin(IV) chloride – Tin(IV) fluoride – Tin(IV) iodide – Tin(IV) oxide – Tin(IV) sulfide – Tin(IV) cyanide – Tin selenide – Tin telluride – SnTe Ti Hexafluorotitanic acid – Titanium(II) chloride – Titanium(II) oxide – TiO Titanium(II) sulfide – TiS Titanium(III) bromide – Titanium(III) chloride – Titanium(III) fluoride – Titanium(III) iodide – Titanium(III) oxide – Titanium(III) phosphide – TiP Titanium(IV) bromide (titanium tetrabromide) – Titanium(IV) carbide – TiC Titanium(IV) chloride (titanium tetrachloride) – Titanium(IV) hydride – Titanium(IV) iodide (titanium tetraiodide) – Titanium carbide – TiC Titanium diboride – Titanium dioxide (titanium(IV) oxide) – Titanium diselenide – Titanium disilicide – Titanium disulfide – Titanium nitrate – Titanium nitride – TiN Titanium perchlorate – Titanium silicon carbide
– Titanium tetrabromide – Titanium tetrafluoride – Titanium tetraiodide – Titanyl sulfate – W Tungsten(VI) chloride – Tungsten(VI) fluoride – Tungsten boride – Tungsten carbide – WC Tungstic acid – Tungsten hexacarbonyl – U U Triuranium octaoxide (pitchblende or yellowcake) – Uranium hexafluoride – Uranium pentafluoride – Uranium sulfate – Uranium tetrachloride – Uranium tetrafluoride – Uranium(III) chloride – Uranium(IV) chloride – Uranium(V) chloride – Uranium hexachloride – Uranium(IV) fluoride – Uranium pentafluoride – Uranium(VI) fluoride – Uranyl peroxide – Uranium dioxide – UO2 Uranyl carbonate – Uranyl chloride – Uranyl fluoride – Uranyl hydroxide – Uranyl nitrate – Uranyl sulfate – V Vanadium(II) chloride – Vanadium(II) oxide – VO Vanadium(III) bromide – Vanadium(III) chloride – Vanadium(III) fluoride – Vanadium(III) nitride – VN Vanadium(III) oxide – Vanadium(IV) chloride – Vanadium(IV) fluoride – Vanadium(IV) oxide – Vanadium(IV) sulfate – Vanadium(V) oxide – Vanadium carbide – VC Vanadium oxytrichloride (Vanadium(V) oxide trichloride) – Vanadium pentafluoride – Vanadium tetrachloride – Vanadium tetrafluoride – W Water – X Xe Perxenic acid – Xenon difluoride – Xenon hexafluoride – Xenon hexafluoroplatinate – Xenon tetrafluoride – Xenon tetroxide – Xenic acid – Y Yb Ytterbium(III) chloride – Ytterbium(III) oxide – Ytterbium(III) sulfate – Ytterbium(III) bromide – Ytterbium(III) carbonate – Ytterbium(III) chloride – Ytterbium(III) fluoride – Ytterbium(III) iodide – Ytterbium(III) nitrate – Ytterbium(III) oxide – Ytterbium(III) phosphate – Ytterbium(III) sulfate – Ytterbium(III) sulfide – Y Yttrium(III) antimonide – YSb Yttrium(III) arsenate – Yttrium(III) arsenide – YAs Yttrium(III) bromide – Yttrium(III) fluoride – Yttrium(III) oxide – Yttrium(III) nitrate – Yttrium(III) sulfide – Yttrium(III) sulfate – Yttrium aluminium garnet – Yttrium barium copper oxide – Yttrium cadmium – YCd Yttrium copper – YCu Yttrium gold – YAu Yttrium
iridium – YIr Yttrium iron garnet – Yttrium magnesium – YMg Yttrium phosphate – Yttrium phosphide – YP Yttrium rhodium – YRh Yttrium silver – YAg Yttrium zinc – YZn Z Zn Zinc arsenide – Zinc bromide – Zinc carbonate – Zinc chloride – Zinc cyanide – Zinc diphosphide – Zinc fluoride – Zinc iodide – Zinc nitrate – Zinc oxide – ZnO Zinc phosphide – Zinc pyrophosphate – Zinc selenate – Zinc selenide – ZnSe Zinc selenite – Zinc selenocyanate – Zinc sulfate – Zinc sulfide – ZnS Zinc sulfite – Zinc telluride – ZnTe Zinc thiocyanate – Zinc tungstate – Zr Zirconia hydrate – Zirconium boride – Zirconium carbide – ZrC Zirconium(IV) chloride – Zirconium(IV) oxide – Zirconium hydroxide – Zirconium orthosilicate – Zirconium nitride – ZrN Zirconium tetrafluoride – Zirconium tetrahydroxide – Zirconium tungstate – Zirconyl bromide – Zirconyl chloride – Zirconyl nitrate – Zirconyl sulfate – Zirconium dioxide – Zirconium nitride – ZrN Zirconium tetrachloride – Zirconium(IV) sulfide – Zirconium(IV) silicide – Zirconium(IV) silicate – Zirconium(IV) fluoride – Zirconium(IV) bromide – Zirconium(IV) iodide – Zirconium(IV) hydroxide – Schwartz's reagent – Zirconium propionate – Zirconium tungstate – Zirconium(II) hydride – Lead zirconate titanate – See also Dictionary of chemical formulas List of alchemical substances List of biomolecules List of compounds List of copper salts List of inorganic compounds named after people List of minerals List of organic compounds List of organic salts Named inorganic compounds Polyatomic ions References External links Inorganic Molecules made thinkable, an interactive visualisation showing inorganic compounds for an array of common metal and non-metal ions
List of inorganic compounds
[ "Chemistry" ]
12,225
[ "Lists of chemical compounds", "Inorganic compounds" ]
1,613,586
https://en.wikipedia.org/wiki/Pro-form
In linguistics, a pro-form is a type of function word or expression that stands in for (expresses the same content as) another word, phrase, clause or sentence where the meaning is recoverable from the context. They are used either to avoid repetitive expressions or in quantification (limiting the variables of a proposition). Pro-forms are divided into several categories, according to which part of speech they substitute: A pronoun substitutes a noun or a noun phrase, with or without a determiner: it, this. A prop-word: one, as in "the blue one" A pro-adjective substitutes an adjective or a phrase that functions as an adjective: so as in "It is less so than we had expected." A pro-adverb substitutes an adverb or a phrase that functions as an adverb: how or this way. A pro-verb substitutes a verb or a verb phrase: do, as in: "I will go to the party if you do". A pro-sentence substitutes an entire sentence or subsentence: Yes, or that as in "That is true". An interrogative pro-form is a pro-form that denotes the (unknown) item in question and may itself fall into any of the above categories. The rules governing allowable syntactic relations between certain pro-forms (notably personal and reflexive/reciprocal pronouns) and their antecedents have been studied in what is called binding theory. Table of correlatives Some 19th-century grammars of Latin, such as Raphael Kühner's 1844 grammar, organized non-personal pronouns (interrogative, demonstrative, indefinite/quantifier, relative) in a table of "correlative" pronouns due to their similarities in morphological derivation and their syntactic relationships (as correlative pairs) in that language. Later that century, L. L. Zamenhof, the inventor of Esperanto, made use of the concept to systematically create the pro-forms and determiners of Esperanto in a regular table of correlatives. The table of correlatives for English follows. Some languages may have more categories. See demonstrative.
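The table of correlatives itself did not survive extraction. As a rough sketch of what such a table contains, the standard English series (these forms are common linguistic knowledge, not taken from the missing table, so treat the exact cells as illustrative) can be laid out as a mapping from semantic category to pro-form series:

```python
# Hypothetical reconstruction of an English table of correlatives.
# Rows are semantic categories; columns are the pro-form series:
# (interrogative, demonstrative, indefinite, negative, universal).
correlatives = {
    "person": ("who",   "this one", "someone",   "no one",    "everyone"),
    "thing":  ("what",  "this",     "something", "nothing",   "everything"),
    "place":  ("where", "here",     "somewhere", "nowhere",   "everywhere"),
    "time":   ("when",  "then",     "sometime",  "never",     "always"),
    "manner": ("how",   "thus",     "somehow",   "in no way", "in every way"),
}

# In English the interrogative series doubles as the relative series
# ("who", "where", "when"...), one of the syntactic relationships the
# correlative layout was originally designed to display.
for category, (interrogative, *_rest) in correlatives.items():
    print(f"{category:>7}: interrogative = {interrogative}")
```

Zamenhof's Esperanto table is fully regular (each cell is predictable from its row and column), whereas the English cells above must simply be memorized.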
Some of these categories are morphologically regular and some are not, and regularity also varies from language to language. The following chart shows a comparison between English, French (irregular) and Japanese (regular): (Note that "daremo", "nanimo" and "dokomo" are universal quantifiers with positive verbs.) Some languages do not distinguish interrogative and indefinite pro-forms. In Mandarin, "Shéi yǒu wèntí?" means either "Who has a question?" or "Does anyone have a question?", depending on context. See also References External links SIL Glossary of Linguistic Terms: Pro-Adverb
Pro-form
[ "Technology" ]
600
[ "Parts of speech", "Components" ]
1,613,644
https://en.wikipedia.org/wiki/Interleukin%206
Interleukin 6 (IL-6) is an interleukin that acts as both a pro-inflammatory cytokine and an anti-inflammatory myokine. In humans, it is encoded by the IL6 gene. In addition, osteoblasts secrete IL-6 to stimulate osteoclast formation. Smooth muscle cells in the tunica media of many blood vessels also produce IL-6 as a pro-inflammatory cytokine. IL-6's role as an anti-inflammatory myokine is mediated through its inhibitory effects on TNF and IL-1 and its activation of IL-1ra and IL-10. There is some early evidence that IL-6 can be used as an inflammatory marker for severe COVID-19 infection with poor prognosis, in the context of the wider coronavirus pandemic. Function Immune system IL-6 is secreted by macrophages in response to specific microbial molecules, referred to as pathogen-associated molecular patterns (PAMPs). These PAMPs bind to an important group of detection molecules of the innate immune system, called pattern recognition receptors (PRRs), including Toll-like receptors (TLRs). These are present on the cell surface and intracellular compartments and induce intracellular signaling cascades that give rise to inflammatory cytokine production. IL-6 is an important mediator of fever and of the acute phase response. IL-6 is responsible for stimulating acute phase protein synthesis, as well as the production of neutrophils in the bone marrow. It supports the growth of B cells and is antagonistic to regulatory T cells. Metabolic It is capable of crossing the blood–brain barrier and initiating synthesis of PGE2 in the hypothalamus, thereby changing the body's temperature setpoint. In muscle and fatty tissue, IL-6 stimulates energy mobilization that leads to increased body temperature. At 4°C, both the oxygen consumption and core temperature were lower in IL-6-/- compared with wild-type mice, suggesting a lower cold-induced thermogenesis in IL-6-/- mice. In the absence of inflammation 10–35% of circulating IL-6 may come from adipose tissue. 
IL-6 is produced by adipocytes and is thought to be a reason why obese individuals have higher endogenous levels of CRP. IL-6 may exert a tonic suppression of body fat in mature mice, given that IL-6 gene knockout causes mature onset obesity. Moreover, IL-6 can suppress body fat mass via effects at the level of the CNS. The antiobesity effect of IL-6 in rodents is exerted at the level of the brain, presumably the hypothalamus and the hindbrain. On the other hand, enhanced central IL-6 trans-signaling may improve energy and glucose homeostasis in obesity. Trans-signaling means that a soluble form of IL-6R (sIL-6R), comprising the extracellular portion of the receptor, can bind IL-6 with an affinity similar to that of the membrane-bound IL-6R. The complex of IL-6 and sIL-6R can bind to gp130 on cells that do not express the IL-6R and are therefore unresponsive to IL-6 alone. Studies in experimental animals indicate that IL-6 in the CNS partly mediates the suppression of food intake and body weight exerted by glucagon-like peptide-1 (GLP-1) receptor stimulation. Outside the CNS, it seems that IL-6 stimulates the production of GLP-1 in the endocrine pancreas and the gut. Amylin is another substance that can reduce body weight, and that may interact with IL-6. Amylin-induced IL-6 production in the ventromedial hypothalamus (VMH) is a possible mechanism by which amylin treatment could interact with VMH leptin signaling to increase its effect on weight loss. Interleukin 6 in the liver is assumed to activate expression of mINDY, the homologue of the human longevity gene, by binding to its IL-6 receptor; this is associated with activation of the transcription factor STAT3 (which binds to its site in the mIndy promoter) and thereby with increased citrate uptake and hepatic lipogenesis. Central nervous system Intranasally administered IL-6 has been shown to improve sleep-associated consolidation of emotional memories.
There are indications of interactions between GLP-1 and IL-6 in several parts of the brain. One example is the parabrachial nuclei of the pons, where GLP-1 increases IL-6 levels and where IL-6 exerts a marked anti-obesity effect. Role as myokine IL-6 is also considered a myokine, a cytokine produced from muscle, which is elevated in response to muscle contraction. It is significantly elevated with exercise, and precedes the appearance of other cytokines in the circulation. During exercise, it is thought to act in a hormone-like manner to mobilize extracellular substrates and/or augment substrate delivery. As in humans, there seems to be an increase in IL-6 expression in working muscle and plasma IL-6 concentration during exercise in rodents. Studies in mice with IL-6 gene knockout indicate that lack of IL-6 affects exercise function. It has been shown that the reduction of abdominal obesity by exercise in human adults can be reversed by the IL-6 receptor blocking antibody tocilizumab. Together with the findings that IL-6 prevents obesity, stimulates lipolysis and is released from skeletal muscle during exercise, the tocilizumab finding indicates that IL-6 is required for exercise to reduce visceral adipose tissue mass. Bone may be another organ affected by exercise-induced IL-6, given that muscle-derived interleukin 6 has been reported to increase exercise capacity by signaling in osteoblasts. IL-6 has extensive anti-inflammatory functions in its role as a myokine. IL-6 was the first myokine that was found to be secreted into the blood stream in response to muscle contractions. Aerobic exercise provokes a systemic cytokine response, including, for example, IL-6, IL-1 receptor antagonist (IL-1ra), and IL-10. IL-6 was serendipitously discovered as a myokine because of the observation that it increased in an exponential fashion proportional to the length of exercise and the amount of muscle mass engaged in the exercise.
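The qualitative kinetics described here (an exponential rise proportional to exercise duration, peaking at the end of the bout, up to roughly 100-fold over baseline) can be sketched as a toy model. The rate constant and functional form below are invented for illustration; only the qualitative shape comes from the text:

```python
import math

def il6_fold_increase(t_min: float, duration_min: float, k: float = 0.05) -> float:
    """Toy model of plasma IL-6 during a bout of exercise.

    Exponential rise while the exercise lasts, capped at the ~100-fold
    maximum reported for extreme cases. `k` is an invented rate
    constant, not a measured value.
    """
    t = min(t_min, duration_min)        # the rise stops when exercise ends
    return min(math.exp(k * t), 100.0)  # fold-increase over baseline, capped

# Longer bouts produce a larger peak, and the peak is reached at the end
# of the exercise, matching the qualitative description above.
assert il6_fold_increase(30, 30) < il6_fold_increase(60, 60)
assert il6_fold_increase(90, 60) == il6_fold_increase(60, 60)
```

A real model would also have to encode mode and intensity, since the text notes that mode, intensity, and duration jointly determine the magnitude of the response.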
It has been consistently demonstrated that the plasma concentration of IL-6 increases during muscular exercise. This increase is followed by the appearance of IL-1ra and the anti-inflammatory cytokine IL-10. In general, the cytokine response to exercise and sepsis differs with regard to TNF-α. Thus, the cytokine response to exercise is not preceded by an increase in plasma-TNF-α. Following exercise, the basal plasma IL-6 concentration may increase up to 100-fold, but less dramatic increases are more frequent. The exercise-induced increase of plasma IL-6 occurs in an exponential manner and the peak IL-6 level is reached at the end of the exercise or shortly thereafter. It is the combination of mode, intensity, and duration of the exercise that determines the magnitude of the exercise-induced increase of plasma IL-6. IL-6 had previously been classified as a proinflammatory cytokine. Therefore, it was first thought that the exercise-induced IL-6 response was related to muscle damage. However, it has become evident that eccentric exercise is not associated with a larger increase in plasma IL-6 than exercise involving concentric "nondamaging" muscle contractions. This finding clearly demonstrates that muscle damage is not required to provoke an increase in plasma IL-6 during exercise. As a matter of fact, eccentric exercise may result in a delayed peak and a much slower decrease of plasma IL-6 during recovery. Recent work has shown that both upstream and downstream signalling pathways for IL-6 differ markedly between myocytes and macrophages. It appears that unlike IL-6 signalling in macrophages, which is dependent upon activation of the NFκB signalling pathway, intramuscular IL-6 expression is regulated by a network of signalling cascades, including the Ca2+/NFAT and glycogen/p38 MAPK pathways.
Thus, when IL-6 is signalling in monocytes or macrophages, it creates a pro-inflammatory response, whereas IL-6 activation and signalling in muscle is totally independent of a preceding TNF-response or NFκB activation, and is anti-inflammatory. IL-6, among an increasing number of other recently identified myokines, thus remains an important topic in myokine research. It appears in muscle tissue and in the circulation during exercise at levels up to one hundred times basal rates, as noted, and is seen as having a beneficial impact on health and bodily functioning when elevated in response to physical exercise. Receptor IL-6 signals through a cell-surface type I cytokine receptor complex consisting of the ligand-binding IL-6Rα chain (CD126), and the signal-transducing component gp130 (also called CD130). CD130 is the common signal transducer for several cytokines including leukemia inhibitory factor (LIF), ciliary neurotrophic factor, oncostatin M, IL-11 and cardiotrophin-1, and is almost ubiquitously expressed in most tissues. In contrast, the expression of CD126 is restricted to certain tissues. As IL-6 interacts with its receptor, it triggers the gp130 and IL-6R proteins to form a complex, thus activating the receptor. These complexes bring together the intracellular regions of gp130 to initiate a signal transduction cascade through certain transcription factors, Janus kinases (JAKs) and Signal Transducers and Activators of Transcription (STATs). IL-6 is probably the best-studied of the cytokines that use gp130, also known as IL-6 signal transducer (IL6ST), in their signalling complexes. Other cytokines that signal through receptors containing gp130 are Interleukin 11 (IL-11), Interleukin 27 (IL-27), ciliary neurotrophic factor (CNTF), cardiotrophin-1 (CT-1), cardiotrophin-like cytokine (CLC), leukemia inhibitory factor (LIF), oncostatin M (OSM), Kaposi's sarcoma-associated herpesvirus interleukin 6-like protein (KSHV-IL6).
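The gp130 family membership just listed can be collected into a small lookup, using only the cytokines and abbreviations named in the text (the helper function is an illustrative convenience, not part of any established API):

```python
# Cytokines reported above to signal through receptor complexes
# containing gp130 (IL6ST), keyed by their abbreviations.
GP130_CYTOKINES = {
    "IL-6":     "interleukin 6",
    "IL-11":    "interleukin 11",
    "IL-27":    "interleukin 27",
    "CNTF":     "ciliary neurotrophic factor",
    "CT-1":     "cardiotrophin-1",
    "CLC":      "cardiotrophin-like cytokine",
    "LIF":      "leukemia inhibitory factor",
    "OSM":      "oncostatin M",
    "KSHV-IL6": "Kaposi's sarcoma-associated herpesvirus IL-6-like protein",
}

def uses_gp130(abbrev: str) -> bool:
    """True if the cytokine is one of the gp130-utilising family above."""
    return abbrev in GP130_CYTOKINES

# Membership check: LIF signals through gp130; TNF does not appear in
# the family listed in the text.
assert uses_gp130("LIF") and not uses_gp130("TNF")
```

The shared use of gp130 (nearly ubiquitous) versus the restricted expression of CD126 is what makes tissue specificity depend on the α-chain rather than on the signal transducer.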
These cytokines are commonly referred to as the IL-6-like or gp130-utilising cytokines. In addition to the membrane-bound receptor, a soluble form of IL-6R (sIL-6R) has been purified from human serum and urine. Many neuronal cells are unresponsive to stimulation by IL-6 alone, but differentiation and survival of neuronal cells can be mediated through the action of sIL-6R. The sIL-6R/IL-6 complex can stimulate neurite outgrowth and promote survival of neurons and, hence, may be important in nerve regeneration through remyelination. Interactions Interleukin-6 has been shown to interact with interleukin-6 receptor, glycoprotein 130, and Galectin-3. There is considerable functional overlap and interaction between Substance P (SP), the natural ligand for the neurokinin type 1 receptor (NK1R, a mediator of immunomodulatory activity) and IL-6. Role in disease IL-6 stimulates the inflammatory and auto-immune processes in many diseases such as multiple sclerosis, neuromyelitis optica spectrum disorder (NMOSD), diabetes, atherosclerosis, gastric cancer, depression, Alzheimer's disease, systemic lupus erythematosus, multiple myeloma, prostate cancer, Behçet's disease, rheumatoid arthritis, and intracerebral hemorrhage. Hence, there is an interest in developing anti-IL-6 agents as therapy against many of these diseases. The first such is tocilizumab, which has been approved for rheumatoid arthritis, Castleman's disease and systemic juvenile idiopathic arthritis. Others are in clinical trials. It has been observed that genetic inactivation of ZCCHC6 suppresses IL-6 expression and reduces the severity of experimental osteoarthritis in mice. Some plant-derived small molecules, such as butein, have been reported to inhibit IL-6 expression in IL-1β-stimulated human chondrocytes. Liver diseases Since IL-6 is a well-known pleiotropic molecule, it plays a dual role in the pathogenesis of liver diseases.
While it is necessary for promoting liver regeneration, IL-6 is also a highly recognized marker of systemic inflammation and its association with mortality in liver diseases has been reported by multiple studies. In patients with severe alcohol-associated hepatitis, IL-6 showed the most robust elevation among inflammatory cytokines compared to healthy controls with a further increase in non-survivors. In these patients, IL-6 was a predictor of short-term (28- and 90-day) mortality. Rheumatoid arthritis The first FDA approved anti-IL-6 treatment was for rheumatoid arthritis. Cancer Anti-IL-6 therapy was initially developed for treatment of autoimmune diseases, but due to the role of IL-6 in chronic inflammation, IL-6 blockade was also evaluated for cancer treatment. IL-6 was seen to have roles in tumor microenvironment regulation, production of breast cancer stem cell-like cells, metastasis through down-regulation of E-cadherin, and alteration of DNA methylation in oral cancer. Advanced/metastatic cancer patients have higher levels of IL-6 in their blood. One example of this is pancreatic cancer, with noted elevation of IL-6 present in patients correlating with poor survival rates. Diseases Enterovirus 71 High IL-6 levels are associated with the development of encephalitis in children and immunodeficient mouse models infected with Enterovirus 71; this highly contagious virus normally causes a milder illness called Hand, foot, and mouth disease but can cause life-threatening encephalitis in some cases. EV71 patients with a certain gene polymorphism in IL-6 also appear to be more susceptible to developing encephalitis. Epigenetic modifications IL-6 has been shown to lead to several neurological diseases through its impact on epigenetic modification within the brain. IL-6 activates the Phosphoinositide 3-kinase (PI3K) pathway, and a downstream target of this pathway is the protein kinase B (PKB) (Hodge et al., 2007). 
IL-6-activated PKB can phosphorylate the nuclear localization signal on DNA methyltransferase-1 (DNMT1). This phosphorylation causes movement of DNMT1 to the nucleus, where it can be transcribed. DNMT1 recruits other DNMTs, including DNMT3A and DNMT3B, which, as a complex, recruit HDAC1. This complex adds methyl groups to CpG islands on gene promoters, repressing the chromatin structure surrounding the DNA sequence and inhibiting transcriptional machinery from accessing the gene to induce transcription. Increased IL-6, therefore, can hypermethylate DNA sequences and subsequently decrease gene expression through its effects on DNMT1 expression. Schizophrenia The induction of epigenetic modification by IL-6 has been proposed as a mechanism in the pathology of schizophrenia through the hypermethylation and repression of the GAD67 promoter. This hypermethylation may potentially lead to the decreased GAD67 levels seen in the brains of people with schizophrenia. GAD67 may be involved in the pathology of schizophrenia through its effect on GABA levels and on neural oscillations. Neural oscillations occur when inhibitory GABAergic neurons fire synchronously and cause inhibition of a multitude of target excitatory neurons at the same time, leading to a cycle of inhibition and disinhibition. These neural oscillations are impaired in schizophrenia, and these alterations may be responsible for both positive and negative symptoms of schizophrenia. Aging IL-6 is commonly found in the senescence-associated secretory phenotype (SASP) factors secreted by senescent cells (a toxic cell-type that increases with aging). Cancer (a disease that increases with age) invasiveness is promoted primarily through the actions of the SASP factors metalloproteinase, chemokine, IL-6, and interleukin 8 (IL-8). IL-6 and IL-8 are the most conserved and robust features of SASP. Myelodysplastic Syndromes IL-6 receptor was found to be upregulated in high-risk MDS patients.
Inhibition of the IL-6 signaling pathway can significantly reduce the clonogenicity of MDS hematopoietic stem and progenitor cells (HSPCs), while having no detectable effect on normal HSPCs. Depression and major depressive disorder The epigenetic effects of IL-6 have also been implicated in the pathology of depression. The effects of IL-6 on depression are mediated through the repression of brain-derived neurotrophic factor (BDNF) expression in the brain; DNMT1 hypermethylates the BDNF promoter and reduces BDNF levels. Altered BDNF function has been implicated in depression, which is likely due to epigenetic modification following IL-6 upregulation. BDNF is a neurotrophic factor implicated in spine formation, density, and morphology on neurons. Downregulation of BDNF, therefore, may cause decreased connectivity in the brain. Depression is marked by altered connectivity, in particular between the anterior cingulate cortex and several other limbic areas, such as the hippocampus. The anterior cingulate cortex is responsible for detecting incongruences between expectation and perceived experience. Altered connectivity of the anterior cingulate cortex in depression, therefore, may cause altered emotions following certain experiences, leading to depressive reactions. This altered connectivity is mediated by IL-6 and its effect on epigenetic regulation of BDNF. Additional preclinical and clinical data suggest that Substance P (SP) and IL-6 may act in concert to promote major depression. SP, a hybrid neurotransmitter-cytokine, is co-transmitted with BDNF through paleo-spinothalamic circuitry from the periphery with collaterals into key areas of the limbic system. However, both IL-6 and SP mitigate expression of BDNF in brain regions associated with negative affect and memory.
SP and IL6 both relax tight junctions of the blood brain barrier, such that effects seen in fMRI experiments with these molecules may be a bidirectional mix of neuronal, glial, capillary, synaptic, paracrine, or endocrine-like effects. At the cellular level, SP is noted to increase expression of interleukin-6 (IL-6) through PI-3K, p42/44 and p38 MAP kinase pathways. Data suggest that nuclear translocation of NF-κB regulates IL-6 overexpression in SP-stimulated cells. This is of key interest as: 1) a meta-analysis indicates an association of major depressive disorder, C-reactive protein and IL6 plasma concentrations, 2) NK1R antagonists [five molecules] studied by 3 independent groups in over 2000 patients from 1998 to 2013 validate the mechanism as dose-related, fully effective antidepressant, with a unique safety profile. (see Summary of NK1RAs in Major Depression), 3) the preliminary observation that plasma concentrations of IL6 are elevated in depressed patients with cancer, and 4) selective NK1RAs may eliminate endogenous SP stress-induced augmentation of IL-6 secretion pre-clinically. These and many other reports suggest that a clinical study of a neutralizing IL-6 biological or drug based antagonist is likely warranted in patients with major depressive disorder, with or without co-morbid chronic inflammatory based illnesses; that the combination of NK1RAs and IL6 blockers may represent a new, potentially biomarkable approach to major depression, and possibly bipolar disorder. The IL-6 antibody sirukumab underwent clinical trials for adjunctive treatment of major depressive disorder in 2015–2018, but this research has been discontinued. Asthma Obesity is a known risk factor in the development of severe asthma. Recent data suggests that the inflammation associated with obesity, potentially mediated by IL-6, plays a role in causing poor lung function and increased risk for developing asthma exacerbations. 
Protein superfamily Interleukin 6 is the main member of the IL-6 superfamily, which also includes G-CSF, IL23A, and CLCF1. A viral version of IL6 is found in Kaposi's sarcoma-associated herpesvirus. See also Ziltivekimab, a fully human monoclonal antibody against interleukin 6. References External links IL-6 expression in various cancers Interleukins Osaka University research Neurotrophic factors
Interleukin 6
[ "Chemistry" ]
4,592
[ "Neurotrophic factors", "Neurochemistry", "Signal transduction" ]
1,613,869
https://en.wikipedia.org/wiki/Plastic%20shopping%20bag
In use by consumers worldwide since the 1960s, shopping bags made from various kinds of plastic are variously called plastic shopping bags, carrier bags, or plastic grocery bags. They are sometimes referred to as single-use bags, referring to carrying items from a store to a home, although it is rare for bags to be worn out after a single use, and in the past some retailers (like Tesco and Sainsbury's in the UK) incentivised customers to reuse 'single use' bags by offering loyalty points to those doing so. Even after they are no longer used for shopping, reuse of these bags for storage or trash is common, and modern plastic shopping bags are increasingly recyclable or compostable (at the Co-op, for example). In recent decades, numerous countries have introduced legislation restricting the provision of plastic bags, in a bid to reduce littering and plastic pollution. Some reusable shopping bags are made of plastic film, fibers, or fabric. History American and European patent applications relating to the production of plastic shopping bags can be found dating back to the early 1950s, but these refer to composite constructions with handles fixed to the bag in a secondary manufacturing process. The modern lightweight shopping bag is the invention of Swedish engineer Sten Gustaf Thulin. In the early 1960s, Thulin developed a method of forming a simple one-piece bag by folding, welding and die-cutting a flat tube of plastic for the packaging company Celloplast of Norrköping, Sweden. Thulin's design produced a simple, strong bag with a high load-carrying capacity, and was patented worldwide by Celloplast in 1965. As his son Raoul later recalled, Sten believed that durable plastic bags would be reused many times rather than discarded after one use, and could replace paper bags, whose production requires felling trees. Celloplast was a well-established producer of cellulose film and a pioneer in plastics processing.
The company's patent position gave it a virtual monopoly on plastic shopping bag production, and the company set up manufacturing plants across Europe and in the US. However, other companies saw the attraction of the bag too, and the US petrochemicals group Mobil overturned Celloplast's US patent in 1977. The Dixie Bag Company of College Park, Georgia was one of the first companies to exploit this new opportunity in the 1980s, along with similar firms such as Houston Poly Bag and Capitol Poly. Kroger, a Cincinnati-based grocery chain, began to replace its paper shopping bags with plastic bags in 1982, and was soon followed by its rival, Safeway. Without its plastic bag monopoly, Celloplast's business went into decline, and the company was split up during the 1990s. The Norrköping site remains a plastics production site, however, and is now the headquarters of Miljösäck, a manufacturer of waste sacks made from recycled polyethylene. From the mid-1980s onwards, plastic bags became common for carrying daily groceries from the store to vehicles and homes throughout the developed world. As plastic bags increasingly replaced paper bags, and as other plastic materials and products replaced glass, metal, stone, timber and other materials, a packaging materials war erupted, with plastic shopping bags at the center of highly publicized disputes. In 1992, Sonoco Products Company of Hartsville, SC patented the "self-opening polyethylene bag stack." The main innovation of this redesign is that removing a bag from the rack opens the next bag in the stack, via a minimal adhesive placed between the bags on a tab at the center-top. The design team was headed by Wade D. Fletcher and Harry Wilfong. This design and later variations upon it are commonplace throughout modern grocery stores, as they are space-efficient and customer-friendly. 
Production Although few peer-reviewed studies or government surveys have provided estimates for global plastic bag use, environmental activists estimate that between 500 billion and 1 trillion plastic bags are used each year worldwide. In 2009, the United States International Trade Commission reported that 102 billion plastic bags are used annually in the United States alone. Manufacture and composition Traditional plastic bags are usually made from polyethylene, which consists of long chains of ethylene monomers. Ethylene is derived from natural gas and petroleum. The polyethylene used in most plastic shopping bags is either low-density (resin identification code 4) or, more often, high-density (resin identification code 2). Color concentrates and other additives are often used to add tint to the plastic. Plastic shopping bags are commonly manufactured by blown film extrusion. Biodegradable materials Some modern bags are made of vegetable-based bioplastics, which can decay organically and prevent a build-up of toxic plastic bags in landfills and the natural environment. Bags can also be made from degradable polyethylene film or from polylactic acid (PLA), a biodegradable polymer derived from lactic acid. However, most degradable bags do not readily decompose in a sealed landfill, and represent a possible contaminant to plastic recycling operations. In general, biodegradable plastic bags need to be kept separate from conventional plastic recycling systems. Biodegradable plastics are plastics that are decomposed by the action of living organisms, usually bacteria. Two basic classes of biodegradable plastics exist: bioplastics, whose components are derived from renewable materials, many of which are biodegradable; and plastics made from petrochemicals containing biodegradable additives which enhance biodegradation or photodegradation. Environmental concerns Because plastic bags are so durable, they are a particular concern for the environment. 
They do not break down easily and as a result are very harmful to wildlife. Each year millions of discarded plastic shopping bags end up as plastic waste litter in the environment when improperly disposed of. The same properties that have made plastic bags so commercially successful and ubiquitous—namely their low weight and resistance to degradation—have also contributed to their proliferation in the environment. Due to their durability, plastic bags can take centuries to decompose. According to The Outline, it can take between 500 and 1,000 years for a plastic shopping bag to break down, while the average use lifespan of a bag is approximately 12 minutes. On land, plastic bags are one of the most prevalent types of litter in inhabited areas. Large buildups of plastic bags can clog drainage systems and contribute to flooding, as occurred in Bangladesh in 1988 and 1998 and almost annually in Manila. Littering is often a serious problem in developing countries, where trash collection infrastructure is less developed than in wealthier nations. Sharma, Moser, Vermillion, Doll, and Rajagopalan (2014) noted that in 2009 only 13% of the one trillion single-use plastic bags produced were recycled; the rest were thrown away, ending up in landfills or, because they are so lightweight, blown into the environment. The number of plastic grocery bags disposed of in the U.S. alone is difficult to comprehend, which is why it is important that solutions to this growing problem are considered, weighed and measured. Phasing out plastic bags is one viable option; however, many argue that this puts a strain on businesses and makes it more difficult for customers to take goods home. 
There are alternatives, such as cloth grocery bags, so that those who do not wish to use plastic reusable bags can still have a bag that can be used many times over; however, government studies have found that cloth bags have a high carbon footprint. At the same time, many states have used legislation to prevent the banning of plastic bags. Plastic bags were found to constitute a significant portion of the floating marine debris in the waters around southern Chile in a study conducted between 2002 and 2005. Although plastic bags persist in the environment, several government studies have found them to be an environmentally efficient carryout bag option. According to Recyc-Québec, a Canadian government agency, "The conventional plastic bag has several environmental and economic advantages. Thin and light, its production requires little material and energy. It also avoids the production and purchase of garbage/bin liner bags since it benefits from a high reuse rate when reused for this purpose (77.7%)." Government studies from Denmark and the United Kingdom, as well as a study from Clemson University, came to similar conclusions. A 2022 global survey found that 75% of people want single-use plastics banned; the research concluded that "the percentage of people calling for bans is up from 71% since 2019, while those who said they favoured products with less plastic packaging rose to 82% from 75%, according to the IPSOS poll of more than 20,000 people across 28 countries." Reduction, reuse and recycling Plastic shopping bags are in most cases not accepted by standard curbside recycling programs; though their composition is often identical to other accepted plastics, they pose problems for the single-stream recycling process, as most of the sorting equipment is designed for rigid plastics such as bottles, so plastic bags often end up clogging wheels or belts, or being mistaken for paper and contaminating the pulp produced later in the stream. 
Plastic bags are 100% recyclable, but they need to be taken to a location that recycles plastic film, usually a grocery store or major retail chain. Some large store chains, such as Whole Foods in the U.S. and IKEA in the U.S. and the U.K., have banned plastic shopping bags. Heavy-duty plastic shopping bags are suitable for reuse as reusable shopping bags. Lighter weight bags are often reused as trash bags or to pick up pet feces. All types of plastic shopping bag can be recycled into new bags where effective collection schemes exist. By the mid-2000s, the expansion of recycling infrastructure in the United States yielded a 7% annual rate of plastic bag recycling. This corresponded to more than of bags and plastic film being recycled in 2007 alone. Each ton of recycled plastic bags saves the energy equivalent of 11 barrels of oil, although most bags are produced from natural-gas-derived stock. In light of a 2002 Australian study showing that more than 60% of bags are reused as bin liners and for other purposes, the 7% recycling rate accounts for 17.5% of the plastic bags available for recycling. According to the UK's Environment Agency, 76% of British carrier bags are reused. A survey by the American Plastics Council found that 90% of Americans answer yes to the question "Do you or does anyone in your household ever reuse plastic shopping bags?" The UK Environment Agency published a review of supermarket carrier bags comparing the energy usage of the current styles of bag. Bag legislation As of August 2018, over 160 countries, regions, and cities had enacted legislation to ban or put a fee on plastic bags, with the aim of reducing the overall use of disposable plastic bags. Outright bans have been introduced in some countries, notably China, which banned very thin plastic bags nationwide in 2008. Several other countries impose a tax at the point of sale. 
See also Photodegradation, the process through which chemicals decompose when exposed to light References Further reading Celloplast 1965 US Patent: Copy of US Patent 5669504 Scheirs, J. Polymer Recycling: Science, Technology and Applications, 1998, Selke, Susan. Packaging and the Environment, 1994, Selke, Susan. Plastics Packaging, 2004, Stillwell, E. J. Packaging for the Environment, A. D. Little, 1991, External links Plastics Swedish inventions Shopping bags Mass production Disposable products
Plastic shopping bag
[ "Physics" ]
2,437
[ "Amorphous solids", "Unsolved problems in physics", "Plastics" ]
1,613,896
https://en.wikipedia.org/wiki/National%20Design%20Awards
The American National Design Awards, founded in 2000, are funded and awarded by Cooper-Hewitt, Smithsonian Design Museum. There are seven official design categories, and three additional awards. Supplemental awards can be given at the discretion of the jury or institution. The seven official design categories are: Architecture Design Communications Design Fashion Design (created in 2003) Interior Design (created in 2005) Interaction Design (created for 2009) Landscape Design Product Design The three additional awards categories are: Lifetime Achievement Design Patron (created in 2001) Design Mind (created in 2005) The supplemental categories include: People's Design Award (created in 2006) Special Commendation (Awarded in 2008) Special Jury Commendation (created in 2005, but omitted in 2008) American Original (Awarded in 2000 and 2002 only) Selection criteria The selection criteria for all of the awards are excellence, innovation, and enhancement of the quality of life. Individual candidates must be citizens or long-term residents of the United States and must have been practicing design for at least 7 years. Corporations and institutions must have their headquarters in the United States. Honorees are selected for a body of realized work, not for any one specific project. Candidates are proposed by an official Nominating Committee and are invited to submit materials for a jury's review. Submissions consist of resumes, portfolios, publications by and about the candidates, and professional-quality audio-visual samples. Jury The jurors are chosen by the museum based on their prominence and expertise in the design world. Once selected, jurors are briefed on the Museum's mission and the criteria for the Awards. They are asked to base their decisions on the core criteria: excellence, innovation, and contribution to the quality of life. Museum staff members do not enter into the selection process. The jury meets over a two-day period to thoroughly review every submission. 
The submissions are assessed in terms of the work's relationship to and impact on contemporary life. Special emphasis is placed on the extent to which the nominee's designs and achievements have benefited the general public. Purpose The National Design Awards program highlights achievements in various design disciplines, emphasizing their role in addressing societal challenges and shaping the built and natural environment. Established to foster a broader appreciation of design, the program seeks to educate the public about the importance of design and encourage innovation and excellence. In addition to the Awards ceremony and gala, the program includes an annual series of educational initiatives organized by the Cooper-Hewitt, Smithsonian Design Museum's Education Department. These initiatives include lectures, roundtables, workshops, and fairs that showcase the work and perspectives of the award recipients, aiming to inspire future generations and promote a deeper understanding of design's impact on society. People's Design Award In 2006, the first People's Design Award was created in order to give the general public a chance to nominate and vote for their favorite design. Individuals can nominate and vote for their favorite designers via the official website. Recipients External links National Design Awards Gallery Cooper-Hewitt, National Design Museum People's Design Award Smithsonian Institution References Design awards Design Awards, National Awards established in 2000 Design Awards, National 2000 establishments in the United States
National Design Awards
[ "Engineering" ]
632
[ "Design", "Design awards" ]
1,613,923
https://en.wikipedia.org/wiki/Quality%20of%20results
Quality of Results (QoR) is a term used in evaluating technological processes. It is generally represented as a vector of components, with the special case of a one-dimensional value serving as a synthetic measure. History The term was coined by the electronic design automation (EDA) industry in the late 1980s. QoR was meant to be an indicator of the performance of integrated circuits (chips), and initially measured the area and speed of a chip. As the industry evolved, new chip parameters were considered for coverage by the QoR, illustrating new areas of focus for chip designers (for example power dissipation, power efficiency, routing overhead, etc.). Because of the broad scope of quality assessment, QoR eventually evolved into a generic vector representation comprising a number of different values, where the meaning of each vector value is explicitly specified in the QoR analysis document. Currently the term is gaining popularity in other sectors of technology, with each sector using its own appropriate components. Current trends in EDA Originally, the QoR was used to specify absolute values such as chip area, power dissipation, speed, etc. (for example, a QoR could be specified as a {100 MHz, 1 W, 1 mm2} vector), and could only be used for comparing the different achievements of a single design specification. The current trend among designers is to include normalized values in the QoR vector, such that they will remain meaningful for a longer period of time (as technologies change), and/or across broad classes of design. For example, one often uses – as a QoR component – a number representing the ratio between the area required by a combinational logic block and the area required by a simple logic gate, this number often being referred to as the "relative density of combinational logic". 
In this case, a relative density of five will generally be accepted as a good result for this component, while a relative density of fifty will indicate severe design problems (in routability, the technology being used, etc.) which should be investigated and addressed. Note: A new term, "Quality of Silicon" (QoS), is being promoted by the EDA industry in an attempt to measure the performance of backend EDA tools in isolation from the human designer's own performance in the frontend design stage. It is claimed that for historical reasons QoR is, and should remain, a measure of frontend design performance, while QoS should be reserved for analysing the performance of the backend-related flow. However, with front-end designers being increasingly concerned with and involved in various backend stages of the design, a large number of QoS parameters are also being included in QoR analysis vectors. References Integrated circuits Units of quality
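The vector-plus-normalized-component idea described above can be sketched in a few lines of Python. All field names and numbers here are hypothetical illustrations, not drawn from any real chip or EDA tool:

```python
# A minimal sketch of a QoR vector with three absolute components and one
# normalized component, the "relative density of combinational logic".
from dataclasses import dataclass

@dataclass
class QoR:
    clock_mhz: float         # absolute: achieved clock speed
    power_w: float           # absolute: power dissipation
    area_mm2: float          # absolute: chip area
    relative_density: float  # normalized: block area / simple gate area

def relative_density(block_area_um2: float, gate_area_um2: float) -> float:
    """Ratio of combinational-block area to simple-gate area."""
    return block_area_um2 / gate_area_um2

qor = QoR(clock_mhz=100.0, power_w=1.0, area_mm2=1.0,
          relative_density=relative_density(25.0, 5.0))
print(qor.relative_density)  # 5.0 -- a value around five reads as healthy here
```

Because the last component is a ratio, it stays comparable as process technologies shrink, whereas the absolute components only make sense within a single design specification.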
Quality of results
[ "Mathematics", "Technology", "Engineering" ]
571
[ "Units of measurement", "Computer engineering", "Quantity", "Units of quality", "Integrated circuits" ]
1,614,044
https://en.wikipedia.org/wiki/Pennington%20clamp
A Pennington clamp, also known as a Duval clamp, is a surgical clamp with a triangular eyelet. It is used for grasping tissue, particularly during intestinal and rectal operations, and in some OB/GYN procedures, particularly caesarean sections. Under the name 'Duval clamp' they are occasionally used much like a Foerster clamp to atraumatically grasp lung tissue. The clamp is named after David Geoffrey Pennington, an Australian surgeon and a pioneer of microsurgery. Non-medical uses It is commonly used in body piercing to hold the skin in place and guide the needle through it. Variants In addition to the shape of the gripping head, a distinction is also made between forceps with open and closed jaws. Open forceps can be removed directly after the stitch without having to shorten the intravenous cannula or piercing needle with scissors beforehand, but offer a slightly less secure hold than closed forceps. The clamps usually have small hooks on the handle side that interlock when closed and can therefore hold the clamp closed. Pennington clamp A Pennington clamp, also known as a Duval clamp, has a gripping head in the shape of a triangle. The end is straight. The forceps can therefore be placed flat against the body part to be pierced and are therefore particularly suitable for surface piercings, but are also frequently used for gripping free-standing body parts, for example when piercing a lobe piercing, a lip frenulum piercing or various intimate piercings. See also Foerster clamp Instruments used in general surgery References Medical clamps Body piercing Surgical instruments Body piercing process Medical devices Australian inventions Medical equipment stubs Medical equipment
Pennington clamp
[ "Biology" ]
351
[ "Medical devices", "Medical equipment", "Medical technology" ]
1,614,121
https://en.wikipedia.org/wiki/Barotropic%20fluid
In fluid dynamics, a barotropic fluid is a fluid whose density is a function of pressure only. The barotropic fluid is a useful model of fluid behavior in a wide variety of scientific fields, from meteorology to astrophysics. The density of most liquids is nearly constant (isopycnic), so their densities vary only weakly with pressure and temperature. Water, whose density varies only a few percent with temperature and salinity, may be approximated as barotropic. In general, air is not barotropic, as its density is a function of both temperature and pressure; but, under certain circumstances, the barotropic assumption can be useful. In astrophysics, barotropic fluids are important in the study of stellar interiors or of the interstellar medium. One common class of barotropic model used in astrophysics is a polytropic fluid. Typically, the barotropic assumption is not very realistic. In meteorology, a barotropic atmosphere is one for which the density of the air depends only on pressure; as a result, isobaric surfaces (constant-pressure surfaces) are also constant-density surfaces. Such isobaric surfaces will also be isothermal surfaces, hence (from the thermal wind equation) the geostrophic wind will not vary with height. Hence, the motions of a rotating barotropic air mass are strongly constrained. The tropics are more nearly barotropic than mid-latitudes because temperature is more nearly horizontally uniform in the tropics. A barotropic flow is a generalization of a barotropic atmosphere. It is a flow in which the pressure is a function of the density only and vice versa. In other words, it is a flow in which isobaric surfaces are isopycnic surfaces and vice versa. One may have a barotropic flow of a non-barotropic fluid, but a barotropic fluid will always follow a barotropic flow. Examples include barotropic layers of the oceans, an isothermal ideal gas or an isentropic ideal gas. A fluid which is not barotropic is baroclinic, i.e., pressure is not the only factor that determines density. For a barotropic fluid or a barotropic flow (such as a barotropic atmosphere), the baroclinic vector is zero. See also Atmospheric dynamics References James R Holton, An introduction to dynamic meteorology, , 3rd edition, p77. Marcel Lesieur, "Turbulence in Fluids: Stochastic and Numerical Modeling", , 2e. David Tritton, "Physical Fluid Dynamics", . Fluid dynamics Atmospheric dynamics
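The defining property (density as a function of pressure alone) can be sketched with a polytropic equation of state, the common barotropic model mentioned above. The constant K and the index gamma below are illustrative values, not taken from any particular star or atmosphere:

```python
# A barotropic (here, polytropic) equation of state: density depends on
# pressure only.  gamma = 5/3 is the adiabatic index of a monatomic ideal
# gas; K is an arbitrary illustrative constant.
K = 1.0e5
gamma = 5.0 / 3.0

def density(pressure):
    """Invert p = K * rho**gamma to get rho(p), the barotropic relation."""
    return (pressure / K) ** (1.0 / gamma)

# Isobaric surfaces are automatically isopycnic: equal pressures always
# map to equal densities, with no separate temperature dependence.
for p in (1.0e4, 1.0e5, 1.0e6):
    print(p, density(p))
```

A baroclinic fluid would instead need a second argument (e.g. temperature) in `density`, which is exactly what breaks the coincidence of isobaric and isopycnic surfaces.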
Barotropic fluid
[ "Chemistry", "Engineering" ]
574
[ "Piping", "Chemical engineering", "Atmospheric dynamics", "Fluid dynamics" ]
1,614,128
https://en.wikipedia.org/wiki/Life%20table
In actuarial science and demography, a life table (also called a mortality table or actuarial table) is a table which shows, for each age, the probability that a person of that age will die before their next birthday ("probability of death"). In other words, it represents the survivorship of people from a certain population. They can also be explained as a long-term mathematical way to measure a population's longevity. Tables have been created by demographers including John Graunt, Reed and Merrell, Keyfitz, and Greville. There are two types of life tables used in actuarial science. The period life table represents mortality rates during a specific time period for a certain population. A cohort life table, often referred to as a generation life table, is used to represent the overall mortality rates of a certain population's entire lifetime; the individuals must all have been born during the same specific time interval. A cohort life table is more frequently used because it is able to make a prediction of any expected changes in the mortality rates of a population in the future. This type of table also analyzes patterns in mortality rates that can be observed over time. Both of these types of life tables are created based on an actual population from the present, as well as an educated prediction of the experience of a population in the near future. In order to find the true average life expectancy of a cohort, 100 years would need to pass, and by then the data would be of little use, as healthcare is continually advancing. Other life tables in historical demography may be based on historical records, although these often undercount infants and understate infant mortality in comparison with other regions with better records, and on mathematical adjustments for varying mortality levels and life expectancies at birth. From this starting point, a number of inferences can be derived. 
The probability of surviving any particular year of age The remaining life expectancy for people at different ages Life tables are also used extensively in biology and epidemiology. An area that uses this tool is Social Security: it examines the mortality rates of all the people who have Social Security to decide which actions to take. The concept is also of importance in product life cycle management. All mortality tables are specific to environmental and life circumstances, and are used to probabilistically determine expected maximum age within those environmental conditions. Background There are two types of life tables: Period or static life tables show the current probability of death (for people of different ages, in the current year) Cohort life tables show the probability of death of people from a given cohort (especially birth year) over the course of their lifetime. Static life tables sample individuals assuming a stationary population with overlapping generations. "Static life tables" and "cohort life tables" will be identical if the population is in equilibrium and the environment does not change. If a population were to have a constant number of people each year, it would mean that the probabilities of death from the life table were completely accurate. It would also mean that exactly 100,000 people were born each year, with no immigration or emigration involved. "Life table" primarily refers to period life tables, as cohort life tables can only be constructed using data up to the current point, and distant projections for future mortality. Life tables can be constructed using projections of future mortality rates, but more often they are a snapshot of age-specific mortality rates in the recent past, and do not necessarily purport to be projections. 
For these reasons, the older ages represented in a life table may have a greater chance of not being representative of what lives at these ages may experience in future, as it is predicated on current advances in medicine, public health, and safety standards that did not exist in the early years of this cohort. A life table is created from mortality rates and census figures for a certain population, ideally under a closed demographic system. This means that immigration and emigration do not exist when analyzing a cohort. A closed demographic system assumes that migration flows are random and not significant, and that immigrants from other populations have the same risk of death as an individual from the new population. Another benefit of mortality tables is that they can be used to make predictions on demographics or different populations. However, there are also weaknesses in the information displayed on life tables. One is that they do not state the overall health of the population. There is more than one disease present in the world, and a person can have more than one disease at different stages simultaneously, introducing the term comorbidity. Therefore, life tables also do not show the direct correlation of mortality and morbidity. The life table observes the mortality experience of a single generation, consisting of 100,000 births, at every age number they can live through. Life tables are usually constructed separately for men and for women because of their substantially different mortality rates. Other characteristics can also be used to distinguish different risks, such as smoking status, occupation, and socioeconomic class. Life tables can be extended to include other information in addition to mortality, for instance health information to calculate health expectancy. 
Health expectancies such as disability-adjusted life years and Healthy Life Years are the remaining number of years a person can expect to live in a specific health state, such as free of disability. Two types of life tables are used to divide the life expectancy into life spent in various states: Multi-state life tables (also known as increment-decrement life tables) are based on transition rates in and out of the different states and to death Prevalence-based life tables (also known as the Sullivan method) are based on external information on the proportion in each state. Life tables can also be extended to show life expectancies in different labor force states or marital status states. Life tables that relate to maternal deaths and infant mortalities are important, as they help form family planning programs that work with particular populations. They also help compare a country's average life expectancy with other countries. Comparing life expectancy globally helps countries understand why one country's life expectancy is rising substantially, by looking at each other's healthcare and adapting ideas into their own systems. Insurance applications In order to price insurance products, and to ensure the solvency of insurance companies through adequate reserves, actuaries must develop projections of future insured events (such as death, sickness, and disability). To do this, actuaries develop mathematical models of the rates and timing of the events. They do this by studying the incidence of these events in the recent past, and sometimes developing expectations of how these past events will change over time (for example, whether the progressive reductions in mortality rates in the past will continue) and deriving expected rates of such events in the future, usually based on the age or other relevant characteristics of the population. 
An actuary's job is to form a comparison between people at risk of death and people who actually died to come up with a probability of death for a person at each age number, defined as qx in an equation. When analyzing a population, one of the main sources used to gather the required information is insurance, by obtaining individual records that belong to a specific population. These are called mortality tables if they show death rates, and morbidity tables if they show various types of sickness or disability rates. The availability of computers and the proliferation of data gathering about individuals has made possible calculations that are more voluminous and intensive than those used in the past (i.e. they crunch more numbers), and it is more common to attempt to provide different tables for different uses, and to factor in a range of non-traditional behaviors (e.g. gambling, debt load) into specialized calculations utilized by some institutions for evaluating risk. This is particularly the case in non-life insurance (e.g. the pricing of motor insurance can allow for a large number of risk factors, which requires a correspondingly complex table of expected claim rates). However the expression "life table" normally refers to human survival rates and is not relevant to non-life insurance. The mathematics The basic algebra used in life tables is as follows. qx: the probability that someone aged exactly x will die before reaching age x + 1. px: the probability that someone aged exactly x will survive to age x + 1. ℓx: the number of people who survive to age x; this is based on a radix, or starting point, of ℓ0 lives, typically taken as 100,000. dx: the number of people who die aged x last birthday. npx: the probability that someone aged exactly x will survive for n more years, i.e. live up to at least age x + n years. n|kqx: the probability that someone aged exactly x will survive for n more years, then die within the following k years. μx: the force of mortality, i.e. the instantaneous mortality rate at age x, i.e. 
the number of people dying in a short interval starting at age x, divided by ℓx and also divided by the length of the interval. Another common variable is mx, the central rate of mortality. It is approximately equal to the average force of mortality, averaged over the year of age. Further descriptions: The variable dx stands for the number of deaths that occur between two consecutive exact ages. An example of this is the number of deaths in a cohort recorded between the age of seven and the age of eight. The variable ℓx represents the number of people who survive to exact age x; ℓ0 is equal to 100,000. The variable Tx stands for the years lived beyond each age number x by all members of the generation. ex represents the life expectancy for members already at a specific age number. Ending a mortality table In practice, it is useful to have an ultimate age associated with a mortality table. Once the ultimate age is reached, the mortality rate is assumed to be 1.000. This age may be the point at which life insurance benefits are paid to a survivor or annuity payments cease. Four methods can be used to end mortality tables: The Forced Method: Select an ultimate age and set the mortality rate at that age equal to 1.000 without any changes to other mortality rates. This creates a discontinuity at the ultimate age compared to the penultimate and prior ages. The Blended Method: Select an ultimate age and blend the rates from some earlier age to dovetail smoothly into 1.000 at the ultimate age. The Pattern Method: Let the pattern of mortality continue until the rate approaches or hits 1.000 and set that as the ultimate age. The Less-Than-One Method: This is a variation on the Forced Method. The ultimate mortality rate is set equal to the expected mortality at a selected ultimate age, rather than 1.000 as in the Forced Method. This rate will be less than 1.000. 
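The quantities defined above can be assembled into a small period life table in a few lines of Python. The mortality rates qx below are invented for illustration and do not describe any real population; the final rate is forced to 1.000, as in the Forced Method:

```python
# A toy life table built from made-up mortality rates q_x, computing the
# standard columns l_x, d_x, T_x and the life expectancy e_x.
RADIX = 100_000  # starting number of lives, l_0

q = [0.005, 0.001, 0.001, 0.002, 1.0]  # q_x for ages 0..4; ultimate age forced to 1.0

l = [RADIX]  # l_x: survivors to exact age x
for qx in q:
    l.append(round(l[-1] * (1 - qx)))

d = [l[x] - l[x + 1] for x in range(len(q))]  # d_x: deaths between ages x and x+1

# L_x approximated as the average of l_x and l_{x+1}
# (deaths assumed spread evenly over the year of age)
L = [(l[x] + l[x + 1]) / 2 for x in range(len(q))]
T = [sum(L[x:]) for x in range(len(q))]       # T_x: person-years lived beyond age x
e = [T[x] / l[x] for x in range(len(q))]      # e_x: remaining life expectancy at age x

print(e[0])  # life expectancy at birth for this toy table
```

Note that the deaths column telescopes: the dx values sum to the radix, since everyone in the cohort eventually dies once the ultimate age is reached.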
Epidemiology In epidemiology and public health, standard life tables (used to calculate life expectancy) and the Sullivan and multi-state life tables (used to calculate health expectancy) are the most commonly used mathematical devices. The latter include information on health in addition to mortality. By tracking the life expectancy of any year(s) being studied, epidemiologists can see whether diseases are contributing to an overall increase in mortality rates. Epidemiologists are able to help demographers understand a sudden decline in life expectancy by linking it to the health problems arising in certain populations. See also Age-adjusted life expectancy Decrement table Gompertz–Makeham law of mortality Survival analysis Notes References Further reading External links Human Life Table Database Human Mortality Database Canadian Human Mortality Database Australian Human Mortality Database (AHMD) The Japanese Mortality Database (JMD) United States Mortality Database (USMDB) Latin American Human Mortality Database (LAHMD) Latin American Mortality Database (LAMBdA) UN Model Life Tables for Developing Countries UN Extended Model Life Tables WHO Global Health Observatory Life Tables UK Government Actuary Department's Interim Life Tables Actuarial Life Table from the U.S. Social Security department US CDC Vital Statistics Reports Ehemu Database World Health Organisation Life Tables Actuarial science Population Statistical data types Survival analysis
Life table
[ "Mathematics" ]
2,483
[ "Applied mathematics", "Actuarial science" ]
1,614,212
https://en.wikipedia.org/wiki/Evolution%20and%20the%20Theory%20of%20Games
Evolution and the Theory of Games is a book by the British evolutionary biologist John Maynard Smith on evolutionary game theory. The book was initially published in December 1982 by Cambridge University Press. Overview In the book, John Maynard Smith summarises work on evolutionary game theory that had developed in the 1970s, to which he made several important contributions. The book is also noted for being well written and not overly mathematically challenging. The book's main contribution is the introduction of the concept of the evolutionarily stable strategy, or ESS, which states that for a set of behaviours to be conserved over evolutionary time, they must be the most profitable avenue of action when common, so that no alternative behaviour can invade. So, for instance, suppose that in a population of frogs, males fight to the death over breeding ponds. This would be an ESS if any one cowardly frog that does not fight to the death always fares worse (in fitness terms, of course). A more likely scenario is one where fighting to the death is not an ESS because a frog might arise that will stop fighting if it realises that it is going to lose. This frog would then reap the benefits of fighting, but not the ultimate cost. Hence, fighting to the death would easily be invaded by a mutation that causes this sort of "informed fighting". Much complexity can be built from this, and Maynard Smith explains it in clear prose and with simple mathematics. Reception See also Evolutionary biology References External links Cambridge University Press 1982 non-fiction books Books about evolution Books about game theory Evolutionary game theory Mathematics books
Evolution and the Theory of Games
[ "Mathematics" ]
325
[ "Game theory", "Evolutionary game theory" ]
1,614,482
https://en.wikipedia.org/wiki/Metropolitan-Vickers
Metropolitan-Vickers, Metrovick, or Metrovicks, was a British heavy electrical engineering company of the early-to-mid 20th century formerly known as British Westinghouse. Highly diversified, it was particularly well known for its industrial electrical equipment such as generators, steam turbines, switchgear, transformers, electronics and railway traction equipment. Metrovick holds a place in history as the builders of the first commercial transistor computer, the Metrovick 950, and the first British axial-flow jet engine, the Metropolitan-Vickers F.2. Its factory in Trafford Park, Manchester, was for most of the 20th century one of the biggest and most important heavy engineering facilities in Britain and the world. History Metrovick started as a way to separate the existing British Westinghouse Electrical and Manufacturing Company factories from United States control, which had proven to be a hindrance to gaining government contracts during the First World War. In 1917 a holding company was formed to try to find financing to buy the company's properties. In May 1917, control of the holding company was obtained jointly by the Metropolitan Carriage, Wagon and Finance Company, of Birmingham, chaired by Dudley Docker, and Vickers Limited, of Barrow-in-Furness (Gillham 1988, Chapter 2: The Manufacturers). On 15 March 1919, Docker agreed terms with Vickers, for Vickers to purchase all the shares of the Metropolitan Carriage, Wagon and Finance Company for almost £13 million. On 8 September 1919, Vickers changed the name of the British Westinghouse Electrical and Manufacturing Company to Metropolitan Vickers Electrical Company. The immediate post-war era was marked by low investment and continued labour unrest. Fortunes changed in 1926 with the formation of the Central Electricity Board which standardised electrical supply and led to a massive expansion of electrical distribution, installations, and appliance purchases.
Sales shot up, and 1927 marked the company's best year to date. On 15 November 1922 the BBC was registered and the BBC's Manchester station, 2ZY, was officially opened on 375 metres transmitting from the Metropolitan Vickers Electricity works in Old Trafford. In 1921, they bought a site at Attercliffe Common in Sheffield, which was used to manufacture traction motors. By 1923, it had its own engineering department, and was making complete locomotives and electric delivery vehicles. BTH merger and transition to AEI In 1928 Metrovick merged with the rival British Thomson-Houston (BTH), a company of similar size and product lineup. Combined, they would be one of the few companies able to compete with Marconi or English Electric on an equal footing. In fact the merger was marked by poor communication and intense rivalry, and the two companies generally worked at cross purposes. The next year the combined company was purchased by the Associated Electrical Industries (AEI) holding group, who also owned Edison Swan (Ediswan); and Ferguson, Pailin & Co, manufacturers of electrical switchgear in Openshaw, Manchester. The rivalry between Metrovick and BTH continued, and AEI was never able to exert effective control over the two competing subsidiary companies. Problems worsened in 1929 with the start of the Great Depression, but Metrovick's overseas sales were able to pick up some of the slack, notably a major railway electrification project in Brazil. By 1933 world trade was growing again, but growth was nearly upset when six Metrovick engineers were arrested and found guilty of espionage and "wrecking" in Moscow after a number of turbines built by the company in and for the Soviet Union proved to be faulty. The British government intervened; the engineers were released and trade with Russia was resumed after a brief embargo. 
During the 1930s Metropolitan Vickers produced two dozen very large diameter (3 m/10 ft) three-phase AC traction motors for the Hungarian railway's V40 and V60 electric locomotives. The machinery, rated at 1,640 kW and designed by Kálmán Kandó, was paid for by British government economic aid. In 1935 the company built a 105 MW steam turbogenerator, the largest in Europe at that time, for the Battersea Power Station. In 1936 Metrovick started work with the Air Ministry on automatic pilot systems, eventually branching out to gunlaying systems and building radars the next year. In 1938 they reached an agreement with the Ministry to build a turboprop design developed at the Royal Aircraft Establishment (RAE) under the direction of Hayne Constant. It is somewhat ironic that BTH, its erstwhile partner, was at the same time working with Frank Whittle on his pioneering jet designs. Wartime aircraft production In mid-1938, MV was awarded a contract to build Avro Manchester twin-engined heavy bombers under licence from A.V. Roe. As this type of work was very different from its traditional heavy engineering activities, a new factory was built on the western side of Mosley Road and this was completed in stages through 1940. There were significant problems producing this aircraft, not least the unreliability of the Rolls-Royce Vulture engine and the destruction of the first 13 Manchesters in a Luftwaffe bombing raid on Trafford Park on 23 December 1940. Despite this the firm went on to complete 43 examples. With the design of the much improved four-engined derivative, the Avro Lancaster, MV switched production to that famous type, supplied with Rolls-Royce Merlin engines from the Ford Trafford Park shadow factory. Three hangars were erected on the southside of Manchester's Ringway Airport for assembly and testing of its Lancasters, before a policy switch was made to assembling them in a hangar at Avro's Woodford airfield. By the end of the war, MV had built 1,080 Lancasters.
These were followed by 79 Avro Lincoln derivatives before the remaining orders were cancelled and MV's aircraft production ceased in December 1945. In 1940 the turboprop effort was re-engineered as a pure jet engine after the successful run of Whittle's designs. The new design became the Metrovick F.2 and eventually flew in 1943 on a Gloster Meteor. The F.2 was considered too complex to be worth pursuing, so Metrovick re-engineered the design once again to produce roughly double the power, while at the same time starting work on a much larger design, the Metrovick F.9 Sapphire. Although the F.9 proved to be a winner, the Ministry of Supply nevertheless forced the company to sell the jet division to Armstrong Siddeley in 1947 to reduce the number of companies in the business. In addition to building aircraft, other wartime work included the manufacture of both Dowty and Messier undercarriages, automatic pilot units, searchlights and radar equipment. They also produced electric vans and lorries. Metrovick postwar The post-war era led to massive demand for electrical systems, leading to additional rivalries between Metrovick and BTH as each attempted to one-up the other in delivering ever-larger turbogenerator contracts. Metrovick also expanded its appliance division during this time, becoming a well-known supplier of refrigerators and stoves. The design and manufacture of sophisticated scientific instruments, such as electron microscopes and mass spectrometers, became an important area of scientific research for the company. In 1947, a Metrovick G.1 Gatric gas turbine was fitted to the Motor Gun Boat MGB 2009, making it the world's first gas turbine powered naval vessel. A subsequent marine gas turbine engine was the G.2 of 4,500 shp, fitted to the Royal Navy Bold-class fast patrol boats Bold Pioneer and Bold Pathfinder, which were built in 1953.
The Bluebird K7 jet-propelled 3-point hydroplane in which Donald Campbell broke the 200 mph water speed barrier was powered by a Metropolitan-Vickers Beryl jet engine. The K7 was unveiled in late 1954. Campbell succeeded on Ullswater on 23 July 1955, where he set a new record, beating the previous record held by Stanley Sayres. Another major area of expansion was the diesel locomotive market, where they combined their own generators and traction motors with third-party diesel engines to develop in 1950 the Western Australian Government Railways X class 2-Do-2 locomotive and in 1958 the Type 2 Co-Bo, later re-classified under the TOPS system as the British Rail Class 28. This diesel-electric locomotive was unusual on two counts: its Co-Bo wheel arrangement and its Crossley two-stroke diesel engine (evolved from a World War II marine engine). Intended as part of the British Railways Modernisation Plan, the twenty-strong fleet saw service between Scotland and England before being deemed unsuccessful and withdrawn in the late 1960s. Metrovick also produced the CIE 001 Class (originally 'A' Class) from 1955, the first production mainline diesels in Ireland. Metropolitan Vickers also produced electrical equipment for the British Rail Class 76 (EM1) and British Rail Class 77 (EM2) 1.5 kV DC locomotives, built at Gorton Works for the electrification of the Woodhead Line in the early 1950s. Larger but broadly similar locomotives were also supplied to the New South Wales Government Railways as its 46 class. The company also designed the British Rail Class 82 25 kV AC locomotives, built by Beyer, Peacock & Company in Manchester using Metrovick electrical equipment. The company also supplied electrical equipment for the British Rail Class 303 electric multiple units. In the 1950s, the company built a large power transformer works at Wythenshawe, Manchester.
The factory opened in 1957, and was closed by GEC in 1971, after which it was sold to the American compressor manufacturer Ingersoll Rand. In 1961, the Russian cosmonaut Yuri Gagarin was invited to the company's factory at Trafford Park as part of his tour of Manchester. The rivalry between Metrovick and BTH was eventually ended in an unconvincing fashion when the AEI management eventually decided to rid themselves of both brands and be known as AEI universally, a change they made on 1 January 1960. This move was almost universally resented within both companies. Worse, the new brand name was utterly unknown to its customers, leading to a noticeable fall-off in sales and AEI's stock price. General Electric Company (GEC) takeover When AEI attempted to remove the doubled-up management structures, they found this task to be even more difficult. By the mid-1960s the company was struggling under the weight of two complete management hierarchies, and they appeared to be unable to control the company any more. This allowed AEI to be purchased by General Electric Company in 1967. See also :Category:Metropolitan-Vickers locomotives Bowesfield Works Metro-Vickers Affair Metrovick electric vehicles References Bibliography Further reading External links "Metropolitan-Vickers Electrical Co. Ltd. 1899-1949" by John Dummelow 250 pages of text and pictures. 
(This is a mirror of the original, which was accessed from https://web.archive.org/web/20050308065049/http://www.mvbook.org.uk/ ) Turbines Locomotive manufacturers of the United Kingdom Engineering companies of the United Kingdom Electrical engineering companies of the United Kingdom Defunct manufacturing companies of the United Kingdom Associated Electrical Industries Radar manufacturers Defunct companies based in Manchester Manufacturing companies based in Manchester Manufacturing companies established in 1899 Manufacturing companies disestablished in 1960 Defunct aircraft engine manufacturers of the United Kingdom Former defence companies of the United Kingdom Science and technology in the United Kingdom British companies established in 1899 British companies disestablished in 1960 1899 establishments in England 1960 disestablishments in England
Metropolitan-Vickers
[ "Chemistry" ]
2,387
[ "Turbines", "Turbomachinery" ]
1,614,492
https://en.wikipedia.org/wiki/Connectivity%20%28graph%20theory%29
In mathematics and computer science, connectivity is one of the basic concepts of graph theory: it asks for the minimum number of elements (nodes or edges) that need to be removed to separate the remaining nodes into two or more isolated subgraphs. It is closely related to the theory of network flow problems. The connectivity of a graph is an important measure of its resilience as a network. Connected vertices and graphs In an undirected graph G, two vertices u and v are called connected if G contains a path from u to v. Otherwise, they are called disconnected. If the two vertices are additionally connected by a path of length 1 (that is, they are the endpoints of a single edge), the vertices are called adjacent. A graph is said to be connected if every pair of vertices in the graph is connected. This means that there is a path between every pair of vertices. An undirected graph that is not connected is called disconnected. An undirected graph G is therefore disconnected if there exist two vertices in G such that no path in G has these vertices as endpoints. A graph with just one vertex is connected. An edgeless graph with two or more vertices is disconnected. A directed graph is called weakly connected if replacing all of its directed edges with undirected edges produces a connected (undirected) graph. It is unilaterally connected or unilateral (also called semiconnected) if it contains a directed path from u to v or a directed path from v to u for every pair of vertices u, v. It is strongly connected, or simply strong, if it contains a directed path from u to v and a directed path from v to u for every pair of vertices u, v. Components and cuts A connected component is a maximal connected subgraph of an undirected graph. Each vertex belongs to exactly one connected component, as does each edge. A graph is connected if and only if it has exactly one connected component. The strong components are the maximal strongly connected subgraphs of a directed graph.
A vertex cut or separating set of a connected graph G is a set of vertices whose removal renders G disconnected. The vertex connectivity κ(G) (where G is not a complete graph) is the size of a smallest vertex cut. A graph is called k-vertex-connected or k-connected if its vertex connectivity is k or greater. More precisely, any graph G (complete or not) is said to be k-vertex-connected if it contains at least k + 1 vertices, but does not contain a set of k − 1 vertices whose removal disconnects the graph; and κ(G) is defined as the largest k such that G is k-connected. In particular, a complete graph with n vertices, denoted K_n, has no vertex cuts at all, but κ(K_n) = n − 1. A vertex cut for two vertices u and v is a set of vertices whose removal from the graph disconnects u and v. The local connectivity κ(u, v) is the size of a smallest vertex cut separating u and v. Local connectivity is symmetric for undirected graphs; that is, κ(u, v) = κ(v, u). Moreover, except for complete graphs, κ(G) equals the minimum of κ(u, v) over all nonadjacent pairs of vertices u, v. 2-connectivity is also called biconnectivity and 3-connectivity is also called triconnectivity. A graph which is connected but not 2-connected is sometimes called separable. Analogous concepts can be defined for edges. In the simple case in which cutting a single, specific edge would disconnect the graph, that edge is called a bridge. More generally, an edge cut of G is a set of edges whose removal renders the graph disconnected. The edge-connectivity λ(G) is the size of a smallest edge cut, and the local edge-connectivity λ(u, v) of two vertices u, v is the size of a smallest edge cut disconnecting u from v. Again, local edge-connectivity is symmetric. A graph is called k-edge-connected if its edge connectivity is k or greater. A graph is said to be maximally connected if its connectivity equals its minimum degree. A graph is said to be maximally edge-connected if its edge-connectivity equals its minimum degree. Super- and hyper-connectivity A graph is said to be super-connected or super-κ if every minimum vertex cut isolates a vertex.
A graph is said to be hyper-connected or hyper-κ if the deletion of each minimum vertex cut creates exactly two components, one of which is an isolated vertex. A graph is semi-hyper-connected or semi-hyper-κ if any minimum vertex cut separates the graph into exactly two components. More precisely: a connected graph G is said to be super-connected or super-κ if all minimum vertex-cuts consist of the vertices adjacent with one (minimum-degree) vertex. A connected graph G is said to be super-edge-connected or super-λ if all minimum edge-cuts consist of the edges incident on some (minimum-degree) vertex. A cutset X of G is called a non-trivial cutset if X does not contain the neighborhood N(u) of any vertex u not in X. Then the superconnectivity κ1(G) of G is the size of a smallest non-trivial cutset. A non-trivial edge-cut and the edge-superconnectivity λ1(G) are defined analogously. Menger's theorem One of the most important facts about connectivity in graphs is Menger's theorem, which characterizes the connectivity and edge-connectivity of a graph in terms of the number of independent paths between vertices. If u and v are vertices of a graph G, then a collection of paths between u and v is called independent if no two of them share a vertex (other than u and v themselves). Similarly, the collection is edge-independent if no two paths in it share an edge. The number of mutually independent paths between u and v is written as κ′(u, v), and the number of mutually edge-independent paths between u and v is written as λ′(u, v). Menger's theorem asserts that for distinct vertices u, v, λ(u, v) equals λ′(u, v), and if u is also not adjacent to v then κ(u, v) equals κ′(u, v). This fact is actually a special case of the max-flow min-cut theorem. Computational aspects The problem of determining whether two vertices in a graph are connected can be solved efficiently using a search algorithm, such as breadth-first search. More generally, it is easy to determine computationally whether a graph is connected (for example, by using a disjoint-set data structure), or to count the number of connected components.
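As a concrete illustration, such a search-based connectedness test might look like the following minimal sketch (the adjacency-list dictionaries are invented examples; a graph is connected if and only if a breadth-first search from any starting vertex reaches every vertex):

```python
from collections import deque

def is_connected(adj):
    """Test connectedness of an undirected graph given as an adjacency-list dict.

    Every vertex must appear as a key; by convention the empty graph counts
    as connected here.
    """
    if not adj:
        return True
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    # Connected iff the search visited every vertex.
    return len(seen) == len(adj)

path_graph = {1: [2], 2: [1, 3], 3: [2]}         # a path on three vertices
split_graph = {1: [2], 2: [1], 3: [], 4: []}     # an edge plus two isolated vertices
```

Running the test on `path_graph` reports connected, while `split_graph` (which has three components) does not.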
A simple algorithm might be written in pseudo-code as follows: Begin at any arbitrary node of the graph G. Proceed from that node using either depth-first or breadth-first search, counting all nodes reached. Once the graph has been entirely traversed, if the number of nodes counted is equal to the number of nodes of G, the graph is connected; otherwise it is disconnected. By Menger's theorem, for any two vertices u and v in a connected graph G, the numbers κ(u, v) and λ(u, v) can be determined efficiently using the max-flow min-cut algorithm. The connectivity and edge-connectivity of G can then be computed as the minimum values of κ(u, v) and λ(u, v), respectively. In computational complexity theory, SL is the class of problems log-space reducible to the problem of determining whether two vertices in a graph are connected, which was proved to be equal to L by Omer Reingold in 2004. Hence, undirected graph connectivity may be solved in O(log n) space. The problem of computing the probability that a Bernoulli random graph is connected is called network reliability, and the problem of computing whether two given vertices are connected is called the ST-reliability problem. Both of these are #P-hard. Number of connected graphs The number of distinct connected labeled graphs with n nodes is tabulated in the On-Line Encyclopedia of Integer Sequences. Examples The vertex- and edge-connectivities of a disconnected graph are both 0. 1-connectedness is equivalent to connectedness for graphs of at least two vertices. The complete graph on n vertices has edge-connectivity equal to n − 1. Every other simple graph on n vertices has strictly smaller edge-connectivity. In a tree, the local edge-connectivity between any two distinct vertices is 1. Bounds on connectivity The vertex-connectivity of a graph is less than or equal to its edge-connectivity. That is, κ(G) ≤ λ(G).
The edge-connectivity for a graph with at least 2 vertices is less than or equal to the minimum degree of the graph, because removing all the edges that are incident to a vertex of minimum degree will disconnect that vertex from the rest of the graph; that is, λ(G) ≤ δ(G). For a vertex-transitive graph of degree d, we have: 2(d + 1)/3 ≤ κ(G) ≤ λ(G) = d. For a vertex-transitive graph of degree d ≤ 6, or for any (undirected) minimal Cayley graph of degree d, or for any symmetric graph of degree d, both kinds of connectivity are equal: κ(G) = λ(G) = d. Other properties Connectedness is preserved by graph homomorphisms. If G is connected then its line graph L(G) is also connected. A graph G is 2-edge-connected if and only if it has an orientation that is strongly connected. Balinski's theorem states that the polytopal graph (1-skeleton) of a k-dimensional convex polytope is a k-vertex-connected graph. Steinitz's earlier theorem that any 3-vertex-connected planar graph is a polytopal graph (Steinitz's theorem) gives a partial converse. According to a theorem of G. A. Dirac, if a graph is k-connected for k ≥ 2, then for every set of k vertices in the graph there is a cycle that passes through all the vertices in the set. The converse is true when k = 3. See also Algebraic connectivity Cheeger constant (graph theory) Dynamic connectivity, Disjoint-set data structure Expander graph Strength of a graph References
Connectivity (graph theory)
[ "Mathematics" ]
1,908
[ "Mathematical relations", "Graph theory", "Graph connectivity" ]
1,614,549
https://en.wikipedia.org/wiki/Time%20consistency%20%28finance%29
Time consistency in the context of finance is the property of not having mutually contradictory evaluations of risk at different points in time. This property implies that if investment A is considered riskier than B at some future time, then A will also be considered riskier than B at every prior time. Time consistency and financial risk Time consistency is a property in financial risk related to dynamic risk measures. The purpose of the time consistency property is to characterize the risk measures which satisfy the condition that if portfolio A is riskier than portfolio B at some time in the future, then it is guaranteed to be riskier at any time prior to that point. This is an important property, since if it were not to hold then there would be an event (with probability of occurring greater than 0) such that B is riskier than A at time t, although it is certain that A is riskier than B at time t + 1. As the name suggests, a time inconsistent risk measure can lead to inconsistent behavior in financial risk management. Mathematical definition A dynamic risk measure (ρ_t) on L∞(F_T) is time consistent if ρ_{t+1}(X) ≥ ρ_{t+1}(Y) implies ρ_t(X) ≥ ρ_t(Y), for all portfolios X, Y and all times t. Equivalent definitions Equality: For all times t and portfolios X, Y: ρ_{t+1}(X) = ρ_{t+1}(Y) implies ρ_t(X) = ρ_t(Y). Recursive: For all times t and portfolios X: ρ_t(X) = ρ_t(−ρ_{t+1}(X)). Acceptance set: For all times t: A_t = A_{t,t+1} + A_{t+1}, where A_t is the time-t acceptance set and A_{t,t+1} = A_t ∩ L∞(F_{t+1}). Cocycle condition (for convex risk measures): For all times t: α_t(Q) = α_{t,t+1}(Q) + E^Q[α_{t+1}(Q) | F_t], where α_t(Q) = ess sup_{X ∈ A_t} E^Q[−X | F_t] is the minimal penalty function (A_t being an acceptance set and ess sup denoting the essential supremum) at time t, and α_{t,t+1} is the corresponding one-period penalty. Construction Due to the recursive property it is simple to construct a time consistent risk measure. This is done by composing one-period measures over time: ρ^c_{T−1} := ρ_{T−1}, and ρ^c_t := ρ_t(−ρ^c_{t+1}) for all t < T − 1. Examples Value at risk and average value at risk Both dynamic value at risk and dynamic average value at risk are not time consistent risk measures. Time consistent alternative A time consistent alternative to the dynamic average value at risk with parameter α at time t is obtained by composing the one-period average value at risk measures over time, as in the construction above. Dynamic superhedging price The dynamic superhedging price is a time consistent risk measure.
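The recursive composition of one-period measures can be illustrated numerically with the entropic risk measure at constant risk aversion θ on a two-period binomial tree. The payoffs and the value θ = 0.5 below are arbitrary illustrative choices; because the entropic measure with constant θ is time consistent, composing the one-period measures reproduces the one-shot time-0 measure:

```python
import math

def entropic(outcomes, probs, theta):
    """One-period entropic risk: (1/theta) * log E[exp(-theta * X)]."""
    return math.log(sum(p * math.exp(-theta * x)
                        for x, p in zip(outcomes, probs))) / theta

theta = 0.5
# Terminal payoffs on a two-period binomial tree (paths up-up, up-down,
# down-up, down-down), each branch taken with probability 1/2.
uu, ud, du, dd = 4.0, 1.0, 1.0, -2.0

# One-shot measure at time 0 over the four equally likely terminal payoffs:
rho0_direct = entropic([uu, ud, du, dd], [0.25] * 4, theta)

# Composition of one-period measures: rho_t(X) = rho_{t,t+1}(-rho_{t+1}(X)).
rho1_up = entropic([uu, ud], [0.5, 0.5], theta)
rho1_down = entropic([du, dd], [0.5, 0.5], theta)
rho0_recursive = entropic([-rho1_up, -rho1_down], [0.5, 0.5], theta)
```

The two computations agree (up to floating-point error), reflecting the tower property of the exponential certainty equivalent; for a time-inconsistent measure such as dynamic average value at risk, the analogous two numbers would in general differ.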
Dynamic entropic risk The dynamic entropic risk measure is a time consistent risk measure if the risk aversion parameter is constant. Continuous time In continuous time, a time consistent coherent risk measure can be given by ρ_g(X) := E_g[−X] for a sublinear choice of function g, where E_g denotes a g-expectation. If the function g is convex, then the corresponding risk measure is convex. References Financial risk modeling Mathematical finance Financial economics
Time consistency (finance)
[ "Mathematics" ]
464
[ "Applied mathematics", "Mathematical finance" ]
1,614,609
https://en.wikipedia.org/wiki/Polarization%20in%20astronomy
Polarization of electromagnetic radiation is a useful tool for detecting various astronomical phenomena. For example, radiation can become polarized by passing through interstellar dust or by magnetic fields. Microwave radiation from the primordial universe can be used to study the physics of that environment. Stars The polarization of starlight was first observed by the astronomers William Hiltner and John S. Hall in 1949. Subsequently, Jesse Greenstein and Leverett Davis, Jr. developed theories allowing the use of polarization data to trace interstellar magnetic fields. Though the integrated thermal radiation of stars is not usually appreciably polarized at source, scattering by interstellar dust can impose polarization on starlight over long distances. Net polarization at the source can occur if the photosphere itself is asymmetric, due to limb polarization. Plane polarization of starlight generated at the star itself is observed for Ap stars (peculiar A type stars). Sun Both circular and linear polarization of sunlight have been measured. Circular polarization is mainly due to transmission and absorption effects in strongly magnetic regions of the Sun's surface. Another mechanism that gives rise to circular polarization is the so-called "alignment-to-orientation mechanism". Continuum light is linearly polarized at different locations across the face of the Sun (limb polarization), though taken as a whole, this polarization cancels. Linear polarization in spectral lines is usually created by anisotropic scattering of photons on atoms and ions, which can themselves be polarized by this interaction. The linearly polarized spectrum of the Sun is often called the second solar spectrum. Atomic polarization can be modified in weak magnetic fields by the Hanle effect. As a result, polarization of the scattered photons is also modified, providing a diagnostic tool for understanding stellar magnetic fields.
Other sources Polarization is also present in radiation from coherent astronomical sources due to the Zeeman effect (e.g. hydroxyl or methanol masers). The large radio lobes in active galaxies and pulsar radio radiation (which may, it is speculated, sometimes be coherent) also show polarization. Apart from providing information on sources of radiation and scattering, polarization also probes the interstellar magnetic field in our galaxy, as well as in radio galaxies, via Faraday rotation. In some cases it can be difficult to determine how much of the Faraday rotation is in the external source and how much is local to our own galaxy, but in many cases it is possible to find another distant source nearby in the sky; by comparing the candidate source and the reference source, the results can be untangled. Cosmic microwave background The polarization of the cosmic microwave background (CMB) is also being used to study the physics of the very early universe. The CMB exhibits two components of polarization: B-mode (divergence-free, like a magnetic field) and E-mode (curl-free, gradient-only, like an electric field) polarization. The BICEP2 telescope located at the South Pole initially claimed the detection of B-mode polarization in the CMB, though the result was later retracted. The polarization modes of the CMB may provide more information about the influence of gravitational waves on the development of the early universe. It has been suggested that astronomical sources of polarised light caused the chirality found in biological molecules on Earth. See also Chandrasekhar polarization References External links Discovery by Hiltner and Hall, analysis by Greenstein Concepts in astrophysics Polarization (waves) Articles containing video clips
Polarization in astronomy
[ "Physics" ]
723
[ "Polarization (waves)", "Concepts in astrophysics", "Astrophysics" ]
1,615,081
https://en.wikipedia.org/wiki/Amylin
Amylin, or islet amyloid polypeptide (IAPP), is a 37-residue peptide hormone. It is co-secreted with insulin from the pancreatic β-cells in the ratio of approximately 100:1 (insulin:amylin). Amylin plays a role in glycemic regulation by slowing gastric emptying and promoting satiety, thereby preventing post-prandial spikes in blood glucose levels. IAPP is processed from an 89-residue coding sequence. Proislet amyloid polypeptide (proIAPP, proamylin, proislet protein) is produced in the pancreatic beta cells (β-cells) as a 67 amino acid, 7404 Dalton pro-peptide and undergoes post-translational modifications, including protease cleavage, to produce amylin. Synthesis ProIAPP consists of 67 amino acids, which follow a 22 amino acid signal peptide that is rapidly cleaved after translation of the 89 amino acid coding sequence. The human sequence (from N-terminus to C-terminus) is: (MGILKLQVFLIVLSVALNHLKA) TPIESHQVEKR^ KCNTATCATQRLANFLVHSSNNFGAILSSTNVGSNTYG^ KR^ NAVEVLKREPLNYLPL. The signal peptide is removed during translation of the protein and transport into the endoplasmic reticulum. Once inside the endoplasmic reticulum, a disulfide bond is formed between cysteine residues 2 and 7. Later in the secretory pathway, the precursor undergoes additional proteolysis and posttranslational modification (indicated by ^). Eleven amino acids are removed from the N-terminus by the enzyme proprotein convertase 2 (PC2), while 16 are removed from the C-terminus of the proIAPP molecule by proprotein convertase 1/3 (PC1/3). At the C-terminus, carboxypeptidase E then removes the terminal lysine and arginine residues. The terminal glycine that results from this cleavage allows the enzyme peptidylglycine alpha-amidating monooxygenase (PAM) to add an amine group. After this, the transformation from the precursor protein proIAPP to the biologically active IAPP is complete (IAPP sequence: KCNTATCATQRLANFLVHSSNNFGAILSSTNVGSNTY).
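The maturation steps above can be sketched as simple string bookkeeping on the 67-residue proIAPP sequence given in the text. This is an illustrative sketch only: amidation does not actually shorten the chain, but is represented here by dropping the terminal glycine that PAM consumes when forming the C-terminal amide:

```python
# Segment boundaries below follow the annotated sequence given above.
PRO_IAPP = ("TPIESHQVEKR"                             # N-terminal flanking peptide (11 aa)
            "KCNTATCATQRLANFLVHSSNNFGAILSSTNVGSNTYG"  # mature region + C-terminal Gly (38 aa)
            "KR"                                      # dibasic Lys-Arg cleavage site (2 aa)
            "NAVEVLKREPLNYLPL")                       # C-terminal flanking peptide (16 aa)

def mature_iapp(pro):
    """Derive the 37-residue mature IAPP sequence from 67-residue proIAPP."""
    assert len(pro) == 67
    s = pro[11:]     # PC2 removes 11 residues from the N-terminus
    s = s[:-16]      # PC1/3 removes 16 residues from the C-terminus
    s = s[:-2]       # carboxypeptidase E removes the exposed Lys-Arg pair
    s = s[:-1]       # PAM converts the terminal Gly into a C-terminal amide
    return s

amylin = mature_iapp(PRO_IAPP)
```

The result matches the 37-residue IAPP sequence quoted at the end of the Synthesis section.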
Regulation Insofar as both IAPP and insulin are produced by the pancreatic β-cells, impaired β-cell function (due to lipotoxicity and glucotoxicity) will affect both insulin and IAPP production and release. Insulin and IAPP are regulated by similar factors, since they share a common regulatory promoter motif. The IAPP promoter is also activated by stimuli which do not affect insulin, such as tumor necrosis factor alpha and fatty acids. One of the defining features of Type 2 diabetes is insulin resistance, a condition wherein the body is unable to utilize insulin effectively, resulting in increased insulin production; since proinsulin and proIAPP are cosecreted, this results in an increase in the production of proIAPP as well. Although little is known about IAPP regulation, its connection to insulin indicates that regulatory mechanisms that affect insulin also affect IAPP. Thus blood glucose levels play an important role in the regulation of proIAPP synthesis. Function Amylin functions as part of the endocrine pancreas and contributes to glycemic control. The peptide is secreted from the pancreatic islets into the blood circulation and is cleared by peptidases in the kidney; it is not found in the urine. Amylin's metabolic function is well characterized as an inhibitor of the appearance of nutrients (especially glucose) in the plasma. It thus functions as a synergistic partner to insulin, with which it is cosecreted from pancreatic beta cells in response to meals. The overall effect is to slow the rate of appearance (Ra) of glucose in the blood after eating. This is accomplished via coordinated slowing of gastric emptying, inhibition of digestive secretion (gastric acid, pancreatic enzymes, and bile ejection), and a resulting reduction in food intake. Appearance of new glucose in the blood is reduced by inhibiting secretion of the gluconeogenic hormone glucagon. 
These actions, which are mostly carried out via a glucose-sensitive part of the brain stem, the area postrema, may be over-ridden during hypoglycemia. They collectively reduce the total insulin demand. Amylin also acts in bone metabolism, along with the related peptides calcitonin and calcitonin gene related peptide. Rodent amylin knockouts do not have a normal reduction of appetite following food consumption. Because amylin is an amidated peptide, like many neuropeptides, it is believed to be responsible for this effect on appetite. Structure The human form of IAPP has the amino acid sequence KCNTATCATQRLANFLVHSSNNFGAILSSTNVGSNTY, with a disulfide bridge between cysteine residues 2 and 7. Both the amidated C-terminus and the disulfide bridge are necessary for the full biological activity of amylin. IAPP is capable of forming amyloid fibrils in vitro. Within the fibrillization reaction, the early prefibrillar structures are extremely toxic to beta-cell and insulinoma cell cultures. Later amyloid fiber structures also seem to have some cytotoxic effect on cell cultures. Studies have shown that fibrils are the end product and not necessarily the most toxic form of amyloid proteins/peptides in general. A non-fibril-forming peptide (residues 1–19 of human amylin) is toxic like the full-length peptide, but the corresponding segment of rat amylin is not. Solid-state NMR spectroscopy has also demonstrated that the fragment comprising residues 20–29 of human amylin fragments membranes. Rats and mice have six substitutions (three of which are proline substitutions at positions 25, 28 and 29) that are believed to prevent the formation of amyloid fibrils, although not completely, as seen by rat IAPP's residual propensity to form amyloid fibrils in vitro. Rat IAPP is nontoxic to beta-cells when overexpressed in transgenic rodents. 
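The six rodent substitutions mentioned above can be checked by direct comparison. The rat sequence used below is supplied here for illustration (it is not given in this article); positions are 1-based:

```python
# Compare human and rat IAPP. The rat sequence is an outside assumption,
# included only to illustrate the six substitutions described in the text.
HUMAN = "KCNTATCATQRLANFLVHSSNNFGAILSSTNVGSNTY"
RAT   = "KCNTATCATQRLANFLVRSSNNLGPVLPPTNVGSNTY"  # assumed rat sequence

# (position, human residue, rat residue) for every mismatch
diffs = [(i + 1, h, r) for i, (h, r) in enumerate(zip(HUMAN, RAT)) if h != r]
prolines = [pos for pos, h, r in diffs if r == "P"]

assert len(diffs) == 6           # six substitutions in total
assert prolines == [25, 28, 29]  # the three proline substitutions
```

The three prolines are β-sheet breakers, which is the usual explanation for why rodent IAPP is far less amyloidogenic.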
History As early as 1901, before amylin deposition was associated with diabetes, scientists described the phenomenon of "islet hyalinization", which could be found in some cases of diabetes. A thorough study of this phenomenon became possible only much later. In 1986, an aggregate was successfully isolated from an insulin-producing tumor, a protein called IAP (Insulinoma Amyloid Peptide) was characterized, and amyloids were isolated from the pancreas of a diabetic patient, but the isolated material was not sufficient for full characterization. This was achieved only a year later by two research teams whose research was a continuation of the work from 1986. Clinical significance ProIAPP has been linked to Type 2 diabetes and the loss of islet β-cells. Islet amyloid formation, initiated by the aggregation of proIAPP, may contribute to this progressive loss of islet β-cells. It is thought that proIAPP forms the first granules that allow IAPP to aggregate and form amyloid, which may lead to amyloid-induced apoptosis of β-cells. IAPP is cosecreted with insulin. Insulin resistance in Type 2 diabetes produces a greater demand for insulin production, which results in the secretion of proinsulin. ProIAPP is secreted simultaneously; however, the enzymes that convert these precursor molecules into insulin and IAPP, respectively, are not able to keep up with the high levels of secretion, ultimately leading to the accumulation of proIAPP. In particular, the impaired processing of proIAPP that occurs at the N-terminal cleavage site is a key factor in the initiation of amyloid. Post-translational modification of proIAPP occurs at both the carboxy terminus and the amino terminus; however, the processing of the amino terminus occurs later in the secretory pathway. This might be one reason why it is more susceptible to impaired processing under conditions where secretion is in high demand. 
Thus, the conditions of Type 2 diabetes—high glucose concentrations and increased secretory demand for insulin and IAPP—could lead to the impaired N-terminal processing of proIAPP. The unprocessed proIAPP can then serve as the nucleus upon which IAPP can accumulate and form amyloid. The amyloid formation might be a major mediator of apoptosis, or programmed cell death, in the islet β-cells. Initially, the proIAPP aggregates within secretory vesicles inside the cell. The proIAPP acts as a seed, collecting matured IAPP within the vesicles and forming intracellular amyloid. When the vesicles are released, the amyloid grows as it collects even more IAPP outside the cell. The overall effect is an apoptosis cascade initiated by the influx of ions into the β-cells. In summary, impaired N-terminal processing of proIAPP is an important factor initiating amyloid formation and β-cell death. These amyloid deposits are pathological characteristics of the pancreas in Type 2 diabetes. However, it is still unclear whether amyloid formation contributes to type 2 diabetes or is merely a consequence of it. Nevertheless, it is clear that amyloid formation reduces the number of working β-cells in patients with Type 2 diabetes. This suggests that repairing proIAPP processing may help to prevent β-cell death, thereby offering hope as a potential therapeutic approach for Type 2 diabetes. Amyloid deposits deriving from islet amyloid polypeptide (IAPP, or amylin) are commonly found in the pancreatic islets of patients with diabetes mellitus type 2 or an insulinoma. While the association of amylin with the development of type 2 diabetes has been known for some time, its direct role as the cause has been harder to establish. Some studies suggest that amylin, like the related beta-amyloid (Abeta) associated with Alzheimer's disease, can induce apoptotic cell death in insulin-producing beta cells, an effect that may be relevant to the development of type 2 diabetes. 
A 2008 study reported a synergistic effect for weight loss with leptin and amylin coadministration in diet-induced obese rats by restoring hypothalamic sensitivity to leptin. However, clinical development was halted at Phase 2 in 2011 after antibody activity that might have neutralized the weight-loss effect of metreleptin was detected in two patients who had taken the drug in a previously completed clinical study. The trial combined metreleptin, a version of the human hormone leptin, and pramlintide, Amylin Pharmaceuticals' diabetes drug Symlin, into a single obesity therapy. A proteomics study showed that human amylin shares common toxicity targets with beta-amyloid (Abeta), suggesting that type 2 diabetes and Alzheimer's disease share common toxicity mechanisms. Pharmacology A synthetic analog of human amylin with proline substitutions in positions 25, 28 and 29, pramlintide (brand name Symlin), was approved in 2005 for adult use in patients with both diabetes mellitus type 1 and diabetes mellitus type 2. Insulin and pramlintide, injected separately but both before a meal, work together to control the post-prandial glucose excursion. Amylin is degraded in part by insulin-degrading enzyme. Another long-acting amylin analogue, cagrilintide, is being developed by Novo Nordisk as a treatment for type 2 diabetes and obesity; it is now in Phase 3 trials, co-formulated with semaglutide as a once-weekly subcutaneous injection under the proposed brand name CagriSema. Receptors There appear to be at least three distinct receptor complexes that amylin binds to with high affinity. All three complexes contain the calcitonin receptor at the core, plus one of three receptor activity-modifying proteins: RAMP1, RAMP2, or RAMP3. 
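Since pramlintide is human amylin carrying the three rodent prolines (positions 25, 28 and 29, per the Structure section), its sequence can be derived from the human one. A small sketch; the derived sequence is for illustration:

```python
# Build pramlintide from human amylin by substituting Pro at 1-based
# positions 25, 28 and 29, mirroring the rodent prolines described above.
HUMAN = "KCNTATCATQRLANFLVHSSNNFGAILSSTNVGSNTY"

def substitute(seq: str, changes: dict[int, str]) -> str:
    chars = list(seq)
    for pos, aa in changes.items():   # pos is 1-based
        chars[pos - 1] = aa
    return "".join(chars)

pramlintide = substitute(HUMAN, {25: "P", 28: "P", 29: "P"})
assert pramlintide == "KCNTATCATQRLANFLVHSSNNFGPILPPTNVGSNTY"
assert len(pramlintide) == 37
```

Like the parent peptide, the analog keeps the Cys2–Cys7 disulfide and the amidated C-terminus required for activity.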
See also
carboxypeptidase E
Pancreatic islets
peptidylglycine alpha-amidating monooxygenase (PAM)
Pramlintide
proprotein convertase 1/3 (PC1/3)
proprotein convertase 2 (PC2)
Type II diabetes
Amylin
https://en.wikipedia.org/wiki/Ion%20exchange
Ion exchange is a reversible interchange of one species of ion present in an insoluble solid with another of like charge present in a solution surrounding the solid. Ion exchange is used in softening or demineralizing of water, purification of chemicals, and separation of substances. Ion exchange usually describes a process of purification of aqueous solutions using solid polymeric ion-exchange resin. More precisely, the term encompasses a large variety of processes where ions are exchanged between two electrolytes. Aside from its use to purify drinking water, the technique is widely applied for purification and separation of a variety of industrially and medicinally important chemicals. Although the term usually refers to applications of synthetic (human-made) resins, it can include many other materials such as soil. Typical ion exchangers are ion-exchange resins (functionalized porous or gel polymer), zeolites, montmorillonite, clay, and soil humus. Ion exchangers are either cation exchangers, which exchange positively charged ions (cations), or anion exchangers, which exchange negatively charged ions (anions). There are also amphoteric exchangers that are able to exchange both cations and anions simultaneously. However, the simultaneous exchange of cations and anions is often performed instead in mixed beds, which contain a mixture of anion- and cation-exchange resins, or by passing the solution through several different ion-exchange materials. Ion exchangers can have binding preferences for certain ions or classes of ions, depending on the physical properties and chemical structure of both the ion exchanger and ion. This can be dependent on the size, charge, or structure of the ions. Common examples of ions that can bind to ion exchangers are: H+ (proton) and OH− (hydroxide). Singly charged monatomic (i.e., monovalent) ions like Na+, K+, and Cl−. Doubly charged monatomic (i.e., divalent) ions like Ca2+ and Mg2+. Polyatomic inorganic ions like SO42− and PO43−. 
Organic bases, usually molecules containing the amine functional group −NR2H+. Organic acids, often molecules containing −COO− (carboxylic acid) functional groups. Biomolecules that can be ionized: amino acids, peptides, proteins, etc. Along with absorption and adsorption, ion exchange is a form of sorption. Ion exchange is a reversible process, and the ion exchanger can be regenerated or loaded with desirable ions by washing with an excess of these ions.

Types
Cation exchange: CM (carboxymethyl group, weak cation exchange); SP (sulphopropyl group, strong cation exchange).
Anion exchange: DEAE-Sepharose; QFF.

Ion exchange resins
Ion exchange resins are the physical medium that facilitates ion exchange reactions. The resin is composed of cross-linked organic polymers, typically a polystyrene matrix, and functional groups where the ion exchange process takes place.
Cation exchange resins:
Strong acid cation (SAC) resins: composed of a polystyrene matrix with a sulphonate (SO3−) functional group. Used in softening or demineralization processes.
Weak acid cation (WAC) resins: composed of an acrylic polymer and carboxylic acid functional groups. Used to selectively remove cations associated with alkalinity.
Anion exchange resins:
Strong base anion (SBA) resins: Type 1 SBA resins have the greatest affinity for weak acids and are commonly used during water demineralization; Type 2 SBA resins have lower chemical stability than Type 1 but better regeneration efficiency.
Weak base anion (WBA) resins: act as acid absorbers, capable of sorbing strong acids with a high capacity, and are readily regenerated with caustic.
Chelating resins: used to exchange heavy metals from alkaline earth and alkali metal solutions.
Adsorbents: used for organic compound removal. 
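The monovalent/divalent distinction above matters because ion exchange is bookkeeping in charge equivalents, not mass: removing one Ca2+ releases two Na+. A hedged sketch of the sodium added during softening (the 200 mg/L hardness figure is illustrative, not from this article):

```python
# Charge-equivalent bookkeeping for softening: each meq of Ca2+/Mg2+ removed
# releases one meq of Na+. The hardness value used is illustrative.
CACO3_EQ_WT = 100.09 / 2   # g per equivalent (molar mass of CaCO3 / charge of 2)
NA_EQ_WT = 22.99           # g per equivalent for Na+

def sodium_released(hardness_mg_per_l_as_caco3: float) -> float:
    """mg/L of Na+ added to the water when all hardness is exchanged."""
    meq_per_l = hardness_mg_per_l_as_caco3 / CACO3_EQ_WT
    return meq_per_l * NA_EQ_WT

na = sodium_released(200.0)   # moderately hard water, illustrative
print(round(na, 1))           # ~92 mg/L Na+
```

Expressing hardness "as CaCO3" makes the conversion a single division by the 50 g/eq equivalent weight, which is why water analyses use that convention.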
Applications Ion exchange is widely used in the food and beverage industry, hydrometallurgy, metals finishing, chemical, petrochemical, pharmaceutical technology, sugar and sweetener production, ground- and potable-water treatment, nuclear, softening, industrial water treatment, semiconductor, power, and many other industries. A typical application is the preparation of high-purity water for the power, electronics and nuclear industries: polymeric or inorganic insoluble ion exchangers are widely used for water softening, water purification, water decontamination, and similar tasks. Ion exchange is a method widely used in household filters to produce soft water for the benefit of laundry detergents, soaps, and water heaters. This is accomplished by exchanging divalent cations (such as calcium Ca2+ and magnesium Mg2+) with highly soluble monovalent cations (e.g., Na+ or H+) (see water softening). Another application for ion exchange in domestic water treatment is the removal of nitrate and natural organic matter. In domestic filtration systems, ion exchange is one of the alternatives for water softening in households, along with reverse osmosis (RO) membranes. Compared to RO membranes, ion exchange requires repetitive regeneration when the inlet water is hard (has high mineral content). Ion-exchange chromatography is another major application: a chromatographic method widely used for chemical analysis and separation of ions, for example in biochemistry to separate charged molecules such as proteins. An important related application is the extraction and purification of biologically produced substances such as proteins, amino acids, and DNA/RNA. Ion-exchange processes are used to separate and purify metals, including separating uranium from plutonium and the other actinides, including thorium, neptunium, and americium. 
This process is also used to separate the lanthanides, such as lanthanum, cerium, neodymium, praseodymium, europium, and ytterbium, from each other. The separation of neodymium and praseodymium was particularly difficult; the two were formerly thought to be a single element, didymium, which is in fact a mixture of the two. There are two series of rare-earth metals, the lanthanides and the actinides, and the members of each series have very similar chemical and physical properties. Using methods developed by Frank Spedding in the 1940s, ion-exchange processes were formerly the only practical way to separate them in large quantities, until the development of the solvent-extraction techniques that can be scaled up enormously. A very important case of ion exchange is the plutonium-uranium extraction process (PUREX), which is used to separate the plutonium (mainly ) and the uranium (in that case known as reprocessed uranium) contained in spent fuel from americium, curium, neptunium (the minor actinides), and the fission products that come from nuclear reactors. Thus the waste products can be separated out for disposal. Next, the plutonium and uranium are available for making nuclear-energy materials, such as new reactor fuel (MOX fuel) and (plutonium-based) nuclear weapons. Historically, some fission products such as strontium-90 or caesium-137 were likewise separated for use as radionuclides employed in industry or medicine. The ion-exchange process is also used to separate other sets of very similar chemical elements, such as zirconium and hafnium, a separation that is also very important for the nuclear industry: zirconium is practically transparent to free neutrons and is used in building nuclear reactors, but hafnium is a very strong absorber of neutrons and is used in reactor control rods. Thus, ion exchange is used in nuclear reprocessing and the treatment of radioactive waste. 
Ion-exchange resins in the form of thin membranes are also used in chloralkali process, fuel cells, and vanadium redox batteries. Ion exchange can also be used to remove hardness from water by exchanging calcium and magnesium ions for sodium ions in an ion-exchange column. Liquid-phase (aqueous) ion-exchange desalination has been demonstrated. In this technique anions and cations in salt water are exchanged for carbonate anions and calcium cations respectively using electrophoresis. Calcium and carbonate ions then react to form calcium carbonate, which then precipitates, leaving behind fresh water. The desalination occurs at ambient temperature and pressure and requires no membranes or solid ion exchangers. The theoretical energy efficiency of this method is on par with electrodialysis and reverse osmosis. Other applications In soil science, cation-exchange capacity is the ion-exchange capacity of soil for positively charged ions. Soils can be considered as natural weak cation exchangers. In pollution remediation and geotechnical engineering, ion-exchange capacity determines the swelling capacity of swelling or expansive clay such as montmorillonite, which can be used to "capture" pollutants and charged ions. In planar waveguide manufacturing, ion exchange is used to create the guiding layer of higher index of refraction. Dealkalization, removal of alkali ions from a glass surface. Chemically strengthened glass, produced by exchanging K+ for Na+ in soda glass surfaces using KNO3 melts. Advantages and limitations Advantages Selective removal: Ion exchange resins can be designed to selectively remove specific ions from water. High efficiency: Ion exchange processes can achieve high removal efficiencies for targeted ions. Regenerability: Ion exchange resins can be regenerated multiple times by flushing them with a regenerating solution, extending their lifespan and reducing operational costs. 
Versatility: Ion exchange can be applied to various water treatment applications. Consistent performance: Ion exchange systems offer consistent and predictable performance, providing reliable water treatment over time. Scalability: Ion exchange systems can be easily scaled up or down to meet different treatment capacities and requirements. Limitations Removal limitations: If target ions are present in complex mixtures or at low concentrations, additional pre-treatment or post-treatment may be required. Regeneration requirements: Regeneration of ion exchange resins requires the use of chemicals and generates wastewater containing concentrated contaminants, which may require appropriate handling and disposal measures. Limited capacity: Ion exchange resins have finite capacities for adsorbing ions, and once saturated, they must be regenerated or replaced, which can limit their effectiveness in treating high-concentration or high-volume streams. Complexity: Ion exchange systems can be complex to design, operate, and maintain, requiring specialized knowledge and expertise. Waste water produced by resin regeneration Most ion-exchange systems use columns of ion-exchange resin that are operated on a cyclic basis. During the filtration process, water flows through the resin column until the resin is considered exhausted. That happens only when water leaving the column contains more than the maximal desired concentration of the ions being removed. Resin is then regenerated by sequentially backwashing the resin bed to remove accumulated suspended solids, flushing removed ions from the resin with a concentrated solution of replacement ions, and rinsing the flushing solution from the resin. Production of backwash, flushing, and rinsing wastewater during regeneration of ion-exchange media limits the usefulness of ion exchange for wastewater treatment. Water softeners are usually regenerated with brine containing 10% sodium chloride. 
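The cyclic service-then-regenerate operation described above is usually sized in charge equivalents. A rough sketch of how much water a bed treats before exhaustion (the resin capacity and hardness values are illustrative assumptions, not figures from this article):

```python
# Rough service-cycle estimate for a softener bed. The capacity and hardness
# values below are illustrative assumptions, not data from the article.
RESIN_CAPACITY_EQ_PER_L = 2.0      # typical strong-acid cation resin, assumed
CACO3_EQ_WT = 100.09 / 2           # g per equivalent of hardness as CaCO3

def litres_treated(bed_volume_l: float, hardness_mg_per_l: float) -> float:
    """Water volume treated before the bed is exhausted (ideal, no leakage)."""
    total_eq = bed_volume_l * RESIN_CAPACITY_EQ_PER_L
    eq_per_litre = (hardness_mg_per_l / 1000) / CACO3_EQ_WT  # g/L -> eq/L
    return total_eq / eq_per_litre

v = litres_treated(50.0, 200.0)    # 50 L bed, moderately hard water
print(round(v))                    # ~25,000 L per service run
```

In practice runs are ended earlier, at a hardness-leakage breakthrough rather than full saturation, so this ideal figure is an upper bound.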
Aside from the soluble chloride salts of divalent cations removed from the softened water, softener regeneration wastewater contains the unused 50–70% of the sodium chloride regeneration flushing brine required to reverse ion-exchange resin equilibria. Deionizing resin regeneration with sulfuric acid and sodium hydroxide is approximately 20–40% efficient. Neutralized deionizer regeneration wastewater contains all of the removed ions plus 2.5–5 times their equivalent concentration as sodium sulfate.

See also
Alkali anion-exchange membrane
Ion
Ion chromatography
Ion-exchange membranes
Ion-exchange resin
Desalination
Reverse osmosis
Ion exchange
https://en.wikipedia.org/wiki/Meteor%20%28satellite%29
The Meteor spacecraft are weather observation satellites launched by the Soviet Union and Russia since the Cold War. The Meteor satellite series was initially developed during the 1960s. The Meteor satellites were designed to monitor atmospheric and sea-surface temperatures, humidity, radiation, sea ice conditions, snow-cover, and clouds. Between 1964 and 1969, a total of eleven Soviet Union Meteor satellites were launched. Satellites Unlike the United States, which has separate civilian and military weather satellites, the Soviet Union used a single weather satellite type for both purposes. Meteor Prototype Meteor Prototype launches Meteor-1 Meteor-1 was a series of fully operational Russian meteorological satellites launched from the Plesetsk site. The satellites were placed in a near-circular, near-polar prograde orbit to provide near-global observations of the earth's weather systems, cloud cover, ice and snow fields, and reflected and emitted radiation from the dayside and nightside of the earth-atmosphere system for operational use by the Soviet Hydrometeorological Service. 31 satellites were launched between 1969 and 1981. Meteor-1 launches Meteor-1-25, also called "Meteor-Priroda-2", launched on 15 May 1976 by the USSR out of Plesetsk on a Vostok-2M. It was a meteorological satellite that provided global observations of the earth's weather systems, cloud cover, ice and snow fields, vertical profiles of temperature and moisture, and reflected and emitted radiation from the dayside and nightside of the earth-atmosphere system for operational use by the Soviet Hydrometeorological Service. It carried an East German-designed experimental infrared Fourier spectrometer for on-orbit testing of the new instrument for weather observation. The satellite ceased operations three years later and is now a derelict spacecraft. Meteor-2 The Meteor-2 series, based on the Meteor-1, was the second generation of Soviet meteorological satellites. 
They were launched into orbit at first by the Vostok-2M launch vehicle until that was replaced by the Tsyklon-3 launch vehicle in the early 1980s. Between 1975 and 1993, 21 Meteor-2s were launched. They were flown in non-sun-synchronous polar orbits with altitudes between 850 and 950 km and inclinations of 81-82º. They weighed about 1,300 kg and had two solar arrays. The instruments consisted of three television-type (frame technique) VIS and IR scanners, a five-channel scanning radiometer and a radiometer (RMK-2) for measuring radiation flux densities in the near-Earth space. In addition to its regular payload, Meteor-2-21 carried a unique Fizeau Retro Reflector Array (RRA) for Satellite Laser Ranging applications. Several of the satellites have begun to break up and create debris. #16 broke up in 1998 after a propulsion failure. #18 broke up the following year for unknown reasons. #4 broke up in March 2004. #17 broke up in June 2005. Meteor-2-2 Meteor 2-2 launched on 6 January 1977 by the USSR out of Plesetsk on a Vostok 2-M with 1st Generation Upper Stage. It was an earth science satellite that performed cloud observation and IR temperature/humidity sounding. It ceased operations on 6 July 1978. Since then, the satellite has broken up into several pieces of debris. Meteor-2-5 Meteor 2-5 launched on 31 October 1979 by the USSR out of Plesetsk on a Vostok 2-M with 1st Generation Upper Stage. It has undergone several breakup events, the first before January 2005 and the last as recently as 2013 or 2014, resulting in 83 known pieces of which 60 were still on-orbit as of 2019. Meteor-2-6 Meteor 2-6 launched on 9 September 1980 by the USSR out of Plesetsk on a Vostok 2-M with 1st Generation Upper Stage. It was an Earth Science/Weather satellite that gathered meteorological information and data on penetrating radiation fluxes in circumterrestrial space. It has since broken apart into multiple pieces of space debris. 
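The 850–950 km altitude band quoted above translates into an orbital period of a little over 100 minutes. A quick check via Kepler's third law (the Earth constants used are standard values, not from this article):

```python
# Orbital period for the Meteor-2 altitude band (850-950 km), circular orbit,
# via Kepler's third law: T = 2*pi*sqrt(a^3 / mu).
import math

MU_EARTH = 398_600.4418   # km^3/s^2, standard gravitational parameter of Earth
R_EARTH = 6_371.0         # km, mean Earth radius

def period_minutes(altitude_km: float) -> float:
    a = R_EARTH + altitude_km   # semi-major axis of a circular orbit
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

lo, hi = period_minutes(850.0), period_minutes(950.0)
print(round(lo, 1), round(hi, 1))   # roughly 101.8 and 103.9 minutes
```

About 14 revolutions per day, consistent with the near-global daily coverage described for the series.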
Meteor 2-7 Meteor 2-7 launched on May 14, 1981, by the USSR out of Plesetsk on a Vostok 2-M with 1st Generation Upper Stage. It weighed 2,750 kg and contained the usual suite of communication and orbit control equipment powered by large solar arrays. Its mission was cloud observation and IR temperature/humidity sounding, using a Radiation Measurement Complex (RMk-2), Infrared Sounding Radiometer, Television Camera and Infrared Instrument. It ceased operations on 14 November 1982. In March 2004, it experienced an event, or a series of events, that caused it to break into 8 pieces. The cause of this break-up is unknown. Meteor 2-8 Meteor 2-8 launched on 25 March 1982 by the USSR out of Plesetsk on a Tsyklon-3. It weighed 1,500 kg and carried scientific and meteorological instruments and service systems. Its mission was cloud observation and IR temperature/humidity sounding, using a Radiation Measurement Complex (RMk-2), Infrared Sounding Radiometer, Television Camera and Infrared Instrument. It ceased operations on 25 September 1983. On 29 May 1999, it experienced a break-up event that caused it to break into 53 pieces. The cause of this break-up is unknown. Meteor-2-21 Meteor-2-21/Fizeau is the twenty-first and last in the Meteor-2 series of Russian meteorological satellites. ILRS Mission Support Status: Satellite Laser Ranging (SLR) tracking support of this satellite was discontinued in October 1998. What makes Meteor-2-21 distinctive from the other meteorological satellites is its unique retroreflector array. The name Fizeau is derived from the French physicist Armand Fizeau, who in 1851 conducted an experiment which tested for the aether convection coefficient. SLR tracking of this satellite was used for precise orbit determination and the Fizeau experiment. 
The Fizeau experiment tests the theory of special relativity – that distant events that are simultaneous for one observer will not be simultaneous for another observer who is in motion relative to the first observer. Retroreflector Array (RRA) Characteristics: The retro-reflector array consists of three corner cubes in a linear array with the two outer corner cubes pointing at 45-degree angles relative to the central cube. The central cube is made of fused silica and has a two-lobe Far Field Diffraction Pattern (FFDP) providing nearly equal intensities for compensated and uncompensated velocity aberration. Both outer reflectors have aluminum coating on the reflecting surfaces and near-diffraction-limited FFDPs. One of the end reflectors is made of fused silica with an index of refraction of 1.46 and should provide partial compensation of the velocity aberration. The other end reflector is made of fused glass with an index of refraction of 1.62 and should provide a perfect compensation of the velocity aberration. SLR full-rate data from MOBLAS 4, MOBLAS 7, and Maidanak seem to confirm the presence of the compensating influence of the Fizeau effect. Resur-1, another Russian satellite launched in 1994, has two corner-cube reflectors with near-diffraction-limited FFDPs, which were specifically designed for the continuation of this experiment. WESTPAC, a future SLR satellite, will verify indisputably the existence or otherwise of the Fizeau effect. Instrumentation: Meteor-2-21/Fizeau had the following instrumentation on board: Scanning telephotometer Scanning infrared radiometers Radiation measurement complex Retroreflector array Meteor-Priroda Meteor-Priroda is a series of experimental satellites. Internal documents of the Russian space agency show that the name was originally used only to describe Meteor 1-31, but was later extended to all experimental satellites. 
It is commonly perceived to include only 6 satellites: Meteor 1-18, Meteor 1-25, Meteor 1-28, Meteor 1-29, Meteor 1-30, and Meteor 1-31. Evidence suggests that Kosmos 1484 should also be included. The Meteor-Priroda series is considered to be the set of prototypes for the Resurs O1 satellites. Meteor-Priroda launches Meteor-3 The Meteor-3 series was launched 7 times between 1984 and 1994 after a difficult and protracted development program that began in 1972. All the satellites were launched on Tsyklon-3 rockets. These satellites provide weather information including data on clouds, ice and snow cover, atmospheric radiation and humidity. The Meteor-3 class of satellites orbits at a higher altitude than the Meteor-2 class, thus providing more complete coverage of the Earth's surface. The Meteor-3 has the same payload as the Meteor-2 but also includes an advanced scanning radiometer with better spectral and spatial resolution and a spectrometer for determining total ozone content. Meteorological data is transmitted to four primary sites in the former Soviet Union in conjunction with about 80 other smaller sites. Meteor-3 launches Meteor-3-5 Meteor-3-5, launched in 1991, is in a slightly higher orbit than Meteor-2-21, and operated until 1994. It transmitted on 137.300 MHz. Mechanically, it is similar to Meteor-2-21. Which satellite was in operation depended on the sun angles and consequently the seasons: Meteor-3-5 was usually the (Northern Hemisphere) "summer" satellite, while 2-21 was in operation for approximately the half-year centered on winter. The satellite carried the second Total Ozone Mapping Spectrometer (TOMS), the first and last American-built instrument to fly on a Soviet spacecraft. Launched from the Plesetsk, Russia, facility near the White Sea on 15 August 1991, Meteor-3 TOMS had a unique orbit that presented special problems for processing data. Meteor-3 TOMS began returning data in August 1991 and stopped in December 1994. 
Meteor-3-6/PRARE The Meteor-3-6/PRARE satellite is the sixth in the Russian Meteor-3 series of meteorological satellites, launched in 1994. ILRS Mission Support Status: Satellite laser ranging and PRARE data were used for precision orbit determination and intercomparison of the two techniques. ILRS tracking support of this satellite was discontinued on 11 November 1995. Instrumentation: Meteor-3-6 has the following instrumentation on board: Scanning TV-sensor Visible light and infrared radiometers Scanning infrared radiometer Ozone Mapper Precise Range and Range-Rate Equipment (PRARE) Retroreflector array RetroReflector Array (RRA) Characteristics: The retro-reflector array is a box wing annulus with a diameter of 28 cm and has 24 corner cube reflectors. Meteor-3M The Meteor-3M series of satellites was to be an advanced series of polar orbiters with one 1.4 km resolution visible channel and a ten-channel radiometer with 3 km resolution. Initially four Meteor-3M satellites were planned; however, due to financial difficulties, only one was launched. Meteor-M The first Meteor-M satellite, Meteor-M No.1, was launched 17 September 2009 at 16:55:07 UTC from Baikonur by a Soyuz-2-1b/Fregat rocket. Its mission ended in 2014. The second satellite, Meteor-M No.2, was launched 8 July 2014 at 16:58:28 UTC from Baikonur by a Soyuz-2-1b/Fregat rocket. Its mission is scheduled to last 5 years. On 27 November 2017, Meteor-M No.2-1 was lost after a programming error during launch; also lost were 18 smaller satellites from other nations. On 5 July 2019, the replacement satellite for the failed Meteor-M No.2-1 satellite, the Meteor-M No.2-2 (also known as Meteor M2-2), was launched from Vostochny Cosmodrome. On 18 December 2019, image downlink from Meteor-M No.2-2 ceased. Tracking revealed the craft had suffered degradation in orbit with a decrease in perigee. NORAD was not able to identify any space object involved in a collision.
Roscosmos later confirmed that the satellite had suffered a decompression of its thermal control system following what is presumed to be a micrometeoroid impact. Following the incident, the spacecraft was automatically switched into a low-power mode and ground operators worked to restore the satellite's orbit and orientation. By 25 December 2019, the satellite had resumed controlled flight, but the future of its mission remains uncertain. More Meteor-M satellites are currently being developed. Meteor-M No.2-3 was successfully launched on 27 June 2023, with three more satellites in various stages of development. Meteor-M No.2-4 was successfully launched on 29 February 2024 at 05:43 UTC, while Meteor-M No.2-5 is scheduled to be launched later in 2024, and No.2-6 in 2025. Meteor-M launches See also Elektro–L, Russian geosynchronous meteorological satellites References External links VNIIEM description of Meteor-M Sputnik server eoPortal Meteor overview NOAA description of Meteor NASA - Goddard Space Flight Center Scientific Visualization Studio NASA International Laser Ranging Service (ILRS) NASA ILRS Meteor 3-6 NASA ILRS Meteor 2-21/Fizeau NASA ILRS Meteor 3M Weather satellites of the Soviet Union Earth observation satellites of the Soviet Union Satellite series Derelict satellites orbiting Earth Spacecraft that broke apart in space
Meteor (satellite)
[ "Technology" ]
2,811
[ "Space debris", "Spacecraft that broke apart in space" ]
1,615,196
https://en.wikipedia.org/wiki/Energy%20level%20splitting
In quantum physics, energy level splitting or a split in an energy level of a quantum system occurs when a perturbation changes the system. The perturbation changes the corresponding Hamiltonian and the outcome is a change in eigenvalues; several distinct energy levels emerge in place of the former degenerate (multi-state) level. This may occur because of external fields, quantum tunnelling between states, or other effects. The term is most commonly used in reference to the electron configuration in atoms or molecules. The simplest case of level splitting is a quantum system with two states whose unperturbed Hamiltonian is a diagonal operator: Ĥ₀ = εI, where I is the 2×2 identity matrix. Eigenstates and eigenvalues (energy levels) of the perturbed Hamiltonian Ĥ = εI + (Δ/2)σ₃ will be: |↑⟩: the ε + Δ/2 level, and |↓⟩: the ε − Δ/2 level, so this degenerate ε eigenvalue splits in two whenever Δ ≠ 0. Though, if the perturbed Hamiltonian is not diagonal for this basis of quantum states {|↑⟩, |↓⟩}, then the Hamiltonian's eigenstates are linear combinations of these two states. For a physical implementation such as a charged spin-½ particle in an external magnetic field, the z-axis of the coordinate system is required to be collinear with the magnetic field to obtain a Hamiltonian in the form above (the σ₃ Pauli matrix corresponds to the z-axis). These basis states, referred to as spin-up and spin-down, are hence eigenvectors of the perturbed Hamiltonian, so this level splitting is both easy to demonstrate mathematically and intuitively evident. But in cases where the choice of state basis is not determined by a coordinate system, and the perturbed Hamiltonian is not diagonal, a level splitting may appear counter-intuitive, as in examples from chemistry below. Examples In atomic physics: The Zeeman effect – the splitting of electronic levels in an atom because of an external magnetic field. The Stark effect – splitting because of an external electric field.
In physical chemistry: The Jahn–Teller effect – splitting of electronic levels in a molecule because breaking the symmetry lowers the energy when the degenerate orbitals are partially filled. Resonance (chemistry) leads to the creation of delocalized electron states. Nitrogen inversion leads to level splitting in ammonia (NH3), which is used in an ammonia maser. References Quantum mechanics
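The two-state splitting described above is easy to check numerically. A minimal sketch with NumPy (the values ε = 1 and Δ = 0.2 are arbitrary illustrative choices):

```python
import numpy as np

eps, delta = 1.0, 0.2                         # arbitrary example values
I2 = np.eye(2)
sigma3 = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli z matrix

H0 = eps * I2                        # unperturbed: one degenerate level eps
H = eps * I2 + 0.5 * delta * sigma3  # perturbed Hamiltonian

print(np.linalg.eigvalsh(H0))  # both eigenvalues equal eps
print(np.linalg.eigvalsh(H))   # split into eps - delta/2 and eps + delta/2
```

With Δ = 0 the two eigenvalues coincide again, recovering the degenerate level.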
Energy level splitting
[ "Physics" ]
478
[ "Theoretical physics", "Quantum mechanics" ]
1,615,402
https://en.wikipedia.org/wiki/Sulfonic%20acid
In organic chemistry, sulfonic acid (or sulphonic acid) refers to a member of the class of organosulfur compounds with the general formula R−S(=O)2−OH, where R is an organic alkyl or aryl group and the −S(=O)2−OH group a sulfonyl hydroxide. As a substituent, it is known as a sulfo group. A sulfonic acid can be thought of as sulfuric acid with one hydroxyl group replaced by an organic substituent. The parent compound (with the organic substituent replaced by hydrogen) is the parent sulfonic acid, H−S(=O)2−OH, a tautomer of sulfurous acid, H2SO3. Salts or esters of sulfonic acids are called sulfonates. Preparation Aryl sulfonic acids are produced by the process of sulfonation. Usually the sulfonating agent is sulfur trioxide. A large-scale application of this method is the production of alkylbenzenesulfonic acids: RC6H5 + SO3 → RC6H4SO3H. In this reaction, sulfur trioxide is an electrophile and the arene is the nucleophile. The reaction is an example of electrophilic aromatic substitution. In a related process, carboxylic acids react with sulfur trioxide to give the sulfonic acids. Direct reaction of alkanes with sulfur trioxide is used for the conversion of methane to methanedisulfonic acid. Alkylsulfonic acids can be prepared by sulfoxidation, whereby alkanes are irradiated with a mixture of sulfur dioxide and oxygen. This reaction is employed industrially to produce alkyl sulfonic acids, which are used as surfactants. From terminal alkenes, alkane sulfonic acids can be obtained by the addition of bisulfite. Bisulfite can also be alkylated by alkyl halides. Sulfonic acids can be prepared by oxidation of thiols: RSH + 3/2 O2 → RSO3H. This pathway is the basis of the biosynthesis of taurine. Hydrolysis routes Many sulfonic acids are prepared by hydrolysis of sulfonyl halides and related precursors. Thus, perfluorooctanesulfonic acid is prepared by hydrolysis of the sulfonyl fluoride, which in turn is generated by the electrofluorination of octanesulfonic acid. Similarly the sulfonyl chloride derived from polyethylene is hydrolyzed to the sulfonic acid.
These sulfonyl chlorides are produced by free-radical reactions of chlorine, sulfur dioxide, and the hydrocarbons using the Reed reaction. Vinylsulfonic acid is derived by hydrolysis of carbyl sulfate, which in turn is obtained by the addition of sulfur trioxide to ethylene. Properties Sulfonic acids are strong acids. They are commonly cited as being around a million times stronger than the corresponding carboxylic acid. For example, p-toluenesulfonic acid and methanesulfonic acid have pKa values of −2.8 and −1.9, respectively, while those of benzoic acid and acetic acid are 4.20 and 4.76, respectively. However, as a consequence of their strong acidity, their pKa values cannot be measured directly, and values commonly quoted should be regarded as indirect estimates with significant uncertainties. For instance, various sources have reported the pKa of methanesulfonic acid to be as high as −0.6 or as low as −6.5. Sulfonic acids are known to react with solid sodium chloride (salt) to form the sodium sulfonate and hydrogen chloride. This property implies an acidity within two or three orders of magnitude of that of HCl(g), whose pKa was recently accurately determined (pKa(aq) = −5.9). Because of their polarity, sulfonic acids tend to be crystalline solids or viscous, high-boiling liquids. They are also usually colourless and nonoxidizing, which makes them suitable for use as acid catalysts in organic reactions. Their polarity, in conjunction with their high acidity, renders short-chain sulfonic acids water-soluble, while longer-chain ones exhibit detergent-like properties. The structure of sulfonic acids is illustrated by the prototype, methanesulfonic acid. The sulfonic acid group, RSO2OH, features a tetrahedral sulfur centre, meaning that sulfur is at the center of four atoms: three oxygens and one carbon. The overall geometry of the sulfur centre is reminiscent of the shape of sulfuric acid.
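The "million times stronger" figure follows directly from the pKa values quoted above, since Ka = 10^(−pKa) and a difference of about 6–7 pKa units corresponds to a millionfold ratio of acid dissociation constants. A quick check:

```python
# pKa values as quoted in the text above
pka = {
    "acetic acid": 4.76,
    "benzoic acid": 4.20,
    "methanesulfonic acid": -1.9,
    "p-toluenesulfonic acid": -2.8,
}

def ka_ratio(stronger, weaker):
    # Ka = 10**(-pKa), so the strength ratio is 10**(pKa_weaker - pKa_stronger)
    return 10 ** (pka[weaker] - pka[stronger])

# methanesulfonic vs acetic acid: a few million-fold difference in Ka
print(f"{ka_ratio('methanesulfonic acid', 'acetic acid'):.1e}")
```

Given the large uncertainties in the sulfonic acid pKa values noted above, the ratio is only an order-of-magnitude estimate.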
Applications Both alkyl and aryl sulfonic acids are known, but most large-scale applications are associated with the aromatic derivatives. Detergents and surfactants Detergents and surfactants are molecules that combine highly nonpolar and highly polar groups. Traditionally, soaps are the popular surfactants, being derived from fatty acids. Since the mid-20th century, the usage of sulfonic acids has surpassed that of soap in advanced societies. For example, an estimated 2 billion kilograms of alkylbenzenesulfonates are produced annually for diverse purposes. Lignin sulfonates, produced by sulfonation of lignin, are components of drilling fluids and additives in certain kinds of concrete. Dyes Many if not most of the anthraquinone dyes are produced or processed via sulfonation. Sulfonic acids tend to bind tightly to proteins and carbohydrates. Most "washable" dyes are sulfonic acids (or have the functional sulfonyl group in them) for this reason. p-Cresidinesulfonic acid is used to make food dyes. Acid catalysts Being strong acids, sulfonic acids are also used as catalysts. The simplest examples are methanesulfonic acid, CH3SO2OH, and p-toluenesulfonic acid, which are regularly used in organic chemistry as acids that are lipophilic (soluble in organic solvents). Polymeric sulfonic acids are also useful. Dowex resins are sulfonic acid derivatives of polystyrene and are used as catalysts and for ion exchange (water softening). Nafion, a fluorinated polymeric sulfonic acid, is a component of proton exchange membranes in fuel cells. Drugs Sulfa drugs, a class of antibacterials, are produced from sulfonic acids. Lignosulfonates In the sulfite process for paper-making, lignin is removed from the lignocellulose by treating wood chips with solutions of sulfite and bisulfite ions. These reagents cleave the bonds between the cellulose and lignin components and especially within the lignin itself.
The lignin is converted to lignosulfonates, useful ionomers, which are soluble and can be separated from the cellulose fibers. Reactions The reactivity of the sulfonic acid group is so extensive that it is difficult to summarize. Hydrolysis to phenols When treated with strong base, benzenesulfonic acid derivatives convert to phenols. In this case the sulfonate behaves as a pseudohalide leaving group. Hydrolytic desulfonation Arylsulfonic acids are susceptible to hydrolysis, the reverse of the sulfonation reaction: ArSO3H + H2O → ArH + H2SO4 Whereas benzenesulfonic acid hydrolyzes above 200 °C, many derivatives are easier to hydrolyze. Thus, heating aryl sulfonic acids in aqueous acid produces the parent arene. This reaction is employed in several scenarios. In some cases the sulfonic acid serves as a water-solubilizing protecting group, as illustrated by the purification of para-xylene via its sulfonic acid derivative. In the synthesis of 2,6-dichlorophenol, phenol is converted to its 4-sulfonic acid derivative, which then selectively chlorinates at the positions flanking the phenol. Hydrolysis releases the sulfonic acid group. Esterification Sulfonic acids can be converted to esters. This class of organic compounds has the general formula R−SO2−OR. Sulfonic esters such as methyl triflate are considered good alkylating agents in organic synthesis. Such sulfonate esters are often prepared by alcoholysis of the sulfonyl chlorides: RSO2Cl + R′OH → RSO2OR′ + HCl Halogenation Sulfonyl halide groups (R−SO2−X) are produced by chlorination of sulfonic acids using thionyl chloride. Sulfonyl fluorides can be produced by treating sulfonic acids with sulfur tetrafluoride. Displacement by hydroxide Although strong, the (aryl)C−SO3− bond can be broken by nucleophilic reagents. Of historic and continuing significance is the α-sulfonation of anthraquinone followed by displacement of the sulfonate group by other nucleophiles, which cannot be installed directly.
An early method for producing phenol involved the base hydrolysis of sodium benzenesulfonate, which can be generated readily from benzene. C6H5SO3Na + NaOH → C6H5OH + Na2SO3 The conditions for this reaction are harsh, however, requiring 'fused alkali' or molten sodium hydroxide at 350 °C for benzenesulfonic acid itself. Unlike the mechanism for the fused alkali hydrolysis of chlorobenzene, which proceeds through elimination-addition (benzyne mechanism), benzenesulfonic acid undergoes the analogous conversion by an SNAr mechanism, as revealed by 14C labeling, despite the lack of stabilizing substituents. Sulfonic acids with electron-withdrawing groups (e.g., with NO2 or CN substituents) undergo this transformation much more readily. o-Lithiation Arylsulfonic acids react with two equivalents of butyllithium to give the ortho-lithio derivatives, i.e. ortho-lithiation. These dilithio compounds are poised for reactions with many electrophiles. Notes References Functional groups
Sulfonic acid
[ "Chemistry" ]
2,145
[ "Functional groups", "Sulfonic acids" ]
1,615,479
https://en.wikipedia.org/wiki/Edge%20and%20vertex%20spaces
In the mathematical discipline of graph theory, the edge space and vertex space of an undirected graph are vector spaces defined in terms of the edge and vertex sets, respectively. These vector spaces make it possible to use techniques of linear algebra in studying the graph. Definition Let G := (V, E) be a finite undirected graph. The vertex space 𝒱(G) of G is the vector space over the finite field of two elements ℤ/2ℤ := {0, 1} of all functions V → ℤ/2ℤ. Every element of 𝒱(G) naturally corresponds to the subset of V which it assigns a 1 to; also every subset of V is uniquely represented in 𝒱(G) by its characteristic function. The edge space ℰ(G) is the ℤ/2ℤ-vector space freely generated by the edge set E. The dimension of the vertex space is thus the number of vertices of the graph, while the dimension of the edge space is the number of edges. These definitions can be made more explicit. For example, we can describe the edge space as follows: elements are subsets of E, that is, as a set ℰ(G) is the power set of E; vector addition is defined as the symmetric difference: X + Y := X △ Y for X, Y ⊆ E; scalar multiplication is defined by: 0 · X := ∅ and 1 · X := X for X ⊆ E. The singleton subsets of E form a basis for ℰ(G). One can also think of 𝒱(G) as the power set of V made into a vector space with similar vector addition and scalar multiplication as defined for ℰ(G). Properties The incidence matrix M for a graph G defines one possible linear transformation M : ℰ(G) → 𝒱(G) between the edge space and the vertex space of G. The incidence matrix of G, as a linear transformation, maps each edge to its two incident vertices. Let vw be the edge between v and w; then M(vw) = v + w. The cycle space and the cut space are subspaces of the edge space. References See also Cycle space Cut space Algebraic graph theory
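As a sketch of these definitions, edge-space arithmetic over the two-element field can be modelled directly with Python sets (the vertex and edge names here are illustrative):

```python
# Edge-space vectors over GF(2) are just sets of edges; vector addition is
# symmetric difference, and 0 and 1 are the only scalars.
def add(x, y):
    return x ^ y                      # symmetric difference

def scale(c, x):
    return x if c == 1 else frozenset()

def incidence(edge_vector):
    """The incidence map M: edge space -> vertex space, sending each
    edge to the mod-2 sum of its two endpoints."""
    v = frozenset()
    for a, b in edge_vector:
        v = v ^ frozenset([a]) ^ frozenset([b])
    return v

e1 = frozenset([("u", "v")])          # singleton basis vectors
e2 = frozenset([("v", "w")])
x = add(e1, e2)                       # the vector {uv, vw}

print(sorted(incidence(x)))           # ['u', 'w']: the shared vertex v cancels
print(add(x, x))                      # frozenset(): every vector is its own inverse
```

The last line illustrates why GF(2) coefficients are natural here: adding a vector to itself gives the zero vector (the empty set), so subtraction and addition coincide.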
Edge and vertex spaces
[ "Mathematics" ]
342
[ "Mathematical relations", "Graph theory", "Algebra", "Algebraic graph theory" ]
1,615,480
https://en.wikipedia.org/wiki/HD%20177830
HD 177830 is a 7th magnitude binary star system located approximately 205 light-years away in the constellation of Lyra. The primary star is slightly more massive than the Sun, but cooler, being a K-type star. Therefore, it is a subgiant, clearly more evolved than the Sun. In visual light it is four times brighter than the Sun, but because of its distance, about 205 light years, it is not visible to the unaided eye. With binoculars it should be easily visible. The primary star is known to have two extrasolar planets orbiting around it. Stellar system The secondary star is a red dwarf orbiting at a distance of 100 to 200 AU with a likely period of roughly 800 years. Planetary system On November 1, 1999, the discovery of a planet HD 177830 b was announced by the California and Carnegie Planet Search team using the very successful radial velocity method and an analysis of data released by the team performed by amateur astronomer Peter Jalowiczor, along with two other planets. This planet is nearly 50% more massive than Jupiter and takes 407 days to orbit the star in a nearly circular orbit. In 2000 a group of scientists proposed, based on preliminary Hipparcos astrometrical satellite data, that the orbital inclination of HD 177830 b is as little as 1.3°. If that were the case, the planet would have a much greater mass, making it a brown dwarf instead of a planet. However, it is very unlikely that the planet would have such an orbit. Furthermore, brown dwarfs with short orbits around solar-mass stars are exceedingly rare (the so-called "brown dwarf desert"), making the claim even more unlikely. On November 17, 2010, the discovery of a second planet HD 177830 c was announced along with four other planets. The planet has 50% the mass of Saturn and takes 111 days to orbit the star in a very eccentric orbit. This planet is in a near 4:1 resonance with the outer planet.
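The claim that the star is just below naked-eye visibility can be sanity-checked with the distance modulus m = M + 5 log10(d / 10 pc). The Sun's absolute visual magnitude of about 4.83 is an assumed reference value here, not stated in the article:

```python
import math

M_sun = 4.83                           # assumed absolute visual magnitude of the Sun
M_star = M_sun - 2.5 * math.log10(4)   # "four times brighter" in visual light
d_pc = 205 / 3.2616                    # 205 light-years converted to parsecs

m_app = M_star + 5 * math.log10(d_pc / 10)  # distance modulus
print(round(m_app, 1))                 # about 7.3, fainter than the ~6.5 naked-eye limit
```

The result agrees with the "7th magnitude" figure in the opening sentence, and sits comfortably within the reach of binoculars.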
See also List of exoplanets discovered before 2000 - HD 177830 b List of exoplanets discovered in 2010 - HD 177830 c References 177830 093746 Lyra K-type subgiants Planetary systems with two confirmed planets Durchmusterung objects Binary stars
HD 177830
[ "Astronomy" ]
460
[ "Lyra", "Constellations" ]
1,615,505
https://en.wikipedia.org/wiki/Time%20trial
In many racing sports, an athlete (or occasionally a team of athletes) will compete in a time trial (TT) against the clock to secure the fastest time. The format of a time trial can vary, but usually follows a format where each athlete or team sets off at a predetermined interval to set the fastest time on a course. Cycling In cycling, for example, a time trial can be a single track cycling event, or an individual or team time trial on the road, and either or both of the latter may form components of multi-day stage races. In contrast to other types of races, athletes race alone since they are sent out in intervals (interval starts), as opposed to a mass start. Time trialists will often seek marginal aerodynamic gains, as the races are often won or lost by a couple of seconds. Skiing In cross-country skiing and biathlon competitions, skiers are sent out in 30 to 60 second intervals. Rowing In rowing, time trial races, where the boats are sent out at 10 to 20 second intervals, are usually called "head races". A head race is a time-trial competition in the sport of rowing. Head races are typically held in the fall, winter and spring seasons. These events draw many athletes as well as observers. In this form of racing, rowers race against the clock where the crew or rower completing the course in the shortest time in their age, ability and boat-class category is deemed the winner. Motorsport In many forms of motorsport, a similar format is used in qualifying to determine the starting order for the main event, though multiple attempts to set the fastest time are often allowed. In rallying, the special stages are run in a time-trial format. Other forms of time trials in motorsport include hill-climbing and qualifying. A similar race against the clock or time attack is often part of racing video games. Time attack is a type of motorsport in which the racers compete for the best lap time. Each vehicle is timed through numerous circuits of the track.
The racers make a preliminary circuit, then run the timed laps, and then finish with a cool-down lap. Time attack and time trial events differ by competition format and rules: time attack has a limited number of laps, while time trial has open sessions. Unlike other timed motorsport disciplines such as sprinting and hillclimbing, the car is required to start under full rolling-start conditions following a warm-up lap, in which drivers accelerate out as fast as possible to determine how fast they enter their timed lap. Commonly, as the cars are modified road-going cars, they are required to have tires authorized for road use. Time attack events began in Japan in the mid-1960s. They have since spread around the world. In the United States, the Super Lap Battle has been held at Buttonwillow Raceway Park since 2004. In February 2019 a new event called Superlap Battle USA was run at the Circuit of the Americas in Austin, Texas. The outright winner was Cole Powelson in the Lyfe Nissan GTR. An international event known as World Time Attack Challenge has been held at Sydney Motorsport Park, Australia since 2010, attracting the fastest time attack teams from around the globe to compete. Europe hosts several Time Attack championships, with Dutch Time Attack one of the first, starting in June 2008. To date this championship has run four to six races per year on CM.com Circuit Zandvoort, TT-circuit Assen, Nürburgring GP-Strecke (together with German Time Attack Masters) and occasional additional racetracks in Germany, Belgium or France. Dutch Time Attack is set up to host entrants from the very entry level up to full-blown race cars with drivers to match, divided over five classes. United States The National Auto Sport Association Time Trial (NASA TT) series is a national auto competition program, utilizing regional series based on a time trial style format, with rules that establish car classifications to provide a contest of driver skill.
NASA TT is designed to bridge the gap between NASA HPDE (High Performance Driving Events) and wheel-to-wheel racing. NASA TT provides a venue for spirited on-track competition with a high degree of both safety and convenience. NASA TT competition will take place during NASA HPDE-4 sessions or in separate TT run groups, depending on the event schedule and number of participants. In addition to having a set of National NASA TT Rules, the rules, safety guidelines, and driving requirements of the HPDE-4 program apply to NASA TT. These rules can be found in the NASA CCR (Club Codes and Regs). Other events such as Gridlife offer a time attack event taking place in various locations across North America. The competition is divided into various groups based on car specification. The level varies from everyday driven vehicles to non-road-legal race cars. Each class also has its own set of rules and regulations on car specifications; the higher the class, the fewer regulations one is faced with. The top 12 competitors regardless of class will participate in the Final grid event, in which the driver is allowed one warm-up lap, one hot lap, and one cool-down lap. The hot lap, however, will not count towards the overall trackbattle event. There is also a seasonal championship, with every class having a champion based on points earned throughout the season. Germany In Germany, the German Timeattack Masters is a time attack championship held since 2013. It was initially limited to Japanese cars only and opened up to vehicles of all makes in 2016. From 2013 to 2017 the championship consisted of four events; in 2018 that number increased to five for the overall championship. Events are held on various racing tracks, most of them located in Germany, like the Nürburgring Grand Prix course, the Lausitzring and the Hockenheimring. Additionally, for years, the TT Circuit Assen has been used in cooperation with the Dutch Time Attack Masters.
Formerly, races also took place on the German course Oschersleben. Each event consists of Warm Up, Qualifying and the Hotlap finals, with Qualifying rank and Hotlap rank counting for the overall championship. The Hotlap is only driven by the five fastest starters from the Qualifying. Groups are split according to car specifications, mainly regarding severity of modifications and aerodynamics. With more powerful classes, safety regulations are also tighter. Classes range from the Club-class, being close-to-production, via the Pro-class, with more allowed aerodynamics and allowed engine swaps, to the Extreme-class, in which everything that is not explicitly forbidden is allowed. While in lower classes a distinction between 2WD and 4WD is made, this is neglected in the Extreme-class. The series is independent and not connected to any larger organization like the DMSB. Video games Many computer and video games include a time trial (also known as time attack) mode, in which the main goal is to complete levels, or in some cases the entire game, as quickly as possible. This mode prioritizes completion time ahead of other measures of success such as high scores. In cases in which a game does not have a dedicated time attack or trial mode, a fast completion is frequently known as a speedrun. Usually the best results achieved in a time attack mode are stored in long-term memory by the game (on a hard disk or non-volatile memory), so they can be shared with friends or improved upon at a later date. Racing games often feature "ghost cars" which are saved when the player sets a record time. In subsequent races, the ghost car follows the path the player took when setting that record, allowing them to clearly gauge how they are performing against the previous achievement. Saved ghost cars can often be shared with other players. The inclusion of a time attack mode can often be an effective way of adding replay value to a game.
Racing games may also include ghost cars recorded by the development staff; attempting to beat their times can provide a final challenge to players who have mastered the rest of the game. Often the game provides other incentives to use the time attack feature; GoldenEye 007 and Tomb Raider Anniversary encouraged players to revisit levels more than once by offering unlockable cheat options as a reward for completing them within target times. Sometimes, the settings of a time attack mode are "locked" in order to standardize competition between players. For example, Soul Calibur features a time attack mode automatically set to two rounds for a win, the normal difficulty setting and a default time limit; but it also features an alternative Arcade mode, which allows any option settings to be used and saves record times separately. Both speedrunning and time attacking have extensive online communities dedicated to achieving the fastest times possible, one popular website being Speedrun.com. Amongst the community there are many speedrunning competitions, some held annually, such as Games Done Quick (GDQ). See also Bushy Park Time Trial Time trialist Isle of Man TT Track time trial Cycling Time Trials References Racing Video game terminology Esports terminology Motorsport by type
Time trial
[ "Technology" ]
1,846
[ "Computing terminology", "Video game terminology" ]
1,615,585
https://en.wikipedia.org/wiki/HD%20210277
HD 210277 is a single star in the equatorial constellation of Aquarius. It has an apparent visual magnitude of 6.54, which makes it a challenge to view with the naked eye, but it is easily visible in binoculars. The star is located at a distance of 69.6 light years from the Sun based on parallax, but is drifting closer with a radial velocity of −20.9 km/s. An early classification of this star was a G0 dwarf, and some sources still use this value. More modern classification surveys list it as G8V, matching a late G-type main-sequence star. It is older than the Sun, with a very low level of chromospheric activity, and is spinning with a projected rotational velocity of 1.9 km/s. The star has a slightly higher mass and larger radius than the Sun. Planetary system In 1999 it was announced that a dust disk orbiting HD 210277, similar to that produced by the Kuiper Belt, had been imaged, lying between 30 and 62 AU from the star. However, observations with the Spitzer Space Telescope failed to detect any infrared excess at 70 micrometres or at 24 micrometres wavelengths. Subsequent measurements by the Herschel Space Observatory did detect an excess at 100 and 160 micrometres. A model fit to the emission matches a disk orbiting at 160 AU with a mean temperature of 22 K. The disk signal is fairly strong, with S/N equal to 6.6. The only known exoplanet was discovered using 34 radial velocity measurements taken from 1996 to 1998 at the W. M. Keck Observatory. It has a minimum mass greater than Jupiter's and orbits the star in 442 days. The high eccentricity (ovalness) of the exoplanet's orbit means it is unlikely that there is a companion planet co-orbiting the star at a trojan point. See also List of exoplanets discovered before 2000 - HD 210277 b References External links Image HD 210277 jumk.de G-type main-sequence stars Planetary systems with one confirmed planet Circumstellar disks Aquarius (constellation) BD-08 5818 9769 210277 109378 J22092985-0732548
HD 210277
[ "Astronomy" ]
468
[ "Constellations", "Aquarius (constellation)" ]
14,732,052
https://en.wikipedia.org/wiki/Mark%20Pollicott
Mark Pollicott (born 24 September 1959) is a British mathematician known for his contributions to ergodic theory and dynamical systems. He has a particular interest in applications to other areas of mathematics, including geometry, number theory and analysis. Pollicott attended High Pavement College in Nottingham, where his teachers included the Booker Prize-winning author Stanley Middleton. He gained a BSc in Mathematics and Physics in 1981 and a PhD in mathematics in 1984, both at the University of Warwick. His PhD supervisor was Bill Parry and his thesis was titled The Ruelle Operator, Zeta Functions and the Asymptotic Distribution of Closed Orbits. He held permanent positions at the University of Edinburgh, University of Porto, and University of Warwick before appointment to the Fielden Chair of Pure Mathematics at the University of Manchester (1996–2004). He then returned to a professorship at Warwick in 2005. In addition, he has held numerous visiting positions, including ones at the Institut des Hautes Études Scientifiques in Paris, the Institute for Advanced Study in Princeton, MSRI at the University of California, Berkeley, Caltech, and the University of Grenoble. He has been the recipient of a Royal Society University Research Fellowship, two Leverhulme Trust Senior Research Fellowships and an E.U. Marie Curie Chair. References External links Home page at Warwick Living people 20th-century British mathematicians 21st-century British mathematicians Dynamical systems theorists Alumni of the University of Warwick Academics of the University of Warwick Academics of the University of Manchester 1959 births Scientists from Nottingham
Mark Pollicott
[ "Mathematics" ]
306
[ "Dynamical systems theorists", "Dynamical systems" ]
14,732,574
https://en.wikipedia.org/wiki/Golden%20Bed
The Golden Bed is a bed designed by the English architect and designer William Burges in 1879 for the guest bedroom of the home that he designed for himself in Holland Park, The Tower House. It is now in the collection of the Victoria & Albert Museum (V&A) in South Kensington. The bed was made by John Walden and carved by Thomas Nicholls. The painting in the central panel of the headboard was executed by Henry Holiday, and the motifs and figures on the bed painted by Fred Weekes. The bed is made from polished hardwood, mahogany and pine. The theme for the guest room has been variously described as 'The Earth and Her Productions' and 'Vita Nova' ('New Life'). The Golden Bed matched the rest of the furniture designed for the guest bedroom, in keeping with the room's decorative scheme. Design The Golden Bed is a large bed, measuring long, high and wide. It is made from wood and gilded with gold. The bed is decorated with carvings and 'fragments of illuminated manuscripts under glass and rock crystal'. Two mirrors are inset into the headboard, which features a painting of the Judgement of Paris at its centre. The three gods in the 'Judgement of Paris' are wearing clothes of the 13th century, with Mercury standing to the left of Paris and Venus bowing to Paris on the right. The painting had previously been part of a larger painted panel at Burges' rooms in Buckingham Street, where he had lived before Tower House. The sideboards of the bed are ornamented with glass covering pieces of illuminated vellum and fragments of textiles. Grotesque figures, of a female and male, feature in the side brackets at the head of the bed. The bed head and foot posts are surmounted by half orbs of rock crystal. The foot of the bed is inscribed with the Latin phrase 'VITA NOVA' ('New Life'), with the posts of the bed inscribed 'WILLIAM BURGES ME FIERI FECIT' ('William Burges Had Me Made') on the right, and 'ANNO DOMINI MDCCCLXXIX' ('In the Year of Our Lord 1879') on the left. 
History The estimate book Burges used for Tower House records the bed on 12 March 1879 as costing £39 13s. Thomas Nicholls' carving for the bed is marked by a payment of £15 15s in June that year. From 1952 to 1953 the Exhibition of Victorian and Edwardian Decorative Arts was held at the V&A, at which the Golden Bed and an accompanying washstand, also from the guest bedroom at The Tower House, were lent for display. Oliver Poole, 1st Baron Poole was originally asked to lend the bed, but Poole subsequently requested that Colonel T.H. Minshall D.S.O. be acknowledged as the owner. Minshall had owned Tower House in the 1920s. Poole and his mother, Mrs. Minshall, later agreed to donate the bed and washstand to the V&A in the name of Colonel Minshall. In 2002 the Golden Bed was lent to Knightshayes Court in Tiverton, Devon, by the V&A. Knightshayes Court had been built by Burges from 1867 to 1874. The bed joined a wardrobe designed by Burges on loan from Tower House in a newly created 'Burges room' at Knightshayes Court. References 1879 in art Beds Collection of the Victoria and Albert Museum William Burges furniture
Golden Bed
[ "Biology" ]
720
[ "Beds", "Behavior", "Sleep" ]
14,733,839
https://en.wikipedia.org/wiki/Telbivudine
Telbivudine is an antiviral drug used in the treatment of hepatitis B infection. It is marketed by Swiss pharmaceutical company Novartis under the trade names Sebivo (European Union) and Tyzeka (United States). Clinical trials have shown it to be significantly more effective than lamivudine or adefovir, and less likely to cause resistance. However, the HBV signature resistance mutation M204I (a change from methionine to isoleucine at position 204 in the reverse transcriptase domain of the hepatitis B polymerase) and the double mutation L180M+M204V have been associated with telbivudine resistance. Telbivudine is a synthetic thymidine β-L-nucleoside analogue; it is the L-isomer of thymidine. Telbivudine impairs hepatitis B virus (HBV) DNA replication by leading to chain termination. It differs from the natural nucleotide only with respect to the location of the sugar and base moieties, taking on a levorotatory configuration, whereas the natural deoxynucleosides are dextrorotatory. It is taken orally in a dose of 600 mg once daily with or without food. Telbivudine has no in vitro activity against HIV-1, and in a case series of three HIV-HBV co-infected patients, telbivudine did not produce sustained HIV-1 virologic suppression or induce any resistance mutations in HIV-1. Phase III clinical trials suggested that telbivudine put patients at greater risk for myopathy and peripheral neuropathy than the comparator drug lamivudine. The FDA required a risk evaluation and mitigation strategy (REMS) aiming to increase awareness of peripheral neuropathy by requiring distribution of a medication guide. In 2016, Novartis posted a discontinuation notice. Efficacy or safety concerns were not cited as rationale for discontinuation, but rather "availability of alternative medications"; presumably this refers to tenofovir disoproxil, which became available as a generic medication in 2017, and is a safe and effective treatment for chronic HBV infection. 
References External links Antiviral drugs Pyrimidinediones Nucleosides Drugs developed by Novartis Withdrawn drugs
Telbivudine
[ "Chemistry", "Biology" ]
491
[ "Antiviral drugs", "Biocides", "Drug safety", "Withdrawn drugs" ]
14,734,022
https://en.wikipedia.org/wiki/Interferon%20alfa-2b
Interferon alfa-2b is an antiviral or antineoplastic drug. It is a recombinant form of the protein Interferon alpha-2 that was originally sequenced and produced recombinantly in E. coli in the laboratory of Charles Weissmann at the University of Zurich, in 1980. It was developed at Biogen, and ultimately marketed by Schering-Plough under the trade name Intron-A. It was also produced in 1986 in recombinant human form, in the Center for Genetic Engineering and Biotechnology of Havana, Cuba, under the name Heberon Alfa R. It has been used for a wide range of indications, including viral infections and cancers. This drug is approved around the world for the treatment of chronic hepatitis C, chronic hepatitis B, hairy cell leukemia, Behçet's disease, chronic myelogenous leukemia, multiple myeloma, follicular lymphoma, carcinoid tumor, mastocytosis and malignant melanoma. The medication is being used in clinical trials to treat patients with SARS-CoV-2, and there are published results in the peer-reviewed scientific literature. So far, two non-peer-reviewed research articles have been published. One study at the University of Texas Medical Branch, Galveston, showed evidence of a direct antiviral effect of Interferon alpha against the novel coronavirus in vitro. The study demonstrated around a 10,000-fold reduction in the quantity of virus that was pre-treated with Interferon alpha 48 hours earlier. A second study by universities in China, Australia and Canada analysed 77 moderate COVID-19 subjects in Wuhan and observed that those who received Interferon alpha-2b showed a significant reduction in the duration of the virus shedding period and in levels of the inflammatory cytokine IL-6. This drug is also used off-label in cats and dogs, both by injection and orally. The cross-species nature of IFN-α allows it to work in non-human animals, but the period of usefulness is limited by the production of antibodies against this foreign protein. 
See also Interferon Ropeginterferon alfa-2b References Further reading External links Antiviral drugs Drugs developed by Schering-Plough Drugs developed by Merck & Co.
Interferon alfa-2b
[ "Biology" ]
481
[ "Antiviral drugs", "Biocides" ]
14,735,482
https://en.wikipedia.org/wiki/Rolf-Dieter%20Heuer
Rolf-Dieter Heuer (born 24 May 1948) is a German particle physicist. From 2009 to 2015 he was Director General of CERN and from 5 April 2016 to 9 April 2018 President of the German Physical Society (Deutsche Physikalische Gesellschaft). Since 2015 he has been Chair of the European Commission's Group of Chief Scientific Advisors, and since May 2017 he has been President of the SESAME Council. Biography Heuer studied physics at the University of Stuttgart. He then obtained his PhD in 1977 at the University of Heidelberg under Joachim Heintze for his study of neutral decay modes of the Ψ(3686). His post-doctoral work included the JADE experiment at the electron-positron storage ring PETRA at DESY and, from 1984, the OPAL experiment at CERN, where he also became spokesperson of the OPAL collaboration for many years. Having been offered a full professorship for experimental physics at the University of Hamburg, Heuer returned to DESY in 1998. In 2004, he was appointed DESY's Research Director. In December 2007, the CERN research council announced that Heuer would become CERN's Director General starting 1 January 2009, succeeding Robert Aymar. In 2011 Heuer gave a talk, The High Energy Frontier: Past, Present and Future, at the international symposium on subnuclear physics held in Vatican City. Since November 2015, Heuer has been a member of the Group of Chief Scientific Advisors set up by the European Commission. In 2016 he became President of the Deutsche Physikalische Gesellschaft, and in May 2017 the President of the SESAME Council. In 2022, Heuer was appointed a member of the African Synchrotron Initiative Think Tank with a view to setting up a Pan-African project for the African Light Source. Awards On 15 June 2011, Heuer was awarded an honorary degree by the University of Victoria. On 19 July 2011, Heuer was awarded an honorary doctorate by the University of Liverpool. In his speech to the graduands he spoke of the need to bring science into mainstream culture. 
On 16 December 2011, Heuer was awarded an honorary degree by the University of Birmingham. On 13 June 2012, Heuer was awarded an honorary degree by the University of Glasgow. On 12 November 2012, he was awarded an Edison Volta Prize. On 5 December 2013, he was awarded the UNESCO Niels Bohr Medal. In May 2015, he was awarded a Grand Cross 1st class of the German Order of Merit. In September 2015, Heuer was awarded an honorary doctorate by the University of Belgrade. On 22 November 2016, Heuer was appointed a Knight of the Legion of Honour. References Further reading "Professor Dr. Rolf-Dieter Heuer Appointed as New Research Director", DESY Press Release, Hamburg, 4 October 2004 "Rolf-Dieter Heuer to be New CERN Director General", DESY Press Releases, Hamburg, 14 December 2007 "UNESCO and CERN: like hooked atoms; Jasmina Sopova meets Rolf-Dieter Heuer", in "Chemistry and life", The UNESCO Courier, January–March, 2011, pp. 48–50. External links Scientific publications of Rolf-Dieter Heuer on INSPIRE-HEP 1948 births People associated with CERN 20th-century German physicists Living people Particle physicists UNESCO Niels Bohr Medal recipients Commanders Crosses of the Order of Merit of the Federal Republic of Germany Knights of the Legion of Honour 21st-century German physicists Presidents of the German Physical Society Academic staff of the University of Hamburg
Rolf-Dieter Heuer
[ "Physics" ]
721
[ "Particle physicists", "Particle physics" ]
14,735,628
https://en.wikipedia.org/wiki/Zinc-dependent%20phospholipase%20C
In molecular biology, zinc-dependent phospholipases C is a family of bacterial phospholipases C enzymes, some of which are also known as alpha toxins. Bacillus cereus contains a monomeric phospholipase C (PLC) of 245 amino-acid residues. Although PLC prefers to act on phosphatidylcholine, it also shows weak catalytic activity with sphingomyelin and phosphatidylinositol. Sequence studies have shown the protein to be similar both to alpha toxin from Clostridium perfringens and Clostridium bifermentans, a phospholipase C involved in haemolysis and cell rupture, and to lecithinase from Listeria monocytogenes, which aids cell-to-cell spread by breaking down the 2-membrane vacuoles that surround the bacterium during transfer. Each of these proteins is a zinc-dependent enzyme, binding 3 zinc ions per molecule. The enzymes catalyse the conversion of phosphatidylcholine and water to 1,2-diacylglycerol and choline phosphate. In Bacillus cereus, there are nine residues known to be involved in binding the zinc ions: 5 His, 2 Asp, 1 Glu and 1 Trp. These residues are all conserved in the Clostridium alpha-toxin. Some examples of this enzyme contain a C-terminal sequence extension that contains a PLAT domain which is thought to be involved in membrane localisation. References EC 3.1.4 Protein domains Peripheral membrane proteins Zinc proteins
Zinc-dependent phospholipase C
[ "Biology" ]
358
[ "Protein domains", "Protein classification" ]
14,735,677
https://en.wikipedia.org/wiki/Continuity%20theory
The continuity theory of normal aging states that older adults will usually maintain the same activities, behaviors, and relationships as they did in their earlier years of life. According to this theory, older adults try to maintain this continuity of lifestyle by adapting strategies that are connected to their past experiences. The continuity theory is one of three major psychosocial theories which describe how people develop in old age. The other two psychosocial theories are the disengagement theory, with which the continuity theory comes to odds, and the activity theory, upon which the continuity theory modifies and elaborates. Unlike the other two theories, the continuity theory uses a life course perspective to define normal aging. The continuity theory can be classified as a micro-level theory because it pertains to the individual, and more specifically it can be viewed from the functionalist perspective. History The continuity theory originated in the observation that a large proportion of older adults show consistency in their activities, personalities, and relationships despite their changing physical, mental, and social status. In 1968, George L. Maddox gave an empirical description of the theory in a chapter of the book Middle Age and Aging: A Reader in Social Psychology called "Persistence of life style among the elderly: A longitudinal study of patterns of social activity in relation to life satisfaction". The continuity theory was formally proposed in 1971 by Robert Atchley in his article "Retirement and Leisure Participation: Continuity or Crisis?" in the journal The Gerontologist. Later, in 1989, he published another article entitled "A Continuity Theory of Normal Aging" in The Gerontologist, in which he substantially developed the theory. In this article, he expanded the continuity theory to explain the development of internal and external structures of continuity. 
In 1999, Robert Atchley continued to strengthen his theory in his book Continuity and Adaptation in Aging: Creating Positive Experiences. Elements The theory deals with the internal structure and the external structure of continuity to describe how people adapt to their situation and set their goals. The internal structure of an individual, such as personality, ideas, and beliefs, remains constant throughout the life course. This provides the individual a way to make future decisions based on their internal foundation of the past. The external structure of an individual, such as relationships and social roles, provides a support for maintaining a stable self-concept and lifestyle. Criticisms and weaknesses The major criticism of the theory is its definition of normal aging. The theory distinguishes normal aging from pathological aging, neglecting older adults with chronic illness. Feminist theorists criticise the continuity theory for defining normal aging around a male model. Another weakness of the theory is that it fails to demonstrate how social institutions impact individuals and the way they age. See also Aging Activity theory (aging) Disengagement theory References Further reading Ageing Gerontology Theories of non-biological ageing
Continuity theory
[ "Biology" ]
569
[ "Gerontology" ]
14,736,250
https://en.wikipedia.org/wiki/Delannoy%20number
In mathematics, a Delannoy number counts the paths from the southwest corner (0, 0) of a rectangular grid to the northeast corner (m, n), using only single steps north, northeast, or east. The Delannoy numbers are named after French army officer and amateur mathematician Henri Delannoy. The Delannoy number D(m, n) also counts the global alignments of two sequences of lengths m and n, the points in an m-dimensional integer lattice or cross polytope which are at most n steps from the origin, and, in cellular automata, the cells in an m-dimensional von Neumann neighborhood of radius n. Example The Delannoy number D(3, 3) equals 63. The following figure illustrates the 63 Delannoy paths from (0, 0) to (3, 3): The subset of paths that do not rise above the SW–NE diagonal is counted by a related family of numbers, the Schröder numbers. Delannoy array The Delannoy array is an infinite matrix of the Delannoy numbers, with D(m, n) in row m and column n:

m\n    0    1    2     3     4     5     6      7       8
0      1    1    1     1     1     1     1      1       1
1      1    3    5     7     9     11    13     15      17
2      1    5    13    25    41    61    85     113     145
3      1    7    25    63    129   231   377    575     833
4      1    9    41    129   321   681   1289   2241    3649
5      1    11   61    231   681   1683  3653   7183    13073
6      1    13   85    377   1289  3653  8989   19825   40081
7      1    15   113   575   2241  7183  19825  48639   108545
8      1    17   145   833   3649  13073 40081  108545  265729
9      1    19   181   1159  5641  22363 75517  224143  598417

In this array, the numbers in the first row are all one, the numbers in the second row are the odd numbers, the numbers in the third row are the centered square numbers, and the numbers in the fourth row are the centered octahedral numbers. Alternatively, the same numbers can be arranged in a triangular array resembling Pascal's triangle, also called the tribonacci triangle, in which each number is the sum of the three numbers above it:

1
1 1
1 3 1
1 5 5 1
1 7 13 7 1
1 9 25 25 9 1
1 11 41 63 41 11 1

Central Delannoy numbers The central Delannoy numbers D(n) = D(n, n) are the numbers for a square n × n grid. The first few central Delannoy numbers (starting with n = 0) are: 1, 3, 13, 63, 321, 1683, 8989, 48639, 265729, ... . Computation Delannoy numbers For k diagonal (i.e. northeast) steps, there must be m − k steps in the x direction and n − k steps in the y direction in order to reach the point (m, n); as these steps can be performed in any order, the number of such paths is given by the multinomial coefficient (m + n − k)! / (k! (m − k)! (n − k)!). Hence, one gets the closed-form expression

D(m, n) = Σ_{k=0}^{min(m,n)} (m + n − k)! / (k! (m − k)! (n − k)!)

An alternative expression is given by

D(m, n) = Σ_{k=0}^{min(m,n)} C(m, k) C(n, k) 2^k

where C(a, b) denotes a binomial coefficient. The basic recurrence relation for the Delannoy numbers is easily seen to be

D(m, n) = 1 if m = 0 or n = 0
D(m, n) = D(m − 1, n) + D(m, n − 1) + D(m − 1, n − 1) otherwise.

This recurrence relation also leads directly to the generating function

Σ_{m,n ≥ 0} D(m, n) x^m y^n = 1 / (1 − x − y − xy)

Central Delannoy numbers Substituting m = n in the first closed-form expression above, replacing k with n − k, and a little algebra gives

D(n) = Σ_{k=0}^{n} C(n, k) C(n + k, k)

while the second expression above yields

D(n) = Σ_{k=0}^{n} C(n, k)^2 2^k

The central Delannoy numbers also satisfy a three-term recurrence relationship among themselves,

n D(n) = 3(2n − 1) D(n − 1) − (n − 1) D(n − 2)

and have a generating function

Σ_{n ≥ 0} D(n) x^n = 1 / √(1 − 6x + x^2)

The leading asymptotic behavior of the central Delannoy numbers is given by

D(n) ~ c α^n / √n

where α = 3 + 2√2 ≈ 5.828 and c is a constant. See also Motzkin number Narayana number Schröder–Hipparchus number Schröder number References External links Eponymous numbers in mathematics Integer sequences Triangles of numbers Combinatorics
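The lattice recurrence for Delannoy numbers — each entry is the sum of the entries to its west, south, and southwest, D(m, n) = D(m−1, n) + D(m, n−1) + D(m−1, n−1), with ones along the borders — can be sketched as a small dynamic program (an illustrative implementation, not code from the article):

```python
def delannoy(m, n):
    """Delannoy number D(m, n): lattice paths from (0, 0) to (m, n)
    using east (1, 0), north (0, 1), and northeast (1, 1) steps."""
    # Fill an (m+1) x (n+1) table. D(i, 0) = D(0, j) = 1, since a path
    # along an edge of the grid is forced; each interior entry is the
    # sum of its west, south, and southwest neighbours.
    d = [[1] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = d[i - 1][j] + d[i][j - 1] + d[i - 1][j - 1]
    return d[m][n]

# The central Delannoy numbers D(n, n) begin 1, 3, 13, 63, 321, ...
print([delannoy(k, k) for k in range(5)])  # [1, 3, 13, 63, 321]
```

Filling the table row by row reproduces the Delannoy array above directly; the run time is O(mn).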
Delannoy number
[ "Mathematics" ]
1,088
[ "Sequences and series", "Discrete mathematics", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Triangles of numbers", "Numbers", "Number theory" ]
14,738,000
https://en.wikipedia.org/wiki/Extraneous%20and%20missing%20solutions
In mathematics, an extraneous solution (or spurious solution) is one which emerges from the process of solving a problem but is not a valid solution to it. A missing solution is a valid one which is lost during the solution process. Both situations frequently result from performing operations that are not invertible for some or all values of the variables involved, which prevents the chain of logical implications from being bidirectional. Extraneous solutions: multiplication One of the basic principles of algebra is that one can multiply both sides of an equation by the same expression without changing the equation's solutions. However, strictly speaking, this is not true, in that multiplication by certain expressions may introduce new solutions that were not present before. For example, consider the following equation: If we multiply both sides by zero, we get, This is true for all values of , so the solution set is all real numbers. But clearly not all real numbers are solutions to the original equation. The problem is that multiplication by zero is not invertible: if we multiply by any nonzero value, we can reverse the step by dividing by the same value, but division by zero is not defined, so multiplication by zero cannot be reversed. More subtly, suppose we take the same equation and multiply both sides by . We get This quadratic equation has two solutions: and But if is substituted for in the original equation, the result is the invalid equation . This counterintuitive result occurs because in the case where , multiplying both sides by multiplies both sides by zero, and so necessarily produces a true equation just as in the first example. In general, whenever we multiply both sides of an equation by an expression involving variables, we introduce extraneous solutions wherever that expression is equal to zero. But it is not sufficient to exclude these values, because they may have been legitimate solutions to the original equation. 
For example, suppose we multiply both sides of our original equation by We get which has only one real solution: . This is a solution to the original equation so cannot be excluded, even though for this value of . Extraneous solutions: rational Extraneous solutions can arise naturally in problems involving fractions with variables in the denominator. For example, consider this equation: To begin solving, we multiply each side of the equation by the least common denominator of all the fractions contained in the equation. In this case, the least common denominator is . After performing these operations, the fractions are eliminated, and the equation becomes: Solving this yields the single solution However, when we substitute the solution back into the original equation, we obtain: The equation then becomes: This equation is not valid, since one cannot divide by zero. Therefore, the solution is extraneous and not valid, and the original equation has no solution. For this specific example, it could be recognized that (for the value ), the operation of multiplying by would be a multiplication by zero. However, it is not always simple to evaluate whether each operation already performed was allowed by the final answer. Because of this, often, the only simple effective way to deal with multiplication by expressions involving variables is to substitute each of the solutions obtained into the original equation and confirm that this yields a valid equation. After discarding solutions that yield an invalid equation, we will have the correct set of solutions. In some cases, as in the above example, all solutions may be discarded, in which case the original equation has no solution. Missing solutions: division Extraneous solutions are not too difficult to deal with because they just require checking all solutions for validity. 
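The substitution check described above is mechanical enough to automate: substitute each candidate into the original equation, and discard candidates that either fail to satisfy it or make it undefined. A minimal sketch; the equation, candidate values, and helper name here are illustrative, not taken from the text:

```python
from fractions import Fraction

def valid_solutions(lhs, rhs, candidates):
    """Keep only the candidates x for which lhs(x) == rhs(x).
    Candidates that make either side undefined (e.g. a zero
    denominator) are discarded as extraneous."""
    good = []
    for x in candidates:
        try:
            if lhs(x) == rhs(x):
                good.append(x)
        except ZeroDivisionError:
            pass  # substitution hit a zero denominator: extraneous
    return good

# Illustrative example: x / (x - 2) = 2 / (x - 2).
# Clearing denominators suggests x = 2, but x = 2 zeroes both
# denominators, so the candidate is extraneous: no solution.
lhs = lambda x: Fraction(x, x - 2)
rhs = lambda x: Fraction(2, x - 2)
print(valid_solutions(lhs, rhs, [2]))  # []
```

Using exact Fraction arithmetic avoids false accepts or rejects from floating-point rounding during the check.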
However, more insidious are missing solutions, which can occur when performing operations on expressions that are invalid for certain values of those expressions. For example, if we were solving the following equation, the correct solution is obtained by subtracting from both sides, then dividing both sides by : By analogy, we might suppose we can solve the following equation by subtracting from both sides, then dividing by : The solution is in fact a valid solution to the original equation; but the other solution, , has disappeared. The problem is that we divided both sides by , which involves the indeterminate operation of dividing by zero when It is generally possible (and advisable) to avoid dividing by any expression that can be zero; however, where this is necessary, it is sufficient to ensure that any values of the variables that make it zero also fail to satisfy the original equation. For example, suppose we have this equation: It is valid to divide both sides by , obtaining the following equation: This is valid because the only value of that makes equal to zero is which is not a solution to the original equation. In some cases we are not interested in certain solutions; for example, we may only want solutions where is positive. In this case it is okay to divide by an expression that is only zero when is zero or negative, because this can only remove solutions we do not care about. Other operations Multiplication and division are not the only operations that can modify the solution set. For example, take the problem: If we take the positive square root of both sides, we get: We are not taking the square root of any negative values here, since both and are necessarily positive. 
But we have lost the solution The reason is that is actually not in general the positive square root of If is negative, the positive square root of is If the step is taken correctly, it leads instead to the equation: This equation has the same two solutions as the original one: and We can also modify the solution set by squaring both sides, because this will make any negative values in the ranges of the equation positive, causing extraneous solutions. See also Invalid proof References Elementary algebra Equations
Extraneous and missing solutions
[ "Mathematics" ]
1,184
[ "Mathematical objects", "Equations", "Elementary algebra", "Elementary mathematics", "Algebra" ]
14,738,183
https://en.wikipedia.org/wiki/Schmidt%20net
The Schmidt net is a manual drafting method for the Lambert azimuthal equal-area projection using graph paper. It results in one lateral hemisphere of the Earth with the grid of parallels and meridians. The method is common in geoscience. Construction In the figure, the area-preserving property of the projection can be seen by comparing a grid sector near the center of the net with one at the far right of the net. The two sectors have the same area on the sphere and the same area on the disk. The angle-distorting property can be seen by examining the grid lines; most of them do not intersect at right angles on the Schmidt net. A single Schmidt net can only represent one hemisphere of the earth; typically a pair of Schmidt nets is used to represent both sides of the globe. It is relatively simple to re-plot a gridded map of the world onto a Schmidt net if the azimuth is chosen to be the junction of the equator with any particular meridian from the world-map's grid. Each grid square surrounding this chosen longitude is simply re-plotted into the corresponding distorted grid-square in the Schmidt net. Points of latitude-longitude can be plotted relative to the azimuth's longitude, interpolating between grid lines in the Schmidt net. For greater accuracy, it is helpful to have a net with finer spacing than 10°; spacings of 2° are common. The Schmidt net is not an appropriate grid for representing the Earth's northern or southern hemisphere (because the lines would not correspond to meridians or parallels in such a projection). However, it can be used as a scalar measuring device for projecting latitude-longitude points onto a blank circle of the same size, to produce a Lambert equal-area projection with the azimuth at the north or south pole. The intersection of the parallels with the outer circle can be used as a de facto protractor for plotting a point's longitude as the angle in the polar projection. 
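The polar plotting procedure just described — longitude taken as the angle, latitude converted to a radial distance — amounts to the polar-aspect Lambert azimuthal equal-area projection. A minimal numerical sketch, assuming the standard equal-area radial formula and normalizing so the hemisphere fills a unit disk:

```python
from math import radians, sin, cos, sqrt, isclose

def lambert_polar(lat_deg, lon_deg):
    """Project (latitude, longitude) in degrees onto a unit disk using
    the north-polar Lambert azimuthal equal-area projection: longitude
    becomes the angle around the disk, and the colatitude sets the
    radius, with the equator mapping to the rim."""
    colat = radians(90.0 - lat_deg)
    lon = radians(lon_deg)
    # Equal-area radial scaling r = 2 sin(c/2), rescaled so that the
    # equator (colatitude 90 degrees) lands at radius 1.
    r = sqrt(2.0) * sin(colat / 2.0)
    return r * sin(lon), r * cos(lon)

# The pole maps to the centre; any point on the equator maps to radius 1.
x, y = lambert_polar(0.0, 90.0)
print(isclose(sqrt(x * x + y * y), 1.0))  # True
```

With this scaling, equal areas on the hemisphere map to equal areas on the disk, which is the property the Schmidt net is built around.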
The Schmidt net's horizontal axis can then be used as a scalar measuring device to convert the point's latitude (relative to the pole) into a radial distance from the centre of the circle. Alternatively, the Schmidt net could be replaced entirely with a correctly projected polar grid, and grid squares from a map re-drawn into this disc. Use Researchers in structural geology use the Lambert azimuthal projection to plot lineation and foliation in rocks, slickensides in faults, and other linear and planar features. In this context the projection is called the equal-area hemispherical projection. The Schmidt net is often used to sketch out the Lambert azimuthal projection for these purposes. Conversely, the Wulff net ("equal-angle projection") is used to plot crystallographic axes and faces. Sources References Technical drawing Map projections
Schmidt net
[ "Mathematics", "Engineering" ]
587
[ "Design engineering", "Map projections", "Civil engineering", "Coordinate systems", "Technical drawing" ]
16,327,510
https://en.wikipedia.org/wiki/Banalinga
A Banalinga is a stone of a type found in the riverbed of parts of the Narmada River in Madhya Pradesh state, India, formed by natural processes of erosion into a shape resembling a lingam, an aniconic form of the Hindu deity Shiva. They are smooth ellipsoid stones that are regarded as manifestations of the deity, based on either the scriptures or cultural traditions among the Hindus, particularly of the Shaivas and Smarta Brahmins. The banalinga is also called the Svayambhu (self-born) linga as it is natural rather than artificial. However, the popularity of the stones has very severely depleted the natural supply, and the stones offered commercially today are very largely produced artificially. Banalinga stones are quite strong and typically have a hardness of 7 on the Mohs scale. Stones regarded and treated as banalinga are found naturally in other rivers and natural contexts, but only erratically. Significance The Narmada River also called the Rewa, from its leaping motion (from the root rev through its rocky bed) where the Banalinga stones are found, has been mentioned by Ptolemy and the author of the Periplus. The Ramayana, the Mahabharata and Puranas refer to it frequently. The Rewa Khand of Vayu Purana and the Rewa Khand of Skanda Purana are entirely devoted to the story of the birth and the importance of the Narmada River. It is said to have sprung from the body of Lord Shiva. It was created in the form of a lovely damsel who enamoured gods and hence named by the Lord as Narmada – delight giving. It is, therefore, often called Shankari, i.e., daughter of Lord Shankar (Shiva). All the pebbles rolling on its bed are said to take the shape of His emblem with the saying Narmada Ke Kanker utte Sanka (which is a popular saying in the Hindi belt of India) which means that ‘pebble stones of Narmada gets a personified form of Shiva’. Thus, these lingam shaped stones, called Banalinga are sought after for daily worship by the Hindus. 

The Banalinga, as a divine aniconic symbol for worship, is held in reverence by the Shaivaites and Smartha Brahmins, to the same extent as the Saligrama Sila (murti) is held in reverence by the Vaishnavites. Further, a sighting of the Narmada River is considered equivalent to a bath in the Ganges. At numerous places along its course there are temples, and fairs are held. Pilgrims perform Pradakshina (circumambulation), i.e., walking along the southern bank from its source to the mouth and going back along the northern bank. The performance is regarded to be of the highest religious efficacy. Three kinds of lingas are described in the Brihat Vaivarta Purana (Hindu scripture). These three lingas are called Svayambhuva [Self-existing], Banalinga [got from a certain river] and Sailalinga [made of stone] and these are also respectively called , , and . It is said that gives salvation, the gives [worldly] happiness, and gives both happiness and salvation. People belonging to various Hindu sects such as Shaiva, Kapalik, Gosavi, Virashaiva, etc., use various lingas – earthen (parthivlinga), lingas in a silver box donned around the neck (kanthasthalinga), lingas of crystal glass (sphatiklinga), banalingas, a five stringed linga (panchasutri), stone lingas (pashanlinga), etc. Panchayatana Banalinga is a part of the fivefold family of deities (Panchayatana). The five Hindu deities (Shiva, Vishnu, Devi, Surya and Ganesha) are the embodiment of 5 bhutas/tatwas worshipped in formless stones, which are obtained from the 5 rivers as indicated in the table below. Panchayatana form of worship is said to have been introduced by Adi Shankara, the 8th-century CE Hindu philosopher, to enable a person to worship his Ishta devata (adored or desired deity), to address each sectarian form of worship and thus bring about tolerance among all sects. 
Depending on the tradition followed by Smarta households, one of these deities is kept in the centre, facing east, and the other four are arranged in the four corners surrounding it, as indicated in the diagram below; all the deities are worshipped with equal fervour and devotion. People generally sit facing east while placing the deities/devatas and performing the Panchayatana pooja in the following order: A layout for performing the Panchayatana Pooja In an additional form of worship, called the Shanmata, also founded by Adi Shankara, six deities are worshipped; the sixth deity, in addition to the five referred to in the Panchayatana pooja above, is Skanda, also known as Kartikeya and Murugan. Thanjavur Temple The famous Thanjavur Brihadeeswarar Temple has one of the largest Banalingas. See also Panchayatana puja Shanmata Chesil Beach References Objects used in Hindu worship Stones
Banalinga
[ "Physics" ]
1,117
[ "Stones", "Physical objects", "Matter" ]
16,329,810
https://en.wikipedia.org/wiki/Bentley%E2%80%93Ottmann%20algorithm
In computational geometry, the Bentley–Ottmann algorithm is a sweep line algorithm for listing all crossings in a set of line segments, i.e. it finds the intersection points (or, simply, intersections) of line segments. It extends the Shamos–Hoey algorithm, a similar previous algorithm for testing whether or not a set of line segments has any crossings. For an input consisting of n line segments with k crossings (or intersections), the Bentley–Ottmann algorithm takes time O((n + k) log n). In cases where k = o(n^2 / log n), this is an improvement on a naïve algorithm that tests every pair of segments, which takes time O(n^2). The algorithm was initially developed by Jon Bentley and Thomas Ottmann in 1979; it is described in more detail in standard computational geometry textbooks. Although asymptotically faster algorithms are now known, the Bentley–Ottmann algorithm remains a practical choice due to its simplicity and low memory requirements. Overall strategy The main idea of the Bentley–Ottmann algorithm is to use a sweep line approach, in which a vertical line L moves from left to right (or, e.g., from top to bottom) across the plane, intersecting the input line segments in sequence as it moves. The algorithm is described most easily for inputs in general position, meaning: No two line segment endpoints or crossings have the same x-coordinate No line segment endpoint lies upon another line segment No three line segments intersect at a single point. In such a case, L will always intersect the input line segments in a set of points whose vertical ordering changes only at a finite set of discrete events. Specifically, a discrete event can be associated either with an endpoint (left or right) of a line segment or with an intersection point of two line segments. Thus, the continuous motion of L can be broken down into a finite sequence of steps, and simulated by an algorithm that runs in a finite amount of time. There are two types of events that may happen during the course of this simulation.
When L sweeps across an endpoint of a line segment s, the intersection of L with s is added to or removed from the vertically ordered set of intersection points. These events are easy to predict, as the endpoints are known already from the input to the algorithm. The remaining events occur when L sweeps across a crossing between (or intersection of) two line segments s and t. These events may also be predicted from the fact that, just prior to the event, the points of intersection of L with s and t are adjacent in the vertical ordering of the intersection points. The Bentley–Ottmann algorithm itself maintains data structures representing the current vertical ordering of the intersection points of the sweep line with the input line segments, and a collection of potential future events formed by adjacent pairs of intersection points. It processes each event in turn, updating its data structures to represent the new set of intersection points. Data structures In order to efficiently maintain the intersection points of the sweep line L with the input line segments and the sequence of future events, the Bentley–Ottmann algorithm maintains two data structures: A binary search tree (the "sweep line status tree"), containing the set of input line segments that cross L, ordered by the y-coordinates of the points where these segments cross L. The crossing points themselves are not represented explicitly in the binary search tree. The Bentley–Ottmann algorithm will insert a new segment s into this data structure when the sweep line L crosses the left endpoint p of this segment (i.e. the endpoint of the segment with the smallest x-coordinate, provided the sweep line L starts from the left, as explained above in this article). The correct position of segment s in the binary search tree may be determined by a binary search, each step of which tests whether p is above or below some other segment that is crossed by L. Thus, an insertion may be performed in logarithmic time. 
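The ordered insertion just described can be sketched with a plain Python list standing in for the balanced tree (an illustrative sketch only: computing the keys here is linear-time, whereas a real implementation keeps the whole insertion logarithmic; the helper names `seg_y`, `insert_segment` and `status` are assumptions, not part of the original description):

```python
import bisect

def seg_y(seg, x):
    """y-coordinate of the (non-vertical) segment seg at sweep position x."""
    (x1, y1), (x2, y2) = seg
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

def insert_segment(status, seg, x):
    """Insert seg into the status list, keeping segments ordered bottom-to-top at x."""
    keys = [seg_y(s, x) for s in status]   # O(n) here; a balanced tree makes the step O(log n)
    pos = bisect.bisect(keys, seg_y(seg, x))
    status.insert(pos, seg)
    return pos
```

Each insertion compares the new segment's height at the sweep position against the heights of the segments already crossed by L, exactly the above-or-below test described in the text.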
The Bentley–Ottmann algorithm will also delete segments from the binary search tree, and use the binary search tree to determine the segments that are immediately above or below other segments; these operations may be performed using only the tree structure itself without reference to the underlying geometry of the segments. A priority queue (the "event queue"), used to maintain a sequence of potential future events in the Bentley–Ottmann algorithm. Each event is associated with a point p in the plane, either a segment endpoint or a crossing point, and the event happens when line L sweeps over p. Thus, the events may be prioritized by the x-coordinates of the points associated with each event. In the Bentley–Ottmann algorithm, the potential future events consist of line segment endpoints that have not yet been swept over, and the points of intersection of pairs of lines containing pairs of segments that are immediately above or below each other. The algorithm does not need to maintain explicitly a representation of the sweep line L or its position in the plane. Rather, the position of L is represented indirectly: it is the vertical line through the point associated with the most recently processed event. The binary search tree may be any balanced binary search tree data structure, such as a red–black tree; all that is required is that insertions, deletions, and searches take logarithmic time. Similarly, the priority queue may be a binary heap or any other logarithmic-time priority queue; more sophisticated priority queues such as a Fibonacci heap are not necessary. Note that the space complexity of the priority queue depends on the data structure used to implement it. Detailed algorithm The Bentley–Ottmann algorithm performs the following steps. Initialize a priority queue Q of potential future events, each associated with a point in the plane and prioritized by the x-coordinate of the point. 
So, initially, Q contains an event for each of the endpoints of the input segments. Initialize a self-balancing binary search tree T of the line segments that cross the sweep line L, ordered by the y-coordinates of the crossing points. Initially, T is empty. (Even though the sweep line L is not explicitly represented, it may be helpful to imagine it as a vertical line which, initially, is at the left of all input segments.) While Q is nonempty, find and remove the event from Q associated with a point p with minimum x-coordinate. Determine what type of event this is and process it according to the following case analysis: If p is the left endpoint of a line segment s, insert s into T. Find the line-segments r and t that are respectively immediately above and below s in T (if they exist); if the crossing of r and t (the neighbours of s in the status data structure) forms a potential future event in the event queue, remove this possible future event from the event queue. If s crosses r or t, add those crossing points as potential future events in the event queue. If p is the right endpoint of a line segment s, remove s from T. Find the segments r and t that (prior to the removal of s) were respectively immediately above and below it in T (if they exist). If r and t cross, add that crossing point as a potential future event in the event queue. If p is the crossing point of two segments s and t (with s below t to the left of the crossing), swap the positions of s and t in T. After the swap, find the segments r and u (if they exist) that are immediately below and above t and s, respectively. Remove any crossing points rs (i.e. a crossing point between r and s) and tu (i.e. a crossing point between t and u) from the event queue, and, if r and t cross or s and u cross, add those crossing points to the event queue.
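The case analysis above can be condensed into a toy Python sketch. It assumes general position, uses a plain list in place of the balanced tree T, and never deletes queued events, so it is a didactic approximation rather than a faithful O((n + k) log n) implementation; all helper names are illustrative:

```python
import heapq

def seg_y(seg, x):
    """y-coordinate of the (non-vertical) segment seg at sweep position x."""
    (x1, y1), (x2, y2) = seg
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

def crossing(s, t):
    """Crossing point of segments s and t, or None if they do not cross."""
    (x1, y1), (x2, y2) = s
    (x3, y3), (x4, y4) = t
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if d == 0:
        return None                               # parallel lines
    a = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    b = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if 0 <= a <= 1 and 0 <= b <= 1:
        return (x1 + a * (x2 - x1), y1 + a * (y2 - y1))
    return None

def bentley_ottmann(segments):
    segs = [tuple(sorted(s)) for s in segments]   # left endpoint first
    events = []                                   # (x, kind, data): 0 = left, 1 = cross, 2 = right
    for i, (p, q) in enumerate(segs):
        heapq.heappush(events, (p[0], 0, i))
        heapq.heappush(events, (q[0], 2, i))
    status, found, seen = [], [], set()

    def schedule(i, j, x):
        """Queue the future crossing of segments i and j, if any, at most once."""
        key = (min(i, j), max(i, j))
        pt = crossing(segs[i], segs[j])
        if pt and pt[0] > x and key not in seen:
            seen.add(key)
            heapq.heappush(events, (pt[0], 1, key))

    while events:
        x, kind, data = heapq.heappop(events)
        if kind == 0:                             # left endpoint: insert into status
            y = seg_y(segs[data], x)
            pos = 0
            while pos < len(status) and seg_y(segs[status[pos]], x) < y:
                pos += 1
            status.insert(pos, data)
            if pos > 0:
                schedule(status[pos - 1], data, x)
            if pos + 1 < len(status):
                schedule(data, status[pos + 1], x)
        elif kind == 2:                           # right endpoint: remove from status
            pos = status.index(data)
            status.pop(pos)
            if 0 < pos < len(status):             # former neighbours become adjacent
                schedule(status[pos - 1], status[pos], x)
        else:                                     # crossing: report it and swap the pair
            i, j = data
            found.append(crossing(segs[i], segs[j]))
            a, b = sorted((status.index(i), status.index(j)))
            status[a], status[b] = status[b], status[a]
            if a > 0:
                schedule(status[a - 1], status[a], x)
            if b + 1 < len(status):
                schedule(status[b], status[b + 1], x)
    return found
```

On the two crossing diagonals ((0, 0), (4, 4)) and ((0, 4), (4, 0)), for example, this reports the single crossing (2.0, 2.0).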
Analysis The algorithm processes one event per segment endpoint or crossing point, in the sorted order of the x-coordinates of these points, as may be proven by induction. This follows because, once the ith event has been processed, the next event (if it is a crossing point) must be a crossing of two segments that are adjacent in the ordering of the segments represented by T, and because the algorithm maintains all crossings between adjacent segments as potential future events in the event queue; therefore, the correct next event will always be present in the event queue. As a consequence, it correctly finds all crossings of input line segments, the problem it was designed to solve. The Bentley–Ottmann algorithm processes a sequence of 2n + k events, where n denotes the number of input line segments and k denotes the number of crossings. Each event is processed by a constant number of operations in the binary search tree and the event queue, and (because it contains only segment endpoints and crossings between adjacent segments) the event queue never contains more than 3n events. All operations take time O(log n). Hence the total time for the algorithm is O((n + k) log n). If the crossings found by the algorithm do not need to be stored once they have been found, the space used by the algorithm at any point in time is O(n): each of the input line segments corresponds to at most one node of the binary search tree T, and as stated above the event queue contains at most 3n elements. This space bound is due to Brown (1981); the original version of the algorithm was slightly different (it did not remove crossing events from Q when some other event causes the two crossing segments to become non-adjacent), causing it to use more space. A highly space-efficient version of the Bentley–Ottmann algorithm has been described that encodes most of its information in the ordering of the segments in an array representing the input, requiring only a small number of additional memory cells.
However, in order to access the encoded information, the algorithm is slowed by a logarithmic factor. Special position The algorithm description above assumes that line segments are not vertical, that line segment endpoints do not lie on other line segments, that crossings are formed by only two line segments, and that no two event points have the same x-coordinate. In other words, it does not take corner cases into account, i.e. it assumes general position of the endpoints of the input segments. However, these general position assumptions are not reasonable for most applications of line segment intersection. Bentley and Ottmann suggested perturbing the input slightly to avoid these kinds of numerical coincidences, but did not describe in detail how to perform these perturbations. Later authors describe in more detail the following measures for handling special-position inputs: Break ties between event points with the same x-coordinate by using the y-coordinate. Events with different y-coordinates are handled as before. This modification handles both the problem of multiple event points with the same x-coordinate, and the problem of vertical line segments: the left endpoint of a vertical segment is defined to be the one with the lower y-coordinate, and the steps needed to process such a segment are essentially the same as those needed to process a non-vertical segment with a very high slope. Define a line segment to be a closed set, containing its endpoints. Therefore, two line segments that share an endpoint, or a line segment that contains an endpoint of another segment, both count as an intersection of two line segments. When multiple line segments intersect at the same point, create and process a single event point for that intersection.
The updates to the binary search tree caused by this event may involve removing any line segments for which this is the right endpoint, inserting new line segments for which this is the left endpoint, and reversing the order of the remaining line segments containing this event point. The output from this version of the algorithm consists of the set of intersection points of line segments, labeled by the segments they belong to, rather than the set of pairs of line segments that intersect. A similar approach to degeneracies was used in the LEDA implementation of the Bentley–Ottmann algorithm. Numerical precision issues For the correctness of the algorithm, it is necessary to determine without approximation the above-below relations between a line segment endpoint and other line segments, and to correctly prioritize different event points. For this reason it is standard to use integer coordinates for the endpoints of the input line segments, and to represent the rational number coordinates of the intersection points of two segments exactly, using arbitrary-precision arithmetic. However, it may be possible to speed up the calculations and comparisons of these coordinates by using floating point calculations and testing whether the values calculated in this way are sufficiently far from zero that they may be used without any possibility of error. The exact arithmetic calculations required by a naïve implementation of the Bentley–Ottmann algorithm may require five times as many bits of precision as the input coordinates, but modifications to the algorithm have been described that reduce the needed amount of precision to twice the number of bits of the input coordinates. Faster algorithms The O(n log n) part of the time bound for the Bentley–Ottmann algorithm is necessary, as there are matching lower bounds for the problem of detecting intersecting line segments in algebraic decision tree models of computation.
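The exact predicates discussed under numerical precision above can be illustrated in Python, whose integers are arbitrary-precision, so the orientation sign and crossing coordinates are computed without rounding error (an illustrative sketch; the function names are assumptions):

```python
from fractions import Fraction

def orientation_sign(p, q, r):
    """Exact sign of the turn p -> q -> r for integer coordinates: 1 left, -1 right, 0 collinear."""
    d = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (d > 0) - (d < 0)

def exact_crossing_x(s, t):
    """Exact x-coordinate of the crossing of the lines through s and t, as a rational number."""
    (x1, y1), (x2, y2) = s
    (x3, y3), (x4, y4) = t
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    a = Fraction((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3), d)
    return x1 + a * (x2 - x1)
```

Because the above-below test reduces to the sign of an integer cross product, it can be answered exactly; a floating-point filter would first compute the same quantity in floating point and fall back to the exact computation only when the result is too close to zero.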
However, the dependence on k, the number of crossings, can be improved. Randomized algorithms are known for constructing the planar graph whose vertices are endpoints and crossings of line segments, and whose edges are the portions of the segments connecting these vertices, in expected time O(n log n + k), and this problem of arrangement construction was later solved deterministically in the same O(n log n + k) time bound. However, constructing this arrangement as a whole requires space O(n + k), greater than the O(n) space bound of the Bentley–Ottmann algorithm; a different algorithm has been described that lists all intersections in time O(n log n + k) and space O(n). If the input line segments and their endpoints form the edges and vertices of a connected graph (possibly with crossings), the O(n log n) part of the time bound for the Bentley–Ottmann algorithm may also be reduced. In this case there is a randomized algorithm for solving the problem in expected time O(n log* n + k), where log* n denotes the iterated logarithm, a function much more slowly growing than the logarithm. A closely related randomized algorithm solves the same problem in time O(n + k log(i)n) for any constant i, where log(i) denotes the function obtained by iterating the logarithm function i times. The first of these algorithms takes linear time whenever k is larger than n by a log(i)n factor, for any constant i, while the second algorithm takes linear time whenever k is smaller than n by a log(i)n factor. Both of these algorithms involve applying the Bentley–Ottmann algorithm to small random samples of the input. Notes References External links Computational geometry Geometric algorithms
Bentley–Ottmann algorithm
[ "Mathematics" ]
3,078
[ "Computational geometry", "Computational mathematics" ]
16,331,206
https://en.wikipedia.org/wiki/Clarke%20Chapman
Clarke Chapman is a British engineering firm based in Gateshead, which was formerly listed on the London Stock Exchange. History The company was founded in 1864 in Gateshead by William Clarke (1831–1890). In 1865 Clarke took in a partner, Abel Chapman, and the two of them developed the business into one of the largest manufacturers of cranes and other mechanical handling equipment in the world. In 1870 two further partners joined the firm, Joseph Watson and Joseph Gurney. The firm became known as Clarke, Chapman and Gurney. Joseph Gurney retired from the firm in 1882. The firm subsequently formed a partnership with John Furneaux and Charles Parsons, and became known as Clarke, Chapman, Parsons, and Company. Parsons left the firm in 1889. By 1907 the firm manufactured an extensive range of ship's auxiliary machinery, mining plant, water tube boilers, and pumps. Clarke Chapman became the main supplier of auxiliary equipment to the British shipbuilding industry before the First World War. In 1969 Clarke Chapman acquired Sir William Arrol & Co., a leading bridge-builder. In 1970 Clarke Chapman acquired John Thompson, a leading boiler making business based in Wolverhampton. In 1974 Clarke Chapman acquired the UK interests of International Combustion, a diverse group of heavy engineering businesses. The company merged with Reyrolle Parsons in 1977 to form Northern Engineering Industries plc which itself was acquired by Rolls-Royce plc in 1989. The business continues today as part of Langley Holdings Limited which acquired it from Rolls-Royce in 2000. Ships using Clarke Chapman mechanical handling equipment include the RFA Wave Knight and the RFA Wave Ruler completed in 2000 and 2001 respectively. Operations The company trades under the names of Cowans Sheldon (railway cranes), RB Cranes (construction cranes), Stothert & Pitt (port cranes) and Wellman Booth (steel plant cranes). See also C.A. Parsons and Company A. 
Reyrolle & Company References External links Official site Clarke Chapman History Partial list of the firm's early product literature Boilers Companies based in Tyne and Wear British companies established in 1864 Engineering companies of the United Kingdom 1864 establishments in England Manufacturing companies established in 1864
Clarke Chapman
[ "Chemistry" ]
427
[ "Boilers", "Pressure vessels" ]
16,331,866
https://en.wikipedia.org/wiki/Security%20of%20automated%20teller%20machines
Automated teller machines (ATMs) are targets for fraud, robberies and other security breaches. In the past, the main purpose of ATMs was to deliver cash in the form of banknotes, and to debit a corresponding bank account. However, ATMs are becoming more complicated, and they now serve numerous functions, thus becoming a high-priority target for robbers and hackers. Introduction Modern ATMs are implemented with high-security protection measures. They work under complex systems and networks to perform transactions. The data processed by ATMs are usually encrypted, but hackers can employ discreet hacking devices to hack accounts and withdraw the account's balance. Alternatively, unskilled robbers threaten bank patrons with a weapon to steal their withdrawn cash or force withdrawals from their accounts. Methods of looting ATMs ATM criminals can either physically tamper with the ATM to obtain cash, or employ credit card skimming methods to acquire control of the user's credit card account. Credit card fraud can be carried out by inserting discreet skimming devices over the keypad or credit card reader. An alternative approach is to identify the PIN directly with devices such as cameras concealed near the keypad. Security measures of ATMs PIN validation schemes for local transactions On-Line PIN validation On-line PIN validation occurs when the terminal in question is connected to the central database. The PIN supplied by the customer is always compared with the reference PIN recorded at the financial institution. However, one disadvantage is that any malfunction of the network renders the ATM unusable until it is fixed. Off-Line PIN validation In off-line PIN validation, the ATM is not connected to the central database. A condition for off-line PIN validation is that the ATM should be able to compare the customer's entered PIN against the reference PIN: the terminal must be able to perform cryptographic operations and it must have the required encryption keys at its disposal.
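The comparison step common to both schemes can be illustrated with a toy sketch (hedged: real terminals use hardware security modules and standardised PIN-block formats rather than this scheme; the key and digest choice here are assumptions for illustration):

```python
import hashlib
import hmac

TERMINAL_KEY = b"demo-terminal-key"   # illustrative only; real keys live in secure hardware

def pin_reference(pin: str) -> bytes:
    """Reference value stored for later comparison (toy scheme, not a real PIN format)."""
    return hmac.new(TERMINAL_KEY, pin.encode(), hashlib.sha256).digest()

def verify_pin(entered: str, reference: bytes) -> bool:
    """Constant-time comparison of the entered PIN against the stored reference."""
    return hmac.compare_digest(pin_reference(entered), reference)
```

The constant-time comparison matters because a naïve byte-by-byte comparison could leak timing information about how much of the value matched.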
The offline validation scheme is slow and inefficient, and offline PIN validation is now obsolete, as ATMs are connected to the central server over protected networks. PIN validation for interchange transactions There are three PIN procedures for the operation of a high-security interchange transaction. The supplied PIN is encrypted at the entry terminal; during this step, a secret cryptographic key is used. In addition to other transaction elements, the encrypted PIN is transmitted to the acquirer's system. Then, the encrypted PIN is routed from the acquirer's system to a hardware security module, within which the PIN is decrypted. The decrypted PIN is immediately re-encrypted with a cryptographic key used for interchange and is routed to the issuer's system over normal communications channels. Lastly, the routed PIN is decrypted in the issuer's security module and then validated on the basis of the techniques for on-line local PIN validation. Shared ATMs There are different transaction methods used in shared ATMs with regard to the encipherment of the PIN and message authentication; among them is so-called "zone encryption". In this method, a trusted authority is appointed to operate on behalf of a group of banks so that they can interchange messages for ATM payment approvals. Hardware security module For successful communication between banks and ATMs, the incorporation of a cryptographic module, usually called a security module, is a critical component in maintaining proper connections between banks and the machines. The security module is designed to be tamper-resistant. The security module performs many functions, among them PIN verification, PIN translation in interchange, key management and message authentication. The use of the PIN in interchange raises security concerns, as the PIN can be translated by the security module to the format used for interchange.
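Before encryption, the PIN is typically combined with the account number into a PIN block. A minimal sketch of the clear ISO 9564 format-0 block, which a security module would then encrypt or translate, follows (illustrative only; the function name and sample numbers are assumptions):

```python
def iso0_pin_block(pin: str, pan: str) -> str:
    """Clear ISO 9564 format-0 PIN block: the PIN field XORed with the PAN field."""
    # PIN field: control nibble 0, PIN length, PIN digits, padded with F to 16 nibbles
    pin_field = ("0" + format(len(pin), "X") + pin).ljust(16, "F")
    # PAN field: four zero nibbles, then the 12 rightmost PAN digits excluding the check digit
    pan_field = "0000" + pan[:-1][-12:]
    return "".join(format(int(a, 16) ^ int(b, 16), "X")
                   for a, b in zip(pin_field, pan_field))
```

Binding the PIN to the PAN in this way means that the same PIN produces different blocks for different accounts, which hinders simple substitution attacks on the encrypted block.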
Moreover, the security module must generate, protect and maintain all keys associated with the user's network. Authentication and data integrity The personal verification process begins with the user's supply of personal verification information. This information includes a PIN and customer information recorded for the bank account. Where a cryptographic key is stored on the bank card, it is called a personal key (PK). Personal identification can be performed by an authentication parameter (AP), which can operate in two ways: an AP can be time-invariant, or it can be time-variant. An AP may also be based both on time-variant information and on the transaction request message. In such a case, where the AP can be used as a message authentication code (MAC), message authentication is used to detect stale or bogus messages which might be routed into the communication path, and to detect fraudulent modified messages which can traverse non-secure communication systems. In such cases, the AP serves two purposes. Security Security breaches in electronic funds transfer systems can be discussed by delimiting their components. Electronic funds transfer systems have three components: communication links, computers, and terminals (ATMs). First, communication links are prone to attacks. Data can be exposed by passive means, or by direct means in which a device is inserted to retrieve the data. The second component is computer security. There are different techniques that can be used to acquire access to a computer, such as accessing it via a remote terminal or other peripheral devices such as the card reader. A hacker who has gained unauthorized access to the system can then manipulate and alter programs or data.
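The MAC-style message authentication described above can be sketched with Python's standard hmac module (hedged: the message layout, separator and key here are illustrative assumptions, not a real interchange format):

```python
import hashlib
import hmac

ZONE_KEY = b"interchange-mac-key"   # illustrative; real MAC keys live in the security module

def mac_message(message: bytes) -> str:
    """Append a MAC so the receiver can detect modified or bogus messages."""
    return message.hex() + ":" + hmac.new(ZONE_KEY, message, hashlib.sha256).hexdigest()

def check_message(wire: str) -> bool:
    """Recompute the MAC over the received payload; tampering makes the comparison fail."""
    payload, _, tag = wire.partition(":")
    message = bytes.fromhex(payload)
    expected = hmac.new(ZONE_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

A message whose payload is altered in transit no longer matches its MAC, so stale or fraudulent messages injected into the communication path are rejected.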
Terminal security is a significant component in cases where cipher keys reside in terminals. In the absence of physical security, an abuser may probe for a key or substitute its value. See also ATM Industry Association (ATMIA) References External links https://www.lightbluetouchpaper.org/ - Security Research, Computer Laboratory University of Cambridge Data security Automated teller machines
Security of automated teller machines
[ "Engineering" ]
1,234
[ "Cybersecurity engineering", "Data security", "Automation", "Automated teller machines" ]
16,332,700
https://en.wikipedia.org/wiki/Institute%20of%20Automation
The Institute of Automation, Chinese Academy of Sciences (CASIA, ) is a research lab belonging to the Chinese Academy of Sciences which researches robotics, pattern recognition and control theory. See also Meinü robot List of datasets for machine-learning research References External links Automation Automation organizations
Institute of Automation
[ "Engineering" ]
58
[ "Automation organizations", "Automation" ]
16,334,130
https://en.wikipedia.org/wiki/On-die%20termination
On-die termination (ODT) is the technology where the termination resistor for impedance matching in transmission lines is located inside a semiconductor chip instead of on a printed circuit board (PCB). Overview of electronic signal termination In lower-frequency (slow edge rate) applications, interconnection lines can be modelled as "lumped" circuits. In this case, there is no need to consider the concept of "termination". Under the low-frequency condition, every point in an interconnect wire can be assumed to have the same voltage as every other point at any instant in time. However, if the propagation delay in a wire, PCB trace, cable, or connector is significant (for example, if the delay is greater than 1/6 of the rise time of the digital signal), the "lumped" circuit model is no longer valid and the interconnect has to be analyzed as a transmission line. In a transmission line, the signal interconnect path is modeled as a circuit containing distributed inductance, capacitance, and resistance throughout its length. For a transmission line to minimize distortion of the signal, the impedance of every location on the transmission line should be uniform throughout its length. If there is any place in the line where the impedance is not uniform for some reason (open circuit, impedance discontinuity, different material), the signal is modified by reflection at the impedance change point, which results in distortion, ringing, and so forth. When the signal path has an impedance discontinuity, in other words an impedance mismatch, a termination impedance of the equivalent value is placed at the point of discontinuity. This is described as "termination". For example, resistors can be placed on computer motherboards to terminate high-speed busses. There are several ways of terminating, depending on how the resistors are connected to the transmission line. Parallel termination and series termination are examples of termination methodologies.
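The effect of a mismatch can be quantified by the standard reflection coefficient Γ = (Z_load − Z_line) / (Z_load + Z_line); a matched termination makes it zero (an illustrative calculation):

```python
def reflection_coefficient(z_load: float, z_line: float) -> float:
    """Fraction of the incident wave reflected at a load on a line of impedance z_line."""
    return (z_load - z_line) / (z_load + z_line)

# A 50-ohm line left open, shorted, and properly terminated:
print(reflection_coefficient(1e12, 50))   # open circuit: approaches +1 (full reflection)
print(reflection_coefficient(1e-12, 50))  # short circuit: approaches -1 (inverted reflection)
print(reflection_coefficient(50, 50))     # matched: 0 (no reflection)
```

This is why a termination equal to the line's characteristic impedance, whether on the board or on the die, suppresses the ringing described above.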
On-die termination Instead of having the necessary resistive termination located on the motherboard, the termination is located inside the semiconductor chips, a technique called on-die termination (abbreviated to ODT). Why is on-die termination needed? Although the termination resistors on the motherboard reduce some reflections on the signal lines, they are unable to prevent reflections resulting from the stub lines that connect to the components on the module card (e.g. a DRAM module). A signal propagating from the controller to the components encounters an impedance discontinuity at the stub leading to the components on the module. The signal that propagates along the stub to the component (e.g. a DRAM component) will be reflected onto the signal line, thereby introducing unwanted noise into the signal. In addition, on-die termination can reduce the number of resistor elements and the complexity of the wiring on the motherboard. Accordingly, the system design can be simpler and more cost-effective. Example of ODT: DRAM On-die termination is implemented with several combinations of resistors on the DRAM silicon along with other circuit trees. DRAM circuit designers can use a combination of transistors that have different values of turn-on resistance. In the case of DDR2, there are three kinds of internal resistors: 150 Ω, 75 Ω, and 50 Ω. The resistors can be combined to create a proper equivalent impedance value to the outside of the chip, whereby the signal line (transmission line) of the motherboard is controlled by the on-die termination operation signal. Where an on-die termination value control circuit exists, the DRAM controller manages the on-die termination resistance through a programmable configuration register that resides in the DRAM. The internal on-die termination values in DDR3 are 120 Ω, 60 Ω, 40 Ω, and so forth. How On-Die Termination (ODT) Works: An Example of DRAM Utilizing On-Die Termination (ODT) involves two steps.
First, the On-Die Termination (ODT) value must be selected within the DRAM. Second, it can be dynamically enabled/disabled using the ODT pin from the ODT controller. ODT can be configured in different ways; in DRAM, it is done by setting up the device’s extended mode register with the proper ODT value. There are synchronous and asynchronous timing requirements, depending on the state of the DRAM device. Essentially, the On-Die Termination (ODT) is turned on just before the data transfer and then shut off immediately after. If there is more than one DRAM device loaded on the channel, either the active or an inactive DRAM can terminate the signal. This flexibility enables optimal termination to occur as precisely as needed. Let’s try to understand how On-Die Termination (ODT) works in DRAM read and write operations. All data-group signals fall under point-to-point signalling. The data-group signals are driven by the DRAM controller on writes and driven by the DRAM memories during reads. No external resistors are needed on these routes on the PCB, as the DRAM controller and memory are equipped with ODT. The receivers in both cases (DRAM memory on writes and the DRAM controller on reads) will assert on-die termination (ODT) at the appropriate times. The following diagrams show the impedances seen on these nets during write and read cycles. On-Die Termination (ODT) in Write Cycle Let’s take an example of the impedances seen on the nets during a write cycle, as in the picture below. During writes, the output impedance of the DRAM device is approximately 45 Ω. It is recommended that the SDRAM be implemented with a 240 Ω RZQ reference resistor. Assuming the RZQ resistor is 240 Ω, termination resistors can be configured to present an On-Die Termination (ODT) of RZQ/6 for an effective termination of 40 Ω. On-die Termination (ODT) in Read Cycle The picture shows the impedances seen on the PCB nets during a read cycle.
During reads, it is recommended that the DRAM be configured for an effective drive impedance of RZQ/7 or 34 Ω (assuming the RZQ resistor is 240 Ω). The on-die termination (ODT) within the DRAM controller will have an effective Thevenin impedance of 45 Ω. Fly-By Signals Now let’s talk about the fly-by signals, which include the address, control, command, and clock routing groups. The fly-by signals consist of the fly-by routing from the DRAM controller, stubs at each SDRAM, and terminations after the last SDRAM. In this example, the address, control, and command groups will be terminated through a 39.2 Ω resistor to VTT. The clock pairs will be terminated through 39.2 Ω resistors to a common node connected to a capacitor that is then connected to VDDQ. The DRAM controller will present a 45 Ω output impedance when driving these signals. See also Reflections of signals on conducting lines References Semiconductors
On-die termination
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,497
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
16,334,333
https://en.wikipedia.org/wiki/Pesticide%20application
Pesticide application is the practical way in which pesticides (including herbicides, fungicides, insecticides, or nematode control agents) are delivered to their biological targets (e.g. pest organism, crop or other plant). Public concern about the use of pesticides has highlighted the need to make this process as efficient as possible, in order to minimise their release into the environment and human exposure (including operators, bystanders and consumers of produce). The practice of pest management by the rational application of pesticides is supremely multi-disciplinary, combining many aspects of biology and chemistry with: agronomy, engineering, meteorology, socio-economics and public health, together with newer disciplines such as biotechnology and information science. Efficacy can be related to the quality of pesticide application, with small droplets, such as aerosols, often improving performance. Decision making Optical data from satellites and from aircraft are increasingly being used to inform application decisions. Seed treatments Seed treatments can achieve exceptionally high efficiencies, in terms of effective dose-transfer to a crop. Pesticides are applied to the seed prior to planting, in the form of a seed treatment, or coating, to protect against soil-borne risks to the plant; additionally, these coatings can provide supplemental chemicals and nutrients designed to encourage growth. A typical seed coating can include a nutrient layer—containing nitrogen, phosphorus, and potassium—a rhizobial layer—containing symbiotic bacteria and other beneficial microorganisms—and a fungicide (or other chemical) layer to make the seed less vulnerable to pests. Spray application One of the most common forms of pesticide application, especially in conventional agriculture, is the use of mechanical sprayers. Hydraulic sprayers consist of a tank, a pump, a lance (for single nozzles) or boom, and a nozzle (or multiple nozzles).
Sprayers convert a pesticide formulation, often containing a mixture of water (or another liquid chemical carrier, such as fertilizer) and chemical, into droplets, which can be large rain-type drops or tiny almost-invisible particles. This conversion is accomplished by forcing the spray mixture through a spray nozzle under pressure. The size of droplets can be altered through the use of different nozzle sizes, or by altering the pressure under which it is forced, or a combination of both. Large droplets have the advantage of being less susceptible to spray drift, but require more water per unit of land covered. Due to static electricity, small droplets are able to maximize contact with a target organism, but very still wind conditions are required. Spraying pre- and post-emergent crops Traditional agricultural crop pesticides can either be applied pre-emergent or post-emergent, a term referring to the germination status of the plant. Pre-emergent pesticide application, in conventional agriculture, attempts to reduce competitive pressure on newly germinated plants by removing undesirable organisms and maximizing the amount of water, soil nutrients, and sunlight available for the crop. An example of pre-emergent pesticide application is atrazine application for corn. Similarly, glyphosate mixtures are often applied pre-emergent on agricultural fields to remove early-germinating weeds and prepare for subsequent crops. Pre-emergent application equipment often has large, wide tires designed to float on soft soil, minimizing both soil compaction and damage to planted (but not yet emerged) crops. A three-wheel application machine, such as the one pictured on the right, is designed so that tires do not follow the same path, minimizing the creation of ruts in the field and limiting sub-soil damage. Post-emergent pesticide application requires the use of specific chemicals chosen to minimize harm to the desirable target organism. 
An example is 2,4-Dichlorophenoxyacetic acid, which will injure broadleaf weeds (dicots) but leave behind grasses (monocots). Such a chemical has been used extensively on wheat crops, for example. A number of companies have also created genetically modified organisms that are resistant to various pesticides. Examples include glyphosate-resistant soybeans and Bt maize, which change the types of formulations involved in addressing post-emergent pesticide pressure. It is important to note that even with appropriate chemical choices, high ambient temperatures or other environmental influences can allow the non-targeted desirable organism to be damaged during application. As plants have already germinated, post-emergent pesticide application necessitates limited field contact in order to minimize losses due to crop and soil damage. Typical industrial application equipment will utilize very tall and narrow tires and combine this with a sprayer body which can be raised and lowered depending on crop height. These sprayers usually carry the label ‘high-clearance’ as they can rise over growing crops, although usually not much more than 1 or 2 meters high. In addition, these sprayers often have very wide booms in order to minimize the number of passes required over a field, again designed to limit crop damage and maximize efficiency. In industrial agriculture, very wide spray booms are not uncommon, especially in prairie agriculture with large, flat fields. Related to this, aerial pesticide application is a method of top dressing a pesticide to an emerged crop which eliminates physical contact with soil and crops. Air blast sprayers, also known as air-assisted or mist sprayers, are often used for tall crops, such as tree fruit, where boom sprayers and aerial application would be ineffective.
These types of sprayers can only be used where overspray—spray drift—is less of a concern, either through the choice of a chemical which does not have undesirable effects on other desirable organisms, or by an adequate buffer distance. They can be used against insects, weeds, and other pests of crops, humans, and animals. Air blast sprayers break large droplets down into smaller particles by injecting a small amount of liquid into a fast-moving stream of air. Foggers fulfill a similar role to mist sprayers in producing particles of very small size, but use a different method. Whereas mist sprayers create a high-speed stream of air which can travel significant distances, foggers use a piston or bellows to create a stagnant cloud of pesticide that is often used for enclosed areas, such as houses and animal shelters. Spraying inefficiencies In order to better understand the cause of spray inefficiency, it is useful to reflect on the implications of the large range of droplet sizes produced by typical (hydraulic) spray nozzles. This has long been recognized to be one of the most important concepts in spray application (e.g. Himel, 1969), bringing about enormous variations in the properties of droplets. Historically, dose-transfer to the biological target (i.e. the pest) has been shown to be inefficient. Relating "ideal" deposits with biological effect is fraught with difficulty, but in spite of Hislop's misgivings about detail, there have been several demonstrations that massive amounts of pesticides are wasted by run-off from the crop and into the soil, in a process called endo-drift. This is a less familiar form of pesticide drift, with exo-drift causing much greater public concern. Pesticides are conventionally applied using hydraulic atomisers, either on hand-held sprayers or tractor booms, where formulations are mixed into high volumes of water.
Different droplet sizes have dramatically different dispersal characteristics, and are subject to complex macro- and micro-climatic interactions (Bache & Johnstone, 1992). Greatly simplifying these interactions in terms of droplet size and wind speed, Craymer & Boyle concluded that there are essentially three sets of conditions under which droplets move from the nozzle to the target. These are where: sedimentation dominates: typically larger (>100 μm) droplets applied at low wind-speeds; droplets above this size are appropriate for minimising drift contamination by herbicides. turbulent eddies dominate: typically small droplets (<50 μm) that are usually considered most appropriate for targeting flying insects, unless an electrostatic charge is also present that provides the necessary force to attract droplets to foliage. (NB: the latter effects only operate at very short distances, typically under 10 mm.) intermediate conditions where both sedimentation and drift effects are important. Most agricultural insecticide and fungicide spraying is optimised by using relatively small (say 50–150 μm) droplets in order to maximize “coverage” (droplets per unit area), but such droplets are also subject to drift. Herbicide volatilisation Herbicide volatilisation refers to evaporation or sublimation of a volatile herbicide. The effect of the gaseous chemical is lost at its intended place of application; it may move downwind and affect other plants not intended to be affected, causing crop damage. Herbicides vary in their susceptibility to volatilisation. Prompt incorporation of the herbicide into the soil may reduce or prevent volatilisation. Wind, temperature, and humidity also affect the rate of volatilisation, with humidity reducing it. 2,4-D and dicamba are commonly used chemicals that are known to be subject to volatilisation, but there are many others.
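The three droplet-transport regimes described above can be summarised as a simple classifier. This is a hypothetical sketch using only the rough size thresholds quoted in the text (50 μm and 100 μm); real droplet behaviour also depends on wind speed and micro-climatic interactions, which the sketch ignores.

```python
# Hypothetical sketch of the three droplet-transport regimes described
# in the text, keyed only on droplet diameter in micrometres.

def transport_regime(diameter_um):
    """Classify a spray droplet by its dominant transport mechanism."""
    if diameter_um > 100:
        return "sedimentation-dominated"   # larger droplets, drift-minimising
    elif diameter_um < 50:
        return "turbulence-dominated"      # small droplets, prone to drift
    else:
        return "intermediate"              # both effects matter

for d in (30, 75, 150):
    print(d, transport_regime(d))
```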
Application of herbicides later in the season to protect herbicide-resistant genetically modified plants increases the risk of volatilisation, as the temperature is higher and incorporation into the soil impractical. Improved targeting In the 1970s and 1980s improved application technologies such as controlled droplet application (CDA) received extensive research interest, but commercial uptake has been disappointing. By controlling droplet size, ultra-low volume (ULV) or very low volume (VLV) application rates of pesticidal mixtures can achieve similar (or sometimes better) biological results through improved timing and dose-transfer to the biological target (i.e. the pest). No atomizer has been developed that is able to produce uniform (monodisperse) droplets, but rotary (spinning disc and cage) atomizers usually produce a more uniform droplet size spectrum than conventional hydraulic nozzles (see: CDA & ULV application equipment). Other efficient application techniques include: banding, baiting, specific granule placement, seed treatments and weed wiping. CDA is a good example of a rational pesticide use (RPU) technology (Bateman, 2003), but it has unfortunately been unfashionable with public funding bodies since the early 1990s, with many believing that all pesticide development should be the responsibility of pesticide manufacturers. On the other hand, pesticide companies are unlikely to widely promote better targeting, and thus reduced pesticide sales, unless they can benefit by adding value to products in some other way. RPU contrasts dramatically with the promotion of pesticides, yet many agrochemical concerns have equally become aware that product stewardship provides better long-term profitability than high-pressure salesmanship of a dwindling number of new “silver bullet” molecules. RPU may therefore provide an appropriate framework for collaboration between many of the stake-holders in crop protection.
Understanding the biology and life cycle of the pest is also an important factor in determining droplet size. The Agricultural Research Service, for example, has conducted tests to determine the ideal droplet size of a pesticide used to combat corn earworms. They found that in order to be effective, the pesticide needs to penetrate through the corn's silk, where the earworm's larvae hatch. The research concluded that larger pesticide droplets best penetrated the targeted corn silk. Knowing where the pest's destruction originates is crucial in targeting the amount of pesticide needed. Quality and assessment of equipment Ensuring the quality of sprayers, by testing and setting standards for application equipment, is important to ensure users get value for money. Since most equipment uses hydraulic nozzles, various initiatives have attempted to classify spray quality, starting with the BCPC system. Road maintenance Roadsides receive substantial quantities of herbicides, both intentionally applied for their maintenance and due to herbicide drift from adjacent applications. This often kills off-target plants. Other application methods Aerial application See: aerial spraying, Ultra-low volume spray application, crop dusting and agricultural drones. Application methods for household insecticides Pest management in the home begins with restricting the availability to insects of three vital commodities: shelter, water and food. If insects become a problem despite such measures, it may become necessary to control them using chemical methods, targeting the active ingredient to the particular pest. Insect repellent, referred to as "bug spray", comes in a plastic bottle or aerosol can. Applied to clothing, arms, legs, and other extremities, these products tend to ward off nearby insects. Insect repellent is not an insecticide.
Insecticide used for killing pests—most often insects and arachnids—primarily comes in an aerosol can, and is sprayed directly on the insect or its nest as a means of killing it. Fly sprays will kill house flies, blowflies, ants, cockroaches and other insects, as well as spiders. Other preparations are granules or liquids that are formulated with bait that is eaten by insects. For many household pests, bait traps are available that contain the pesticide and either pheromone or food baits. Crack and crevice sprays are applied into and around openings in houses, such as baseboards and plumbing. Pesticides to control termites are often injected into and around the foundations of homes. Active ingredients of many household insecticides include permethrin and tetramethrin, which act on the nervous system of insects and arachnids. Bug sprays should be used in well-ventilated areas only, as the chemicals contained in the aerosol and most insecticides can be harmful or deadly to humans and pets. All insecticide products, including solids, baits and bait traps, should be applied such that they are out of reach of wildlife, pets and children. See also aerial application Aerosol spray Formulation Integrated pest management (IPM) Pest control Pesticide Insecticide Fungicide Weed control Pesticide drift sprayer spray nozzle References Further reading Matthews G.A. (2006) Pesticides: Health, Safety and the Environment Blackwell, Oxford Bache D.H., Johnstone, D.R. (1992) Microclimate and spray dispersion Ellis Horwood, Chichester, England. External links International Pesticide Application Research Centre (IPARC) Ontario Ministry of Agriculture, Food, and Rural Affairs - Pesticide Storage, Handling, and Application Example of Pesticide application in the Tsubo-en Zen garden (Japanese dry rock garden) in Lelystad, The Netherlands. Stewardship Community working together to promote the safe, effective use of pesticides.
Pesticides Pest control techniques Environmental engineering Agricultural practices
Pesticide application
[ "Chemistry", "Engineering", "Biology", "Environmental_science" ]
3,086
[ "Pesticides", "Toxicology", "Chemical engineering", "Civil engineering", "Environmental engineering", "Biocides" ]
16,334,749
https://en.wikipedia.org/wiki/Algorithmic%20game%20theory
Algorithmic game theory (AGT) is an area in the intersection of game theory and computer science, with the objective of understanding and designing algorithms in strategic environments. Typically, in Algorithmic Game Theory problems, the input to a given algorithm is distributed among many players who have a personal interest in the output. In those situations, the agents might not report the input truthfully because of their own personal interests. We can see Algorithmic Game Theory from two perspectives: Analysis: given the currently implemented algorithms, analyze them using Game Theory tools (e.g., calculate and prove properties of their Nash equilibria, price of anarchy, and best-response dynamics). Design: design games that have both good game-theoretical and algorithmic properties. This area is called algorithmic mechanism design. On top of the usual requirements in classical algorithm design (e.g., polynomial running time, good approximation ratio), the designer must also care about incentive constraints. History Nisan-Ronen: a new framework for studying algorithms In 1999, the seminal paper of Noam Nisan and Amir Ronen drew the attention of the Theoretical Computer Science community to designing algorithms for selfish (strategic) users. As they claim in the abstract: This paper coined the term algorithmic mechanism design and was recognized by the 2012 Gödel Prize committee as one of "three papers laying foundation of growth in Algorithmic Game Theory". Price of Anarchy The other two papers cited in the 2012 Gödel Prize for fundamental contributions to Algorithmic Game Theory introduced and developed the concept of "Price of Anarchy". In their 1999 paper "Worst-case Equilibria", Koutsoupias and Papadimitriou proposed a new measure of the degradation of system efficiency due to the selfish behavior of its agents: the ratio between system efficiency at an optimal configuration and its efficiency at the worst Nash equilibrium.
(The term "Price of Anarchy" only appeared a couple of years later.) The Internet as a catalyst The Internet created a new economy—both as a foundation for exchange and commerce, and in its own right. The computational nature of the Internet allowed for the use of computational tools in this new emerging economy. On the other hand, the Internet itself is the outcome of actions of many. This was new to the classic, ‘top-down’ approach to computation that held till then. Thus, game theory is a natural way to view the Internet and interactions within it, both human and mechanical. Game theory studies equilibria (such as the Nash equilibrium). An equilibrium is generally defined as a state in which no player has an incentive to change their strategy. Equilibria are found in several fields related to the Internet, for instance financial interactions and communication load-balancing. Game theory provides tools to analyze equilibria, and a common approach is then to ‘find the game’—that is, to formalize specific Internet interactions as a game, and to derive the associated equilibria. Rephrasing problems in terms of games allows the analysis of Internet-based interactions and the construction of mechanisms to meet specified demands. If equilibria can be shown to exist, a further question must be answered: can an equilibrium be found, and in reasonable time? This leads to the analysis of algorithms for finding equilibria. Of special importance is the complexity class PPAD, which includes many problems in algorithmic game theory. Areas of research Algorithmic mechanism design Mechanism design is the subarea of economics that deals with optimization under incentive constraints. Algorithmic mechanism design considers the optimization of economic systems under computational efficiency requirements. Typical objectives studied include revenue maximization and social welfare maximization. 
Inefficiency of equilibria The concepts of price of anarchy and price of stability were introduced to capture the loss in performance of a system due to the selfish behavior of its participants. The price of anarchy captures the worst-case performance of the system at equilibrium relative to the optimal performance possible. The price of stability, on the other hand, captures the relative performance of the best equilibrium of the system. These concepts are counterparts to the notion of approximation ratio in algorithm design. Complexity of finding equilibria The existence of an equilibrium in a game is typically established using non-constructive fixed point theorems. There are no efficient algorithms known for computing Nash equilibria. The problem is complete for the complexity class PPAD even in 2-player games. In contrast, correlated equilibria can be computed efficiently using linear programming, as well as learned via no-regret strategies. Computational social choice Computational social choice studies computational aspects of social choice, the aggregation of individual agents' preferences. Examples include algorithms for, and the computational complexity of, voting rules and coalition formation. Other topics include: Algorithms for computing Market equilibria Fair division Multi-agent systems The area has diverse practical applications: Sponsored search auctions Spectrum auctions Cryptocurrencies Prediction markets Reputation systems Sharing economy Matching markets such as kidney exchange and school choice Crowdsourcing and peer grading Economics of the cloud Journals and newsletters ACM Transactions on Economics and Computation (TEAC) SIGEcom Exchanges Algorithmic Game Theory papers are often also published in Game Theory journals such as GEB, Economics journals such as Econometrica, and Computer Science journals such as SICOMP.
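The price of anarchy can be made concrete with Pigou's classic two-link routing example (a standard textbook instance, not drawn from the cited papers): one unit of traffic chooses between a fixed-cost link and a congestible link, and selfish routing costs 4/3 times the optimum.

```python
# Pigou's example: one unit of traffic between two nodes.
# Link 1 has constant cost c1(x) = 1; link 2 has cost c2(x) = x,
# where x is the fraction of traffic using link 2.

def total_cost(x):
    """Total travel cost when a fraction x uses the congestible link."""
    return x * x + (1 - x) * 1

# Nash equilibrium: the congestible link's cost x never exceeds the fixed
# cost 1, so every user takes it and x = 1.
equilibrium_cost = total_cost(1.0)

# Social optimum: minimise x^2 + (1 - x) over x in [0, 1]; optimum at x = 1/2.
optimal_cost = min(total_cost(i / 1000) for i in range(1001))

price_of_anarchy = equilibrium_cost / optimal_cost
print(round(price_of_anarchy, 3))  # ~1.333, i.e. 4/3
```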
See also Auction Theory Computational social choice Gamification Load balancing (computing) Mechanism design Multi-agent system Voting in game theory References John von Neumann, Oskar Morgenstern (1944) Theory of Games and Economic Behavior. Princeton Univ. Press. 2007 edition: . External links gambit.sourceforge.net - a library of game theory software and tools for the construction and analysis of finite extensive and strategic games. gamut.stanford.edu - a suite of game generators designed for testing game-theoretic algorithms. Theory of computation Algorithms
Algorithmic game theory
[ "Mathematics" ]
1,203
[ "Applied mathematics", "Algorithms", "Game theory", "Mathematical logic" ]
16,334,959
https://en.wikipedia.org/wiki/Tephrosin
{{chembox | Verifiedfields = changed | Watchedfields = changed | verifiedrevid = 413150531 | ImageFile = Tephrosin.png | ImageSize = 200px | IUPACName = 7a-Hydroxy-9,10-dimethoxy-3,3-dimethyl-13,13a-dihydro-3H,7aH-pyrano[2,3-c;6,5-f]dichromen-7-one | OtherNames = 12aβ-hydroxydeguelin |Section1= |Section2= |Section3= |Section8= }}'''Tephrosin''' is a rotenoid. It is a natural fish poison found in the leaves and seeds of ''Tephrosia purpurea'' and ''T. vogelii''. See also Cubé resin References Pesticides Phenol ethers Acyloins Tertiary alcohols Rotenoids Cyclic ethers Heterocyclic compounds with 5 rings Pyranochromenes Methoxy compounds
Tephrosin
[ "Biology", "Environmental_science" ]
229
[ "Biocides", "Toxicology", "Pesticides" ]
16,335,410
https://en.wikipedia.org/wiki/Frequency%20partition%20of%20a%20graph
In graph theory, a discipline within mathematics, the frequency partition of a graph (simple graph) is a partition of its vertices grouped by their degree. For example, the degree sequence of the left-hand graph below is (3, 3, 3, 2, 2, 1) and its frequency partition is 6 = 3 + 2 + 1. This indicates that it has 3 vertices of one degree, 2 vertices of another degree, and 1 vertex of a third degree. The degree sequence of the bipartite graph in the middle below is (3, 2, 2, 2, 2, 2, 1, 1, 1) and its frequency partition is 9 = 5 + 3 + 1. The degree sequence of the right-hand graph below is (3, 3, 3, 3, 3, 3, 2) and its frequency partition is 7 = 6 + 1. In general, there are many non-isomorphic graphs with a given frequency partition. A graph and its complement have the same frequency partition. For any partition p = f1 + f2 + ... + fk of an integer p > 1, other than p = 1 + 1 + 1 + ... + 1, there is at least one (connected) simple graph having this partition as its frequency partition. The frequency partitions of several graph families have been completely characterized, but those of many other families have not. Frequency partitions of Eulerian graphs For a frequency partition p = f1 + f2 + ... + fk of an integer p > 1, its graphic degree sequence is denoted as ((d1)f1, (d2)f2, (d3)f3, ..., (dk)fk) where the degrees di are different and fi ≥ fj for i < j. Bhat-Nayak et al. (1979) showed that a partition of p with k parts, k ≤ integral part of , is a frequency partition of a Eulerian graph and conversely. Frequency partitions of trees, Hamiltonian graphs, tournaments and hypergraphs The frequency partitions of families of graphs such as trees, Hamiltonian graphs, directed graphs, tournaments and k-uniform hypergraphs have been characterized. Unsolved problems in frequency partitions The frequency partitions of the following families of graphs have not yet been characterized: Line graphs Bipartite graphs References External links Graph theory
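The worked example above (degree sequence (3, 3, 3, 2, 2, 1), frequency partition 6 = 3 + 2 + 1) can be reproduced with a short sketch; the function and the adjacency list below are illustrative, not taken from the literature.

```python
# Minimal sketch: degree sequence and frequency partition of a simple
# graph given as an adjacency list.

from collections import Counter

def frequency_partition(adjacency):
    """Return (degree sequence, frequency partition) of a simple graph."""
    degrees = sorted((len(nbrs) for nbrs in adjacency.values()), reverse=True)
    # Group vertices by degree and list the group sizes, largest first.
    partition = sorted(Counter(degrees).values(), reverse=True)
    return degrees, partition

# An illustrative 6-vertex graph with the degree sequence from the text.
g = {
    1: [2, 3, 4], 2: [1, 3, 5], 3: [1, 2, 6],
    4: [1, 5], 5: [2, 4], 6: [3],
}
degs, part = frequency_partition(g)
print(degs)  # [3, 3, 3, 2, 2, 1]
print(part)  # [3, 2, 1]  -> 6 = 3 + 2 + 1
```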
Frequency partition of a graph
[ "Mathematics" ]
493
[ "Discrete mathematics", "Mathematical relations", "Graph theory", "Combinatorics" ]
16,335,526
https://en.wikipedia.org/wiki/Pesticide%20drift
Pesticide drift, also known as spray drift, refers to the unintentional diffusion of pesticides toward nontarget species. It is one of the most negative effects of pesticide application. Drift can damage human health, the environment, and crops. Together with runoff and leaching, drift is a mechanism for agricultural pollution. Some drift results from contamination of sprayer tanks. Farmers struggle to minimize pesticide drift and remain productive. Research continues on developing pesticides that are more selective, but the current pesticides have been highly optimized. Pesticide application Pesticides are commonly applied by the use of mechanical sprayers. Sprayers convert a pesticide formulation, often consisting of a mixture of water, the pesticide, and other components (adjuvants, for example), into droplets, which are applied to the crop. Ideally, the pesticide droplets attach evenly to the targeted crop. Because components of the mist are highly mobile, spray drift can occur, especially for smaller droplets. Some pesticide mists are visible, appearing cloud-like, while others can be invisible and odorless. The quality of sprayer equipment affects drift problems. Sprayer tanks contaminated with another herbicide are one source of drift. With placement (localised) spraying of broad spectrum pesticides, considerable efforts have been made to quantify and control spray drift from hydraulic nozzles. Conversely, wind drift is also an efficient mechanism for moving droplets of an appropriate size range to their targets over a wide area with ultra-low volume (ULV) spraying. "Drift retardants" are compounds added to the spray mixture to suppress pesticide drift. A typical retardant is polyacrylamide. These polymers suppress the formation of tiny droplets. Weather conditions and timing affect the drift problem. The efficiency of the spray and the reach of the spray drift can be computed. In addition to weather, windbreaks can mitigate the effects of drift.
Other ways to mitigate spray drift are to apply the pesticide directly to the desired treatment area and to pay attention to where surface waters, gutters, drainage ditches, and storm drains are located, so that the pesticide is applied in a way that prevents it from getting into these spaces. Most herbicides are organic compounds of low volatility, unlike fumigants, which are usually gases. Several are salts, and others have boiling points above 100 °C (dicamba is a solid that melts at 114 °C). Thus, drift often entails mobilization of droplets, which can be very small. The contribution from their volatility, low as it is, cannot be ignored either. A distinction has been made between "exo-drift" (the transfer of spray out of the target area) and "endo-drift", where the active ingredient (AI) in droplets falls into the target area but does not reach the biological target. Endo-drift is volumetrically more significant and may therefore cause greater ecological contamination (e.g. where chemical pesticides pollute ground water). Since drift can be problematic, alternative weed-control technologies have evolved. A topical approach is integrated pest management, which involves fewer chemicals but often greater manual work. Dicamba drift Dicamba drift is a particular problem, as has been recognized since at least 1979. The effects have been noted for many crops: grapes, tomatoes, soybeans. In 2017, dicamba-resistant soybeans and cotton were approved for use in the US. This new technology worsened the drift problem because farmers could use dicamba more freely. Although already low in volatility, as discussed above, dicamba can be made even less volatile by conversion to various salts. The approach entails treatment of dicamba with amines, which form ammonium salts. These salts are described by the acronyms BAPMA-dicamba and DGA-dicamba.
Although these salts are of lower volatility in laboratory tests, in the field the situation is more complicated, and drift remains a problem. Safety and society While much public concern has led to research into spray drift, point source pollution (e.g. pesticides entering bodies of water following spillage of concentrate or rinsate) can also cause environmental harm. Public concern for pesticide drift has not been matched by a regulatory response. Farm workers and communities surrounding large farms are at a high risk of coming in contact with pesticides. People in agricultural areas are at risk for increased genotoxicity because of pesticide drift. Insecticides sprayed on crop fields can also have detrimental effects on non-human lifeforms that are important to the surrounding ecosystems, like bees and other insects. The seriousness of crop injury caused by dicamba drift is increasingly being recognized. For example, the American Soybean Association and various land-grant universities are cooperating in the race to find ways to preserve the usability of dicamba while ending drift injury. Application of herbicides later in the season to protect herbicide-resistant genetically modified plants increases the risk of volatilisation, as the temperature is higher and incorporation into the soil impractical. From 1998 to 2006, Environmental Health Perspectives found nearly 3,000 cases of pesticide drift; nearly half were workers on the fields treated with pesticides, and 14% of cases were children under the age of 15. Health concerns Bystander exposure describes the event when individuals unintentionally come in contact with airborne pesticides. Bystanders include workers working in an area separate from the pesticide application area, individuals living in the surrounding areas of an application area, or individuals passing by fields as they are being treated with a pesticide. Different pesticides can affect different body systems, inflicting different symptoms.
Pesticides can have long-term negative health impacts, including cancer, lung diseases, fertility and reproductive problems, and neurodevelopmental issues in children, when exposure levels are high enough. Regulations In 2001, the United States Environmental Protection Agency published a guidance for "manufacturers, formulators, and registrants of pesticide products" (EPA 2001) that stated the EPA's stance against pesticide drift and suggested product labelling practices. To reduce pesticide drift, the EPA takes part in several initiatives. The EPA conducts routine pesticide risk assessments to check the potential impact of drift on farmworkers, people living near or on fields where crops are grown, water sources, and the environment. The USDA and EPA are working together to examine new studies and to improve the scientific models used to estimate the exposure, risk, and drift of pesticides. The EPA is also working with pesticide manufacturers to ensure labels are easy to read and contain the correct application process and DRT for that specific pesticide. See also Aerial application Agricultural runoff Environmental impact of agriculture Environmental protection Nonpoint source pollution References Sources Notes Matthews G.A. (2006) Pesticides: Health, Safety and the Environment Blackwell, Oxford External links EarthJustice - health impacts of pesticide drift in rural farming community Pesticide Action Network North America (PANNA) - "Advancing alternatives to pesticides worldwide" International Pesticide Application Research Centre (IPARC) Environmental effects of pesticides Sustainable agriculture Water pollution Lawn care Pesticides
Pesticide drift
[ "Chemistry", "Biology", "Environmental_science" ]
1,490
[ "Pesticides", "Biocides", "Toxicology", "Water pollution" ]
16,335,888
https://en.wikipedia.org/wiki/V810%20Centauri
V810 Centauri is a double star consisting of a yellow hypergiant primary (V810 Cen A) and a blue giant secondary (V810 Cen B). It is a small-amplitude variable star; the variability is entirely due to the supergiant primary, which is visually over three magnitudes (more than 15 times) brighter than the secondary. It is the MK spectral standard for class G0 0-Ia. A 5th magnitude star, it is visible to the naked eye under good observing conditions. In 1973, Maurice Pim FitzGerald announced that the star's brightness varies. It was given its variable star designation, V810 Centauri, in 1979. V810 Cen A shows semi-regular variations with several component periods. The dominant mode is around 156 days and corresponds to Cepheid fundamental-mode radial pulsation. Without the other stellar pulsation modes it would be considered a classical Cepheid variable. Other pulsation modes have been detected at 89 to 234 days, with the strongest being a possible non-radial p-mode at 107 days and a possible non-radial g-mode at 185 days. The blue giant secondary has a similar mass and luminosity to the supergiant primary, but is visually much fainter. The primary is expected to have lost around since it was on the main sequence, and has expanded and cooled so that it lies at the blue edge of the Cepheid instability strip. It is expected to get no cooler and may perform a blue loop while slowly increasing in luminosity. V810 Cen was once thought to be a member of the Stock 14 open cluster at 2.6 kpc, but now appears to be more distant. The distance derived from spectrophotometric study is larger than that implied by the mean Hipparcos parallax, but within the margin of error. References Centaurus 101947 057175 4511 Centauri, V810 F-type supergiants B-type giants Classical Cepheid variables Semiregular variable stars Durchmusterung objects G-type hypergiants
V810 Centauri
[ "Astronomy" ]
430
[ "Centaurus", "Constellations" ]
16,336,160
https://en.wikipedia.org/wiki/Distributed%20algorithmic%20mechanism%20design
Distributed algorithmic mechanism design (DAMD) is an extension of algorithmic mechanism design. DAMD differs from algorithmic mechanism design in that the algorithm is computed in a distributed manner rather than by a central authority. This greatly improves computation time, since the burden is shared by all agents within a network. One major obstacle in DAMD is ensuring that agents reveal the true costs or preferences related to a given scenario. Often these agents would rather lie in order to improve their own utility. DAMD is full of new challenges, since one can no longer assume an obedient networking and mechanism infrastructure: rational players control the message paths and the mechanism computation. Game theoretic model Game theory and distributed computing both deal with a system of many agents, in which the agents may pursue different goals. However, they have different focuses. For instance, one of the concerns of distributed computing is proving the correctness of algorithms that tolerate faulty agents and agents performing actions concurrently. In game theory, on the other hand, the focus is on devising a strategy that leads the system to an equilibrium. Nash equilibrium Nash equilibrium is the most commonly used notion of equilibrium in game theory. However, Nash equilibrium does not deal with faulty or unexpected behavior. A protocol that reaches Nash equilibrium is guaranteed to execute correctly in the face of rational agents, with no agent being able to improve its utility by deviating from the protocol. Solution preference There is no trusted center as there is in AMD. Thus, mechanisms must be implemented by the agents themselves. The solution preference assumption requires that each agent prefers any outcome to no outcome at all: thus, agents have no incentive to disagree on an outcome or to cause the algorithm to fail. In other words, as Afek et al. said, "agents cannot gain if the algorithm fails". 
As a result, though agents have preferences, they have no incentive to make the algorithm fail. Truthfulness A mechanism is considered truthful if agents gain nothing by lying about their own or other agents' values. A good example would be a leader election algorithm that selects a computation server within a network. The algorithm specifies that agents should send their total computational power to each other, after which the most powerful agent is chosen as the leader to complete the task. In this algorithm agents may lie about their true computational power, because they risk being tasked with CPU-intensive jobs that would reduce their capacity to complete local jobs. This can be overcome with the help of truthful mechanisms, which, without any a priori knowledge of the existing data and inputs of each agent, cause each agent to respond truthfully to requests. A well-known truthful mechanism in game theory is the Vickrey auction. Classic distributed computing problems Leader election (completely connected network, synchronous case) Leader election is a fundamental problem in distributed computing, and there are numerous protocols to solve it. System agents are assumed to be rational, and therefore prefer having a leader to not having one. The agents may also have different preferences regarding who becomes the leader (an agent may prefer that he himself becomes the leader). Standard protocols may choose leaders based on the lowest or highest ID of system agents. However, since agents have an incentive to lie about their ID in order to improve their utility, such protocols are rendered useless in the setting of algorithmic mechanism design. A protocol for leader election in the presence of rational agents has been introduced by Ittai et al.: At round 1, each agent i sends everyone his id; At round 2, agent i sends each other agent j the set of ids that he has received (including his own). 
If the sets received by agent i are not all identical, or if i does not receive an id from some agent, then i sets its output to Null and leader election fails. Otherwise, let n be the cardinality of the set of ids. Agent i chooses a random number Ni in {0, ..., n−1} and sends it to all the other agents. Each agent then computes N = Σ Ni (mod n), and takes the agent with the Nth highest id in the set to be the leader. (If some agent j does not send i a random number, then i sets its output to Null.) This protocol correctly elects a leader while reaching equilibrium, and it is truthful since no agent can benefit by lying about its input. See also Algorithmic mechanism design Mechanism design Game theory Distributed computing References External links Distributed Algorithmic Mechanism Design: Recent Results and Future Directions Distributed algorithmic mechanism design and network security Service Allocation in Selfish Mobile Ad Hoc Networks Using Vickrey Auction Mechanism design Distributed computing
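The steps of the protocol can be sketched as a short simulation. This is a minimal single-process illustration among honest agents, not a real distributed implementation: the function name, the optional deterministic contributions argument, and the 0-indexed reading of "Nth highest" are choices made here purely for illustration.

```python
import random

def elect_leader(agent_ids, contributions=None):
    """Simulate one run of the rational-agent leader election protocol.

    Rounds 1-2 (id broadcast and echo) are modelled by giving every
    agent the same view of the id set; any inconsistent view would make
    each agent output Null (None here) and the election fail.
    """
    views = [frozenset(agent_ids) for _ in agent_ids]
    if len(set(views)) != 1 or not agent_ids:
        return None  # inconsistent or empty views: election fails

    n = len(set(agent_ids))
    # Each agent picks a random Ni in {0, ..., n-1} and broadcasts it.
    if contributions is None:
        contributions = [random.randrange(n) for _ in range(n)]
    N = sum(contributions) % n
    # Take the agent with the Nth highest id (N = 0 meaning the highest).
    return sorted(set(agent_ids), reverse=True)[N]

# Example run with five agents; each id is equally likely to be elected
# over the random contributions.
print(elect_leader([3, 17, 8, 42, 5]))
```

Because the sum of the random contributions modulo n is uniformly distributed whenever at least one agent chooses uniformly at random, no single agent can bias the outcome by picking its number strategically, which is what makes deviation unprofitable.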
Distributed algorithmic mechanism design
[ "Mathematics" ]
939
[ "Game theory", "Mechanism design" ]
16,336,287
https://en.wikipedia.org/wiki/Tesaglitazar
Tesaglitazar (also known as AZ 242) is a dual peroxisome proliferator-activated receptor agonist with affinity for PPARα and PPARγ, proposed for the management of type 2 diabetes. The drug had completed several phase III clinical trials; however, in May 2006 AstraZeneca announced that it had discontinued further development. The cardiac toxicity of tesaglitazar is related to mitochondrial toxicity caused by a decrease in PPARγ coactivator 1-α (PPARGC1A, PGC1α) and sirtuin 1 (SIRT1). References Abandoned drugs Carboxylic acids Phenol ethers PPAR agonists Benzosulfones Ethoxy compounds
Tesaglitazar
[ "Chemistry" ]
152
[ "Functional groups", "Carboxylic acids", "Drug safety", "Abandoned drugs" ]
16,336,455
https://en.wikipedia.org/wiki/V744%20Centauri
V744 Centauri is a semi-regular variable pulsating star in the constellation Centaurus. Located 3 degrees north-northeast of Epsilon Centauri, it ranges from apparent magnitude 5.1 to 6.7 over 90 days. It is unusual in that it is a red star with a high proper motion (greater than 50 milliarcseconds a year). When it is near its maximum brightness, it is visible to the naked eye under good observing conditions. In 1964, Wolfgang Strohmeier et al. announced the discovery that the star is variable. It was given its variable star designation, V744 Centauri, in 1968. References Centaurus M-type giants Centauri, V744 5134 118787 CD-49 08095 066666 Asymptotic-giant-branch stars
V744 Centauri
[ "Astronomy" ]
182
[ "Centaurus", "Constellations" ]
16,337,341
https://en.wikipedia.org/wiki/Injection%20well
An injection well is a device that places fluid deep underground into porous rock formations, such as sandstone or limestone, or into or below the shallow soil layer. The fluid may be water, wastewater, brine (salt water), or water mixed with industrial chemical waste. Definition The U.S. Environmental Protection Agency (EPA) defines an injection well as "a bored, drilled, or driven shaft, or a dug hole that is deeper than it is wide, or an improved sinkhole, or a subsurface fluid distribution system". Well construction depends on the fluid injected and the depth of the injection zone. Deep wells that are designed to inject hazardous wastes or carbon dioxide deep below the Earth's surface have multiple layers of protective casing and cement, whereas shallow wells injecting non-hazardous fluids into or above drinking water sources are more simply constructed. Applications Injection wells are used for many purposes. Waste disposal Treated wastewater can be injected into the ground between impermeable layers of rock to avoid polluting surface waters. Injection wells are usually constructed of solid-walled pipe to a deep elevation in order to prevent injectate from mixing with the surrounding environment. Injection wells utilize the earth as a filter to treat the wastewater before it reaches the aquifer. This method of wastewater disposal also serves to spread the injectate over a wide area, further decreasing environmental impacts. In the United States, there are about 800 deep injection waste disposal wells used by industries such as chemical manufacturers, petroleum refineries, food producers and municipal wastewater plants. Most produced water generated by oil and gas extraction wells in the US is also disposed of in deep injection wells. Critics of wastewater injection wells cite concerns about potential groundwater contamination. 
It is argued that the impacts of some injected wastes on groundwater are not fully understood, and that science and regulatory agencies have not kept up with the rapid expansion of disposal practices in the US, where there are over 680,000 wells as of 2012. Alternatives to injection wells include direct discharge of treated wastewater to receiving waters, conditioning of produced water from oil drilling and fracking for reuse, utilization of treated water for irrigation or livestock watering, or processing of water at industrial wastewater treatment plants. Direct discharge does not disperse the water over a wide area; the environmental impact is focused on a particular segment of a river and its downstream reaches, or on a coastal water body. Extensive irrigation is not typical in areas where the produced water tends to be salty, and this practice is often prohibitively expensive, requiring ongoing maintenance and large amounts of electricity. Since the early 1990s, Maui County, Hawaii has been engaged in a struggle over the 3 to 5 million gallons per day of wastewater that it injects below the Lahaina Wastewater Reclamation Facility, over the claim that the water was emerging in seeps that were causing algae blooms and other environmental damage. After some twenty years, it was sued by environmental groups after multiple studies showed that more than half the injectate was appearing in nearby coastal waters. The judge in the suit rejected the County's arguments, potentially subjecting it to millions of dollars in federal fines. A 2001 consent decree required the county to obtain a water quality certification from the Hawaii Department of Health, which it failed to do until 2010, after the suit was filed. The case proceeded through the United States Court of Appeals for the Ninth Circuit and subsequently to the Supreme Court of the United States. In 2020 the Court ruled in County of Maui v. 
Hawaii Wildlife Fund that injection wells may be the "functional equivalent of a direct discharge" under the Clean Water Act, and instructed the EPA to work with the courts to establish regulations determining when these types of wells should require permits. Oil and gas production Another use of injection wells is in natural gas and petroleum production. Steam, carbon dioxide, water, and other substances can be injected into an oil-producing unit in order to maintain reservoir pressure, heat the oil or lower its viscosity, allowing it to flow to a producing well nearby. Waste site remediation Yet another use for injection wells is in environmental remediation, for cleanup of either soil or groundwater contamination. Injection wells can insert clean water into an aquifer, thereby changing the direction and speed of groundwater flow, perhaps towards extraction wells downgradient, which can then more speedily and efficiently remove the contaminated groundwater. Injection wells can also be used in the cleanup of soil contamination, for example by use of an ozonation system. Complex hydrocarbons and other contaminants trapped in soil and otherwise inaccessible can be broken down by ozone, a highly reactive gas, often with greater cost-effectiveness than could be had by digging out the affected area. Such systems are particularly useful in built-up urban environments where digging may be impractical due to overlying buildings. Aquifer recharge Recently the option of refilling natural aquifers by injection or percolation has become more important, particularly in the driest region of the world, the MENA region (Middle East and North Africa). Surface runoff can also be recharged into dry wells, or simply into barren wells that have been modified to function as cisterns. These hybrid stormwater management systems, called recharge wells, have the advantage of aquifer recharge and instantaneous supply of potable water at the same time. 
They can utilize existing infrastructure and require very little effort for modification and operation. The activation can be as simple as inserting a polymer cover (foil) into the well shaft. Vertical pipes that conduct the overflow to the bottom can enhance performance. The area around the well acts as a funnel. If this area is well maintained, the water will require little purification before it enters the cistern. Geothermal energy Injection wells are used to tap geothermal energy in hot, porous rock formations below the surface: fluid is injected into the ground, heated by the rock, and then extracted from adjacent wells as liquid, steam, or a combination of both. The heated steam and fluid can then be used to generate electricity or directly for geothermal heating. Regulatory requirements In the United States, injection well activity is regulated by the EPA and state governments under the Safe Drinking Water Act (SDWA). The “State primary enforcement responsibility” section of the SDWA provides for states to submit their proposed UIC program to the EPA to request state assumption of primary enforcement responsibility. Thirty-four states have been granted UIC primacy enforcement authority for Class I, II, III, IV and V wells. For states without an approved UIC program, the EPA administrator prescribes a program to apply. The EPA has issued Underground Injection Control (UIC) regulations in order to protect drinking water sources. EPA regulations define six classes of injection wells. Class I wells are used for the injection of municipal and industrial wastes beneath underground sources of drinking water. Class II wells are used for the injection of fluids associated with oil and gas production, including waste from hydraulic fracturing. Class III wells are used for the injection of fluids used in mineral solution mining beneath underground sources of drinking water. 
Class IV wells, like Class I wells, were used for the injection of hazardous wastes, but injected waste into or above underground sources of drinking water instead of below them. EPA banned the use of Class IV wells in 1984. Class V wells are those used for all non-hazardous injections that are not covered by Classes I through IV. Examples of Class V wells include stormwater drainage wells and septic system leach fields. Finally, Class VI wells are used for the injection of carbon dioxide for sequestration, or long-term storage. Since the introduction of Class VI in 2010, only two Class VI wells have been constructed as of 2022, both at the same Illinois facility; four other approved projects did not proceed to construction. Injection-induced earthquakes A July 2013 study by US Geological Survey scientist William Ellsworth links earthquakes to wastewater injection sites. In the four years from 2010 to 2013 the number of earthquakes of magnitude 3.0 or greater in the central and eastern United States increased dramatically. After decades of a steady earthquake rate (an average of 21 events per year), activity increased starting in 2001 and peaked at 188 earthquakes in 2011, including a 5.7-magnitude earthquake near Prague, Oklahoma, the strongest ever recorded in the state. USGS scientists have found that at some locations the increase in seismicity coincides with the injection of wastewater in deep disposal wells. Injection-induced earthquakes are thought to be caused by pressure changes due to excess fluid injected deep below the surface, and are being dubbed “man-made” earthquakes. On September 3, 2016, a 5.8-magnitude earthquake occurred near Pawnee, Oklahoma, followed by nine aftershocks between magnitudes 2.6 and 3.6 within three and one-half hours. The earthquake broke the previous record set five years earlier. Tremors were felt as far away as Memphis, Tennessee, and Gilbert, Arizona. 
Mary Fallin, the Oklahoma governor, declared a local emergency, and the Oklahoma Corporation Commission issued shutdown orders for local disposal wells. Results of ongoing multi-year research on induced earthquakes by the United States Geological Survey (USGS), published in 2015, suggested that most of the significant earthquakes in Oklahoma, such as the 1952 magnitude 5.5 El Reno earthquake, may have been induced by deep injection of wastewater by the oil industry. Notes References US Army Environmental Center. Aberdeen Proving Ground, MD (2002). "Deep Well Injection." Remediation Technologies Screening Matrix and Reference Guide. 4th ed. Report no. SFIM-AEC-ET-CR-97053. External links EPA - Underground Injection Control Program Drinking water Hydrology Water pollution Petroleum technology Natural gas technology
Injection well
[ "Chemistry", "Engineering", "Environmental_science" ]
1,981
[ "Hydrology", "Petroleum technology", "Petroleum engineering", "Water pollution", "Water wells", "Natural gas technology", "Oil wells", "Environmental engineering" ]
16,339,156
https://en.wikipedia.org/wiki/Tibesti%E2%80%93Jebel%20Uweinat%20montane%20xeric%20woodlands
The Tibesti-Jebel Uweinat montane xeric woodlands is a deserts and xeric shrublands ecoregion in the eastern Sahara. The woodlands ecoregion occupies two separate highland regions, covering portions of northern Chad, southwestern Egypt, southern Libya, and northwestern Sudan. Setting The ecoregion covers in the volcanic Tibesti Mountains of Chad and Libya, and the 1,932 m peak of Jebel Uweinat on the border of Egypt, Libya, and Sudan. The climate is arid and subtropical, but temperatures can reach 0°C at the highest altitudes during the winter. Rainfall is irregular but more regular than in the surrounding desert, and many of the lower wadis are watered by rain that falls higher up. The Tibesti (and to a lesser extent the Jebel Uweinat massif) foster higher, more regular rainfall and cooler temperatures than the surrounding Sahara. This supports woodlands and shrublands of date palm (Phoenix dactylifera), acacias, Saharan myrtle (Myrtus nivellei), oleander (Nerium oleander), tamarix, and several endemic and rare plant species, such as Ficus teloukat. The northern slopes are humid enough to support wetland species such as Juncus maritimus, Typha dominguensis, Scirpus holoschoenus, Phragmites australis and Equisetum ramosissimum. Fauna The ecoregion supports, or supported, populations of several important Saharan large mammals. One, the scimitar-horned oryx Oryx dammah, is now believed to be extinct in the wild, while the addax Addax nasomaculatus is critically endangered. Other species include the dorcas gazelle Gazella dorcas, which is assessed as vulnerable; the dama gazelle Nanger dama, which is endangered; the Barbary sheep Ammotragus lervia, which is vulnerable; and the cheetah Acinonyx jubatus, which is vulnerable. In 2000 Barbary sheep and dama gazelle were recorded in the Jebel Uweinat portion of the ecoregion. 
Smaller mammals are abundant, including the rock hyrax Procavia capensis, the Cape hare Lepus capensis, many mice, gerbils and jirds, and three species of fox: Rüppell's fox Vulpes rueppelli, the pale fox Vulpes pallida and the fennec fox Fennecus zerda. Other predators found in the region include a relict population of the African wild dog Lycaon pictus, as well as the striped hyena Hyaena hyaena and the golden jackal Canis aureus, primarily in the southern portion of the region. Locusts The habitats dominated by Schouwia and Tribulus terrestris in the wadis of this region have an important role in the life cycle of the desert locust. This is where the female locusts lay their eggs, as the soil is moist, and when the locust nymphs emerge they feed on the leaves of Schouwia and Tribulus, obtaining enough food and water to mature. In some years, if conditions are right, they can amass into large swarms, eventually becoming a plague that can reach distant areas of Africa and Europe and have a huge economic impact by destroying crops. References External links Sahara Biota of Chad Biota of Egypt Biota of Sudan Deserts and xeric shrublands Ecoregions of Chad Ecoregions of Egypt Ecoregions of Libya Ecoregions of Sudan Afromontane forests Geography of Chad Geography of Egypt Geography of Libya Geography of Sudan Palearctic ecoregions
Tibesti–Jebel Uweinat montane xeric woodlands
[ "Biology" ]
749
[ "Biota by country", "Biota of Egypt", "Biota of Sudan", "Biota of Chad" ]
16,341,458
https://en.wikipedia.org/wiki/Maximum%20residue%20limit
The maximum residue limit (also maximum residue level, MRL) is the maximum amount of pesticide residue that is expected to remain on food products when a pesticide is used according to label directions, and that will not be a concern for human health. Determination The MRL is usually determined by repeated (on the order of 10) field trials in which the crop has been treated according to good agricultural practice (GAP) and an appropriate pre-harvest interval or withholding period has elapsed. For many pesticides the MRL is set at the limit of determination (LOD), since only major pesticides have been evaluated and understanding of the acceptable daily intake (ADI) is incomplete (i.e. producers or public bodies have not submitted MRL data, often because these were not required in the past). The LOD can be considered a measure of presence/absence, but certain residues may not be quantifiable at very low levels. For this reason the limit of quantification (LOQ) is often used instead of the LOD. As a rule of thumb the LOQ is approximately two times the LOD. For substances that are not included in any of the annexes in EU regulations, a default MRL of 0.01 mg/kg normally applies. It follows that adoption of GAP at the farm level must be a priority, and this includes the withdrawal of obsolete pesticides. With increasingly sensitive detection equipment, a certain amount of pesticide residue will often be measured following field use. In the current regulatory environment, it would be wise for cocoa producers to focus only on pest control agents that are permitted for use in the EU and US. It should be stressed that MRLs are set on the basis of observations and not on ADIs. 
In medicinal plants If the MRL of a medicinal plant is not known, it is calculated by the formula MRL = (ADI × W) / (MDI × SF), where SF is the safety factor, MDI is the mean daily intake, W is the body weight, and ADI is the acceptable daily intake. Ornamental crops In some cases in the EU, MRLs are also used for ornamental produce and checked against the MRLs for food crops. While this is a sound approach for the general environmental impact, it does not reflect the potential exposure of people handling ornamentals. A swab test can eliminate this gap. MRLs for ornamental produce can sometimes give a conflicting outcome because of the absence of pre-harvest intervals (PHI) or withholding periods for ornamentals, specifically in crops where harvesting is continuous, like roses. This happens when a grower is following the label recommendations and the produce is sampled shortly after application. MRL in the EU Three key points are taken into consideration regarding MRL values in the EU regulation: 1) the amounts of residues found in food must be safe for consumers and must be as low as possible, 2) the European Commission fixes MRLs for all food and animal feed, and 3) the MRLs for all crops and all pesticides can be found in the MRL database on the Commission website. See also Detection limits Maximum contaminant level Pesticides QuEChERS – method for testing pesticide residues References Further reading FAO (2016). Submission and evaluation of pesticide residues data for the estimation of maximum residue levels in food and feed, Rome: Food and Agriculture Organization of the United Nations External links FAO/WHO, Codex Alimentarius: MRL database Code of Federal Regulations, Part 180—Tolerances and exemptions for pesticide chemical residues in food Soil contamination Pesticides
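Assuming the medicinal-plant formula takes the dimensionally consistent form MRL = (ADI × W) / (MDI × SF) — with ADI in mg per kg of body weight per day, W in kg, and MDI in kg of plant material per day — the calculation can be sketched as follows. The numeric inputs below are hypothetical, chosen only to illustrate the units working out to mg of residue per kg of plant material.

```python
def mrl(adi, body_weight, mean_daily_intake, safety_factor):
    """Maximum residue limit in mg of residue per kg of plant material.

    adi: acceptable daily intake, mg per kg of body weight per day
    body_weight: kg
    mean_daily_intake: kg of the plant consumed per day
    safety_factor: dimensionless
    """
    return (adi * body_weight) / (mean_daily_intake * safety_factor)

# Hypothetical example: ADI 0.01 mg/kg bw/day, a 60 kg adult,
# 0.05 kg of plant material consumed per day, safety factor 10:
print(mrl(0.01, 60.0, 0.05, 10.0))  # ~1.2 mg/kg
```

Note how a larger safety factor or a larger mean daily intake lowers the limit, while a heavier consumer or a more tolerant ADI raises it, matching the intent of the definitions in the text.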
Maximum residue limit
[ "Chemistry", "Biology", "Environmental_science" ]
709
[ "Toxicology", "Pesticides", "Environmental chemistry", "Soil contamination", "Biocides" ]