id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
992,039 | https://en.wikipedia.org/wiki/Anatol%20Rapoport | Anatol Borisovich Rapoport (May 22, 1911 – January 20, 2007) was an American mathematical psychologist. He contributed to general systems theory, to mathematical biology and to the mathematical modeling of social interaction and stochastic models of contagion.
Biography
Rapoport was born in Lozova, Kharkov Governorate, Russia (in today's Kharkiv Oblast, Ukraine) into a secular Jewish family. In 1922, he moved to the United States, and in 1928 became a naturalized citizen. He started studying music in Chicago and continued with piano, conducting and composition at the Vienna Hochschule für Musik where he studied from 1929 to 1934. However, due to the rise of Nazism, he found it impossible to make a career as a pianist.
He shifted his career into mathematics, completing a Ph.D. in mathematics under Otto Schilling and Abraham Adrian Albert at the University of Chicago in 1941 on the thesis Construction of Non-Abelian Fields with Prescribed Arithmetic. According to The Globe and Mail, he was a member of the American Communist Party for three years, but quit before enlisting in the U.S. Army Air Forces in 1941, serving in Alaska and India during World War II.
After the war, he joined the Committee on Mathematical Biology at the University of Chicago (1947–54), publishing his first book, Science and the Goals of Man, co-authored with semanticist S. I. Hayakawa in 1950. He also received a one-year fellowship at the prestigious Center for Advanced Study in the Behavioral Sciences at Stanford University.
From 1955 to 1970, Rapoport was Professor of Mathematical Biology and Senior Research Mathematician at the University of Michigan, as well as founding member, in 1955, of the Mental Health Research Institute (MHRI) at the University of Michigan. In 1970, during the Vietnam War, Rapoport moved to Toronto "to live in a country that was not committed to a messianic role—a small peaceful country with no aspiration to major power status". He was appointed professor of mathematics and psychology at the University of Toronto (1970–79). The university appointed him professor emeritus in 1980. He lived in bucolic Wychwood Park overlooking downtown Toronto, a neighbour of Marshall McLuhan. On his retirement from the University of Toronto, he became director of the Institute of Advanced Studies (Vienna) until 1983.
The University of Toronto appointed him professor of peace studies in 1984, a position he held until 1996, though he continued to teach until 2000.
In 1984 he co-founded Science for Peace, was elected president and remained on its executive until 1998.
In 1954 Anatol Rapoport co-founded the Society for General Systems Research, along with the researchers Ludwig von Bertalanffy, Ralph Gerard, and Kenneth Boulding. He became president of the Society for General Systems Research in 1965.
Anatol Rapoport died of pneumonia in Toronto. He was survived by his wife Gwen, daughter Anya, and sons Alexander and Anthony.
Work
Rapoport contributed to general systems theory, to mathematical biology, and to the mathematical modeling of social interaction and stochastic models of contagion. He combined his mathematical expertise with psychological insights into the study of game theory, social networks, and semantics.
Rapoport extended these understandings into studies of psychological conflict, dealing with nuclear disarmament and international politics. His autobiography, Certainties and Doubts: A Philosophy of Life, was published in 2001. An article celebrating his legacy and thinking includes a career overview alongside testimonials by scholars and family that provide a glimpse of Anatol Rapoport, the scientist and the person.
Philosopher and physicist Mario Bunge called Rapoport a polymath whose work Bunge found congenial because of its applicability to real-life problems, its use of mathematics, and its "avoidance of holistic blabber".
Game theory
Rapoport had a versatile mind, working in mathematics, psychology, biology, game theory, social network analysis, and peace and conflict studies. For example, he pioneered the modeling of parasitism and symbiosis while researching cybernetic theory. This work provided a conceptual basis for his lifelong study of conflict and cooperation.
Rapoport wrote over 300 articles and many well-known books on fights, games, violence, and peace, including "Two-Person Game Theory" (1966) and "N-Person Game Theory" (1970). He analyzed contests in which there are more than two sets of conflicting interests, such as war, diplomacy, poker, or bargaining. His work led him to peace research, including the books The Origins of Violence (1989) and Peace: An Idea Whose Time Has Come (1993).
In the 1980s, he won a computer tournament which was based on Robert Axelrod's The Evolution of Cooperation and was designed to further understanding of the ways in which cooperation could emerge through evolution. The contenders had to present programs that could play iterated games of the prisoner's dilemma and these were pitted against each other. Rapoport's entry, Tit-for-Tat, has only four lines of code. The program opens by cooperating with its opponent. It then plays exactly as the other side played in the previous game. If the other side defected in the previous game, the program also defects; but only for one game. If the other side cooperates, the program continues to cooperate. According to Peace Magazine author/editor Metta Spencer, the program "punished the other player for selfish behaviour and rewarded her for cooperative behaviour—but the punishment lasted only as long as the selfish behaviour lasted. This proved to be an exceptionally effective sanction, quickly showing the other side the advantages of cooperating. It also set moral philosophers to proposing this as a workable principle to use in real life interactions."
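The strategy described above is simple enough to sketch in a few lines. The following Python snippet is an illustrative re-implementation, not Rapoport's original tournament entry; the payoff values are the standard prisoner's dilemma payoffs from Axelrod's tournament, assumed here for the sake of the example.

```python
def tit_for_tat(my_history, opponent_history):
    """Tit-for-Tat: cooperate first, then mirror the opponent's last move.

    Moves are 'C' (cooperate) or 'D' (defect). Illustrative sketch only.
    """
    if not opponent_history:          # first round: open by cooperating
        return 'C'
    return opponent_history[-1]       # otherwise repeat the opponent's last move


# Standard iterated prisoner's dilemma payoffs (points for the first player).
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play(strategy_a, strategy_b, rounds=10):
    """Play an iterated prisoner's dilemma and return both total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

always_defect = lambda my_history, opponent_history: 'D'
print(play(tit_for_tat, always_defect))   # Tit-for-Tat concedes only the first round
```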
His children report that he was a strong chess player but a bad poker player because he non-verbally revealed the strength of his hands.
Social network analysis
Rapoport was an early developer of social network analysis. His original work showed that one can measure large networks by profiling traces of flows through them. This enables learning about the speed of the distribution of resources, including information, and what speeds or impedes these flows—such as race, gender, socioeconomic status, proximity, and kinship. This work linked social networks to the diffusion of innovation, and by extension, to epidemiology. Rapoport's empirical work traced the spread of information within a school. It prefigured the study of degrees of separation by showing the rapid spread of information in a population to almost all—but not all—school members (see references below). His work on random nets predates the random graphs as defined by the Erdős–Rényi model and independently by Edgar Gilbert.
Rapoport also originated the theory behind the interpretation of bias in social networks, which concerns the extent to which a network deviates from a random base model. He introduced what is now known as the "preferential attachment" mechanism in biased networks: a stochastic process in which well-connected nodes tend to accumulate still more connections. Rapoport also published an article outlining a probabilistic approach to animal sociology, one of the earliest efforts at modeling simple social structures.
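The snowballing process just described can be illustrated with a toy simulation. The sketch below is a generic preferential-attachment model, not Rapoport's own biased-net formulation, and the function names are invented for the example.

```python
import random

def grow_preferential(n_nodes, seed_edges=((0, 1),)):
    """Grow a network by preferential attachment.

    Each new node links to one existing node chosen with probability
    proportional to its current degree, so well-connected nodes snowball
    into even more connections. Illustrative sketch only.
    """
    edges = list(seed_edges)
    # 'endpoints' holds one entry per edge end, so a uniform draw from it
    # samples nodes in proportion to their degree.
    endpoints = [v for edge in edges for v in edge]
    for new_node in range(2, n_nodes):
        target = random.choice(endpoints)
        edges.append((new_node, target))
        endpoints.extend([new_node, target])
    return edges

random.seed(1)
degree = {}
for a, b in grow_preferential(200):
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
print(sorted(degree.values(), reverse=True)[:5])   # a few hubs dominate
```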
Conflict and peace studies
According to Thomas Homer-Dixon in the Toronto Globe and Mail, Rapoport "became anti-militarist quite soon after World War II. The idea of military values became anathema". He was a leading organizer of the first teach-ins against the Vietnam War at the University of Michigan, a model that spread rapidly throughout North America. He said at a teach-in: "By undertaking the war against Vietnam, the United States has undertaken a war against humanity…This war we shall not win" (Ann Arbor News, April 1967). He described himself as an abolitionist rather than a total pacifist: "I'm for killing the institution of war". In 1968, he signed the "Writers and Editors War Tax Protest" pledge, vowing to refuse tax payments in protest against the Vietnam War. As a result of his opposition to the Vietnam War, J. Edgar Hoover considered Rapoport a Communist and had FBI agents write letters to the president of the University of Michigan, the Governor of Michigan, and others. This smear campaign drove him from Ann Arbor to the University of Toronto.
Rapoport returned to the University of Toronto to become the founding (and unpaid) professor of its Peace and Conflict Studies programme, working with George Ignatieff and Canada's Science for Peace organization. As the programme's sole professor at the start, he applied a rigorous, interdisciplinary approach to the study of peace, integrating mathematics, politics, psychology, philosophy, science, and sociology. His main concern was to legitimize peace studies as a worthy academic pursuit. The Trudeau Centre for Peace and Conflict Studies continued to flourish at the University of Toronto under the leadership of Thomas Homer-Dixon and, from 2008, under Ron Levi. When Rapoport began, there was one (unpaid) professor and twelve students; in 2007, there were three paid professors and ninety students.
Rapoport's students report that he was an engaged and inspiring professor who captured their attention, imagination and interest with his wide-ranging knowledge, passion for the subject, good humor, kind and generous spirit, attentiveness to student concerns, and animated teaching style.
In 1981 Rapoport co-founded the international non-governmental organization Science for Peace. He was recognized in the 1980s for his contribution to world peace through nuclear conflict restraint via his game theoretic models of psychological conflict resolution. He won the Lentz International Peace Research Prize in 1976. Professor Rapoport was also a member of the editorial board of the Journal of Environmental Peace published by the International Innovation Projects at the University of Toronto.
Publications
Books
1950, Science and the Goals of Man, Harper & Bros., New York
1953, Operational Philosophy: Integrating Knowledge and Action, Harper & Bros., New York
1960, Fights, Games, and Debates, University of Michigan Press, Ann Arbor
1965, Prisoner's Dilemma, The University of Michigan, Ann Arbor, MI. (co-author; Albert M. Chammah)
1966, Two-Person Game Theory: The Essential Ideas, Ann Arbor, MI, The University of Michigan Press. (reprinted by Dover Press, Mineola, NY, 1999).
1969, Strategy and Conscience, Schocken Books, New York, NY. (first published in 1964)
1970, N-Person Game Theory. Concepts and Applications, University of Michigan, Ann Arbor, MI. (reprinted by Dover Press, Mineola, NY, 2001).
1974, Conflict in Man-made Environment, Harmondsworth, Penguin Books.
1975, Semantics, Crowell.
1986, General System Theory. Essential Concepts and Applications, Abacus, Tunbridge Wells.
1989, The Origins of Violence: Approaches to the Study of Conflict, Paragon House, New York.
1989, Decision Theory and Decision Behaviour, Kluwer Academic Publishers.
1992, Peace: An Idea Whose Time Has Come, University of Michigan Press, Ann Arbor, MI.
2000, Certainties and Doubts: A Philosophy of Life, Black Rose Books, Montreal. His autobiography.
2001, Skating on Thin Ice, RDR Books, Oakland, CA.
Selected articles
1948, "Cycle distributions in random nets." Bull. Math. Biophysics 10(3):145–157.
1951, with Ray Solomonoff, "Connectivity of random nets." Bull. Math. Biophysics 13:107–117.
1953, "Spread of information through a population with sociostructural bias: I. Assumption of transitivity." Bulletin of Mathematical Biophysics, 15, 523–533.
1956, with Ralph W. Gerard and Clyde Kluckhohn, "Biological and cultural evolution: Some analogies and explorations". Behavioral Science 1:6–34.
1957, "Contribution to the Theory of Random and Biased Nets." Bulletin of Mathematical Biology 19:257–77.
1960 with W.J. Horvath, "The theoretical channel capacity of a single neuron as determined by various coding systems". Information and Control, 3(4):335–350.
1962, "The Use and Misuse of Game Theory". Scientific American, 207:108–114.
1963, "Mathematical models of social interaction". R. D. Luce, R. R. Bush, & E. Galanter (Eds.), Handbook of Mathematical Psychology, Vol. II, pp. 493–579. New York, NY: John Wiley and Sons.
1974, with Lawrence B. Slobodkin, "An optimal strategy of evolution". Q. Rev. Biol. 49:181–200
1979, "Some Problems Relating to Randomly Constructed Biased Networks." Perspectives on Social Network Research:119–164.
1989, with Y. Yuan, "Some Aspects of Epidemics and Social Nets." Pp. 327–348 in The Small World, ed. by Manfred Kochen. Norwood, NJ: Ablex.
About Rapoport
See also
References
External links
Anatol Rapoport, 1911–2007. anatolrapoport.net.
Science for Peace website. scienceforpeace.ca.
History of Science for Peace. peacemagazine.org.
Profile of Anatol Rapoport. isss.org.
Anatol Rapoport archival papers held at the University of Toronto Archives and Records Management Services
1911 births
2007 deaths
Deaths from pneumonia in Ontario
People from Lozova
United States Army Air Forces personnel of World War II
American tax resisters
Soviet emigrants to the United States
Game theorists
American systems scientists
Academic staff of the University of Toronto
Peace and conflict scholars
20th-century Ukrainian Jews
American people of Ukrainian-Jewish descent
Jewish American scientists
University of Michigan faculty
Center for Advanced Study in the Behavioral Sciences fellows
20th-century American psychologists
20th-century American Jews
Presidents of the International Society for the Systems Sciences
Network scientists | Anatol Rapoport | [
"Mathematics"
] | 2,930 | [
"Game theorists",
"Game theory"
] |
992,044 | https://en.wikipedia.org/wiki/NGC%207320 | NGC 7320 is a spiral galaxy in the constellation Pegasus. It was discovered on 27 September 1873 by French astronomer Édouard Stephan.
NGC 7320 appears as part of Stephan's Quintet; however, it is not an actual member of the galaxy group but a much closer line-of-sight galaxy at a distance of about 40 million light-years, roughly the same as the nearby NGC 7331. The other galaxies of Stephan's Quintet are some 300 million light-years distant.
NGC 7320 has extensive H II regions, identified as red blobs, where active star formation is occurring.
The galaxy was imaged by the James Webb Space Telescope as part of Stephan's Quintet; the picture was released on 12 July 2022.
Image gallery
See also
List of NGC objects (7001–7840)
References
External links
https://webbtelescope.org/contents/news-releases/2022/news-2022-034
7320
NGC 7320
319
Unbarred spiral galaxies
Pegasus (constellation)
Astronomical objects discovered in 1873
Discoveries by Édouard Stephan | NGC 7320 | [
"Astronomy"
] | 218 | [
"Pegasus (constellation)",
"Constellations"
] |
992,202 | https://en.wikipedia.org/wiki/IC%202602 | IC 2602 (also known as the Southern Pleiades, Theta Carinae Cluster, or Caldwell 102) is an open cluster in the constellation Carina. Discovered by Abbé Lacaille in 1751 from South Africa, the cluster is easily visible to the unaided eye, and is one of the nearest star clusters, centred about 149 parsecs (486 light-years) away from Earth.
Description
IC 2602 has a total apparent magnitude of 1.9, and contains about 75 stars. It is the third-brightest open cluster in the sky, following the Hyades and the Pleiades. Its apparent diameter is about 50 arcminutes.
IC 2602 is likely about the same age as the open cluster IC 2391, which has a lithium depletion boundary age of 50 million years, though the age estimated from its Hertzsprung–Russell diagram is about 13.7 million years. IC 2602 is thought to form part of the Lower Scorpius–Centaurus association.
Components
Theta Carinae is the brightest star within the open cluster, with the apparent visual magnitude of +2.74. Theta Carinae is part of the asterism known as the Diamond Cross, which is often mistaken for the Southern Cross asterism in the constellation of Crux.
p Carinae (PP Carinae) is another third-magnitude star known to be a member of IC 2602, although it lies well outside the main visible grouping of stars. p Carinae exhibits a variable apparent magnitude ranging from 3.22 to 3.55.
All other members of the cluster are of the fifth magnitude and fainter, but several are naked-eye objects, including HR 4196 (V518 Car), HR 4204, HD 93194, HR 4219, HR 4220, HR 4222, HD 92536, HD 93738, and V364 Carinae.
An exoplanet has been found orbiting the star TOI-837 in this cluster.
History
IC 2602 was first discovered by the French astronomer and abbot Nicolas-Louis de Lacaille on March 3, 1751, while observing from the Cape of Good Hope, South Africa. In his initial report, Lacaille classified Theta Carinae (referred to as "Theta Navis", or alternatively "Theta Argus") as a third-magnitude star, while noting the cluster's resemblance to the northern Pleiades.
Observation
Positioned at a declination of −64° on the night sky, IC 2602 is most clearly visible from the southern hemisphere and appears circumpolar from southern subtropical and temperate latitudes; the cluster is observable from only a limited range of northern-hemisphere locations, mainly tropical areas. IC 2602 is identifiable a few degrees south of the southern Milky Way, surrounded by various fifth- and sixth-magnitude stars. To the unaided eye, several faint stars are distinguishable to the east of the blue Theta Carinae.
Notes
References
External links
The Southern Pleiades @ SEDS IC objects pages
Image Southern Pleiades(IC 2602)
Carina (constellation)
Open clusters
102b
? | IC 2602 | [
"Astronomy"
] | 646 | [
"Carina (constellation)",
"Constellations"
] |
992,211 | https://en.wikipedia.org/wiki/IC%204665 | IC 4665 (Collinder 349 / Melotte 179) is an open cluster of stars in the constellation Ophiuchus, about 1° to the northeast of the star Beta Ophiuchi. It was discovered by Swiss astronomer Philippe Loys de Chéseaux in 1745. The cluster lies about 1,100 light years away from Earth. It is easily visible in the smallest of telescopes and also with binoculars. From a sufficiently dark place it is also visible to the naked eye. It is one of the brightest clusters not to be cataloged by Charles Messier or William Herschel, probably because it is so loose and coarse.
Age estimates for this cluster have ranged from 20 million up to as high as 100 million years. Comparison of the stellar lithium depletion with other clusters suggests it began to develop about 55 million years ago. The upper main sequence turnoff age is Myr. A total of 819 candidate cluster members have been identified. Two chemically peculiar stars were found to be members in 1977.
There is evidence that IC 4665 is undergoing a collision with the older cluster Collinder 350, located about 4° away. Currently they are separated by a distance of , after having formed at least apart. It is unclear whether the two clusters will merge as a result of the collision.
References
External links
IC 4665 @ SEDS IC objects pages
IC 4665 www.univie.ac.at
X-Ray Activity in the Open Cluster IC 4665 National Aeronautics and Space Administration
4665
Open clusters
Ophiuchus | IC 4665 | [
"Astronomy"
] | 308 | [
"Ophiuchus",
"Constellations"
] |
992,221 | https://en.wikipedia.org/wiki/IC%204756 | IC 4756 is a large bright open cluster in the constellation Serpens. Known as Graff's Cluster, it is bright enough to be seen with the naked eye and considered a fine cluster for binoculars or small telescopes.
IC 4756 is also known as the Tweedledee Cluster (paired with NGC 6633 as Tweedledum), also as the Secret Garden Cluster.
The metallicity of IC 4756 is similar to that of the Sun, at −0.02 dex.
Stars
There are some noteworthy stars in the cluster. HD 172365 is a likely post-blue straggler in IC 4756 that contains a large excess of lithium. HD 172189, also in IC 4756, is an Algol-type eclipsing binary with a 5.70-day period. The primary star in the system is also a Delta Scuti variable undergoing multiple pulsation frequencies, which, combined with the eclipses, causes the system to vary by around a tenth of a magnitude.
References
External links
4756
Open clusters
Serpens | IC 4756 | [
"Astronomy"
] | 213 | [
"Constellations",
"Serpens"
] |
992,412 | https://en.wikipedia.org/wiki/Wireless%20distribution%20system | A wireless distribution system (WDS) is a system enabling the wireless interconnection of access points in an IEEE 802.11 network. It allows a wireless network to be expanded using multiple access points without the traditional requirement for a wired backbone to link them. The notable advantage of WDS over other solutions is that it preserves the MAC addresses of client frames across links between access points.
An access point can be either a main, relay, or remote base station.
A main base station is typically connected to the (wired) Ethernet.
A relay base station relays data between remote base stations, wireless clients, or other relay stations; to either a main, or another relay base station.
A remote base station accepts connections from wireless clients and passes them on to relay stations or to main stations. Connections between "clients" are made using MAC addresses.
All base stations in a wireless distribution system must be configured to use the same radio channel, the same method of encryption (none, WEP, WPA or WPA2) and the same encryption keys. They may be configured with different service set identifiers (SSIDs). WDS also requires every base station to be configured to forward frames to the others in the system.
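As a toy illustration of these constraints, a consistency check over a set of base-station configurations might look like the sketch below; the field names are assumptions for the example, not part of any WDS standard.

```python
def wds_config_consistent(stations):
    """Check the constraints described above: every base station must share
    the same radio channel, encryption method, and key, while SSIDs may differ.
    'stations' is a list of dicts with assumed field names."""
    channels = {s["channel"] for s in stations}
    ciphers = {s["encryption"] for s in stations}
    keys = {s["key"] for s in stations}
    return len(channels) == 1 and len(ciphers) == 1 and len(keys) == 1

stations = [
    {"ssid": "main-ap", "channel": 6, "encryption": "WPA2", "key": "secret"},
    {"ssid": "remote-ap", "channel": 6, "encryption": "WPA2", "key": "secret"},
]
print(wds_config_consistent(stations))   # True: differing SSIDs are allowed
```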
WDS may also be considered a repeater mode because it appears to bridge and accept wireless clients at the same time (unlike traditional bridging). However, with the repeater method, throughput is halved for all clients connected wirelessly. This is because Wi-Fi is an inherently half duplex medium and therefore any Wi-Fi device functioning as a repeater must use the Store and forward method of communication.
WDS may be incompatible between different products (even occasionally from the same vendor) since the IEEE 802.11-1999 standard does not define how to construct any such implementations or how stations interact to arrange for exchanging frames of this format. The IEEE 802.11-1999 standard merely defines the 4-address frame format that makes it possible.
Technical
WDS may provide two modes of access point-to-access point (AP-to-AP) connectivity:
Wireless bridging, in which WDS APs (AP-to-AP on local routers AP) communicate only with each other and don't allow wireless stations (STA, also known as wireless clients) to access them
Wireless repeating, in which APs (WDS on local routers) communicate with each other and with wireless STAs
Two disadvantages to using WDS are:
The maximum wireless effective throughput may be halved after the first retransmission (hop) is made. For example, consider two APs connected via WDS, with communication taking place between a computer plugged into the Ethernet port of AP A and a laptop connected wirelessly to AP B. The throughput is halved because AP B has to retransmit the information during the communication between the two sides. However, in the case of communication between a computer plugged into the Ethernet port of AP A and a computer plugged into the Ethernet port of AP B, the throughput is not halved, since there is no need to retransmit the information. Dual-band/radio APs may avoid this problem by connecting to clients on one band/radio and making the WDS network link with the other.
Dynamically assigned and rotated encryption keys are usually not supported in a WDS connection. This means that dynamic Wi-Fi Protected Access (WPA) and other dynamic key assignment technology in most cases cannot be used, though WPA using pre-shared keys is possible. This is due to the lack of standardization in this field, which may be resolved with the upcoming 802.11s standard. As a result, only static WEP or WPA keys may be used in a WDS connection, including any STAs that associate to a WDS repeating AP.
OpenWrt, a third-party router firmware, supports WDS with WPA-PSK, WPA2-PSK, and mixed WPA-PSK/WPA2-PSK encryption modes. Recent Apple base stations allow WDS with WPA, though in some cases firmware updates are required. Firmware for the Renasis SAP36g super access point and most third-party firmware for the Linksys WRT54G(S)/GL support AES encryption using WPA2-PSK mixed-mode security, and TKIP encryption using WPA-PSK, while operating in WDS mode. However, this mode may not be compatible with other units running stock or alternate firmware.
Example
Suppose one has a Wi-Fi-capable game console. This device needs to send one packet to a WAN host, and receive one packet in reply.
Network 1: A wireless base station acting as a simple (non-WDS) wireless router. The packet leaves the game console, goes over-the-air to the router, which then transmits it across the WAN. One packet comes back, through the router, which transmits it wirelessly to the game console. Total packets sent over-the-air: 2.
Network 2: Two wireless base stations employing WDS: WAN connects to the master base station. The master base station connects over-the-air to the remote base station. The Remote base station connects over-the-air to the game console. The game console sends one packet over-the-air to the remote base station, which forwards it over-the-air to the master base station, which forwards it to the WAN. The reply packet comes from the WAN to the master base station, over-the-air to the remote, and then over-the-air again to the game console. Total packets sent over-the-air: 4.
Network 3: Two wireless base stations employing WDS, but this time the game console connects by Ethernet cable to the remote base station. One packet is sent from the game console over the Ethernet cable to the remote, from there by air to the master, and on to the WAN. Reply comes from WAN to master, over-the-air to remote, over cable to game console. Total packets sent over-the-air: 2.
Notice that network 1 (non-WDS) and network 3 (WDS) send the same number of packets over-the-air. The only slowdown is the potential halving due to the half-duplex nature of Wi-Fi.
Network 2 gets an additional halving because the remote base station uses double the air time: it re-transmits over-the-air packets that it has just received over-the-air. This is the halving that is usually attributed to WDS, but it only happens when the route through a base station uses over-the-air links on both sides of it. That does not always happen in a WDS, and it can also happen in a non-WDS setup.
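The packet counts above follow from counting the wireless hops a round trip must traverse. A minimal Python sketch of that bookkeeping, with the three example topologies written out as lists of hop types, is shown below.

```python
def over_the_air_packets(path_links, packets_each_way=1):
    """Count over-the-air transmissions for one round trip along a path.

    path_links lists each hop between the end device and the WAN as either
    'air' (wireless) or 'wire' (Ethernet). Every wireless hop costs one
    over-the-air transmission per packet, in each direction.
    """
    air_hops = path_links.count('air')
    return 2 * packets_each_way * air_hops   # request + reply

# Network 1: console --air--> router --> WAN
print(over_the_air_packets(['air']))           # 2
# Network 2: console --air--> remote --air--> master --> WAN
print(over_the_air_packets(['air', 'air']))    # 4
# Network 3: console --wire--> remote --air--> master --> WAN
print(over_the_air_packets(['wire', 'air']))   # 2
```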
Important note: this "double hop" (one wireless hop from the main station to the remote station, and a second hop from the remote station to the wireless client, here the game console) is not necessarily twice as slow. The end-to-end latency introduced here lies in the "store and forward" delay associated with the remote station forwarding packets. To accurately identify the true latency contribution of relaying through a wireless remote station versus simply increasing the broadcast power of the main station, more comprehensive tests specific to the environment would be required.
See also
Ad hoc wireless network
Network bridge
Wireless mesh network
References
External links
Swallow-Wifi Wiki (WDS Network dashboard for DD-WRT devices)
Alternative Wireless Signal-repeating Scheme with DD-WRT and AutoAP
What is Third Generation Mesh? Review of three generation of mesh networking architectures.
Wi-Fi Range Extender Vs Mesh Network System Explanation how wifi extender and mesh network works.
How to Extend Your Wireless Network with Tomato-Powered Routers
Polarcloud.com (How Do I Use WDS)
IEEE 802.11 | Wireless distribution system | [
"Technology",
"Engineering"
] | 1,664 | [
"Wireless networking",
"Computer networks engineering"
] |
992,421 | https://en.wikipedia.org/wiki/Cisco%20PIX | Cisco PIX (Private Internet eXchange) was a popular IP firewall and network address translation (NAT) appliance. It was one of the first products in this market segment.
In 2005, Cisco introduced the newer Cisco Adaptive Security Appliance (Cisco ASA), that inherited many of the PIX features, and in 2008 announced PIX end-of-sale.
The PIX technology was sold in a blade, the FireWall Services Module (FWSM), for the Cisco Catalyst 6500 switch series and the 7600 Router series, but has reached end of support status as of September 26, 2007.
PIX
History
PIX was originally conceived in early 1994 by John Mayes of Redwood City, California and designed and coded by Brantley Coile of Athens, Georgia. The PIX name is derived from its creators' aim of creating the functional equivalent of an IP PBX to solve the then-emerging registered IP address shortage. At a time when NAT was just being investigated as a viable approach, they wanted to conceal a block or blocks of IP addresses behind a single or multiple registered IP addresses, much as PBXs do for internal phone extensions. When they began, RFC 1597 and RFC 1631 were being discussed, but the now-familiar RFC 1918 had not yet been submitted.
The design and testing were carried out in 1994 by John Mayes, Brantley Coile and Johnson Wu of Network Translation, Inc., with Brantley Coile being the sole software developer. Beta testing of PIX serial number 000000 was completed and first customer acceptance occurred on December 21, 1994 at KLA Instruments in San Jose, California. The PIX quickly became one of the leading enterprise firewall products and was awarded the Data Communications Magazine "Hot Product of the Year" award in January 1995.
Shortly before Cisco acquired Network Translation in November 1995, Mayes and Coile hired two longtime associates, Richard (Chip) Howes and Pete Tenereillo, and shortly after the acquisition two more longtime associates, Jim Jordan and Tom Bohannon. Together they continued development on Finesse OS and the original version of the Cisco PIX Firewall, now known as the PIX "Classic". During this time, the PIX shared most of its code with another Cisco product, the LocalDirector.
On January 28, 2008, Cisco announced the end-of-sale and end-of-life dates for all Cisco PIX Security Appliances, software, accessories, and licenses. The last day for purchasing Cisco PIX Security Appliance platforms and bundles was July 28, 2008. The last day to purchase accessories and licenses was January 27, 2009. Cisco ended support for Cisco PIX Security Appliance customers on July 29, 2013.
In May 2005, Cisco introduced the ASA which combines functionality from the PIX, VPN 3000 series and IPS product lines. The ASA series of devices run PIX code 7.0 and later. Through PIX OS release 7.x the PIX and the ASA use the same software images. Beginning with PIX OS version 8.x, the operating system code diverges, with the ASA using a Linux kernel and PIX continuing to use the traditional Finesse/PIX OS combination.
Software
The PIX runs a custom-written proprietary operating system originally called Finesse (Fast Internet Service Executive), but the software is known simply as PIX OS. Though classified as a network-layer firewall with stateful inspection, the PIX would technically be more precisely called a Layer 4, or transport-layer, firewall, as its access control is not restricted to network-layer routing but operates on socket-based connections (an IP address and a port; port communications occur at Layer 4). By default it allows internal connections out (outbound traffic), and only allows inbound traffic that is a response to a valid request or is allowed by an Access Control List (ACL) or by a conduit. Administrators can configure the PIX to perform many functions including network address translation (NAT) and port address translation (PAT), as well as serving as a virtual private network (VPN) endpoint appliance.
The PIX became the first commercially available firewall product to introduce protocol-specific filtering with the introduction of the "fixup" command. The PIX "fixup" capability allows the firewall to apply additional security policies to connections identified as using specific protocols. Protocols for which specific fixup behaviors were developed include DNS and SMTP. The DNS fixup originally implemented a very simple but effective security policy: it allowed just one DNS response from a DNS server on the Internet (on the outside interface) for each DNS request from a client on the protected (inside) interface. "Inspect" has superseded "fixup" in later versions of PIX OS.
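The effect of that original DNS fixup policy can be modelled as a small piece of connection-tracking logic: the firewall remembers each outbound query and permits exactly one matching reply before closing the hole. The Python sketch below is illustrative only, not Cisco's implementation, and the fields used as a connection key are assumptions.

```python
# Illustrative model of the original PIX DNS "fixup" policy described above:
# allow exactly one DNS response from the outside interface for each DNS
# request seen from the inside interface. Not Cisco's code; the tuple used
# as a connection key is an assumption.

pending_queries = set()   # outbound queries still awaiting their single reply

def _key(client_ip, client_port, server_ip, server_port, dns_id):
    return (client_ip, client_port, server_ip, server_port, dns_id)

def outbound_dns_query(client_ip, client_port, server_ip, server_port, dns_id):
    """Inside -> outside query: remember it and let it through."""
    pending_queries.add(_key(client_ip, client_port, server_ip, server_port, dns_id))
    return "permit"

def inbound_dns_response(client_ip, client_port, server_ip, server_port, dns_id):
    """Outside -> inside response: permit only the first match, then close."""
    k = _key(client_ip, client_port, server_ip, server_port, dns_id)
    if k in pending_queries:
        pending_queries.remove(k)   # only one response allowed per query
        return "permit"
    return "deny"

outbound_dns_query("10.0.0.5", 4321, "198.51.100.1", 53, 0x1234)
print(inbound_dns_response("10.0.0.5", 4321, "198.51.100.1", 53, 0x1234))  # permit
print(inbound_dns_response("10.0.0.5", 4321, "198.51.100.1", 53, 0x1234))  # deny
```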
The Cisco PIX was also one of the first commercially available security appliances to incorporate IPSec VPN gateway functionality.
Administrators can manage the PIX via a command line interface (CLI) or via a graphical user interface (GUI). They can access the CLI from the serial console, telnet and SSH. GUI administration originated with version 4.1, and it has been through several incarnations:
PIX Firewall Manager (PFM) for PIX OS versions 4.x and 5.x, which runs locally on a Windows NT client
PIX Device Manager (PDM) for PIX OS version 6.x, which runs over https and requires Java
Adaptive Security Device Manager (ASDM) for PIX OS version 7 and greater, which can run locally on a client or in reduced-functionality mode over HTTPS.
Because Cisco acquired the PIX from Network Translation, the CLI originally did not align with the Cisco IOS syntax. Starting with version 7.0, the configuration became much more IOS-like.
Hardware
The original NTI PIX and the PIX Classic had cases that were sourced from OEM provider Appro. All flash cards and the early encryption acceleration cards, the PIX-PL and PIX-PL2, were sourced from Productivity Enhancement Products (PEP). Later models had cases from Cisco OEM manufacturers.
The PIX was constructed using Intel-based/Intel-compatible motherboards; the PIX 501 used an Am5x86 processor, and all other standalone models used Intel 80486 through Pentium III processors.
The PIX boots off a proprietary ISA flash memory daughtercard in the case of the NTI PIX, PIX Classic, 10000, 510, 520, and 535, and it boots off integrated flash memory in the case of the PIX 501, 506/506e, 515/515e, 525, and WS-SVC-FWM-1-K9. The latter is the part code for the PIX technology implemented in the Fire Wall Services Module, for the Catalyst 6500 and the 7600 Router.
Adaptive Security Appliance (ASA)
The Adaptive Security Appliance is a network firewall made by Cisco. It was introduced in 2005 to replace the Cisco PIX line. Along with stateful firewall functionality, another focus of the ASA is virtual private network (VPN) functionality. It also features intrusion prevention and Voice over IP support. The ASA 5500 series was followed by the 5500-X series, which focuses more on virtualization than on hardware-accelerated security modules.
History
In 2005 Cisco released the 5510, 5520, and 5540 models.
Software
The ASA continues using the PIX codebase but, when the ASA OS software transitioned from major version 7.X to 8.X, it moved from the Finesse/Pix OS operating system platform to the Linux operating system platform. It also integrates features of the Cisco IPS 4200 intrusion prevention system, and the Cisco VPN 3000 Concentrator.
Hardware
The ASA continues the PIX lineage of Intel 80x86 hardware.
Security vulnerabilities
The Cisco PIX VPN product was hacked by the NSA-tied group Equation Group sometime before 2016. Equation Group developed a tool, code-named BENIGNCERTAIN, that reveals the pre-shared password(s) to the attacker. Equation Group was later hacked by another group called The Shadow Brokers, which published their exploit publicly, among others. According to Ars Technica, the NSA likely used this vulnerability to wiretap VPN connections for more than a decade, citing the Snowden leaks.
The Cisco ASA brand was also hacked by Equation Group. The vulnerability requires that both SSH and SNMP are accessible to the attacker. The codename given to this exploit by the NSA was EXTRABACON. The bug and exploit were also leaked by The Shadow Brokers, in the same batch of exploits and backdoors. According to Ars Technica, the exploit can easily be made to work against more modern versions of Cisco ASA than the leaked exploit can handle.
On 29 January 2018, a security problem in the Cisco ASA brand was disclosed by Cedric Halbronn of the NCC Group. A use-after-free bug in the Secure Sockets Layer (SSL) VPN functionality of the Cisco Adaptive Security Appliance (ASA) software could allow an unauthenticated remote attacker to cause a reload of the affected system or to remotely execute code. The bug is listed as .
See also
Cisco LocalDirector
References
Pix
Computer network security
Server appliance | Cisco PIX | [
"Engineering"
] | 1,955 | [
"Cybersecurity engineering",
"Computer networks engineering",
"Computer network security"
] |
992,441 | https://en.wikipedia.org/wiki/Powered%20paragliding | Powered paragliding, also known as paramotoring or PPG, is a form of ultralight aviation where the pilot wears a back-pack motor (a paramotor) which provides enough thrust to take off using a paraglider. It can be launched in still air, and on level ground, by the pilot alone—no assistance is required.
Description
In many countries, including the United States, powered paragliding is minimally regulated and requires no license. The ability to fly both low and slow safely, the "open" feel, the minimal equipment and maintenance costs, and the portability are claimed to be this type of flying's greatest merits.
Powered paragliders usually fly between at altitudes from 'foot-dragging' up about to or more with certain permission.
Due to the paramotor's slow forward speed and nature of a soft wing, it is risky to operate in high winds, turbulence, or intense thermal activity, especially for inexperienced pilots.
The paramotor, weighing from is supported by the pilot during takeoff. After a brief run (typically ) the wing lifts the motor and its harnessed pilot off the ground. After takeoff, the pilot gets into the seat and sits suspended beneath the inflated paraglider wing like a pendulum. Control is available using right and left brake toggles and a hand-held throttle control for the motor and propeller speed. Some rigs are equipped with trimmers and a speed bar to adjust the angle of incidence, which also changes the angle of attack for increased or reduced speed. Brake toggles and weight shift are the general methods for controlling yaw and roll (turning). Tip brakes and stabilo steering (if equipped) will also affect yaw and roll, and they may be used for more efficient flying or when required by the wing manufacturer in certain wing configurations such as reflex. The throttle controls pitch (along with the speed bar and trimmers). Unlike regular aircraft, increasing throttle causes a pitch-up and climb (or reduced descent) but does not increase airspeed.
Confusion with powered parachutes
There is often confusion about the differences between powered paragliders (PPG) and powered parachutes (PPC), both terminologically and even sometimes visually, particularly in flight.
In simple terms, PPCs always include a wheeled airframe and are often controlled using steering bars pushed on by the feet to operate the steering controls, although there are exceptions such as the Australian Aerochute and the German Xcitor. The airframe is an integral component of the aircraft (as established by FAA regulations).
PPGs, on the other hand, normally don't have a wheeled airframe and are almost exclusively steered using the hands to pull on the steering lines. When paragliding, an airframe is considered purely a higher-end option; in fact, since a PPG wing is always attached to the harness, if the airframe used in a PPG failed in any way, the wing would continue to support the weight of the occupants and motor through the harness. In addition, because PPGs use smaller low-power engines to stay within 14 C.F.R. § 103 regulations, they frequently use a higher-performance parafoil that visually appears thinner and more elliptical to compensate.
Any other distinctions are less clear. In the United States, all paragliding equipment must fall within 14 C.F.R. § 103, and pilot licensing (in the strict legal sense) is not applicable, which is not much different from ultralight PPCs. Other lines are blurred further. For example, some people previously argued that two-seat flying is only allowed using a PPC, but "tandem" (two-seat) paragliding is readily doable in many countries throughout the world, and limited types of tandem paragliding are legally authorized in the U.S. as a result of an FAA exemption for flight training only (since 2018, with subsequent extensions).
Another contributing reason for confusion nowadays comes from the fact that some aircraft and kit builders market ultralight-class rolling airframes that can be configured with either PPG-style hand steering or PPC-style foot steering (along with wider canopy attachment points), with the latter sold as a 14 C.F.R. § 103 'powered parachute'. The net result is nearly identical aircraft, albeit with different steering systems and potentially different canopy types.
Uses
Paragliders are usually used for personal recreation, with some exceptions.
Military
Powered paragliding has seen some military application including insertion of special forces soldiers and also border patrol in some governments. The Lebanese Airborne regiment adopted this technique in 2008. The US Army and Egyptian Army have used Paramotor Inc FX series units for many years, and these units are still under production. During the outset of the 2023 Israel–Hamas war, Hamas militants used powered paragliders to infiltrate southern Israel, several of which were used in the Re'im music festival massacre.
Civilian
Because of limiting weather requirements, powered paragliders are not reliable replacements for most aviation uses.
They have been used for search and rescue, herding of animals, photography, surveying, and other uses, but regulations in most countries limit commercial activities.
Safety and regulations
Research estimates that the activity is slightly safer (fewer fatalities per thousand participants per year) than riding motorcycles and more dangerous than riding in cars. The most likely cause of serious injury is body contact with a spinning propeller. The next most likely cause of injury is flying into something other than the landing zone. Some countries keep detailed statistics on accidents; for example, in Germany in 2018 about 36,000 paragliding pilots registered 232 accidents, of which 109 caused serious injury and 9 were fatal.
Some pilots carry a reserve parachute designed to open in as little as . While reserve parachutes are designed to open fast, they have a system length between 13.3 ft (4.5 m) and 21.9 ft (7.3 m) and usually need at least to slow down a pilot to a safe sink rate (certified design speed according to LTF and EN certifications is max per second). With enough height over ground, many potential issues with the canopy can be resolved without applying the reserve parachute. The required skills can be acquired in SIV trainings, which improve the overall safety of flying by providing a better understanding on the system limitations and practical training of extreme situations.
The lack of established design criteria for these aircraft led the British Air Accidents Investigation Branch to conclude in 2007 that "only when precise reserve factors have been established for individual harness/wing combinations carrying realistic suspended masses, at load factors appropriate to the maneuvers to be carried out, can these aircraft be considered to be structurally safe".
License and training
Neither a license nor specific training is required in the U.S., U.K. or many other countries. Where there is no specific regulation (e.g., Mexico), paramotor flying is tolerated provided the pilots cooperate with local officials when appropriate. In countries where specific regulation exists, such as Canada, France, Italy, and South Africa, pilots must be trained, both in flying theory and practice, by licensed instructors. Some countries that require formal certification frequently do so through non-government ultralight aviation organizations.
Regardless of regulations, powered paragliding can be dangerous when practiced without proper training.
For a pilot to get through most organizations' full pilot syllabus requires between 5 and 15 days which, due to weather, may include far more calendar time. A number of techniques are employed for teaching, although most include getting the student familiar with handling the wing either on the ground, via towing, small hills, or on tandem flights.
With special gear, it is possible to take a passenger (tandem), but most countries, including the U.S., require some form of certification to do so.
Regulations
In most countries, paramotor pilots operate under simple rules that spare them certification requirements for pilot and gear. Those laws, however, limit where they can fly—specifying that pilots avoid areas of urban/suburban population and larger airports to minimize risk to other people or aircraft. U.S. pilots operate under Federal Aviation Administration regulation Part 103. As powered heavier-than-air flying vehicles with wings, paramotors are technically a type of aircraft as defined in 14 CFR 1.1 (General definitions), which provides the definitions for all FARs, including Part 103.
In the United Kingdom, paramotors are regulated by the Civil Aviation Authority, are classified as self-propelled hang-gliders, and can be flown without registration or a license as long as they weigh less than 70 kg, have a stall speed not exceeding 35 knots, and are foot-launched. Wheel-launched paramotors are allowed under the additional conditions that they do not carry passengers, and have a stall speed of 20 knots or less, but may weigh up to 75 kg if they carry a reserve parachute.
Associations
In the U.S., the sport is represented primarily by the US Powered Paragliding Association (USPPA) which also holds an exemption allowing two-place training by appropriately certified tandem instructors. The US Ultralight Association (USUA) and Aero Sports Connections (ASC) also offer some support.
Instructors in the U.S. are primarily represented and certified by the United States Powered Paragliding Association (USPPA).
In the United Kingdom, the sport is represented by the British Hang Gliding and Paragliding Association.
Powered parachute differences
A powered paraglider (PPG) differs from a powered parachute (PPC) primarily in size, power, control method, and number of occupants. Powered paragliders are smaller, use more efficient (but more difficult to manage) paraglider wings, and steer with brake toggles like sport parachutists. Powered parachutes typically use easier-to-manage but less efficient wings, have larger engines, are steered by foot and may be able to take along passengers. There are exceptions; a growing number of powered parachutes use elliptical wings, some use hand controls, and many are light, single-seat aircraft that meet FAA Part 103 requirements.
World records
Determined by the FAI, RPF1 category.
The current world altitude record for powered paragliders (RPF1TM) is 7,589 m (24,898 ft). It was set by Ramon Morillas Salmeron (Granada, Spain) on 19 September 2009 while flying an Advance Sigma paraglider and a PAP frame powered by a HE R220Duo engine.
A highly publicized altitude record attempt was made by Bear Grylls on 14 May 2007 at 0933 local time over the Himalayas using a Parajet engine invented by Gilo Cardozo and a specifically designed reflex paraglider wing invented by Mike Campbell-Jones of Paramania. Cardozo, who also flew in the attempt, had engine problems that ended his climb 300m short of the record. Grylls went on to claim an altitude of 8,990 m (29,494 ft), though satisfactory evidence of this claim was not submitted to FAI, and therefore it was not ratified as a world record for this aircraft class.
Distance in a straight line without landing: set on 23 April 2007 by Ramon Morillas Salmeron flying from Jerez de la Frontera, Cádiz (Spain) to Lanzarote, Canary Islands (Spain) with an Advance Omega 7 paraglider.
Fastest crossing of the United States on a direct path (2,104 miles), December 2020: Harley Milne (50xChallenge) crossed the southern route from San Diego to Jacksonville, Florida in 8 days 2 hours, flying 48 hours 19 minutes over 22 flights, with a maximum altitude of 12,444 ft AGL and a maximum ground speed of 89.9 mph.
Determined by Guinness World Records
The longest journey by powered paraglider is 9,132 km (5,674.35 mi) and was achieved by Miroslav Oros (Czech Republic), flying throughout the Czech Republic, starting in Sazená and ending in Lipová-lázně, between 1 April 2011 and 30 June 2011.
2nd Longest Journey by Powered Paraglider: set on 24 August 2009 by Canadian photographer and documentary filmmaker Benjamin Jordan during his Above + Beyond Canada campaign. In an unprecedented flight between Tofino, BC and Bay Saint Lawrence, NS, the cross-Canada campaign involved 108 flights with landings at schools and youth summer camps along the way. Jordan provided youth with motivational speeches and arranged them in shapes on the ground before launching and continuing on the next leg of his journey. Funds raised over the course of the trip were donated to various charities across Canada to help children from low-income homes attend summer camp.
First paramotor pilot to fly in all 50 US states: the fastest time to fly a paramotor/powered paraglider in all 50 US states is 215 days, achieved by Harley Milne (USA) across the USA from 8 November 2019 to 10 June 2020. In setting this record, Milne also became the first pilot to complete the undertaking.
Images
See also
Fan Man
Hang gliding
Jet pack – flying with a parafoil and a jetpack
Kite
Paramotor
Powered hang glider
Powered parachute
Powered skydiving, where the participant jumps out of an aircraft
Ultralight trike
USPPA
Notes
References
External links
Air sports
Paragliding
Ultralight aircraft
Parachuting
Aircraft engines
Paramotors | Powered paragliding | [
"Technology"
] | 2,734 | [
"Engines",
"Aircraft engines"
] |
992,530 | https://en.wikipedia.org/wiki/NGC%20188 | NGC 188 (also known as Caldwell 1 or the Polarissima Cluster) is an open cluster in the constellation Cepheus. It was discovered by John Herschel in 1825.
Unlike most open clusters that drift apart after a few million years because of the gravitational interaction of our Milky Way galaxy, NGC 188 lies far above the plane of the galaxy and is one of the most ancient of open clusters known, at approximately 6.8 billion years old.
NGC 188 lies very close to the north celestial pole, under five degrees away, in the constellation of Cepheus. At an estimated distance of 5,000 light-years, it sits slightly above the plane of the Milky Way's disc and farther from the center of the galaxy than the Sun.
References
External links
NGC 188 at SEDS NGC objects pages
NGC 188 at NightSkyInfo.com
NGC 0188
NGC 0188
0188
001b
Astronomical objects discovered in 1825 | NGC 188 | [
"Astronomy"
] | 187 | [
"Constellations",
"Cepheus (constellation)"
] |
992,565 | https://en.wikipedia.org/wiki/NGC%20206 | NGC 206 is a bright star cloud in the Andromeda Galaxy, and the brightest star cloud in Andromeda when viewed from Earth.
Features
NGC 206 is the richest and most conspicuous star cloud in the Andromeda Galaxy, and is one of the largest and brightest star-forming regions in the Local Group. It contains more than 300 stars brighter than Mb=−3.6. It was originally identified by Edwin Hubble as a star cluster but today, due to its size, it is classified as an OB association.
NGC 206 is located in a spiral arm of the Andromeda Galaxy, in a zone free of neutral hydrogen. It contains hundreds of stars of spectral types O and B. The star cloud has a double structure: one region has an age of around 10 million years and includes several H II regions in its border; the other region has an age of 40 to 50 million years and includes a number of cepheids. The two regions are separated by a band of interstellar dust.
See also
List of Andromeda's satellite galaxies
References
External links
NGC 206 @ SEDS NGC objects pages
Star clouds
Andromeda Galaxy
Andromeda (constellation)
0206 | NGC 206 | [
"Astronomy"
] | 243 | [
"Andromeda (constellation)",
"Constellations"
] |
992,572 | https://en.wikipedia.org/wiki/NGC%20225 | NGC 225 is an open cluster in the constellation Cassiopeia. It is located roughly 2,200 light-years from Earth. It is about 100 to 150 million years old.
The binary fraction, or the fraction of stars that are multiple stars, is 0.52.
See also
Open cluster
List of NGC objects (1–1000)
Cassiopeia (constellation)
References
External links
SEDS
Open clusters
Cassiopeia (constellation)
0225
17830927 | NGC 225 | [
"Astronomy"
] | 98 | [
"Cassiopeia (constellation)",
"Constellations"
] |
992,586 | https://en.wikipedia.org/wiki/NGC%20381 | NGC 381 is an open cluster of stars in the northern constellation of Cassiopeia, located at a distance of approximately from the Sun. Credit for the discovery of this cluster was given to Caroline Herschel by her brother William in 1787, although she may never have actually seen it.
This is a Trumpler class cluster of intermediate age, estimated at 316 million years. This class indicates the cluster is relatively weakly concentrated, with a small brightness range and an intermediate richness of stars. A total of 350 probable members have been identified, down to 20th magnitude, and the cluster contains about 32 times the mass of the Sun. The cluster has a core angular radius of and an outer cluster radius of . It has a physical tidal radius of . No giant stars have been discovered in this cluster. Four candidate variable stars have been found in the field of NGC 381, one of which is a suspected cluster member. The eclipsing binary OX Cassiopeiae was once thought to be a member, but is now known to be a background star system.
References
External links
SEDS – NGC 381
NGC 0381
NGC 0381
0381
17871103 | NGC 381 | [
"Astronomy"
] | 235 | [
"Cassiopeia (constellation)",
"Constellations"
] |
992,596 | https://en.wikipedia.org/wiki/NGC%20595 | NGC 595 is a massive H II region in the Triangulum Galaxy. It was discovered by Heinrich Ludwig d'Arrest on October 1, 1864 and is one of the biggest H II regions in the Local Group.
References
External links
NGC 0595
NGC 0595
0595
Triangulum Galaxy
18641001
Star-forming regions | NGC 595 | [
"Astronomy"
] | 71 | [
"Nebula stubs",
"Astronomy stubs",
"Constellations",
"Triangulum"
] |
992,611 | https://en.wikipedia.org/wiki/Polonium-210 | Polonium-210 (210Po, Po-210, historically radium F) is an isotope of polonium. It undergoes alpha decay to stable 206Pb with a half-life of 138.376 days (about four and a half months), the longest half-life of all naturally occurring polonium isotopes (210–218Po). First identified in 1898, and also marking the discovery of the element polonium, 210Po is generated in the decay chain of uranium-238 and radium-226. 210Po is a prominent contaminant in the environment, mostly affecting seafood and tobacco. Its extreme toxicity is attributed to intense radioactivity, mostly due to alpha particles, which easily cause radiation damage, including cancer in surrounding tissue. The specific activity of 210Po is 166 TBq/g, i.e., 1.66 × 10^14 Bq/g. At the same time, 210Po is not readily detected by common radiation detectors, because it emits gamma rays in only a tiny fraction of its decays. Therefore, 210Po can be considered a quasi-pure alpha emitter.
History
In 1898, Marie and Pierre Curie discovered a strongly radioactive substance in pitchblende and determined that it was a new element; it was one of the first radioactive elements discovered. Having identified it as such, they named the element polonium after Marie's home country, Poland. Willy Marckwald discovered a similar radioactive activity in 1902 and named it radio-tellurium, and at roughly the same time, Ernest Rutherford identified the same activity in his analysis of the uranium decay chain and named it radium F (originally radium E). By 1905, Rutherford concluded that all these observations were due to the same substance, 210Po. Further discoveries and the concept of isotopes, first proposed in 1913 by Frederick Soddy, firmly placed 210Po as the penultimate step in the uranium series.
In 1943, 210Po was studied as a possible neutron initiator in nuclear weapons, as part of the Dayton Project. In subsequent decades, concerns for the safety of workers handling 210Po led to extensive studies on its health effects.
In the 1950s, scientists of the United States Atomic Energy Commission at Mound Laboratories, Ohio explored the possibility of using 210Po in radioisotope thermoelectric generators (RTGs) as a heat source to power satellites. A 2.5-watt atomic battery using 210Po was developed by 1958. However, the isotope plutonium-238 was chosen instead, as it has a longer half-life of 87.7 years.
Polonium-210 was used to kill Russian dissident and ex-FSB officer Alexander V. Litvinenko in 2006, and was suspected as a possible cause of Yasser Arafat's death, following exhumation and analysis of his corpse in 2012–2013. The radioisotope may also have been used to kill Yuri Shchekochikhin, Lecha Islamov and Roman Tsepov.
Decay properties
210Po is an alpha emitter that has a half-life of 138.376 days; it decays directly to stable 206Pb. The majority of the time, 210Po decays by emission of an alpha particle only, not by emission of an alpha particle and a gamma ray; about one in 100,000 decays results in the emission of a gamma ray.
This low gamma ray production rate makes it more difficult to find and identify this isotope. Rather than gamma ray spectroscopy, alpha spectroscopy is the best method of measuring this isotope.
Owing to its much shorter half-life, a milligram of 210Po emits as many alpha particles per second as 5 grams of 226Ra. A few curies of 210Po emit a blue glow caused by excitation of surrounding air.
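As a back-of-the-envelope check of the figures above, the sketch below recomputes the specific activity of 210Po from its half-life and compares the alpha emission rate of 1 mg of 210Po with that of 5 g of 226Ra; the half-life of 226Ra (about 1600 years) is an assumed literature value not stated in the article.

```python
# Hypothetical back-of-the-envelope check; the 226Ra half-life is an assumption.
from math import log

N_A = 6.022e23                  # Avogadro's number, atoms/mol
SECONDS_PER_DAY = 86400.0
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

def specific_activity(half_life_s: float, molar_mass_g: float) -> float:
    """Specific activity in Bq/g: A = lambda * N, with N = N_A / M atoms per gram."""
    decay_constant = log(2) / half_life_s
    return decay_constant * N_A / molar_mass_g

a_po210 = specific_activity(138.376 * SECONDS_PER_DAY, 210.0)   # ~1.66e14 Bq/g
a_ra226 = specific_activity(1600.0 * SECONDS_PER_YEAR, 226.0)   # ~3.7e10 Bq/g (assumed half-life)

print(f"210Po: {a_po210:.3e} Bq/g")                  # ≈ 166 TBq/g, matching the article
print(f"1 mg 210Po: {a_po210 * 1e-3:.3e} alphas/s")
print(f"5 g 226Ra: {a_ra226 * 5:.3e} alphas/s")      # similar order of magnitude
```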
210Po occurs in minute amounts in nature, where it is the penultimate isotope in the uranium series decay chain. It is generated via beta decay from 210Pb and 210Bi.
The astrophysical s-process is terminated by the decay of 210Po, as the neutron flux is insufficient to lead to further neutron captures in the short lifetime of 210Po. Instead, 210Po alpha decays to 206Pb, which then captures further neutrons and, through subsequent beta decays, again becomes 210Po, repeating the cycle and thus consuming the remaining neutrons. This results in a buildup of lead and bismuth, and ensures that heavier elements such as thorium and uranium are only produced in the much faster r-process.
Production
Deliberate
Although 210Po occurs in trace amounts in nature, it is not abundant enough (0.1 ppb) for extraction from uranium ore to be feasible. Instead, most 210Po is produced synthetically, through neutron bombardment of 209Bi in a nuclear reactor. This process converts 209Bi to 210Bi, which beta decays to 210Po with a five-day half-life. Through this method, approximately of 210Po are produced in Russia and shipped to the United States every month for commercial applications. By irradiating certain bismuth salts containing light element nuclei such as beryllium, a cascading (α,n) reaction can also be induced to produce 210Po in large quantities.
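To illustrate the two-step production route described above (209Bi captures a neutron to form 210Bi, which beta decays with a roughly five-day half-life into 210Po), the sketch below applies the standard two-member Bateman solution to a batch of freshly irradiated bismuth; the 210Bi half-life of 5.01 days is an assumed literature value, and the starting inventory is arbitrary.

```python
# Illustrative sketch of 210Bi -> 210Po in-growth after irradiation ends.
# The 5.01-day half-life and the starting amount of 210Bi are assumptions for the example.
from math import exp, log

T_BI210 = 5.01      # 210Bi half-life, days (the article rounds this to "five-day")
T_PO210 = 138.376   # 210Po half-life, days
LAM_BI = log(2) / T_BI210
LAM_PO = log(2) / T_PO210

def po210_atoms(n_bi0: float, t_days: float) -> float:
    """Two-member Bateman solution: 210Po atoms at time t from n_bi0 atoms of 210Bi."""
    return n_bi0 * LAM_BI / (LAM_PO - LAM_BI) * (exp(-LAM_BI * t_days) - exp(-LAM_PO * t_days))

# Time at which the 210Po inventory peaks for a pure 210Bi starting batch:
t_max = log(LAM_BI / LAM_PO) / (LAM_BI - LAM_PO)
print(f"210Po peaks about {t_max:.1f} days after irradiation ends")   # ~25 days
```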
Byproduct
The production of polonium-210 is a downside to reactors cooled with lead-bismuth eutectic rather than pure lead. However, given the eutectic properties of this alloy, some proposed Generation IV reactor designs still rely on lead-bismuth.
Applications
A single gram of 210Po generates 140 watts of power. Because it emits many alpha particles, which are stopped within a very short distance in dense media and release their energy, 210Po has been used as a lightweight heat source to power thermoelectric cells in artificial satellites. A 210Po heat source was also in each of the Lunokhod rovers deployed on the surface of the Moon, to keep their internal components warm during the lunar nights. Some anti-static brushes, used for neutralizing static electricity on materials like photographic film, contain a few microcuries of 210Po as a source of charged particles. 210Po was also used in initiators for atomic bombs through the (α,n) reaction with beryllium. Small neutron sources reliant on the (α,n) reaction also usually use polonium as a convenient source of alpha particles due to its comparatively low gamma emissions (allowing easy shielding) and high specific activity.
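The 140 W per gram quoted above follows directly from the specific activity and the alpha decay energy; the sketch below reproduces the figure using a 210Po alpha decay energy of about 5.3 MeV, a standard value that is not stated in the article.

```python
# Rough check of the thermal power density of 210Po; the decay energy is an assumed literature value.
from math import log

HALF_LIFE_S = 138.376 * 86400.0
N_A = 6.022e23
MOLAR_MASS = 210.0                   # g/mol
E_ALPHA_J = 5.3e6 * 1.602e-19        # ~5.3 MeV per decay, in joules (assumption)

activity_per_gram = log(2) / HALF_LIFE_S * N_A / MOLAR_MASS   # decays per second per gram
power_per_gram = activity_per_gram * E_ALPHA_J                 # watts per gram
print(f"{power_per_gram:.0f} W/g")   # ~140 W/g, consistent with the article
```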
Hazards
210Po is extremely toxic; it and other polonium isotopes are some of the most radiotoxic substances to humans. With one microgram of 210Po being more than enough to kill the average adult, it is 250,000 times more toxic than hydrogen cyanide by weight. One gram of 210Po would hypothetically be enough to kill 50 million people and sicken another 50 million. This is a consequence of its ionizing alpha radiation, as alpha particles are especially damaging to organic tissues inside the body. However, 210Po does not pose a radiation hazard when contained outside the body. The alpha particles it produces cannot penetrate the outer layer of dead skin cells.
The toxicity of 210Po stems entirely from its radioactivity. It is not chemically toxic in itself, but its solubility in aqueous solution as well as that of its salts poses a hazard because its spread throughout the body is facilitated in solution. Intake of 210Po occurs primarily through contaminated air, food, or water, as well as through open wounds. Once inside the body, 210Po concentrates in soft tissues (especially in the reticuloendothelial system) and the bloodstream. Its biological half-life is approximately 50 days.
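Combining the roughly 50-day biological half-life quoted above with the 138.4-day physical half-life gives the effective half-life inside the body; the short sketch below uses the standard combination formula, a textbook relation rather than something stated in the article.

```python
# Effective half-life of 210Po in the body, assuming the standard relation
# 1/T_eff = 1/T_physical + 1/T_biological.
T_PHYSICAL = 138.376   # days
T_BIOLOGICAL = 50.0    # days, as quoted in the text

t_effective = 1.0 / (1.0 / T_PHYSICAL + 1.0 / T_BIOLOGICAL)
print(f"Effective half-life ≈ {t_effective:.0f} days")   # ≈ 37 days
```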
In the environment, 210Po can accumulate in seafood. It has been detected in various organisms in the Baltic Sea, where it can propagate in, and thus contaminate, the food chain. 210Po is also known to contaminate vegetation, primarily originating from the decay of atmospheric radon-222 and absorption from soil.
In particular, 210Po attaches to, and concentrates in, tobacco leaves. Elevated concentrations of 210Po in tobacco were documented as early as 1964, and cigarette smokers were thus found to be exposed to considerably greater doses of radiation from 210Po and its parent 210Pb. Heavy smokers may be exposed to the same amount of radiation (estimates vary from 100 µSv to 160 mSv per year) as individuals in Poland were from Chernobyl fallout traveling from Ukraine. As a result, 210Po is most dangerous when inhaled from cigarette smoke.
References
Isotopes of polonium
Carcinogens
Tobacco
Radioisotope fuels | Polonium-210 | [
"Chemistry",
"Environmental_science"
] | 1,755 | [
"Carcinogens",
"Toxicology",
"Isotopes",
"Isotopes of polonium"
] |
992,619 | https://en.wikipedia.org/wiki/NGC%20659 | NGC 659 is an open cluster in the Cassiopeia constellation. It was discovered by Caroline Herschel in 1783.
References
External links
SEDS – NGC 659
Open clusters
Cassiopeia (constellation)
0659
Astronomical objects discovered in 1783
Discoveries by Caroline Herschel | NGC 659 | [
"Astronomy"
] | 57 | [
"Cassiopeia (constellation)",
"Constellations"
] |
992,626 | https://en.wikipedia.org/wiki/NGC%20891 | NGC 891 (also known as Caldwell 23, the Silver Sliver Galaxy, and the Outer Limits Galaxy) is an edge-on unbarred spiral galaxy about 30 million light-years away in the constellation Andromeda. It was discovered by William Herschel on October 6, 1784. The galaxy is a member of the NGC 1023 group of galaxies in the Local Supercluster. It has an H II nucleus.
The object is visible in small to moderate size telescopes as a faint elongated smear of light with a dust lane visible in larger apertures.
In 1999, the Hubble Space Telescope imaged NGC 891 in infrared.
In 2005, due to its attractiveness and scientific interest, NGC 891 was selected to be the first light image of the Large Binocular Telescope.
In 2012, it was again used as a first light image of the Lowell Discovery Telescope with the Large Monolithic Imager.
Supernova SN 1986J was discovered on August 21, 1986 at apparent magnitude 14.
Properties
NGC 891 looks much as the Milky Way would look when viewed edge-on (some astronomers have even noted how similar our galaxy, as seen from the Southern Hemisphere, looks to NGC 891) and, in fact, both galaxies are considered very similar in terms of luminosity and size; studies of the dynamics of its molecular hydrogen have also indicated the likely presence of a central bar.
Despite this, recent high-resolution images of its dusty disk show unusual filamentary patterns. These patterns are extending into the halo of the galaxy, away from its galactic disk. Scientists presume that supernova explosions caused this interstellar dust to be thrown out of the galactic disk toward the halo.
It may also be possible that the light pressure from surrounding stars causes this phenomenon.
The galaxy is a member of a small group of galaxies, sometimes called the NGC 1023 Group. Other galaxies in this group are the NGCs 925, 949, 959, 1003, 1023, and 1058, and the UGCs 1807, 1865 (DDO 19), 2014 (DDO 22), 2023 (DDO 25), 2034 (DDO 24), and 2259. Its outskirts are populated by multiple low-surface brightness, coherent, and vast substructures, like giant streams that loop around the parent galaxy up to distances of approximately 50 kpc. The bulge and the disk are surrounded by a flat and thick cocoon-like stellar structure. These have vertical and radial distances of up to 15 kpc and 40 kpc, respectively, and are interpreted as the remnant of a satellite galaxy disrupted and in the process of being absorbed by NGC 891.
In popular culture
NGC 891 appears alongside M67, the Sombrero Galaxy, the Pinwheel Galaxy, NGC 5128, NGC 1300, M81, and the Andromeda Galaxy in the end credits of the Outer Limits TV series, which is why it is occasionally called the Outer Limits Galaxy.
The soundtrack of the 1974 film Dark Star by John Carpenter features a muzak-style instrumental piece called "When Twilight Falls on NGC 891".
The first solo album by Edgar Froese, Aqua, also released in 1974, contained a track called "NGC 891". Side 2 of the album, which included this track, was unusual in having been a rare example of a commercially issued piece of music recorded using the artificial head system.
See also
Messier 82
References
External links
APOD: Interstellar Dust-Bunnies of NGC 891 (9/9/1999)
SEDS: Information on NGC 891
NGC 891 on Astrophotography by Wolfgang Kloehr
Unbarred spiral galaxies
Andromeda (constellation)
0891
01831
09031
023b
NGC 1023 Group | NGC 891 | [
"Astronomy"
] | 779 | [
"Andromeda (constellation)",
"Constellations"
] |
992,646 | https://en.wikipedia.org/wiki/NGC%201435 | The Merope Nebula (also known as Tempel's Nebula and NGC 1435) is a diffuse reflection nebula in the Pleiades star cluster, surrounding the 4th magnitude star Merope. It was discovered on October 19, 1859 by the German astronomer Wilhelm Tempel. The discovery was made using a 10.5 cm refractor. John Herschel included it as 768 in his General Catalogue of Nebulae and Clusters of Stars but never observed it himself.
The Merope Nebula has an apparent magnitude starting at 13 and quickly dimming by a factor of about 15, making most of the nebula dimmer than magnitude 16. It is illuminated entirely by the star Merope, which is embedded in the nebula. It contains a bright knot, IC 349, about half an arcminute wide near Merope, which was discovered by Edward Emerson Barnard in November 1890. It is naturally very bright but is almost hidden in the radiance of Merope. It appears blue in photographs because of the fine carbon dust spread throughout the cloud. Though it was once thought the Pleiades formed from this and surrounding nebulae, it is now known that the Pleiades nebulosity is caused by a chance encounter with the cloud.
Gallery
References
External links
Seds NGC 1435 page
1435
Pleiades
Reflection nebulae
Taurus (constellation) | NGC 1435 | [
"Astronomy"
] | 277 | [
"Taurus (constellation)",
"Constellations"
] |
992,663 | https://en.wikipedia.org/wiki/NGC%202169 | NGC 2169 is an open cluster in the Orion constellation. It was possibly discovered by Giovanni Batista Hodierna before 1654 and discovered by William Herschel on October 15, 1784. NGC 2169 is at a distance of about 3,600 light years away from Earth. It is nicknamed "The '37' Cluster" due to its striking resemblance to the numerals "37". The cluster is composed of components Collinder 38, a I3pn open cluster, and Collinder 83, a III3m open cluster.
References
External links
SEDS entry
2169
Open clusters
Orion (constellation) | NGC 2169 | [
"Astronomy"
] | 125 | [
"Constellations",
"Orion (constellation)"
] |
7,017,489 | https://en.wikipedia.org/wiki/Address-range%20register | Address-range registers (ARR) are control registers of the Cyrix 6x86, 6x86MX and MII processors that provide system software with control over how accesses to memory ranges by the CPU are cached, similar to what memory type range registers (MTRRs) provide on other implementations of the x86 architecture.
See also
Write barrier
Page attribute table
References
Digital registers | Address-range register | [
"Technology"
] | 88 | [
"Computing stubs",
"Computer hardware stubs"
] |
7,017,575 | https://en.wikipedia.org/wiki/Mutchkin | Disambiguation: a "mutchkin" can also refer to a close-fitting Scottish cap.
The mutchkin () was a Scottish unit of liquid volume measurement that was in use from at least 1661 (and possibly as early as the 15th century) until the late 19th century, approximately equivalent to 424 mL, or roughly three-quarters of an imperial pint. The word was derived from a mid 15th-century Dutch measure of beer or wine.
A mutchkin could be subdivided into four Scottish gills (of approximately 106 mL each) – this was roughly equivalent to three imperial gills or three-quarters of an imperial pint.
Two mutchkins (848 mL) made one chopin.
Four mutchkins (1696 mL) made one Scottish pint (or joug), roughly equivalent to three imperial pints (1705 mL).
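The relationships listed above can be captured in a few lines of code; the sketch below converts between the old Scottish liquid measures using the approximate metric values given in the text (the figures are the article's rounded equivalents, not exact legal definitions).

```python
# Approximate old Scottish liquid measures, in millilitres, per the figures above.
SCOTS_UNITS_ML = {
    "gill": 106.0,       # 1/4 mutchkin
    "mutchkin": 424.0,   # 4 Scottish gills
    "chopin": 848.0,     # 2 mutchkins
    "pint": 1696.0,      # 4 mutchkins (the Scots pint, or joug)
}

def convert(value: float, from_unit: str, to_unit: str) -> float:
    """Convert between Scottish units via their millilitre equivalents."""
    return value * SCOTS_UNITS_ML[from_unit] / SCOTS_UNITS_ML[to_unit]

print(convert(4, "mutchkin", "pint"))    # 1.0 Scots pint
print(convert(1, "pint", "chopin"))      # 2.0 chopins
```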
See also
Obsolete Scottish units of measurement
References
Obsolete Scottish units of measurement
Units of volume
Alcohol measurement | Mutchkin | [
"Mathematics"
] | 193 | [
"Units of volume",
"Quantity",
"Units of measurement"
] |
7,017,612 | https://en.wikipedia.org/wiki/German%20Fountain | The German Fountain (; ) is a gazebo-styled fountain at the northern end of the old hippodrome (Sultanahmet Square) in Istanbul, Turkey, across from the Mausoleum of Sultan Ahmed I. It was constructed to commemorate the second anniversary of German Emperor Wilhelm II's visit to Istanbul in 1898. It was built in Germany, then transported piece by piece and assembled on its current site in 1900. The neo-Byzantine style fountain's octagonal dome has eight marble columns, and the dome's interior is covered with golden mosaics.
History
The idea that the Empire Lodge (Kathisma) of the Great Palace of Constantinople stood on the site of the German Fountain conflicts with the view that the Carceres Gates of the Hippodrome were located there; the Carceres Gates hypothesis, however, supports the view that the Quadriga of Lysippos once stood on the site of the German Fountain.
During his reign as German Emperor and King of Prussia, Wilhelm II visited several European and Eastern countries. His trip started in Istanbul, Ottoman Empire on 18 October 1898 during the reign of Abdülhamid II. According to Peter Hopkirk, the visit to the Ottoman Empire was an ego trip and also had long-term motivations. The Emperor's primary motivation for visiting was to construct the Baghdad Railway, which would run from Berlin to the Persian Gulf, and would further connect to British India through Persia. This railway could provide a short and quick route from Europe to Asia, and could carry German exports, troops and artillery. At the time, the Ottoman Empire could not afford such a railway, and Abdülhamid II was grateful for Wilhelm's offer, but was suspicious of the German motives. Abdülhamid II's secret service believed that German archeologists in the Emperor's retinue were in fact geologists with designs on the oil wealth of the Ottoman empire. Later, the secret service uncovered a German report, which noted that the oilfields in Mosul, northern Mesopotamia were richer than those in the Caucasus. In his first visit, Wilhelm secured the sale of German-made rifles to the Ottoman Army, and in his second visit he secured a promise for German companies to construct the Istanbul-Baghdad railway. The German Government constructed the German Fountain for Wilhelm II and Empress Augusta's 1898 Istanbul visit. According to Afife Batur, the fountain's plans were drawn by architect Spitta and constructed by architect Schoele; the German architect Carlitzik and the Italian architect Joseph Anthony also worked on the project.
According to the Ottoman inscription, the fountain's construction started in the Hejira 1319 (1898–1899), although the inauguration of the fountain was planned to take place on 1 September 1900 – the 25th anniversary of Abdülhamid II's ascension to the throne. Construction, however, could not finish at the planned time and it was instead inaugurated on 27 January 1901, which was Wilhelm II's birthdate. Marble, stone and gem parts of the fountain were constructed in Germany and transported piece by piece to Istanbul by ships.
Architecture
The German Fountain was constructed on the site of a tree known as the Vakvak Tree () or The Bloody Plane (). In the 1656 janissary rebellion, Mehmed IV yielded a number of officials to the demands of the rebels, and these victims, when killed, were suspended from the plane tree in the Hippodrome. Boynuyaralı Mehmed Pasha, after becoming Grand Vizier, suppressed this rebellion, which lasted two months and was named the Vak'a-i Vakvakiye. The plane was named after the Seçere-i Vakvak (Vakvak Tree), which was believed to grow in Jahannam and to bear human heads as its fruit. The neo-Byzantine style octagonal fountain stands on a base with eight steps rising up to an entry gate. There are seven brass fountain spouts over basins on the remaining sides, and over the central reservoir there is a dome supported by eight porphyry columns. The fountain's central reservoir stands on a mosaic-tiled platform and is surmounted by the bronze dome, which is raised on carved marble arches. There are eight monograms in the arch stonework, representing the political union of Abdülhamid II and Wilhelm. In four of these medallions, Abdülhamid II's tughra is written on a green background, and in the other four Wilhelm's symbol "W" is written on a Prussian blue background; over the "W" there is a crown, and below it a "II" is written. The fountain was surrounded by a bronze fence, but this has since been lost. The outside of the dome is ornately patterned bronze; the dome's ceiling is decorated with golden mosaics, again with Abdülhamid II's tughra and Wilhelm II's symbol.
The bronze inscription on the reservoir, which was written in German, reads "Wilhelm II Deutscher Kaiser stiftete diesen Brunnen in dankbarer Erinnerung an seinen Besuch bei Seiner Maiestaet [sic] dem Kaiser der Osmanen Abdul Hamid II im Herbst des Jahres 1898" meaning "German Kaiser Wilhelm II endowed this fountain, in thankful remembrance of his visit to the Ottoman Sultan Abdülhamid II in autumn of the year 1898". There is also an Ottoman inscription in the arch of the fountain: an eight-couplet chronogram (history verse) composed by Ahmet Muhtar Bey, Undersecretary of the Seraskery, and written out by the calligrapher Hattat İzzet Efendi. The poem commemorates the construction of the fountain for Wilhelm II's visit to Istanbul.
Incidents
The German fountain was the site of a terrorist bombing which killed 13 people (12 of them German) and injured many more on 12 January 2016.
See also
List of fountains in Istanbul
Notes
References
External links
Alman (German) Fountain
Buildings and structures completed in 1900
Byzantine Revival architecture in Turkey
Fatih
Fountains in Istanbul
Hippodrome of Constantinople
Landscape architecture
Pavilions | German Fountain | [
"Engineering"
] | 1,242 | [
"Landscape architecture",
"Architecture"
] |
7,017,642 | https://en.wikipedia.org/wiki/Chopin%20%28unit%29 | The chopin was a Scottish measurement of volume, usually for fluids, that was in use from at least 1661, though possibly as early as the 15th century, until the mid 19th century. The measurement was derived from the French measure chopine, an old and widespread unit of liquid capacity, first recorded in the 13th century. A chopin is equivalent to 0.848 litres.
1 chopin is 8 gills
1 chopin is 2 mutchkins
2 chopins is the equivalent of 1 (Scots) pint (or joug)
16 chopins is the equivalent of 1 (Scots) gallon
References
See also
Obsolete Scottish units of measurement
Obsolete Scottish units of measurement
Units of volume
17th-century establishments in Scotland
17th-century introductions
19th-century disestablishments in Scotland
Alcohol measurement | Chopin (unit) | [
"Mathematics"
] | 153 | [
"Units of volume",
"Quantity",
"Units of measurement"
] |
7,017,744 | https://en.wikipedia.org/wiki/Thin%20layers%20%28oceanography%29 | Thin layers are concentrated aggregations of phytoplankton and zooplankton in coastal and offshore waters that are vertically compressed to thicknesses ranging from several centimeters up to a few meters and are horizontally extensive, sometimes for kilometers. Generally, thin layers have three basic criteria: 1) they must be horizontally and temporally persistent; 2) they must not exceed a critical threshold of vertical thickness; and 3) they must exceed a critical threshold of maximum concentration. The precise values for critical thresholds of thin layers have been debated for a long time due to the vast diversity of plankton, instrumentation, and environmental conditions. Thin layers have distinct biological, chemical, optical, and acoustical signatures which are difficult to measure with traditional sampling techniques such as nets and bottles. However, there has been a surge in studies of thin layers within the past two decades due to major advances in technology and instrumentation. Phytoplankton are often measured by optical instruments that can detect fluorescence, such as LIDAR, and zooplankton are often measured by acoustic instruments that can detect acoustic backscattering, such as ABS.
These extraordinary concentrations of plankton have important implications for many aspects of marine ecology (e.g., phytoplankton growth dynamics, zooplankton grazing, behaviour, environmental effects, harmful algal blooms), as well as for ocean optics and acoustics. Zooplankton thin layers are often found slightly under phytoplankton layers because many feed on them. Thin layers occur in a wide variety of ocean environments, including estuaries, coastal shelves, fjords, bays, and the open ocean, and they are often associated with some form of vertical structure in the water column, such as pycnoclines, and in zones of reduced flow.
Criteria
Persistence
Thin layers persist from hours to weeks, while other small-scale patches of plankton exist for minutes. The presence of nutrients, as well as coastal fronts, eddies, and upwelling zones, greatly increases the persistence of thin layers. One of the main criteria for an aggregation of plankton to be considered a thin layer is that the increased concentration at a certain depth of the water column must appear in subsequently measured profiles. However, thin layers are dynamic and horizontally extensive, so their persistence cannot be defined using multiple measurements at only one location. A study on the Karenia brevis algae responsible for more recent and increasingly longer red tide blooms shows that the cellular gene expression patterns are extremely diverse, which suggests that this particular species of plankton is more resilient because it adapts well to changing conditions. Studies also indicate that red tide blooms are often terminated by interactions with other microbes such as viruses and bacteria that may either compete for the same nutrients or adversely impact the algal cells.
Thickness
Some studies have considered the maximum critical threshold for vertical thickness of thin layers as three meters, but more recent data have shown that the criterion can be relaxed to five meters. The horizontal extents of thin layers can reach tens of kilometers, and their horizontal to vertical aspect ratio is usually at least 1000:1.
Intensity
The intensity of a thin layer refers to the maximum concentration of the plankton within the layer relative to the background and the water column. Thin layer concentrations can range between three and 100 times more than the background and up to 75% of the total biomass in the water column.
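The three criteria above (persistence, limited vertical thickness, and a peak concentration well above background) lend themselves to a simple profile-screening procedure. The sketch below is a hypothetical illustration of how a single vertical fluorescence profile might be screened against the thickness and intensity thresholds; the threshold values and the profile itself are placeholders, and real studies also require the persistence criterion across repeated profiles.

```python
# Hypothetical screening of one vertical profile for a thin-layer candidate.
# Thresholds (<= 5 m thick, >= 3x background) follow the ranges quoted in the text.
from typing import List, Tuple

def thin_layer_candidate(depths_m: List[float], chl: List[float],
                         max_thickness_m: float = 5.0,
                         intensity_factor: float = 3.0) -> Tuple[bool, float, float]:
    """Return (is_candidate, layer_thickness, peak-to-background ratio)."""
    background = sorted(chl)[len(chl) // 2]          # median as a crude background estimate
    threshold = intensity_factor * background
    above = [d for d, c in zip(depths_m, chl) if c >= threshold]
    if not above:
        return False, 0.0, max(chl) / background
    thickness = max(above) - min(above)
    return thickness <= max_thickness_m, thickness, max(chl) / background

# Toy profile: a 2 m-thick peak at ~10 m depth over a low background.
depths = [float(d) for d in range(0, 31)]
chl = [1.0 + (9.0 if 9 <= d <= 11 else 0.0) for d in depths]
print(thin_layer_candidate(depths, chl))   # (True, 2.0, 10.0)
```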
Formation
Buoyancy
Thin layers of non-motile phytoplankton tend to collect at boundaries of strong vertical gradients in salinity (haloclines), temperature (thermoclines), and density (pycnoclines) which often coincide because they are directly proportional. These particular thin layers are formed by sinking non-motile phytoplankton reaching a neutral buoyancy at a pycnocline, and the stifling of vertical turbulent dispersion at these depths. Other studies have shown that gradients in nutrients (nutriclines) also contribute to the formation of thin layers.
Vertical Migration
Many zooplankton normally exhibit a diel vertical migration (DVM) pattern that dictates their depth in the water column based on the time of day. Phytoplankton require sunlight for photosynthesis and protein production, but they are not primarily attracted to light. This is evident from their single move up toward the surface prior to sunrise and single move down into deeper waters prior to sunset. Their collective movements may result in the aggregations that form thin layers. These regular movements are thought to be governed by an internal clock under normal nutrient concentrations. However, plankton have also been observed to migrate irregularly when nutrient concentrations are higher or lower than normal.
Chemotaxis
Motile plankton have been observed to be able to detect and swim towards higher nutrient concentrations and/or light intensities. This mechanism is called chemotaxis and is partly responsible for the formation of thin layers at depths where nutrients are abundant. Another mechanism specific to dinoflagellates is called helical klinotaxis where the algal cell's ability to respond to both positive and negative chemosensory signals is crucial to their motility. If dinoflagellates were not capable of both positive and negative chemotaxis, they would not navigate successfully due to the nature of the transverse and longitudinal flagella causing rotating and translating motions, respectively.
Eddies, Filaments, and Fronts
Another obvious cause of thin layers is the horizontal transport of waters with high plankton concentration into waters with lower concentrations. In this case, upwelled intrusions of nutrient-rich slope water are suggested to be the cause of algal blooms and some thin layers. However, thin layers have been observed to form at the boundaries of more complex fluid mechanisms such as eddies, filaments, and fronts. These thin layers were located at the transition layer, a region of maximum shear and stratification at the base of the mixed layer.
Straining by Shear
A fluid mechanism that contributes to the formation of thin layers is the straining of fluid by the sheared velocity profile which causes the fluid to tilt and disperse horizontally. If a patch of plankton is located at the fluid being sheared, a thin layer could be formed by the straining of the patch by velocity shear. The four phases of plankton distributions caused by straining are: 1) tilting, 2) shear-thinning, 3) decay, and 4) shear-dispersion (dissipation).
Gyrotactic Trapping
A sharp change in flow velocities can also prevent some motile plankton from orienting themselves or swimming vertically. This fluid mechanism is called gyrotactic trapping.
See also
Algal bloom
Chemotaxis
Dinoflagellate
Halocline
Karenia brevis
Phytoplankton
Pycnocline
Red tide
Thermocline
Zooplankton
References
External links
Critical Scales and Thin Layers
Biological oceanography
Planktology
Aquatic ecology | Thin layers (oceanography) | [
"Biology"
] | 1,433 | [
"Aquatic ecology",
"Ecosystems"
] |
7,018,285 | https://en.wikipedia.org/wiki/AlgaeBase | AlgaeBase is a global species database of information on all groups of algae, both marine and freshwater, as well as sea-grass.
History
AlgaeBase began in March 1996, founded by Michael Guiry.
By 2005, the database contained about 65,000 names.
In 2013, AlgaeBase and the Flanders Marine Institute (VLIZ) signed an end-user license agreement regarding the Electronic Intellectual Property of AlgaeBase. This allows the World Register of Marine Species (WoRMS) to include taxonomic names of algae in WoRMS, thereby allowing WoRMS, as part of the Aphia database, to make its overview of all described marine species more complete. Synchronisation of the AlgaeBase data with Aphia and WoRMS was undertaken manually until March 2015, but this was very time-consuming, so an online application was developed to semi-automate the synchronisation, launching in 2015 in conjunction with Michael Guiry and the chief programmer of AlgaeBase, Pier Kuipers. After a long phase of further development and testing, the AlgaeBase harvester tool was implemented by the WoRMS data management team in early 2019. Since then, newly-added species in AlgaeBase are added to Aphia and, if marine, to WoRMS as well.
Description
The database is hosted at the National University of Ireland's Ryan Institute, in Galway. It includes all types of algae, as well as one group of flowering plants, the sea-grasses. Information about each species' taxonomy, nomenclature and distribution is included, and the algae covered include terrestrial as well as marine and freshwater species, such as seaweeds, phytoplankton, and freshwater algae. Marine species have the best coverage, including sea-grasses.
As of 2014 there were nearly 17,000 images, and the database was being used by 2,000–3,000 individual visitors each day.
As of 2023, there were about 170,000 species and infraspecies in AlgaeBase.
Support and funding
The compilation of the data was funded by the Irish Government Department of Education and Science's Programme for Research in Third-level Institutions (PRTLI) 2, 3 and 4 programmes, to the Ryan Institute and the Environmental Change Institute, as well as by Atlantic Philanthropies, and the European Union.
The synchronisation between AlgaeBase and Aphia was possible through support of the LifeWatch Species Information Backbone. LifeWatch, the E-Science European Infrastructure for Biodiversity and Ecosystem Research, is a distributed virtual laboratory, which is used for different aspects of biodiversity research.
The main sponsors of the database are the Phycological Society of America, the British Phycological Society, the International Phycological Society, and the Korean Phycological Society. Programming is carried out by Pier Kuipers, Caoilte Guiry, and Michael Guiry. Jonathan Guthrie was responsible for programming much of the earlier versions.
References
Further reading
External links
Algae
Biodiversity databases
Databases in Ireland
Online taxonomy databases | AlgaeBase | [
"Biology",
"Environmental_science"
] | 610 | [
"Environmental science databases",
"Algae",
"Biodiversity databases",
"Biodiversity"
] |
7,018,488 | https://en.wikipedia.org/wiki/International%20Journal%20of%20Ecology%20%26%20Development | The International Journal of Ecology & Development is a scientific journal published by the Indian Society for Development and Research that was established to cover "research and developments in ecology and development." The editor-in-chief is Kaushal K. Srivastava. It has been published since 2005 and is included in Scopus.
External links
Ecology journals
Academic journals established in 2005
English-language journals
Environmental social science journals
Development studies journals | International Journal of Ecology & Development | [
"Environmental_science"
] | 87 | [
"Ecology journals",
"Environmental social science journals",
"Environmental science journals",
"Environmental social science stubs",
"Environmental science journal stubs",
"Environmental social science"
] |
7,018,613 | https://en.wikipedia.org/wiki/Glass%20tile | Glass tiles are pieces of glass formed into consistent shapes.
Early history
Glass was used in mosaics as early as 2500 BC, but it was not until the 3rd century BC that innovative artisans in Greece, Persia, and India created glass tiles.
Whereas clay tile is dated as early as 8,000 BC, there were significant barriers to the development of glass tile, including the high temperatures required to melt glass and the complexities of annealing glass curves.
In recent years, glass tiles have become popular for both field tile and accent tiles. This trend can be attributed to recent technological breakthroughs, as well as the tiles' inherent properties: in particular, their potential to impart intense color, reflect light, and remain impervious to water.
Glass tile introduces complexities for the installer. Since glass is more rigid than ceramic or porcelain tile, glass tiles break more readily under the stress of substrate shifts.
Smalti tiles
Smalti tile, sometimes referred to as Byzantine glass mosaic tile, is a typically opaque glass tile originally developed for use in mosaics created during the time of the Byzantine empire.
Smalti is made by mixing molten glass with metal oxides for color in a furnace; the result is a cloudy mixture poured into flat slabs that are cooled and broken into individual pieces. The molten mixture can be topped with gold leaf, followed by a thin glass film to protect against tarnishing. During the Byzantine era, Constantinople became the center of the mosaic craft, and the use of gold leaf glass mosaic reached perhaps its greatest artistic expression in the former seat of the Orthodox patriarch of Constantinople, the Hagia Sophia.
Traditional smalti tiles are still found today in many European churches and ornamental objects; the method is used by some present-day artisans, both in installations and fine art. In the 1920s, mass production methods were applied to Smalti tile manufacturing, which enabled these tiles to find their way into many middle-class homes. Instead of the old method of rolling the colored glass mixture out, cooling, and cutting, the new method called for molten liquid to be poured and cooled in trays, usually resulting in 3/4 inch chiclet-type pieces.
Modern era
Since the 1990s, a variety of modern glass tile technologies, including methods to take used glass and recreate it as 'green' tiles, has resulted in a resurgence of interest in glass tile as a floor and wall cladding. It is now most commonly used in pools, kitchens, spas, and bathrooms. Although Smalti tile remains popular, small and large format glass products are now commonly formed using cast and fused glass methods. The plasticity of these last two methods has resulted in a wide variety of looks and applications, including floor tiles.
In the late 1990s, special glass tiles were coated on the back side with a receptive white coating. This has allowed impregnation of heat-transfer dyes by a printing process reproducing high resolution pictures and designs. Custom printed glass tile and glass tile murals exhibit the toughness of glass on the wearing surface with photo-like pictures. These are especially practical in kitchens and showers, where cleanser and moisture resistance are important.
See also
Stained glass
Glass mosaic
References
External links
Decorative arts
Mosaic
Visual arts materials
Glass art
Glass architecture
Glass applications
Tiling | Glass tile | [
"Materials_science",
"Engineering"
] | 669 | [
"Glass architecture",
"Glass engineering and science"
] |
7,018,709 | https://en.wikipedia.org/wiki/File%20transfer | File transfer is the transmission of a computer file through a communication channel from one computer system to another. Typically, file transfer is mediated by a communications protocol. In the history of computing, numerous file transfer protocols have been designed for different contexts.
Protocols
A file transfer protocol is a convention that describes how to transfer files between two computing endpoints. As well as the stream of bits from a file stored as a single unit in a file system, some may also send relevant metadata such as the filename, file size and timestamp – and even file-system permissions and file attributes.
Some examples:
FTP is an older cross-platform file transfer protocol
SSH File Transfer Protocol (SFTP) is a file transfer protocol secured by the Secure Shell (SSH) protocol
Secure copy (scp) is based on the Secure Shell (SSH) protocol
HTTP can support file transfer (see the sketch after this list)
BitTorrent, Gnutella and other distributed file transfer systems use peer-to-peer networking
In Systems Network Architecture, LU 6.2 Connect:Direct and XCOM Data Transport are traditionally used to transfer files
Many instant messaging or LAN messenger systems support the ability to transfer files
Computers may transfer files to peripheral devices such as USB flash drives
Dial-up modem and null modem links used XMODEM, YMODEM, ZMODEM and similar protocols
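As a minimal illustration of file transfer over HTTP using only the Python standard library, the sketch below downloads one file; the URL and local filename are placeholders, not references to a real service.

```python
# Minimal HTTP file transfer using the Python standard library.
# The URL and destination path are placeholders.
import urllib.request

SOURCE_URL = "https://example.com/some-file.txt"   # placeholder
DESTINATION = "downloaded-file.txt"                # placeholder

with urllib.request.urlopen(SOURCE_URL) as response, open(DESTINATION, "wb") as out:
    out.write(response.read())   # the stream of bits; metadata such as timestamps is not preserved
```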
See also
File sharing
Managed file transfer
Peer-to-peer file sharing
Pull technology
Push technology
Sideloading
References
Internet terminology
Network file transfer protocols | File transfer | [
"Technology"
] | 295 | [
"Computing stubs",
"Computing terminology",
"Internet terminology",
"Computer network stubs"
] |
7,018,809 | https://en.wikipedia.org/wiki/Lax%20equivalence%20theorem | In numerical analysis, the Lax equivalence theorem is a fundamental theorem in the analysis of finite difference methods for the numerical solution of partial differential equations. It states that for a consistent finite difference method for a well-posed linear initial value problem, the method is convergent if and only if it is stable.
The importance of the theorem is that while the convergence of the solution of the finite difference method to the solution of the partial differential equation is what is desired, it is ordinarily difficult to establish because the numerical method is defined by a recurrence relation while the differential equation involves a differentiable function. However, consistency—the requirement that the finite difference method approximates the correct partial differential equation—is straightforward to verify, and stability is typically much easier to show than convergence (and would be needed in any event to show that round-off error will not destroy the computation). Hence convergence is usually shown via the Lax equivalence theorem.
Stability in this context means that a matrix norm of the matrix used in the iteration is at most unity, called (practical) Lax–Richtmyer stability. Often a von Neumann stability analysis is substituted for convenience, although von Neumann stability only implies Lax–Richtmyer stability in certain cases.
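A minimal numerical illustration, assuming the standard explicit (FTCS) scheme for the one-dimensional heat equation u_t = u_xx: the scheme is consistent, and Lax–Richtmyer stability holds when the iteration matrix has norm at most one, which for this scheme corresponds to the familiar restriction r = Δt/Δx² ≤ 1/2. The sketch below builds the iteration matrix for two choices of r and compares its spectral norm; the example is illustrative and not taken from this article's references.

```python
# Spectral norm of the FTCS iteration matrix for u_t = u_xx with zero boundary values.
# Norm <= 1 (stable) when r = dt/dx^2 <= 0.5; norm > 1 signals instability.
import numpy as np

def ftcs_matrix(n_interior: int, r: float) -> np.ndarray:
    """Iteration matrix B such that u^{m+1} = B u^m for the explicit heat-equation scheme."""
    B = np.zeros((n_interior, n_interior))
    for i in range(n_interior):
        B[i, i] = 1.0 - 2.0 * r
        if i > 0:
            B[i, i - 1] = r
        if i < n_interior - 1:
            B[i, i + 1] = r
    return B

for r in (0.4, 0.6):
    norm = np.linalg.norm(ftcs_matrix(50, r), ord=2)
    print(f"r = {r}: ||B||_2 = {norm:.3f}", "(stable)" if norm <= 1 else "(unstable)")
```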
This theorem is due to Peter Lax. It is sometimes called the Lax–Richtmyer theorem, after Peter Lax and Robert D. Richtmyer.
References
Numerical differential equations
Theorems in analysis | Lax equivalence theorem | [
"Mathematics"
] | 288 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical theorems",
"Applied mathematics",
"Applied mathematics stubs",
"Mathematical problems"
] |
7,019,701 | https://en.wikipedia.org/wiki/Air%20data%20inertial%20reference%20unit | An Air Data Inertial Reference Unit (ADIRU) is a key component of the integrated Air Data Inertial Reference System (ADIRS), which supplies air data (airspeed, angle of attack and altitude) and inertial reference (position and attitude) information to the pilots' electronic flight instrument system displays as well as other systems on the aircraft such as the engines, autopilot, aircraft flight control system and landing gear systems. An ADIRU acts as a single, fault tolerant source of navigational data for both pilots of an aircraft. It may be complemented by a secondary attitude air data reference unit (SAARU), as in the Boeing 777 design.
This device is used on various military aircraft as well as civilian airliners starting with the Airbus A320 and Boeing 777.
Description
An ADIRS consists of up to three fault tolerant ADIRUs located in the aircraft electronic rack, an associated control and display unit (CDU) in the cockpit and remotely mounted air data modules (ADMs). The No 3 ADIRU is a redundant unit that may be selected to supply data to either the commander's or the co-pilot's displays in the event of a partial or complete failure of either the No 1 or No 2 ADIRU. There is no cross-channel redundancy between the Nos 1 and 2 ADIRUs, as No 3 ADIRU is the only alternate source of air and inertial reference data. An inertial reference (IR) fault in ADIRU No 1 or 2 will cause a loss of attitude and navigation information on their associated primary flight display (PFD) and navigation display (ND) screens. An air data reference (ADR) fault will cause the loss of airspeed and altitude information on the affected display. In either case the information can only be restored by selecting the No 3 ADIRU.
Each ADIRU comprises an ADR and an inertial reference (IR) component.
Air data reference
The air data reference (ADR) component of an ADIRU provides airspeed, Mach number, angle of attack, temperature and barometric altitude data. Ram air pressure and static pressures used in calculating airspeed are measured by small ADMs located as close as possible to the respective pitot and static pressure sensors. ADMs transmit their pressures to the ADIRUs through ARINC 429 data buses.
Inertial reference
The IR component of an ADIRU gives attitude, flight path vector, ground speed and positional data. The ring laser gyroscope is a core enabling technology in the system, and is used together with accelerometers, GPS and other sensors to provide raw data. The primary benefits of a ring laser over older mechanical gyroscopes are that there are no moving parts, it is rugged and lightweight, frictionless and does not resist a change in precession.
Complexity in redundancy
Analysis of complex systems is itself so difficult as to be subject to errors in the certification process. Complex interactions between flight computers and ADIRUs can lead to counter-intuitive behaviour for the crew in the event of a failure. In the case of Qantas Flight 72, the captain switched the source of IR data from ADIRU1 to ADIRU3 following a failure of ADIRU1; however ADIRU1 continued to supply ADR data to the captain's primary flight display. In addition, the master flight control computer (PRIM1) was switched from PRIM1 to PRIM2, then PRIM2 back to PRIM1, thereby creating a situation of uncertainty for the crew who did not know which redundant systems they were relying upon.
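The counter-intuitive behaviour described above stems from the fact that the air-data (ADR) and inertial (IR) sources for a given display can be switched independently. The toy model below is purely illustrative and is not drawn from any Airbus or Boeing specification; it simply shows how selecting ADIRU 3 as the IR source leaves the ADR source unchanged unless it is switched separately.

```python
# Purely illustrative toy model of independent ADR/IR source selection.
# It is not based on any real avionics specification.
from dataclasses import dataclass

@dataclass
class DisplaySources:
    adr_source: int = 1   # ADIRU number supplying air data to this display
    ir_source: int = 1    # ADIRU number supplying inertial data to this display

def switch_ir_to_standby(sources: DisplaySources) -> DisplaySources:
    """Crew selects ADIRU 3 for inertial data only; the air data source is untouched."""
    return DisplaySources(adr_source=sources.adr_source, ir_source=3)

captain = DisplaySources()
captain = switch_ir_to_standby(captain)
print(captain)   # DisplaySources(adr_source=1, ir_source=3) -> ADR data still from ADIRU 1
```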
Reliance on redundancy of aircraft systems can also lead to delays in executing needed repairs, as airline operators rely on the redundancy to keep the aircraft system working without having to repair faults immediately.
Failures and directives
FAA Airworthiness directive 2000-07-27
On May 3, 2000, the FAA issued airworthiness directive 2000-07-27, addressing dual critical failures during flight, attributed to power supply issues affecting early Honeywell HG2030 and HG2050 ADIRU ring laser gyros used on several Boeing 737, 757, Airbus A319, A320, A321, A330, and A340 models.
Airworthiness directive 2003-26-03
On 27 January 2004 the FAA issued airworthiness directive 2003-26-03 (later superseded by AD 2008-17-12) which called for modification to the mounting of ADIRU3 in Airbus A320 family aircraft to prevent failure and loss of critical attitude and airspeed data.
Alitalia A320
On 25 June 2005, an Alitalia Airbus A320-200 registered as I-BIKE departed Milan with a defective ADIRU as permitted by the Minimum Equipment List. While approaching London Heathrow Airport during deteriorating weather another ADIRU failed, leaving only one operable. In the subsequent confusion the third was inadvertently reset, losing its reference heading and disabling several automatic functions. The crew was able to effect a safe landing after declaring a Pan-pan.
Malaysia Airlines Flight 124
On 1 August 2005, a serious incident involving Malaysia Airlines Flight 124 occurred when an ADIRU fault in a Boeing 777-2H6ER (9M-MRG) flying from Perth to Kuala Lumpur International caused the aircraft to act on false indications, resulting in uncommanded manoeuvres. In that incident the incorrect data impacted all planes of movement while the aircraft was climbing through . The aircraft pitched up and climbed to around , with the stall warning activated. The pilots recovered the aircraft with the autopilot disengaged and requested a return to Perth. During the return to Perth, both the left and right autopilots were briefly activated by the crew, but in both instances the aircraft pitched down and banked to the right. The aircraft was flown manually for the remainder of the flight and landed safely in Perth. There were no injuries and no damage to the aircraft. The ATSB found that the main probable cause of this incident was a latent software error which allowed the ADIRU to use data from a failed accelerometer.
The US Federal Aviation Administration issued Emergency Airworthiness Directive (AD) 2005-18-51 requiring all 777 operators to install upgraded software to resolve the error.
Qantas Flight 68
On 12 September 2006, Qantas Flight 68, Airbus A330 registration VH-QPA, from Singapore to Perth exhibited ADIRU problems but without causing any disruption to the flight. At an estimated position north of Learmonth, Western Australia, a NAV IR1 FAULT and then, 30 minutes later, a NAV ADR 1 FAULT notification were received on the ECAM, identifying navigation system faults in Inertial Reference Unit 1 and in ADR 1 respectively. The crew reported to the later Qantas Flight 72 investigation involving the same airframe and ADIRU that they had received numerous warning and caution messages which changed too quickly to be dealt with. While investigating the problem, the crew noticed a weak and intermittent ADR 1 FAULT light and elected to switch off ADR 1, after which they experienced no further problems. There was no impact on the flight controls throughout the event. The ADIRU manufacturer's recommended maintenance procedures were carried out after the flight and system testing found no further fault.
Jetstar Flight 7
On 7 February 2008, a similar aircraft (VH-EBC) operated by Qantas subsidiary Jetstar Airways was involved in a similar occurrence while conducting the JQ7 service from Sydney to Ho Chi Minh City, Vietnam. In this event - which occurred east of Learmonth - many of the same errors occurred in the ADIRU unit. The crew followed the relevant procedure applicable at the time and the flight continued without problems.
The ATSB has yet to confirm if this event is related to the other Airbus A330 ADIRU occurrences.
Airworthiness directive 2008-17-12
On 6 August 2008, the FAA issued airworthiness directive 2008-17-12 expanding on the requirements of the earlier AD 2003-26-03 which had been determined to be an insufficient remedy. In some cases it called for replacement of ADIRUs with newer models, but allowed 46 months from October 2008 to implement the directive.
Qantas Flight 72
On 7 October 2008, Qantas Flight 72, using the same aircraft involved in the Flight 68 incident, departed Singapore for Perth. Some time into the flight, while cruising at 37,000 ft, a failure in the No.1 ADIRU led to the autopilot automatically disengaging followed by two sudden uncommanded pitch down manoeuvres, according to the Australian Transport Safety Bureau (ATSB). The accident injured up to 74 passengers and crew, ranging from minor to serious injuries. The aircraft was able to make an emergency landing without further injuries. The aircraft was equipped with a Northrop Grumman made ADIRS, which investigators sent to the manufacturer for further testing.
Qantas Flight 71
On 27 December 2008, Qantas Flight 71 from Perth to Singapore, a different Qantas A330-300 with registration VH-QPG was involved in an incident at 36,000 feet approximately north-west of Perth and south of Learmonth Airport at 1729 WST. The autopilot disconnected and the crew received an alert indicating a problem with ADIRU Number 1.
Emergency Airworthiness Directive No 2009-0012-E
On 15 January 2009, the European Aviation Safety Agency issued Emergency Airworthiness Directive No 2009-0012-E to address the above A330 and A340 Northrop-Grumman ADIRU problem of incorrectly responding to a defective inertial reference. In the event of a NAV IR fault the directed crew response is now to "select OFF the relevant IR, select OFF the relevant ADR, and then turn the IR rotary mode selector to the OFF position." The effect is to ensure that the faulted IR is powered off so that it no longer can send erroneous data to other systems.
Air France Flight 447
On 1 June 2009, Air France Flight 447, an Airbus A330 en route from Rio de Janeiro to Paris, crashed in the Atlantic Ocean after transmitting automated messages indicating faults with various equipment, including the ADIRU. While examining possibly related events of weather-related loss of ADIRS, the NTSB decided to investigate two similar cases on cruising A330s. On a 21 May 2009 Miami–São Paulo TAM Flight 8091 registered as PT-MVB, and on a 23 June 2009 Hong Kong-Tokyo Northwest Airlines Flight 8 registered as N805NW each saw sudden loss of airspeed data at cruise altitude and consequent loss of ADIRS control.
Ryanair Flight 6606
On 9 October 2018, the Boeing 737-800 operating the flight from Porto Airport to Edinburgh Airport suffered a left ADIRU failure that resulted in the aircraft pitching up and climbing 600 feet. The left ADIRU was put in ATT (attitude-only) mode in accordance with the Quick Reference Handbook, but it continued to display erroneous attitude information to the captain. The remainder of the flight was flown manually with an uneventful landing. The UK's AAIB released the final report on 31 October 2019, with the following recommendation: "It is recommended that Boeing Commercial Aircraft amend the Boeing 737 Quick Reference Handbook to include a non-normal checklist for situations when pitch and roll comparator annunciations appear on the attitude display."
See also
Acronyms and abbreviations in avionics
References
Further reading
Aerospace engineering
Aircraft instruments
Avionics
Flight control systems
Global Positioning System
Navigational equipment
Technology systems | Air data inertial reference unit | [
"Technology",
"Engineering"
] | 2,398 | [
"Systems engineering",
"Wireless locating",
"Technology systems",
"Avionics",
"Measuring instruments",
"Aircraft instruments",
"nan",
"Aerospace engineering",
"Global Positioning System"
] |
7,019,702 | https://en.wikipedia.org/wiki/Maund%20%28unit%29 | The maund (), mun or mann (Bengali: ; Urdu: ) is the anglicized name for a traditional unit of mass used in British India, and also in Afghanistan, Persia, and Arabia: the same unit in the Mughal Empire was sometimes written as mann or mun in English, while the equivalent unit in the Ottoman Empire and Central Asia was called the batman. At different times, and in different South Asian localities, the mass of the maund has varied, from as low as 25 pounds (11 kg) to as high as 160 pounds (72 kg): even greater variation is seen in Persia and Arabia.
History
In British India, the maund was first standardized in the Bengal Presidency in 1833, where it was set equal to 100 Troy pounds (82.28 lbs. av.). This standard spread throughout the British Raj. After the independence of India and Pakistan, the definition formed the basis for metrication, one maund becoming exactly 37.3242 kilograms. A similar metric definition is used in Bangladesh and Nepal. Throughout Bangladesh, one মণ/mun/mann is 40 kg. In Nepal's southern plains one Mann equals 40 kilograms and is generally used to measure agricultural output.
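The 1833 Bengal definition quoted above can be checked with a few lines of arithmetic, using the standard grain-based definitions of the troy and avoirdupois pounds (5,760 and 7,000 grains respectively, values not stated in the article).

```python
# Check of the 1833 Bengal maund: 100 troy pounds expressed in avoirdupois pounds and kilograms.
GRAINS_PER_TROY_POUND = 5760.0          # assumed standard definition
GRAINS_PER_AVOIRDUPOIS_POUND = 7000.0   # assumed standard definition
KG_PER_AVOIRDUPOIS_POUND = 0.45359237

maund_troy_pounds = 100.0
maund_av_pounds = maund_troy_pounds * GRAINS_PER_TROY_POUND / GRAINS_PER_AVOIRDUPOIS_POUND
maund_kg = maund_av_pounds * KG_PER_AVOIRDUPOIS_POUND

print(f"{maund_av_pounds:.2f} lb av.")   # ≈ 82.29 lb (the article rounds to 82.28)
print(f"{maund_kg:.4f} kg")              # 37.3242 kg, the metricated maund
```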
The Old English, 'maund' may also be the origin of Maundy Thursday. As a verb, 'maund' : to beg; as a noun, 'a maund' : a small basket held out for alms.
South Asia
Delhi Sultanate
During the reign of Alauddin Khalji of the Delhi Sultanate, 1 mann was roughly equivalent to 15 kg.
Mughal Empire
Prinsep (1840) summarizes the evidence as to the weight of the mun (later "maund") during the reign (1556–1605) of Akbar the Great, which comes from the Ain-i-Akbari written by the vizier Abu'l-Fazl ibn Mubarak (anglicized as "Abul Fuzl"). The principal definition is that the mun is forty seers; and that each seer is thirty dams.
1 mun = 40 seers = 1200 dams
The problem arises in assigning the values of the smaller units.
The section of the Ain-i-Akbari that defines the mun also defines the dam as five tanks. A separate section defines the tank as twenty-four ruttees. However, by the 19th century, the tank was no longer a uniform unit across the former Mughal territories: Prinsep quotes values of 50 grains (3.24 g) in Darwar, 72 grains (4.67 g) in Bombay and 268 grains (17.37 g) in Ahmednugur.
The jilály, a square silver rupee coin issued by Akbar, was said by the Ain-i-Akbari to be mashas in weight: surviving jilály and other Mughal rupee coins weigh 170–175 Troy grains (11.02–11.34 g), so the masha, defined as eight ruttees, would be about grains (1 g). Masha weights sent back to London in 1819 agree with this value. This basis gives a mun of lb. av. (15.75 kg). One Koni was 4 muns.
However, in yet another section of the Ain-i-Akbari, the dam is said to be "twenty mashas seven ruttees": using this definition would imply an Imperial mass of about 47 lb. av. (21.3 kg) for the mun. Between these two values, the maund in Central India was often found to be around 40 lb. av. (18 kg) in the East India Company survey of 1821.
A Maund was 55.5 British pounds mass under Akbar.
Nineteenth century
Prinsep's values for the maund come from a survey organized by the East India Company in 1821. The Company's agents were asked to send back examples of the standard weights and measures used in the places they were stationed, and these were compared with the English standards in London by Patrick Kelly, the leading British metrologist of the time. The results were published as an appendix to the second edition of Kelly's Universal Cambist (1831), and later as a separate book entitled Oriental Metrology (1832).
It will be seen from Kelly's results below that Prinsep's generalizations are only partially correct. The Gujarat maund is more closely related to the Central Indian maund than to the standardized Bombay maund (except in the town of Anjar), although it is divided into 40 seers instead of the 20 found in Malwa.
Central India and Gujarat
Bombay Presidency
Madras Presidency
The maund was known as Mudi in the Tulu language
Bengal
Notes
References
External links
Sizes.com
The maund in India (historical values)
Customary units in India
Units of mass | Maund (unit) | [
"Physics",
"Mathematics"
] | 1,004 | [
"Matter",
"Quantity",
"Units of mass",
"Mass",
"Units of measurement"
] |
7,020,766 | https://en.wikipedia.org/wiki/Spectral%20Genomics | Spectral Genomics, Inc. was a technology spin-off company from Baylor College of Medicine, selling aCGH microarrays and related software.
History
The company was founded in February 2000 by BCM Technologies. Spectral licensed technology invented by its founders Alan Bradley, Ph.D., and Wei-wen Cai, Ph.D. The company raised $3.0 million in its first financing round in August 2001. In March 2004 the company raised an additional $9.4 million in its second financing round. In March 2005, GE Healthcare became the exclusive distributor for Spectral Genomics's products outside of North America. Spectral Genomics was acquired by PerkinElmer in May 2006, ending GE's distribution agreement.
External links
Corporate website
Defunct biotechnology companies of the United States
Microarrays | Spectral Genomics | [
"Chemistry",
"Materials_science",
"Biology"
] | 163 | [
"Biochemistry methods",
"Genetics techniques",
"Microtechnology",
"Microarrays",
"Bioinformatics",
"Molecular biology techniques"
] |
7,020,888 | https://en.wikipedia.org/wiki/Affine%20action | Let W be the Weyl group of a semisimple Lie algebra g (associated to a fixed choice of a Cartan subalgebra h). Assume that a set of simple roots in h* is chosen.
The affine action (also called the dot action) of the Weyl group W on the space h* is
w · λ = w(λ + ρ) − ρ,
where ρ is the sum of all fundamental weights or, equivalently, half of the sum of all positive roots.
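As a minimal worked example (an illustration added here, not part of the original article), take the Lie algebra sl2, where the weight space is one-dimensional and ρ equals the single fundamental weight:

```latex
% Dot action for \mathfrak{sl}_2: identify \mathfrak{h}^* with \mathbb{C}, so that \rho = 1.
% The simple reflection s acts linearly by s(\lambda) = -\lambda, hence
% s \cdot \lambda = s(\lambda + \rho) - \rho = -(\lambda + 1) - 1 = -\lambda - 2.
% In particular the dot action fixes \lambda = -1 (that is, -\rho), not \lambda = 0.
\[
  s \cdot \lambda \;=\; s(\lambda + \rho) - \rho \;=\; -\lambda - 2 .
\]
```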
References
.
Representation theory of Lie algebras | Affine action | [
"Mathematics"
] | 93 | [
"Algebra stubs",
"Algebra"
] |
7,021,041 | https://en.wikipedia.org/wiki/Swan%20band | Swan bands are a characteristic of the spectra of carbon stars, comets and of burning hydrocarbon fuels. They are named for the Scottish physicist William Swan, who first studied the spectrum of the diatomic carbon radical (C2) in 1856.
Swan bands consist of several sequences of vibrational bands scattered throughout the visible spectrum.
See also
Spectroscopy
References
Emission spectroscopy
Fire
Astronomical spectroscopy
Astrochemistry
Carbon | Swan band | [
"Physics",
"Chemistry",
"Astronomy"
] | 79 | [
"Spectroscopy stubs",
"Spectrum (physical sciences)",
"Fire",
"Emission spectroscopy",
"Astronomy stubs",
"Astrophysics",
"Astrochemistry",
"Astrophysics stubs",
"Combustion",
"Astronomical spectroscopy",
"nan",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs",
"Astr... |
7,022,543 | https://en.wikipedia.org/wiki/Bray%E2%80%93Liebhafsky%20reaction | The Bray–Liebhafsky reaction is a chemical clock first described by William C. Bray in 1921 and the first oscillating reaction in a stirred homogeneous solution. He investigated the role of the iodate ion (IO3−), the anion of iodic acid, in the catalytic conversion of hydrogen peroxide to oxygen and water by the iodate. He observed that the concentration of iodine molecules oscillated periodically and that hydrogen peroxide was consumed during the reaction.
An increase in temperature reduces the cycle in the range of hours. This oscillating reaction consisting of free radical on non-radical steps was investigated further by his student Herman A. Liebhafsky, hence the name Bray–Liebhafsky reaction. During this period, most chemists rejected the phenomenon and tried to explain the oscillation by invoking heterogeneous impurities.
A fundamental property of this system is that hydrogen peroxide has a redox potential which enables the simultaneous oxidation of iodine to iodate:
5 H2O2 + I2 → 2 IO3− + 2 H+ + 4 H2O
and the reduction of iodate back to iodine:
5 H2O2 + 2 IO3− + 2 H+ → I2 + 5 O2 + 6 H2O
Between these two reactions the system oscillates, causing jumps in the iodide concentration and in the oxygen production. The net reaction is:
2 H2O2 → 2 H2O + O2
necessitating a catalyst (the iodine/iodate couple and H+, which are consumed in one of the two steps and regenerated in the other).
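A small sketch checking the bookkeeping above: adding the two part-reactions and cancelling the species that appear on both sides leaves exactly the net disproportionation of hydrogen peroxide. The element and charge counts are written out by hand for this check and are not taken from any chemistry library.

```python
# Sketch: check that the two Bray–Liebhafsky part-reactions written above
# sum to the net reaction 2 H2O2 -> 2 H2O + O2 (iodine/iodate and H+ cancel).
from collections import Counter

# species -> (element counts, charge); values written out by hand for this check
SPECIES = {
    "H2O2": ({"H": 2, "O": 2}, 0),
    "H2O":  ({"H": 2, "O": 1}, 0),
    "O2":   ({"O": 2}, 0),
    "I2":   ({"I": 2}, 0),
    "IO3-": ({"I": 1, "O": 3}, -1),
    "H+":   ({"H": 1}, +1),
}

def balance(side):
    """Total atom counts and charge for a dict of species -> stoichiometric coefficient."""
    atoms, charge = Counter(), 0
    for sp, n in side.items():
        counts, q = SPECIES[sp]
        for element, k in counts.items():
            atoms[element] += n * k
        charge += n * q
    return atoms, charge

# Step 1: 5 H2O2 + I2 -> 2 IO3- + 2 H+ + 4 H2O
# Step 2: 5 H2O2 + 2 IO3- + 2 H+ -> I2 + 5 O2 + 6 H2O
lhs = {"H2O2": 10, "I2": 1, "IO3-": 2, "H+": 2}
rhs = {"IO3-": 2, "H+": 2, "H2O": 10, "I2": 1, "O2": 5}
assert balance(lhs) == balance(rhs)      # the two steps together are balanced

# Cancel the species appearing on both sides: what survives is the net reaction.
net_lhs = {"H2O2": 10}
net_rhs = {"H2O": 10, "O2": 5}           # i.e. 2 H2O2 -> 2 H2O + O2 after dividing by 5
assert balance(net_lhs) == balance(net_rhs)
print("net reaction balanced:", net_lhs, "->", net_rhs)
```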
References
Further reading
Name reactions | Bray–Liebhafsky reaction | [
"Chemistry"
] | 314 | [
"Name reactions"
] |
7,022,709 | https://en.wikipedia.org/wiki/Tanning%20lamp | Tanning lamps (sometimes called tanning bulbs in the United States or tanning tubes in Europe) are the part of a tanning bed, booth or other tanning device which produces ultraviolet light used for indoor tanning. There are hundreds of different kinds of tanning lamps most of which can be classified in two basic groups: low pressure and high pressure. Within the industry, it is common to call high-pressure units "bulbs" and low-pressure units "lamps", although there are many exceptions and not everyone follows this example. This is likely due to the size of the unit, rather than the type. Both types require an oxygen free environment inside the lamp.
Fluorescent tanning lamps require an electrical ballast to limit the amount of current going through the lamp. While the resistance of an incandescent lamp filament inherently limits the current inside the lamp, tanning lamps do not and instead have negative resistance. They are plasma devices, like a neon sign, and will pass as much current as the external circuit will provide, even to the point of self-destruction. Thus a ballast is needed to regulate the current through them.
Tanning lamps are installed in a tanning bed, tanning booth, tanning canopy or free standing single bulb tanning unit. The quality of the tan (or how similar it is to a tan from the natural sun) depends upon the spectrum of the light that is generated from the lamps.
High-pressure bulbs
High-pressure bulbs are 3 to 5 inches long and typically powered by a ballast with 250 to 2,000 watts. The most common is the 400 watt variety that is used as an added face tanner in the traditional tanning bed. High-pressure lamps use quartz glass, and as such do not filter UVC. Because UVC can be deadly, a special dichroic filter glass (usually purple) is required that will filter out the UVC and UVB. The goal with high-pressure tanning bulbs is to produce a high amount of UVA only. Unfiltered light from a high-pressure lamp is rich in UVC, which is used in germicidal lamps and for water purification but damages human skin.
The contents of a high-pressure lamp are inert gas (such as argon) and mercury. There are no phosphors used, and the mercury is clearly visible if it is not in a gaseous state. During installation, even a small amount of oil from fingertips can cause the quartz envelope to fail in operation. Most commercial replacement bulbs come with a special pocket wipe, usually containing alcohol, to clean the bulb in case it is accidentally touched during installation. Because the bulb contains mercury, great care should be used if a bulb is broken, to prevent accidental contact or vapor exposure.
Low-pressure lamps
Like all fluorescent lamps, low-pressure tanning lamps have a ballast to start the lamps and limit the flow of current. The plasma of excited mercury atoms inside the lamp emits ultraviolet light directly. The lamps are coated on the inside with special phosphors. Unlike high-pressure lamps, the glass that is used in low-pressure lamps filters out all UVC. Once the plasma is fully formed, the plasma strips away the outer electrons from the mercury; when these electrons return to a lower energy level, visible and ultraviolet light is emitted. Some of the short-wave ultraviolet excites the phosphors, which then emits photons in the proper spectrum for tanning.
Ballasts
In the older style (but still most popular) "choke ballast", each end of the lamp has its own cathode and anode; however, once the lamp has started, the plasma flows from one end of the lamp to the other, with each end acting as a single cathode or anode. The starter is a plasma switch itself, and temporarily connects the cathode on one end of the lamp to the anode on the other end of the lamp, causing the lamp ends to heat up quickly, or "preheat". Many F71 lamps are still called "pre-heat bi-pin" for this reason.
Newer electronic systems work differently and always treat one end of the lamp as a cathode and one end as an anode. Whereas the choke style always works at 230 V AC at 60 Hz (220–240 V AC/50 Hz in Europe), newer electronics work very differently. This includes magnetic, pure solid state, and high frequency ballasts. These new ballasts operate at voltages up to 600 V AC, and at 20,000 Hz, with some high frequency ballasts operating as high as 100,000 Hz or higher. This allows the ballast to energize the lamp with more than raw power, and instead operates using a combination of electrical force and induction. This allows a 100 watt lamp to fully light with as little as 65 watts.
The disadvantage of the newer electronics is price. It can cost 3 to 5 times more per lamp to use electronic ballasts than traditional choke ballasts, which is why choke ballasts are still used in the majority of new tanning systems. Another disadvantage of the older style choke ballast is they are designed for European electricity, and require incoming voltage in the range of 220 V AC and 230 V AC. Most US homes have 110 V service and businesses use 208 V three-phase service that requires these beds to use a buck-boost transformer in order to receive the proper voltage. Too low a voltage will result in the lamp starter not letting the lamp ignite (or at the least, very slowly) whereas too high a voltage can lead to premature failure in the starters and lamps. The average cost of these transformers is $200 to $250. While this makes the newer electronics cost about the same for the typical tanning bed, buckboost transformers are usually sold separately, so the total cost is not always obvious to the consumer at first glance.
Low-pressure lamp sizes and powers
Tanning lamps come in several configurations which are considered standards within the industry, including:
F59 and F60 - 80 watt lamps (shorter lamps to go in front of face tanning "buckets")
F71, F72, F73, F74 - Typically 100 W, although some F74 are 120 W.
F71 - 160 W versions of the F71 for use in more expensive salon equipment, but a special ballast is required.
F71 - 200 W versions of the F71 for use in more expensive salon equipment, but a special ballast is required.
F59 - 140 W versions, shorter versions of the above lamp
F79, 2M - 200 W (2 metres) used only in very expensive tanning booths and beds.
The power listing for lamps is not absolute, as you can drive a lamp with less power than listed if you use certain solid state ballasts. You can also use a 160 W lamp with a 100 W ballast, although there are no advantages to this. Using a 100 W lamp with a 160 W ballast, however, can lead to quick failure as the cathode/anode of some 100 W lamps can not take the extra power. The lamps will operate at any frequency (50 Hz to 120,000 Hz or higher). However, the ballasts and other electrical systems on the tanning bed are sensitive to frequency.
Lamp life
Like all fluorescent lamps, the low-pressure lamps will burn for a long period of time. They will, however, lose their ability to produce a reasonable amount of UV after a short while. Typical lifespans for low-pressure lamps are from 300 to 1,600 hours of actual use although they may light and produce very little UV for as much as 5000 hours. High-pressure lamps range from 300 to 1,000 hours, and should be replaced when they have reached their maximum life to prevent any possible damage to the ballast, although this is very rare. Lamp manufacturers generally rate the "life" of the lamp to be the period of time that the lamp will continue to emit at least 70% to 80% of the initial UV.
Lamp types
In addition to standard lamps, there are also lamps with reflectors built inside. This is accomplished by taking the raw glass before any phosphor is used and pouring a white, opaque, highly reflective chemical on the inside of the lamp. This is done only on a certain percentage of the lamp, such as 210 degrees or 180 degrees, so that the remaining lamp is NOT coated. After this coating has dried or has been treated to ensure it will stick to the surface of the glass (using heat, for example) the lamp is coated on the inside with the phosphor blend as usual. Anywhere from 3 to 5 different chemicals are typically used in a blend, with the actual proportions and chemicals closely guarded as trade secrets.
The 100 watt version of a reflector lamp is typically called a RUVA (Reflector UVA) or less commonly HO-R (High Output - Reflector). The 160 watt versions are called VHO-R (Very High Output - Reflector). The name "VHR" describes 160 W reflector lamps and is a registered trademark of Cosmedico, Ltd. There are many other variations of low-pressure tanning lamps including 26 watt, 80 watt, and 200 watt to name a few.
UV output rating
This is one of the most confusing aspects of tanning lamps in North America, as lamps in the US are not rated for their total output, but rather their ratio of UVA to UVB. Most people could be led to believe that a 6.5% lamp is stronger than a 5% lamp, while both lamps might have the same total UV output (or the 5% could even be stronger across the spectrum).
As such, UVA vs UVB rating on lamps only tells you the relative amount of UV, making a 5% lamp really a lamp whose UV spectrum is 5% UVB and 95% UVA. There are no accepted published numbers for rating the overall power for lamps, except the TE (time exposure), which is almost as useless for making comparisons.
The TE isn't generally published, although it is usually available from the lamp manufacturer on request. Because the U.S. Food and Drug Administration (FDA) biases tests against UVB, the TE may make a weaker lamp appear stronger by having more UVB. Furthermore, although tanning beds are rated with exposure times, tanning lamps are not because beds can vary widely as to how a given lamp affects the user, making it difficult or impossible to compare the total UV output of different low-pressure lamps.
The UVB to UVA ratio percentage is considered a technologically outdated form of measuring a lamp's overall UV output, and Wolff "Metric" now lists actual UVA, UVB and total UV flux powers. This is the best way of measuring both low-pressure and high-pressure lamps. When purchasing a lamp from any manufacturer, always ask for the actual flux power output, as UVA to UVB ratios tell very little.
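A toy comparison with hypothetical numbers makes the point: the lamp with the lower percentage rating can emit more UVA and more UVB than the one with the higher rating, so the ratio alone cannot rank lamps. All figures below are invented for illustration.

```python
# Illustration (hypothetical numbers): a percentage rating alone cannot rank lamps.
def uv_split(total_uv_flux_mw, uvb_fraction):
    """Return (UVA, UVB) flux in mW given total UV flux and the UVB share."""
    uvb = total_uv_flux_mw * uvb_fraction
    uva = total_uv_flux_mw - uvb
    return uva, uvb

lamps = {
    "Lamp A (rated 6.5%)": uv_split(20_000, 0.065),   # weaker tube overall
    "Lamp B (rated 5.0%)": uv_split(30_000, 0.050),   # stronger tube overall
}
for name, (uva, uvb) in lamps.items():
    print(f"{name}: UVA = {uva:,.0f} mW, UVB = {uvb:,.0f} mW, total = {uva + uvb:,.0f} mW")
# Lamp B has the lower percentage yet more UVA *and* more UVB than Lamp A,
# which is why actual flux figures, not ratios, are needed for comparison.
```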
Lamp maintenance and replacement
Tanning lamps are virtually maintenance free, but must be kept clean, as UV can easily be blocked by dust drawn in from the cooling system (or from improperly cleaned acrylics shields). Most manufacturers recommend wiping the lamps and other internals clean every 200 to 300 hours of operation. Most salons will replace their tanning lamps once per year, while home tanning bed owners can expect 3 to 5 years of use. This depends solely on the number of hours the lamps have been used and the rated life of the lamp, which varies from model to model.
High-pressure lamps must be handled very carefully, as any oil from the skin that is left on the bulb can cause the bulb to overheat and lead to early failure. The filter glass must also be handled carefully as it is extremely fragile by its nature. These should only be cleaned with special chemicals designed for this purpose. Operating any tanning equipment that uses high-pressure bulbs without the special filter glass is extremely dangerous, and illegal in a salon, due to the high amount of UVC generated in the bulbs.
The amount of UV that is generated from a low-pressure lamp is highly dependent on the temperature in the tanning unit. As a rule, tanning lamps produce the highest amount of ultraviolet light when this temperature lies within a particular range. As the temperature moves away from this range, the amount of UV produced is reduced. Cooling systems for tanning equipment are usually designed to maintain a range of temperature instead of providing maximum airflow for this reason. Higher temperatures will also reduce the expected life of the tanning lamp. This is why it is important to perform regular maintenance, including checking cooling fans and ensuring that vent holes are not blocked. The owner's manual for the tanning equipment is the best source for maintenance schedules and methods.
Other uses
In addition to their use in tanning, tanning lamps are used for the treatment of psoriasis, eczema, and vitiligo.
Mercury hazards
All fluorescent lamps contain mercury, and at this time, no suitable replacement has been found. Many US states have banned disposal of lamps containing mercury, and have established regulations requiring that lamps containing mercury are identified as such. This has not caused problems for manufacturers, however, as lamps are not produced locally, and often not in the US. There have been several efforts to label all lamps that contain mercury with a universally accepted symbol, Hg. Old lamps should be handled as would be any hazardous material, and persons should take special precautions when dealing with broken lamps to avoid contact with mercury. This is particularly true for pregnant women. These laws and guidelines are not unique to tanning lamps, and apply to all fluorescent lamps, other lamps that contain mercury, as well as other products that contain mercury with the exception of pharmaceuticals. Proper disposal or recycling will prevent the mercury content of the lamps from entering the environment.
See also
Excimer lamp
Suntanning
Vitamin D
Indoor tanning
Footnotes
External links
Title 12 CFR 1040.20 US FDA regulations that cover tanning lamps and devices
UV index and UV dose
Tanning (beauty treatment)
Gas discharge lamps | Tanning lamp | [
"Chemistry"
] | 2,913 | [
"Tanning (beauty treatment)",
"Ultraviolet radiation"
] |
7,022,785 | https://en.wikipedia.org/wiki/Gomberg%E2%80%93Bachmann%20reaction | The Gomberg–Bachmann reaction, named for the Russian-American chemist Moses Gomberg and the American chemist Werner Emmanuel Bachmann, is an aryl-aryl coupling reaction via a diazonium salt.
The arene compound (here benzene) is reacted with a diazonium salt in the presence of a base to provide the biaryl through an intermediate aryl radical. For example, p-bromobiphenyl may be prepared from 4-bromoaniline and benzene:
BrC6H4NH2 + C6H6 → BrC6H4−C6H5
The reaction offers a wide scope for both diazonium component and arene component but yields are generally low following the original procedure (less than 40%), given the many side-reactions of diazonium salts. Several improvements have been suggested. One possibility is to employ diazonium tetrafluoroborates in arene solvent together with a phase-transfer catalyst, another is to use 1-aryl-3,3-dialkyltriazenes.
Pschorr reaction
One intramolecular variation which gives better results is the Pschorr cyclization:
The group Z can be CH2, CH2CH2, NH and CO (to fluorenone) to name just a few.
See also
Graebe–Ullmann synthesis
Meerwein arylation
Sandmeyer reaction
References
Substitution reactions
Name reactions | Gomberg–Bachmann reaction | [
"Chemistry"
] | 303 | [
"Coupling reactions",
"Name reactions",
"Organic reactions"
] |
7,022,979 | https://en.wikipedia.org/wiki/Bayesian%20inference%20in%20phylogeny | Bayesian inference of phylogeny combines the information in the prior and in the data likelihood to create the so-called posterior probability of trees, which is the probability that the tree is correct given the data, the prior and the likelihood model. Bayesian inference was introduced into molecular phylogenetics in the 1990s by three independent groups: Bruce Rannala and Ziheng Yang in Berkeley, Bob Mau in Madison, and Shuying Li in University of Iowa, the last two being PhD students at the time. The approach has become very popular since the release of the MrBayes software in 2001, and is now one of the most popular methods in molecular phylogenetics.
Bayesian inference of phylogeny background and bases
Bayesian inference refers to a probabilistic method developed by Reverend Thomas Bayes based on Bayes' theorem. Published posthumously in 1763 it was the first expression of inverse probability and the basis of Bayesian inference. Independently, unaware of Bayes' work, Pierre-Simon Laplace developed Bayes' theorem in 1774.
Bayesian inference or the inverse probability method was the standard approach in statistical thinking until the early 1900s before RA Fisher developed what's now known as the classical/frequentist/Fisherian inference. Computational difficulties and philosophical objections had prevented the widespread adoption of the Bayesian approach until the 1990s, when Markov Chain Monte Carlo (MCMC) algorithms revolutionized Bayesian computation.
The Bayesian approach to phylogenetic reconstruction combines the prior probability of a tree P(A) with the likelihood of the data P(B|A) to produce a posterior probability distribution on trees P(A|B). The posterior probability of a tree will be the probability that the tree is correct, given the prior, the data, and the correctness of the likelihood model.
MCMC methods can be described in three steps: first, using a stochastic mechanism, a new state for the Markov chain is proposed. Secondly, the acceptance probability of this new state is calculated. Thirdly, a random number is drawn uniformly between 0 and 1; if it is less than the acceptance probability, the new state is accepted and the state of the chain is updated. This process is run thousands or millions of times. The number of times a single tree is visited during the course of the chain is an approximation of its posterior probability. Some of the most common algorithms used in MCMC methods include the Metropolis–Hastings algorithm, the Metropolis-coupled MCMC (MC³) and the LOCAL algorithm of Larget and Simon.
Metropolis–Hastings algorithm
One of the most common MCMC methods used is the Metropolis–Hastings algorithm, a modified version of the original Metropolis algorithm. It is a widely used method to sample randomly from complicated and multi-dimensional distribution probabilities. The Metropolis algorithm is described in the following steps:
An initial tree, Ti, is randomly selected.
A neighbour tree, Tj, is selected from the collection of trees.
The ratio, R, of the probabilities (or probability density functions) of Tj and Ti is computed as follows: R = f(Tj)/f(Ti)
If R ≥ 1, Tj is accepted as the current tree.
If R < 1, Tj is accepted as the current tree with probability R, otherwise Ti is kept.
At this point the process is repeated from Step 2 N times.
The algorithm keeps running until it reaches an equilibrium distribution. It also assumes that the probability of proposing a new tree Tj when we are at the old tree state Ti, is the same probability of proposing Ti when we are at Tj. When this is not the case Hastings corrections are applied.
The aim of Metropolis-Hastings algorithm is to produce a collection of states with a determined distribution until the Markov process reaches a stationary distribution. The algorithm has two components:
A potential transition from one state to another (i → j) using a transition probability function qi,j
Movement of the chain to state j with probability αi,j and remains in i with probability 1 – αi,j.
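A minimal sketch of the accept/reject mechanics described above. The functions posterior_density and propose_neighbour are placeholders for whatever tree representation and proposal a real phylogenetic sampler would use; the toy three-state example at the end only checks that visit frequencies approach the target probabilities.

```python
# Minimal sketch of the Metropolis–Hastings step described above.
# `posterior_density` returns the unnormalised posterior of a state (tree) and
# `propose_neighbour` returns a candidate state plus its Hastings correction
# q(current | candidate) / q(candidate | current); both are placeholders here.
import random

def metropolis_hastings(initial_state, posterior_density, propose_neighbour, n_steps):
    current = initial_state
    f_current = posterior_density(current)
    samples = []
    for _ in range(n_steps):
        candidate, hastings = propose_neighbour(current)
        f_candidate = posterior_density(candidate)
        r = (f_candidate / f_current) * hastings       # R = f(Tj)/f(Ti), with Hastings correction
        if r >= 1 or random.random() < r:
            current, f_current = candidate, f_candidate
        samples.append(current)
    return samples                                     # visit frequency approximates posterior probability

# Toy usage on a three-state space standing in for three candidate trees.
weights = {"T1": 1.0, "T2": 2.0, "T3": 7.0}
propose = lambda t: (random.choice([s for s in weights if s != t]), 1.0)   # symmetric proposal
draws = metropolis_hastings("T1", weights.get, propose, 50_000)
print({t: round(draws.count(t) / len(draws), 2) for t in weights})         # roughly 0.1, 0.2, 0.7
```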
Metropolis-coupled MCMC
The Metropolis-coupled MCMC algorithm (MC³) has been proposed to address a practical concern: the difficulty of the Markov chain in moving across peaks when the target distribution has multiple local peaks separated by low valleys, as is known to occur in tree space. This is the case during heuristic tree search under maximum parsimony (MP), maximum likelihood (ML), and minimum evolution (ME) criteria, and the same can be expected for stochastic tree search using MCMC. This problem will result in samples not approximating the posterior density correctly. MC³ improves the mixing of Markov chains in the presence of multiple local peaks in the posterior density. It runs multiple (m) chains in parallel, each for n iterations and with different stationary distributions π_j(θ), j = 1, 2, …, m, where the first one, π_1 = π, is the target density, while π_j, j = 2, 3, …, m, are chosen to improve mixing. For example, one can choose incremental heating of the form:
π_j(θ) ∝ π(θ)^(1/[1 + λ(j − 1)]),   λ > 0,
so that the first chain is the cold chain with the correct target density, while chains j = 2, 3, …, m are heated chains. Note that raising the density π(θ) to the power 1/T with T > 1 has the effect of flattening out the distribution, similar to heating a metal. In such a distribution, it is easier to traverse between peaks (separated by valleys) than in the original distribution. After each iteration, a swap of states between two randomly chosen chains is proposed through a Metropolis-type step. Let θ^(j) be the current state in chain j, j = 1, 2, …, m. A swap between the states of chains i and j is accepted with probability:
α = min( 1, [π_i(θ^(j)) π_j(θ^(i))] / [π_i(θ^(i)) π_j(θ^(j))] )
At the end of the run, output from only the cold chain is used, while those from the hot chains are discarded. Heuristically, the hot chains will visit the local peaks rather easily, and swapping states between chains will let the cold chain occasionally jump valleys, leading to better mixing. However, if the stationary distributions of the chains being swapped are too different, proposed swaps will seldom be accepted. This is the reason for using several chains which differ only incrementally.
An obvious disadvantage of the algorithm is that m chains are run and only one chain is used for inference. For this reason, MC³ is ideally suited for implementation on parallel machines, since each chain will in general require the same amount of computation per iteration.
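A compact sketch of MC³ with the incremental heating and swap step described above. The target here is a toy bimodal density on the real line purely for illustration; in phylogenetics the state would be a tree plus parameters and log_target its unnormalised log posterior. All numeric settings are illustrative assumptions.

```python
# Sketch of Metropolis-coupled MCMC (MC^3) with incremental heating.
import math
import random

def log_target(x):
    # two well-separated peaks, hard for a single cold chain to cross between
    return math.log(math.exp(-0.5 * (x - 5.0) ** 2) + math.exp(-0.5 * (x + 5.0) ** 2))

def accept(log_ratio):
    return log_ratio >= 0 or random.random() < math.exp(log_ratio)

def mc3(n_iter=20_000, m=4, lam=0.5, step=1.0):
    beta = [1.0 / (1.0 + lam * j) for j in range(m)]   # incremental heating; chain 0 is cold
    state = [0.0] * m
    cold = []
    for _ in range(n_iter):
        for j in range(m):                             # within-chain Metropolis update
            proposal = state[j] + random.uniform(-step, step)
            if accept(beta[j] * (log_target(proposal) - log_target(state[j]))):
                state[j] = proposal
        i, j = random.sample(range(m), 2)              # propose swapping two chains' states
        if accept((beta[i] - beta[j]) * (log_target(state[j]) - log_target(state[i]))):
            state[i], state[j] = state[j], state[i]
        cold.append(state[0])                          # only the cold chain is used for inference
    return cold

draws = mc3()
print("fraction of cold-chain samples in the right-hand peak:",
      sum(x > 0 for x in draws) / len(draws))          # should be near 0.5
```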
LOCAL algorithm of Larget and Simon
The LOCAL algorithm offers a computational advantage over previous methods and demonstrates that a Bayesian approach is able to assess uncertainty in a computationally practical way for larger trees. The LOCAL algorithm is an improvement of the GLOBAL algorithm presented in Mau, Newton and Larget (1999) in which all branch lengths are changed in every cycle. The LOCAL algorithm modifies the tree by selecting an internal branch of the tree at random. The nodes at the ends of this branch are each connected to two other branches. One of each pair is chosen at random. Imagine taking these three selected edges and stringing them like a clothesline from left to right, where the direction (left/right) is also selected at random. The two endpoints of the first branch selected will have a sub-tree hanging like a piece of clothing strung to the line. The algorithm proceeds by multiplying the three selected branches by a common random amount, akin to stretching or shrinking the clothesline. Finally the leftmost of the two hanging sub-trees is disconnected and reattached to the clothesline at a location selected uniformly at random. This would be the candidate tree.
Suppose we began by selecting the internal branch with length t that separates a pair of taxa from the rest of the tree. Suppose also that we have (randomly) selected branches with lengths a and b from each side, and that we oriented these branches. Let m = a + t + b be the current length of the clothesline. We select the new length to be m* = m·exp(λ(U − 0.5)), where U is a uniform random variable on (0, 1) and λ is a tuning parameter. Then for the LOCAL algorithm, the acceptance probability can be computed as the ratio of the unnormalized posterior densities of the candidate and current trees multiplied by the proposal ratio (m*/m)³, capped at 1.
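The move itself can be sketched in a few lines. Everything below is schematic: the branch lengths a, t, b are the three edges on the "clothesline", lam is a tuning constant, the proposal ratio (m_new/m_old)**3 is the commonly cited Hastings factor for this move and is taken here as an assumption rather than a statement of the original paper, and the topology bookkeeping for when the re-attached node passes the other internal node is omitted.

```python
# Schematic sketch of the LOCAL proposal only (no posterior evaluation).
import math
import random

def local_proposal(a, t, b, lam=0.2):
    """a and b flank the chosen internal branch of length t on the 'clothesline'."""
    m_old = a + t + b
    r = math.exp(lam * (random.random() - 0.5))    # stretch or shrink the whole line
    m_new = r * m_old
    fixed_node = (a + t) * r                       # one internal node keeps its (rescaled) position
    moved_node = random.uniform(0.0, m_new)        # the other is re-attached uniformly at random
    x, y = sorted((moved_node, fixed_node))
    new_lengths = (x, y - x, m_new - y)            # the three new branch lengths
    proposal_ratio = (m_new / m_old) ** 3          # assumed Hastings factor (see lead-in)
    return new_lengths, proposal_ratio

# A full sampler would accept the candidate with probability
# min(1, posterior_ratio * proposal_ratio), as in the Metropolis–Hastings sketch above.
print(local_proposal(0.1, 0.05, 0.2))
```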
Assessing convergence
To estimate a branch length d of a 2-taxon tree under JC, in which n0 sites are unvaried and n1 are variable, assume an exponential prior distribution with rate μ. The prior density is f(d) = μ e^(−μd). The probabilities of the two possible site-pattern classes are:
p0(d) = 1/4 + (3/4) e^(−4d/3)
for unvaried sites, and
p1(d) = 3/4 − (3/4) e^(−4d/3)
for variable sites. Thus the unnormalized posterior distribution is:
h(d) = f(d) · p0(d)^n0 · p1(d)^n1
or, alternately,
h(d) = μ e^(−μd) · (1/4 + (3/4) e^(−4d/3))^n0 · (3/4 − (3/4) e^(−4d/3))^n1
Update the branch length by choosing a new value d* uniformly at random from a window of half-width w centered at the current value:
d* = |d + U|,
where U is uniformly distributed between −w and w (the absolute value reflects proposals that fall below zero back into the allowed range). The acceptance probability is:
min(1, h(d*) / h(d)).
Example: with the site counts and prior rate fixed, results can be compared for two different values of the window half-width w, in each case beginning with the same initial branch length and updating it a large number of times.
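A short sketch of this single-parameter sampler, using the formulas above. The site counts, prior rate, initial value and window half-width below are illustrative assumptions, not the article's original example values.

```python
# Sketch of the JC branch-length MCMC described above (illustrative values).
import math
import random

n0, n1 = 90, 10          # unvaried / variable site counts (assumed)
mu = 10.0                # exponential prior rate (assumed)

def log_unnormalised_posterior(d):
    if d <= 0:
        return float("-inf")
    p_same = 0.25 + 0.75 * math.exp(-4.0 * d / 3.0)   # JC: site unvaried
    p_diff = 0.75 - 0.75 * math.exp(-4.0 * d / 3.0)   # JC: site variable
    return -mu * d + n0 * math.log(p_same) + n1 * math.log(p_diff)

def run_chain(d0=0.5, w=0.1, n_iter=100_000):
    d, log_h = d0, log_unnormalised_posterior(d0)
    out = []
    for _ in range(n_iter):
        d_new = abs(d + random.uniform(-w, w))         # sliding window, reflected at zero
        log_h_new = log_unnormalised_posterior(d_new)
        if log_h_new >= log_h or random.random() < math.exp(log_h_new - log_h):
            d, log_h = d_new, log_h_new
        out.append(d)
    return out

samples = run_chain()
print("posterior mean branch length is roughly", sum(samples) / len(samples))
```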
Maximum parsimony and maximum likelihood
There are many approaches to reconstructing phylogenetic trees, each with advantages and disadvantages, and there is no straightforward answer to “what is the best method?”. Maximum parsimony (MP) and maximum likelihood (ML) are traditional methods widely used for the estimation of phylogenies and both use character information directly, as Bayesian methods do.
Maximum Parsimony recovers one or more optimal trees based on a matrix of discrete characters for a certain group of taxa and it does not require a model of evolutionary change. MP gives the most simple explanation for a given set of data, reconstructing a phylogenetic tree that includes as few changes across the sequences as possible. The support of the tree branches is represented by bootstrap percentage. For the same reason that it has been widely used, its simplicity, MP has also received criticism and has been pushed into the background by ML and Bayesian methods. MP presents several problems and limitations. As shown by Felsenstein (1978), MP might be statistically inconsistent, meaning that as more and more data (e.g. sequence length) is accumulated, results can converge on an incorrect tree and lead to long branch attraction, a phylogenetic phenomenon where taxa with long branches (numerous character state changes) tend to appear more closely related in the phylogeny than they really are. For morphological data, recent simulation studies suggest that parsimony may be less accurate than trees built using Bayesian approaches, potentially due to overprecision, although this has been disputed. Studies using novel simulation methods have demonstrated that differences between inference methods result from the search strategy and consensus method employed, rather than the optimization used.
As in maximum parsimony, maximum likelihood will evaluate alternative trees. However it considers the probability of each tree explaining the given data based on a model of evolution. In this case, the tree with the highest probability of explaining the data is chosen over the other ones. In other words, it compares how different trees predict the observed data. The introduction of a model of evolution in ML analyses presents an advantage over MP as the probability of nucleotide substitutions and rates of these substitutions are taken into account, explaining the phylogenetic relationships of taxa in a more realistic way. An important consideration of this method is the branch length, which parsimony ignores, with changes being more likely to happen along long branches than short ones. This approach might eliminate long branch attraction and explain the greater consistency of ML over MP. Although considered by many to be the best approach to inferring phylogenies from a theoretical point of view, ML is computationally intensive and it is almost impossible to explore all trees as there are too many. Bayesian inference also incorporates a model of evolution and the main advantages over MP and ML are that it is computationally more efficient than traditional methods, it quantifies and addresses the source of uncertainty and is able to incorporate complex models of evolution.
Pitfalls and controversies
Bootstrap values vs posterior probabilities. It has been observed that bootstrap support values, calculated under parsimony or maximum likelihood, tend to be lower than the posterior probabilities obtained by Bayesian inference. This leads to a number of questions such as: Do posterior probabilities lead to overconfidence in the results? Are bootstrap values more robust than posterior probabilities? One fact underlying this controversy is that all data are used during Bayesian analysis and the calculation of posterior probabilities, while the nature of bootstrapping means that most bootstrap replicates will be missing some of the original data. As a result, bipartitions (branches) supported by relatively few characters in the dataset may receive very high posterior probabilities but moderate or even low bootstrap support, as many of the bootstrap replicates don't contain enough of the critical characters to retrieve the bipartition.
Controversy of using prior probabilities. Using prior probabilities for Bayesian analysis has been seen by many as an advantage as it provides a way of incorporating information from sources other than the data being analyzed. However, when such external information is lacking, one is forced to use a prior even if it is impossible to use a statistical distribution to represent total ignorance. It is also a concern that the Bayesian posterior probabilities may reflect subjective opinions when the prior is arbitrary and subjective.
Model choice. The results of the Bayesian analysis of a phylogeny are directly correlated to the model of evolution chosen so it is important to choose a model that fits the observed data, otherwise inferences in the phylogeny will be erroneous. Many scientists have raised questions about the interpretation of Bayesian inference when the model is unknown or incorrect. For example, an oversimplified model might give higher posterior probabilities.
MrBayes software
MrBayes is a free software tool that performs Bayesian inference of phylogeny. It was originally written by John P. Huelsenbeck and Frederik Ronquist in 2001. As Bayesian methods increased in popularity, MrBayes became one of the software of choice for many molecular phylogeneticists. It is offered for Macintosh, Windows, and UNIX operating systems and it has a command-line interface. The program uses the standard MCMC algorithm as well as the Metropolis coupled MCMC variant. MrBayes reads aligned matrices of sequences (DNA or amino acids) in the standard NEXUS format.
MrBayes uses MCMC to approximate the posterior probabilities of trees. The user can change assumptions of the substitution model, priors and the details of the MC³ analysis. It also allows the user to remove and add taxa and characters to the analysis. The program includes, among several nucleotide models, the most standard model of DNA substitution, the 4x4 also called JC69, which assumes that changes across nucleotides occur with equal probability. It also implements a number of 20x20 models of amino acid substitution, and codon models of DNA substitution. It offers different methods for relaxing the assumption of equal substitutions rates across nucleotide sites. MrBayes is also able to infer ancestral states accommodating uncertainty to the phylogenetic tree and model parameters.
MrBayes 3 was a completely reorganized and restructured version of the original MrBayes. The main novelty was the ability of the software to accommodate heterogeneity of data sets. This new framework allows the user to mix models and take advantage of the efficiency of Bayesian MCMC analysis when dealing with different types of data (e.g. protein, nucleotide, and morphological). It uses the Metropolis-Coupling MCMC by default.
MrBayes 3.2 was released in 2012. This version allows the users to run multiple analyses in parallel. It also provides faster likelihood calculations and allows these calculations to be delegated to graphics processing units (GPUs). Version 3.2 provides wider output options compatible with FigTree and other tree viewers.
List of phylogenetics software
This table includes some of the most common phylogenetic software used for inferring phylogenies under a Bayesian framework. Some of them do not use exclusively Bayesian methods.
Applications
Bayesian Inference has extensively been used by molecular phylogeneticists for a wide number of applications. Some of these include:
Inference of phylogenies.
Inference and evaluation of uncertainty of phylogenies.
Inference of ancestral character state evolution.
Inference of ancestral areas.
Molecular dating analysis.
Modelling dynamics of species diversification and extinction.
Elucidating patterns in pathogen dispersal.
Inference of phenotypic trait evolution.
References
External links
MrBayes official website
BEAST official website
Computational phylogenetics
Phylogeny | Bayesian inference in phylogeny | [
"Biology"
] | 3,307 | [
"Bioinformatics",
"Phylogenetics",
"Computational phylogenetics",
"Genetics techniques"
] |
7,023,098 | https://en.wikipedia.org/wiki/ER%20oxidoreductin | ER oxidoreductin 1 (Ero1) is an oxidoreductase enzyme that catalyses the formation and isomerization of protein disulfide bonds in the endoplasmic reticulum (ER) of eukaryotes. ER Oxidoreductin 1 (Ero1) is a conserved, luminal, glycoprotein that is tightly associated with the ER membrane, and is essential for the oxidation of protein dithiols. Since disulfide bond formation is an oxidative process, the major pathway of its catalysis has evolved to utilise oxidoreductases, which become reduced during the thiol-disulfide exchange reactions that oxidise the cysteine thiol groups of nascent polypeptides. Ero1 is required for the introduction of oxidising equivalents into the ER and their direct transfer to protein disulfide isomerase (PDI), thereby ensuring the correct folding and assembly of proteins that contain disulfide bonds in their native state.
Ero1 exists in two isoforms: Ero1-α and Ero1-β. Ero1-α is mainly induced by hypoxia (HIF-1), whereas Ero1-β is mainly induced by the unfolded protein response (UPR).
During endoplasmic reticulum stress (such as occurs in beta cells of the pancreas or in macrophages causing atherosclerosis), CHOP can induce activation of Ero1, causing calcium release from the endoplasmic reticulum into the cytoplasm, resulting in apoptosis.
Homologues of the Saccharomyces cerevisiae Ero1 proteins have been found in all eukaryotic organisms examined, and contain seven cysteine residues that are absolutely conserved, including three that form the sequence Cys–X–X–Cys–X–X–Cys (where X can be any residue).
The mechanism of thiol–disulfide exchange between oxidoreductases
The mechanism of thiol–disulfide exchange between oxidoreductases is understood to begin with the nucleophilic attack on the sulfur atoms of a disulfide bond in the oxidised partner, by a thiolate anion derived from a reactive cysteine in a reduced partner. This generates mixed disulfide intermediates, and is followed by a second, this time intramolecular, nucleophilic attack by the remaining thiolate anion in the formerly reduced partner, to liberate both oxidoreductases. The balance of evidence discussed thus far supports a model in which oxidising equivalents are sequentially transferred from Ero1 via a thiol–disulfide exchange reaction to PDI, with PDI then undergoing a thiol–disulfide exchange with the nascent polypeptide, thereby enabling the formation of disulfide bonds within the nascent polypeptide.
References
Biomolecules
Enzymes | ER oxidoreductin | [
"Chemistry",
"Biology"
] | 646 | [
"Natural products",
"Organic compounds",
"Biomolecules",
"Structural biology",
"Biochemistry",
"Molecular biology"
] |
7,023,290 | https://en.wikipedia.org/wiki/Mark%20Barr | James Mark McGinnis Barr (18 May 187115 December 1950) was an electrical engineer, physicist, inventor, and polymath known for proposing the standard notation for the golden ratio. Born in America, but with English citizenship, Barr lived in both London and New York City at different times of his life.
Though remembered primarily for his contributions to abstract mathematics, Barr put much of his efforts over the years into the design of machines, including calculating machines. He won a gold medal at the 1900 Paris Exposition Universelle for an extremely accurate engraving machine.
Life
Barr was born in Pennsylvania, the son of Charles B. Barr and Ann M'Ginnis.
He was educated in London, then worked for the Westinghouse Electric Company in Pittsburgh from 1887 to 1890. He started there as a draughtsman before becoming a laboratory assistant, and later an erection engineer. For two years in the early 1890s, he worked in New York City at the journal Electrical World as an assistant editor, at the same time studying chemistry at the New York City College of Technology, and by 1900, he had worked with both Nikola Tesla and Mihajlo Pupin in New York. However, he was known among acquaintances for his low opinion of Thomas Edison. Returning to London in 1892, he studied physics and electrical engineering at the City and Guilds of London Technical College for three years.
From 1896 to 1900, he worked for Linotype in England, and from 1900 to 1904, he worked as a technical advisor to Trevor Williams in London.
Beginning in 1902, he was elected to the Small Screw Gauge Committee of the British Association for the Advancement of Science. The committee was set up to put into practice the system of British Association screw threads, which had been settled on but not implemented in 1884. More broadly, it was tasked with considering "the whole question of standardisation of engineering materials, tools, and machinery".
In January 1916, Barr was given charge of a school for machinists in London, intended to supply workers to a nearby factory for machine guns for the war effort; the school closed that June, as the factory was unable to take on the new workers at the expected rate.
In the early 1920s, Barr was a frequent visitor to Alfred North Whitehead in Chelsea, London, but by 1924, he had moved back to New York.
Hamlin Garland writes that, "after thirty years in London", Barr returned to America "in order that his young sons might become citizens". Garland quotes Barr as saying that, for him, "to abandon America would be an act of treason".
In 1924, Harvard University invited Whitehead to join its faculty, with the financial backing of Henry Osborn Taylor. Barr, a friend of both Whitehead and Taylor, served as an intermediary in the preparations for this move.
Whitehead, in subsequent letters to his son North in 1924 and 1925, writes of Barr's struggles to sell the design for one of his calculating machines to an unnamed large American company. In the 1925 letter, Whitehead writes that Barr's son Stephen was staying with him while Barr and his wife Mabel visited Elyria, Ohio, to oversee a test build of the device. However, by 1927, Barr and Whitehead had fallen out, Whitehead writing to North (amid much complaint about Barr's character) that he was "very doubtful whether he will keep his post at the business school here";
Barr was a "research assistant in finance" at Harvard Business School around this time.
Barr joined the Century Association in 1925, and in his later life it "became practically his home". He died in The Bronx in 1950.
Contributions
Machining
At Linotype, Barr improved punch-cutting machines by substituting ball bearings for oil lubrication to achieve a more precise fit, and using tractrix-shaped sleeves to distribute wear uniformly.
In an 1896 publication in The Electrical Review on calculating the dimensions of a ball race, Barr credits the bicycle industry for stimulating development of the perfectly spherical steel balls needed in this application.
The punch-cutters he worked on were, essentially, pantographs that could engrave copies of given shapes (the outlines of letters or characters) as three-dimensional objects at a much smaller scale (the punches used to shape each letter in hot metal typesetting).
Between 1900 and 1902, with Linotype managers Arthur Pollen and William Henry Lock, Barr also designed pantographs operating on a very different scale, calculating aim for naval artillery based on the positions, headings, and speeds of the firing ship and its target.
Golden ratio
Barr was a friend of William Schooling, and worked with him in exploiting the properties of the golden ratio to develop arithmetic algorithms suitable for mechanical calculators.
According to Theodore Andrea Cook, Barr gave the golden ratio the name of phi (ϕ). Cook wrote that Barr chose ϕ by analogy to the use of π for the ratio of a circle's circumference to its diameter, and because it is the first Greek letter in the name of the ancient sculptor Phidias. Although Martin Gardner later wrote that Phidias was chosen because he was "believed to have used the golden proportion frequently in his sculpture", Barr himself denied this, writing in his paper "Parameters of beauty" that he doubted Phidias used the golden ratio. Schooling communicated some of his discoveries with Barr to Cook after seeing an article by Cook about phyllotaxis, the arrangement of leaves on a plant stem, which often approximates the golden ratio.
Schooling published his work with Barr later, in 1915, employing the same notation.
Barr also published a related work in The Sketch in around 1913, generalizing the Fibonacci numbers to higher-order recurrences.
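As a small numerical illustration of the two properties mentioned here (not Barr's or Schooling's own algorithm): the ratio of consecutive terms of the Fibonacci recurrence converges to ϕ, and the analogous third-order recurrence converges to its own constant. The seed values and term count below are arbitrary choices.

```python
# Numerical illustration: limiting ratios of Fibonacci-like recurrences.
def recurrence_ratio(order, n_terms=60):
    seq = [0] * (order - 1) + [1]          # 0, ..., 0, 1 as seed values
    for _ in range(n_terms):
        seq.append(sum(seq[-order:]))
    return seq[-1] / seq[-2]

phi = recurrence_ratio(2)                  # about 1.6180339887  (the golden ratio)
tribonacci_constant = recurrence_ratio(3)  # about 1.8392867552
print(phi, tribonacci_constant)
print(phi ** 2 - phi - 1)                  # about 0: phi satisfies x^2 = x + 1
```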
Other inventions and discoveries
Around 1910, Barr built a lighting apparatus for painter William Nicholson, using filters and reflectors to mix different types of light to produce an "artificial reproduction of daylight".
In 1914, as an expert in electricity, he took part in an investigation of psychic phenomena involving Polish medium Stanisława Tomczyk by the Society for Psychical Research; however, the results were inconclusive.
At some point prior to 1916, Barr was a participant in a business venture to make synthetic rubber from turpentine by a bacterial process. However, after much effort in relocating the bacterium after exhausting the original supply (a barrel of vinegar from New Jersey), the process ended up being less cost-effective than natural rubber, and the business failed.
With Edward George Boulenger of the London Zoo, he built a timer-operated electromechanical rat trap.
In preparation for a diving expedition to Haiti by William Beebe and the New York Zoological Society in early 1927, in which he participated as "physicist, master electrician, and philosopher", Barr helped develop an underwater telephone allowing divers to talk to a support boat, and a brass underwater housing for a motion picture camera.
Selected publications
References
Golden ratio
1871 births
1950 deaths
Scientists from Pennsylvania
American electrical engineers
19th-century American inventors
20th-century American inventors
English electrical engineers
English inventors
Engineers from Pennsylvania | Mark Barr | [
"Mathematics"
] | 1,436 | [
"Golden ratio"
] |
7,023,718 | https://en.wikipedia.org/wiki/Albedo%20%28alchemy%29 | In alchemy, albedo, or leucosis, is the second of the four major stages of the Magnum Opus, along with nigredo, citrinitas and rubedo. It is a Latinicized term meaning "whiteness". Following the chaos or massa confusa of the nigredo stage, the alchemist undertakes a purification in albedo, which is literally referred to as ablutio – the washing away of impurities. This phase is concerned with "bringing light and clarity to the prima materia (the First Matter)".
In this process, the subject is divided into two opposing principles to be later coagulated to form a unity of opposites or coincidentia oppositorum during rubedo. Alchemists also applied it to an individual's soul after the first phase is completed, which entailed the decay of matter. In Medieval literature, which developed an intricate system of images and symbols for alchemy, the dove often represented this stage, while the raven symbolized nigredo.
Titus Burckhardt interprets the albedo as the end of the lesser work, corresponding to a spiritualization of the body, and claims that the goal of this portion of the process is to regain the original purity and receptivity of the soul.
Psychology
Psychologist Carl Jung equated the albedo with unconscious contrasexual soul images; the anima in men and animus in women. It is a phase where insight into shadow projections are realized, and inflated ego and unneeded conceptualizations are removed from the psyche.
Another interpretation describes albedo as an experience of awakening and involves a shift in consciousness where the world becomes more than just an individual's ego, his family, or country.
References
Nigel Hamilton. "The Alchemical Process of Transformation." 1985.
Notes
Alchemical processes | Albedo (alchemy) | [
"Chemistry"
] | 382 | [
"Alchemical processes"
] |
7,023,796 | https://en.wikipedia.org/wiki/Adenophostin | Adenophostin A is an agonist of the inositol trisphosphate (IP3) receptor, and is much more potent than IP3 itself.
IP3R is a ligand-gated intracellular Ca2+ release channel that plays a central role in modulating cytoplasmic free Ca2+ concentration (Ca2+i). Adenophostin A is structurally different from IP3 but could elicit distinct calcium signals in cells.
References
Purines
Organophosphates
Signal transduction | Adenophostin | [
"Chemistry",
"Biology"
] | 108 | [
"Signal transduction",
"Organic compounds",
"Biochemistry",
"Neurochemistry",
"Organic compound stubs",
"Organic chemistry stubs"
] |
7,023,870 | https://en.wikipedia.org/wiki/Sex%20strike | A sex strike (sex boycott), or more formally known as Lysistratic nonaction, is a method of nonviolent resistance in which one or more persons refrain from or refuse sex with partners until policy or social demands are met. It is a form of temporary sexual abstinence. Sex strikes have been used to protest many issues, from war to gang violence to policies.
The effectiveness of sex strikes is contested.
History
Ancient Greece
The most famous example of a sex strike in the arts is the Greek playwright Aristophanes' work Lysistrata, an anti-war comedy. The female characters in the play, led by the eponymous Lysistrata, withhold sex from their husbands as part of their strategy to end the Peloponnesian War.
Nigeria
Among the Igbo people of Nigeria, in pre-colonial times, the community of women periodically formed themselves into a Council, a kind of women's trade union. This was headed by the Agba Ekwe, 'the favoured one of the goddess Idemili and her earthly manifestation'. She carried her staff of authority and had the final word in public gatherings and assemblies. Central among her tasks was to ensure men's good behaviour, punishing male attempts at harassment or abuse. What men most feared was the council's power of strike action. According to Ifi Amadiume, an Igbo anthropologist: "The strongest weapon the Council had and used against the men was the right to order mass strikes and demonstrations by all women. When ordered to strike, women refused to perform their expected duties and roles, including all domestic, sexual and maternal services. They would leave the town en masse, carrying only suckling babies. If angry enough, they were known to attack any men they met."
World history and prehistory
Citing similar examples of women's strike action in hunter-gatherer and other precolonial traditions around the world, some anthropologists argue that it was thanks to solidarity of this kind—especially collective resistance to the possibility of rape—that language, culture, and religion became established in our species in the first instance. This controversial hypothesis is known as the "Female Cosmetic Coalitions", "Lysistrata", or "sex strike" theory of human origins.
Modern times
Africa
Kenya
In April 2009 a group of Kenyan women organised a week-long sex strike aimed at politicians, encouraging the wives of the president and prime minister to join in too, and offering to pay prostitutes for lost earnings if they joined in.
Liberia
In 2003 Leymah Gbowee and the Women of Liberia Mass Action for Peace organized nonviolent protests that included suggesting a sex strike, though this was not actually carried out. Their actions led to peace in Liberia after a 14‑year civil war and the election of Ellen Johnson Sirleaf, the country's first female head of state. Leymah Gbowee was awarded the 2011 Nobel Peace Prize "for her non-violent struggle for the safety of women and for women's rights to full participation in peace-building work."
South Sudan
In October 2014, Pricilla Nanyang, a politician in South Sudan, coordinated a meeting of women peace activists in Juba "to advance the cause of peace, healing and reconciliation." Attendees issued a statement which called on women of South Sudan "to deny their husbands conjugal rights until they ensure that peace returns."
Togo
In 2012, inspired by the 2003 Liberian sex strike, the Togolese opposition coalition "Let's Save Togo" asked women to abstain from sex for a week as a protest against President Faure Gnassingbé, whose family has been in power for more than 45 years. The strike aimed to "motivate men who are not involved in the political movement to pursue its goals". Opposition leader Isabelle Ameganvi views it as a possible "weapon of the battle" to achieve political change.
Elsewhere
Colombia
In October 1997, the chief of the Military of Colombia, General Manuel Bonnet publicly called for a sex strike among the wives and girlfriends of the Colombian left-wing guerrillas, drug traffickers, and paramilitaries as part of a strategy—along with diplomacy—to achieve a ceasefire. Also the mayor of Bogota, Antanas Mockus, declared the capital a women-only zone for one night, suggesting men to stay at home to reflect on violence. The guerrillas ridiculed the initiatives, pointing at the fact that there were more than 2,000 women in their army. In the end the ceasefire was achieved, but lasted only a short time.
In September 2006 dozens of wives and girlfriends of gang members from Pereira, Colombia, started a sex strike called La huelga de las piernas cruzadas ("the strike of crossed legs") to curb gang violence, in response to 480 deaths due to gang violence in the coffee region. According to spokeswoman Jennifer Bayer, the specific target of the strike was to force gang members to turn in their weapons in compliance with the law. According to them, many gang members were involved in violent crime for status and sexual attractiveness, and the strike sent the message that refusing to turn in the guns was not sexy. In 2010 the city's murder rate saw the steepest decline in Colombia, down by 26.5%.
In June 2011, women organized in the so-called Crossed Legs Movement in the secluded town of Barbacoas in southwestern Colombia, started a sex strike to pressure the government to repair the road connecting Barbacoas and its neighboring towns and cities. They declared that if the men of the town were not going to demand action, they would refuse to have sex with them. The men of Barbacoas showed no support at the beginning of the campaign, but they soon joined in the protest campaign. After 112 days strike in October 2011, the Colombian government promised action on road repairs. Construction ensued and the strike ended.
Naples, Italy
In the build-up to New Year's Eve in 2008, hundreds of Neapolitan women pledged to make their husbands and lovers "sleep on the sofa" unless they took action to prevent fireworks from causing serious injuries.
The Philippines
During the summer of 2011, women in rural Mindanao imposed a several-week-long sex strike in an attempt to end fighting between their two villages.
United States of America
In 2019, Georgia governor Brian Kemp (R) signed House Bill (HB) 481 into law. It was immediately blocked by a lawsuit. HB 481 criminalizes most abortions after six weeks and adds “fetal personhood” language. This language changes the definition of a “natural person” to include an unborn child at any stage of development in the womb. This law has been nicknamed a “heartbeat bill” because HB 481 states that no abortion will be performed if the physician detects a human heartbeat.
In response to this bill's passage, actress and #MeToo activist Alyssa Milano and Waleisah Wilson wrote an opinion editorial for CNN and went to Twitter to call for a sex strike until the policy was repealed. In the tweet, Milano calls on women to join her sex strike until women “have legal control over [their] own bodies” because women cannot risk a pregnancy under this new bill. In her CNN opinion piece, Milano states that there are similar bills to the one in Georgia and that the single purpose of them is to make it up to the Supreme Court, forcing it to reconsider Roe v. Wade (1973). In this opinion piece, Milano discusses the history of Lysistratic protest, and calls on people who can become pregnant to conduct a sex strike and to pay attention to current events. Milano encourages a sex strike in addition to other efforts.
In entertainment
Lysistrata
Absurdistan (film)
Chi-Raq (film)
See also
2021 Minas Gerais prostitute strike
Matriarchy
Menstrual synchrony
Occupation of Saint-Nizier church by Lyon prostitutes
Reproductive synchrony
Sex/Work Strike
Women's strike (disambiguation)
Female cosmetic coalitions
References
External links
Sexual abstinence
Human sexuality
Strikes (protest)
Protest tactics
Nonviolence
Feminism and sexuality
Feminist terminology
Women's strikes | Sex strike | [
"Biology"
] | 1,683 | [
"Human sexuality",
"Behavior",
"Human behavior",
"Sexuality"
] |
7,023,871 | https://en.wikipedia.org/wiki/Monolithic%20application | In software engineering, a monolithic application is a single unified software application that is self-contained and independent from other applications, but typically lacks flexibility. There are advantages and disadvantages to building applications in a monolithic style of software architecture, depending on requirements. Monolithic applications are relatively simple and have a low cost, but their shortcomings are a lack of elasticity, fault tolerance and scalability. Alternative styles to monolithic applications include multitier architectures, distributed computing and microservices. Despite their popularity in recent years, monolithic applications remain a good choice for applications built by a small team with little complexity; once such an application becomes too complex, it can be refactored into microservices or a distributed application. A monolithic application deployed on a single machine may be performant enough for its current workload, but it is less available, less durable, less changeable, less fine-tuned and less scalable than a well-designed distributed system.
The design philosophy is that the application is responsible not just for a particular task, but can perform every step needed to complete a particular function. Some personal finance applications are monolithic in the sense that they help the user carry out a complete task, end to end, and are private data silos rather than parts of a larger system of applications that work together. Some word processors are monolithic applications. These applications are sometimes associated with mainframe computers.
In software engineering, a monolithic application describes a software application that is designed as a single service. Multiple services can be desirable in certain scenarios, as they can facilitate maintenance by allowing repair or replacement of parts of the application without requiring wholesale replacement.
Modularity is achieved to various extents by different modular programming approaches. Code-based modularity allows developers to reuse and repair parts of the application, but development tools are required to perform these maintenance functions (e.g. the application may need to be recompiled). Object-based modularity provides the application as a collection of separate executable files that may be independently maintained and replaced without redeploying the entire application (e.g. Microsoft's Dynamic-link library (DLL); Sun/UNIX shared object files). Some object messaging capabilities allow object-based applications to be distributed across multiple computers (e.g. Microsoft's Component Object Model (COM)). Service-oriented architectures use specific communication standards/protocols to communicate between modules.
In its original use, the term "monolithic" described enormous mainframe applications with no usable modularity. This, in combination with the rapid increase in computational power and therefore rapid increase in the complexity of the problems which could be tackled by software, resulted in unmaintainable systems and the "software crisis".
Patterns
Common architectural patterns used for monolithic applications, each with its own trade-offs, include the following (a small sketch of the modular-monolith idea follows the list):
Layered architecture
Modular monolith
Microkernel architecture
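As a rough illustration of the modular monolith pattern mentioned above, the sketch below (in Python, with module and function names invented for the example) keeps everything in one deployable process while forcing modules to interact only through small explicit interfaces, which is what later makes extraction into separate services feasible.

```python
# A hypothetical modular monolith: one process, explicit module boundaries.

class BillingModule:
    """Owns all billing logic; other modules may only call its public methods."""
    def charge(self, customer_id: str, amount_cents: int) -> bool:
        # In a real system this would talk to the billing database or a payment gateway.
        print(f"charging {customer_id}: {amount_cents} cents")
        return True

class OrderModule:
    """Depends on billing only through the interface passed in, not its internals."""
    def __init__(self, billing: BillingModule):
        self.billing = billing

    def place_order(self, customer_id: str, total_cents: int) -> str:
        if not self.billing.charge(customer_id, total_cents):
            raise RuntimeError("payment failed")
        return "order-123"  # placeholder order id

if __name__ == "__main__":
    # Everything is wired together and shipped as a single application.
    orders = OrderModule(BillingModule())
    print(orders.place_order("cust-42", 1999))
```

The point being illustrated is the boundary, not the language: the same idea applies to packages in Java, Go, or whatever language the monolith is written in.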
References
Software architecture
History of software | Monolithic application | [
"Technology",
"Engineering"
] | 603 | [
"Software engineering",
"History of software",
"Software engineering stubs",
"History of computing"
] |
7,024,176 | https://en.wikipedia.org/wiki/Casson%20handle | In 4-dimensional topology, a branch of mathematics, a Casson handle is a 4-dimensional topological 2-handle constructed by an infinite procedure. They are named for Andrew Casson, who introduced them in about 1973. They were originally called "flexible handles" by Casson himself; Michael Freedman introduced the name "Casson handle" by which they are known today. In that work Freedman showed that Casson handles are topological 2-handles, and used this to classify simply connected compact topological 4-manifolds.
Motivation
In the proof of the h-cobordism theorem, the following construction is used.
Given a circle in the boundary of a manifold, we would often like to find a disk embedded in the manifold whose boundary is the given circle. If the manifold is simply connected then we can find a map from a disc to the manifold with boundary the given circle, and if the manifold is of dimension at least 5 then by putting this disc in "general position" it becomes an embedding. The number 5 appears for the following reason: submanifolds of dimension m and n in general position do not intersect provided the manifold containing them has dimension greater than m + n. In particular, a disc (of dimension 2) in general position will have no self intersections inside a manifold of dimension greater than 2+2.
If the manifold is 4 dimensional, this does not work: the problem is that a disc in general position may have double points where two points of the disc have the same image. This is the main reason why the usual proof of the h-cobordism theorem only works for cobordisms whose boundary has dimension at least 5. We can try to get rid of these double points as follows. Draw a line on the disc joining two points with the same image. If the image of this line is the boundary of an embedded disc (called a Whitney disc), then it is easy to remove the double point. However this argument seems to be going round in circles: in order to eliminate a double point of the first disc, we need to construct a second embedded disc, whose construction involves exactly the same problem of eliminating double points.
Casson's idea was to iterate this construction an infinite number of times, in the hope that the problems about double points will somehow disappear in the infinite limit.
Construction
A Casson handle has a 2-dimensional skeleton, which can be constructed as follows.
Start with a 2-disc.
Identify a finite number of pairs of points in the disc.
For each pair of identified points, choose a path in the disc joining these points, and construct a new disc with boundary this path. (So we add a disc for each pair of identified points.)
Repeat steps 2–3 on each new disc.
We can represent these skeletons by rooted trees such that each point is joined to only a finite number of other points: the tree has a point for each disc, and a line joining points if the corresponding discs intersect in the skeleton.
A Casson handle is constructed by "thickening" the 2-dimensional construction above to give a 4-dimensional object: we replace each disc by a 4-dimensional thickening of it. Informally we can think of this as taking a small neighborhood of the skeleton (thought of as embedded in some 4-manifold). There are some minor extra subtleties in doing this: we need to keep track of some framings, and intersection points now have an orientation.
Casson handles correspond to rooted trees as above, except that now each vertex has a sign attached to it to indicate the orientation of the double point.
We may as well assume that the tree has no finite branches, as finite branches can be "unravelled" and so make no difference.
The simplest exotic Casson handle corresponds to the tree which is just a half infinite line of points (with all signs the same). It is diffeomorphic to the standard open 2-handle with a cone over the Whitehead continuum removed.
There is a similar description of more complicated Casson handles, with the Whitehead continuum replaced by a similar but more complicated set.
Structure
Freedman's main theorem about Casson handles states that they are all homeomorphic to the standard open 2-handle; in other words, they are topological 2-handles. In general they are not diffeomorphic to the standard handle, as follows from Donaldson's theorem, and there are uncountably many different diffeomorphism types of Casson handles. However, the interior of a Casson handle is diffeomorphic to the interior of the standard handle; Casson handles differ from standard 2-handles only in the way the boundary is attached to the interior.
Freedman's structure theorem can be used to prove the h-cobordism theorem for 5-dimensional topological cobordisms, which in turn implies the 4-dimensional topological Poincaré conjecture.
References
4-manifolds
Geometric topology | Casson handle | [
"Mathematics"
] | 979 | [
"Topology",
"Geometric topology"
] |
7,024,760 | https://en.wikipedia.org/wiki/Czech%20Hydrometeorological%20Institute | The Czech Hydrometeorological Institute (CHMI; ) is the central state office of the Czech Republic in the fields of air quality, meteorology, climatology and hydrology. It is an organization established by the Ministry of the Environment of the Czech Republic. The head office and centralized workplaces of the CHMI, including the data processing, telecommunication and technical services, are located at the Institute's own campus in Prague.
History
The National Meteorological Institute was established in 1919 shortly after Czechoslovakia was established at the end of World War I. On 1 January 1954, the National Meteorological Institute was united with the hydrology service and the Czech Hydrometeorological Institute was established. Its charter was amended in 1994 and in 1995 by the Ministry of the Environment of the Czech Republic.
Structure
The CHMI is made up of three specialized sections (meteorology and climatology section, hydrology section, and air quality section) with two support sections (finance and administration and Information technology (IT) section), and finally, the director section.
In addition to the central office in Prague-Komořany, the CHMI has regional offices (branches) in six other Czech cities, although not all sections are represented in each branch. Those offices are in Brno, Ostrava, Plzeň, Ústí nad Labem, Hradec Králové, and České Budějovice.
Air pollution dispersion modelling activities
The Air Quality division has seven departments:
Air Quality Information System
Emission and Sources
Modelling and Expertise Pool
National Inventorization System
Air Quality Monitoring
Central Air Quality Laboratory
Calibration Laboratory
The work of the Modelling and Expertise Pool department is focused upon: the development of air pollution dispersion models; the application of such models in the preparation of expert reports and opinions; forecasts of air quality control; the processing of operating information on pollutant concentrations obtained by the Airborne Monitoring section.
The SYMOS97 air pollution dispersion model was developed at the CHMI. It models the dispersion of continuous, neutral or buoyant plumes from single or multiple point, area or line sources. It can handle complex terrain and it can also be used to simulate the dispersion of cooling tower plumes.
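The SYMOS97 equations themselves are not reproduced here; as a hedged illustration of what a point-source dispersion calculation looks like in general, the sketch below evaluates a textbook Gaussian plume formula with ground reflection. The emission rate, stack height and the simple dispersion-coefficient curves are invented placeholders, not SYMOS97 parameters.

```python
import math

def gaussian_plume(q_g_s, u_m_s, h_m, x_m, y_m, z_m):
    """Ground-reflected Gaussian plume concentration (g/m^3) at downwind point (x, y, z).

    q_g_s: emission rate, u_m_s: wind speed, h_m: effective stack height.
    The power-law sigma curves below are placeholders, not a real stability class.
    """
    sigma_y = 0.08 * x_m / math.sqrt(1 + 0.0001 * x_m)   # lateral spread (placeholder)
    sigma_z = 0.06 * x_m / math.sqrt(1 + 0.0015 * x_m)   # vertical spread (placeholder)
    lateral = math.exp(-y_m**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z_m - h_m)**2 / (2 * sigma_z**2)) +
                math.exp(-(z_m + h_m)**2 / (2 * sigma_z**2)))  # reflection at the ground
    return q_g_s / (2 * math.pi * u_m_s * sigma_y * sigma_z) * lateral * vertical

# Ground-level concentration 1 km directly downwind of a 50 m stack emitting 100 g/s.
print(gaussian_plume(q_g_s=100.0, u_m_s=5.0, h_m=50.0, x_m=1000.0, y_m=0.0, z_m=0.0))
```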
See also
List of atmospheric dispersion models
FMI, the Finnish Meteorological Institute
KNMI, the Royal Dutch Meteorological Institute
NILU, the Norwegian Institute for Air Research
Swedish Meteorological and Hydrological Institute
Royal Meteorological Society
References
External links
CMHI website (English version)
Governmental meteorological agencies in Europe
Science and technology in the Czech Republic
Environment of the Czech Republic
Atmospheric dispersion modeling
Air pollution
Science and technology in Czechoslovakia
1954 establishments in Czechoslovakia
Organizations established in 1954 | Czech Hydrometeorological Institute | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 546 | [
"Atmospheric dispersion modeling",
"Environmental modelling",
"Environmental engineering"
] |
7,025,572 | https://en.wikipedia.org/wiki/Emptiness%20%28Chinese%20constellation%29 | The Emptiness mansion () is one of the Twenty-eight mansions of the Chinese constellations. It is one of the northern mansions of the Black Tortoise.
Asterisms
References
Chinese constellations | Emptiness (Chinese constellation) | [
"Astronomy"
] | 42 | [
"Chinese constellations",
"Constellations"
] |
7,025,591 | https://en.wikipedia.org/wiki/Adjusted%20Peak%20Performance | Adjusted Peak Performance (APP) is a metric introduced by the U.S. Department of Commerce's Bureau of Industry and Security (BIS) to more accurately predict the suitability of a computing system to complex computational problems, specifically those used in simulating nuclear weapons. This is used to determine the export limitations placed on certain computer systems under the Export Administration Regulations 15 CFR.
Further details can be found in the document "Practitioner's Guide To Adjusted Peak Performance".
The (simplified) algorithm used to calculate APP consists of the following steps:
Determine how many 64 bit (or better) floating point operations every processor in the system can perform per clock cycle (best case). This is FPO(i).
Determine the clock frequency of every processor. This is F(i).
Choose the weighting factor for each processor: 0.9 for vector processors and 0.3 for non-vector processors. This is W(i).
Calculate the APP for the system as follows: APP = FPO(1) * F(1) * W(1) + ... + FPO(n) * F(n) * W(n).
The metric was introduced in April 2006 to replace the Composite Theoretical Performance (CTP) metric which was introduced in 1993. APP was itself replaced in November 2007 when the BIS amended 15 CFR to include the December 2006 Wassenaar Arrangement Plenary Agreement Implementation's new metric - Gigaflops (GFLOPS), one billion floating point operations per second, or TeraFLOPS, one trillion floating point operations per second.
Adjusted Peak Performance (APP) is expressed in units of Weighted TeraFLOPS (WT).
The weighting factor is 0.3 for non-vector processors and 0.9 for vector processors. For example, a PowerPC 750 running at 800 MHz would be rated at 0.00024 WT due to being able to execute one floating point instruction per cycle and not having a vector unit. Note that only 64 bit (or wider) floating point instructions count.
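As a sketch of the APP calculation described above (the function and variable names are this example's own, not taken from the BIS documents), the following Python snippet sums the weighted 64-bit floating point throughput over all processors and reproduces the PowerPC 750 figure quoted above.

```python
def adjusted_peak_performance_wt(processors):
    """APP in Weighted TeraFLOPS (WT).

    processors: iterable of (fpo_per_cycle, clock_hz, weighting_factor) tuples,
    where fpo_per_cycle counts 64-bit (or wider) floating point operations per cycle
    and the weighting factor is 0.9 for vector and 0.3 for non-vector processors.
    """
    weighted_flops = sum(fpo * clock_hz * w for fpo, clock_hz, w in processors)
    return weighted_flops / 1e12  # convert weighted FLOPS to weighted TeraFLOPS

# Single PowerPC 750 at 800 MHz: one 64-bit FP op per cycle, non-vector (w = 0.3).
print(adjusted_peak_performance_wt([(1, 800e6, 0.3)]))  # -> 0.00024 WT
```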
Notes:
Processors without 64 bit (or better) floating point support have an FPO of zero.
The current APP limit is 0.75 WT.
References
External links
U. S. Bureau of Industry and Security—High Performance Computers (HPCs)
Intel's microprocessor export compliance metrics
APP values for Oracle systems
APP values for Intel processors
Benchmarks (computing)
United States sanctions | Adjusted Peak Performance | [
"Technology"
] | 507 | [
"Benchmarks (computing)",
"Computing comparisons",
"Computer performance"
] |
7,025,644 | https://en.wikipedia.org/wiki/Cover%20Flow | Cover Flow is an animated, three-dimensional graphical user interface element that was integrated within the Macintosh Finder and other Apple Inc. products for visually flipping through snapshots of documents, website bookmarks, album artwork, or photographs.
Cover Flow is browsed using the on-screen scrollbar, mouse wheel, gestures, or by selecting a file from a list, which flips through the pages to bring the associated image into view. On iPod and iPhone devices, the user slides their finger across the touch screen or uses the click wheel.
Apple discontinued the use of Cover Flow after settling a patent dispute with Mirror Worlds. As of OS X El Capitan it remained on the Mac only in the Finder, and in macOS Mojave a completely different Gallery view feature replaced Cover Flow there. It was removed from iOS in 2015 with the release of iOS 8.4, which replaced the Music app with Apple Music.
History
Cover Flow was conceived by artist Andrew Coulter Enright and originally implemented by an independent Macintosh developer, Jonathan del Strother. Enright later named the interaction style fliptych to distinguish it from the particular Cover Flow implementation.
Cover Flow was purchased by Apple Inc. in 2006, and its technology was integrated into its music application, iTunes 7.0, which was released September 12, 2006. The name was previously "CoverFlow" without a space.
The last release of Steel Skies’ stand-alone application, version RC1.2, came out on September 10, 2006, and was freely distributed only until the end of the following day; however, it remains available for download from MacUpdate.
On January 9, 2007, when Apple announced the iPhone, it was announced that it would incorporate Cover Flow technology.
During the WWDC Keynote on June 11, 2007, Steve Jobs announced that Cover Flow would be added as a view option in Mac OS X Leopard's Finder.
On September 5, 2007 Apple announced that Cover Flow would be utilized in the third generation iPod nano as well as the new iPod classic and iPod Touch models. Cover Flow was integrated into the fourth-generation iPod nano by the use of an accelerometer which accesses Cover Flow when the iPod nano is turned horizontally on its side.
On March 14, 2008, Mirror Worlds LLC sued Apple for infringing on its patents (nos. 6006227, 6638313, 6725427, and 6768999) (Mirror Worlds, LLC, vs Apple, Inc; Texas Eastern District Court)
On February 24, 2009, Cover Flow was also included with the public beta of Safari 4, with the final version of Safari 4, released on June 8, using Cover Flow to browse history, bookmarks, RSS feeds, Bonjour, and Address Book.
In April 2010, Apple was granted US design patent D613,300 on the Cover Flow interface.
On October 1, 2010, Apple was ordered to pay $625.5 million to Mirror Worlds LLC for infringing utility patents relating to Cover Flow. On April 4, 2011, Judge Davis reversed the judgement.
With the release of version 11 of iTunes, Cover Flow was removed from the iTunes interface.
iOS 7 saw Cover Flow replaced by Album Wall. This feature shows tiles of album art in rows when the device is in landscape. This feature was removed with the release of iOS 8.4 on June 30, 2015.
In macOS Mojave, Cover Flow was removed from Finder and replaced by gallery view.
Other implementations
The open-source media player Songbird offers a Cover Flow navigation add-on called MediaFlow.
The open source Banshee media player also offers a Cover Flow-like add-on called ClutterFlow, which is based on the Clutter toolkit.
The proprietary media player MediaMonkey also offers a Cover Flow add-on called MonkeyFlow. It can either be embedded or run as an external remote application.
Using Compiz Fusion (Shift Switcher), KDE Plasma Workspaces (Cover Switch on KWin 4.1 or later), or Muffin on a Unix-like system, it is possible to switch between open applications with a Cover Flow animation.
A Cover Flow-like interface was used by the graphical search engine Search Me.
When selecting music or a course in the arcade edition of Dance Dance Revolution X2 and later, a Cover Flow-style interface is used.
The free jukebox firmware Rockbox also implements a Cover Flow-like album art viewer, called "PictureFlow". However, PictureFlow is not part of the main UI, instead included as a demo.
A Cover Flow-like interface was used in the built-in music player app for latest Symbian OS versions (Anna and above).
Reflection Music Player also implements a Cover Flow-like music player for the iPad.
The open source ebook managing software calibre incorporates Cover Flow to browse through ebooks' covers.
Open source multi-system game emulator OpenEmu includes a cover flow view.
By default, the Nintendo Wii homebrew application WiiFlow displays games in a Cover Flow-like interface.
The proprietary Apple Music player Cider features a cover flow view mode.
References
Graphical user interface elements
ITunes
Apple Inc. acquisitions | Cover Flow | [
"Technology"
] | 1,089 | [
"Components",
"Graphical user interface elements"
] |
7,025,924 | https://en.wikipedia.org/wiki/Models%20of%20DNA%20evolution | A number of different Markov models of DNA sequence evolution have been proposed. These substitution models differ in terms of the parameters used to describe the rates at which one nucleotide replaces another during evolution. These models are frequently used in molecular phylogenetic analyses. In particular, they are used during the calculation of likelihood of a tree (in Bayesian and maximum likelihood approaches to tree estimation) and they are used to estimate the evolutionary distance between sequences from the observed differences between the sequences.
Introduction
These models are phenomenological descriptions of the evolution of DNA as a string of four discrete states. These Markov models do not explicitly depict the mechanism of mutation nor the action of natural selection. Rather they describe the relative rates of different changes. For example, mutational biases and purifying selection favoring conservative changes are probably both responsible for the relatively high rate of transitions compared to transversions in evolving sequences. However, the Kimura (K80) model described below only attempts to capture the effect of both forces in a parameter that reflects the relative rate of transitions to transversions.
Evolutionary analyses of sequences are conducted on a wide variety of time scales. Thus, it is convenient to express these models in terms of the instantaneous rates of change between different states (the Q matrices below). If we are given a starting (ancestral) state at one position, the model's Q matrix and a branch length expressing the expected number of changes to have occurred since the ancestor, then we can derive the probability of the descendant sequence having each of the four states. The mathematical details of this transformation from rate-matrix to probability matrix are described in the mathematics of substitution models section of the substitution model page. By expressing models in terms of the instantaneous rates of change we can avoid estimating a large number of parameters for each branch on a phylogenetic tree (or each comparison if the analysis involves many pairwise sequence comparisons).
The models described on this page describe the evolution of a single site within a set of sequences. They are often used for analyzing the evolution of an entire locus by making the simplifying assumption that different sites evolve independently and are identically distributed. This assumption may be justifiable if the sites can be assumed to be evolving neutrally. If the primary effect of natural selection on the evolution of the sequences is to constrain some sites, then models of among-site rate-heterogeneity can be used. This approach allows one to estimate only one matrix of relative rates of substitution, and another set of parameters describing the variance in the total rate of substitution across sites.
DNA evolution as a continuous-time Markov chain
Continuous-time Markov chains
Continuous-time Markov chains have the usual transition matrices
which are, in addition, parameterized by time, \(t\). Specifically, if \(E_1, \ldots, E_4\) are the states, then the transition matrix is
\[P(t) = \big(P_{ij}(t)\big),\]
where each individual entry, \(P_{ij}(t)\), refers to the probability that state \(E_i\) will change to state \(E_j\) in time \(t\).
Example: We would like to model the substitution process in DNA sequences (i.e. Jukes–Cantor, Kimura, etc.) in a continuous-time fashion. With the states ordered \((A, G, C, T)\), the corresponding transition matrices will look like:
\[P(t) = \begin{pmatrix} P_{AA}(t) & P_{AG}(t) & P_{AC}(t) & P_{AT}(t)\\ P_{GA}(t) & P_{GG}(t) & P_{GC}(t) & P_{GT}(t)\\ P_{CA}(t) & P_{CG}(t) & P_{CC}(t) & P_{CT}(t)\\ P_{TA}(t) & P_{TG}(t) & P_{TC}(t) & P_{TT}(t) \end{pmatrix},\]
where the top-left and bottom-right 2 × 2 blocks correspond to transition probabilities and the top-right and bottom-left 2 × 2 blocks correspond to transversion probabilities.
Assumption: If at some time \(t\), the Markov chain is in state \(E_i\), then the probability that at time \(t + s\), it will be in state \(E_j\) depends only upon \(i\), \(j\), and \(s\). This then allows us to write that probability as \(P_{ij}(s)\).
Theorem: Continuous-time transition matrices satisfy the semigroup (Chapman–Kolmogorov) property:
\[P(t + s) = P(t)\,P(s).\]
Note: There is here a possible confusion between two meanings of the word transition. (i) In the context of Markov chains, transition is the general term for the change between two states. (ii) In the context of nucleotide changes in DNA sequences, transition is a specific term for the exchange between either the two purines (A ↔ G) or the two pyrimidines (C ↔ T) (for additional details, see the article about transitions in genetics). By contrast, an exchange between one purine and one pyrimidine is called a transversion.
Deriving the dynamics of substitution
Consider a DNA sequence of fixed length m evolving in time by base replacement. Assume that the processes followed by the m sites are Markovian independent, identically distributed and that the process is constant over time. For a particular site, let
be the set of possible states for the site, and
their respective probabilities at time . For two distinct , let be the transition rate from state to state . Similarly, for any , let the total rate of change from be
The changes in the probability distribution for small increments of time are given by
In other words, (in frequentist language), the frequency of 's at time is equal to the frequency at time minus the frequency of the lost 's plus the frequency of the newly created 's.
Similarly for the probabilities , and . These equations can be written compactly as
where
is known as the rate matrix. Note that, by definition, the sum of the entries in each row of is equal to zero. It follows that
For a stationary process, where \(Q\) does not depend on time t, this differential equation can be solved. First,
\[P(t) = \exp(tQ),\]
where \(\exp(tQ)\) denotes the exponential of the matrix \(tQ\). As a result,
\[p(t) = p(0)\exp(tQ).\]
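As a small numerical illustration of this relation (not code from any particular phylogenetics package; the rate value and state ordering are arbitrary choices for the example), the following Python sketch builds a Jukes–Cantor-style rate matrix, whose rows sum to zero, and obtains the transition probabilities for a given time with a matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

mu = 1.0                      # arbitrary overall substitution rate
# Jukes-Cantor rate matrix Q (states ordered A, G, C, T): every change equally likely.
Q = (mu / 4.0) * (np.ones((4, 4)) - 4 * np.eye(4))   # off-diagonal mu/4, diagonal -3mu/4

assert np.allclose(Q.sum(axis=1), 0.0)               # rows of a rate matrix sum to zero

t = 0.5                                               # elapsed time
P = expm(Q * t)                                       # P(t) = exp(tQ)
print(P)                                              # each row is a probability distribution
print(P.sum(axis=1))                                  # every row sums to 1
```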
Ergodicity
If the Markov chain is irreducible, i.e. if it is always possible to go from any state to any other state (possibly in several steps), then it is also ergodic. As a result, it has a unique stationary distribution \(\pi = \{\pi_i\}\), where \(\pi_i\) corresponds to the proportion of time spent in state \(i\) after the Markov chain has run for an infinite amount of time. In DNA evolution, under the assumption of a common process for each site, the stationary frequencies \(\pi_A, \pi_G, \pi_C, \pi_T\) correspond to equilibrium base compositions. Indeed, note that since the stationary distribution \(\pi\) satisfies \(\pi Q = 0\), we see that when the current distribution \(p(t)\) is the stationary distribution we have
\[\frac{dp(t)}{dt} = \pi Q = 0.\]
In other words, the frequencies of the four states do not change.
Time reversibility
Definition: A stationary Markov process is time reversible if (in the steady state) the amount of change from state \(i\) to \(j\) is equal to the amount of change from \(j\) to \(i\) (although the two states may occur with different frequencies). This means that:
\[\pi_i Q_{ij} = \pi_j Q_{ji} \qquad \text{for all } i \neq j.\]
Not all stationary processes are reversible, however, most commonly used DNA evolution models assume time reversibility, which is considered to be a reasonable assumption.
Under the time reversibility assumption, let , then it is easy to see that:
Definition The symmetric term is called the exchangeability between states and . In other words, is the fraction of the frequency of state that is the result of transitions from state to state .
Corollary: The 12 off-diagonal entries of the rate matrix (note that the off-diagonal entries determine the diagonal entries, since the rows of \(Q\) sum to zero) can be completely determined by 9 numbers; these are 6 exchangeability terms and 3 stationary frequencies (since the four stationary frequencies sum to 1).
Scaling of branch lengths
By comparing extant sequences, one can determine the amount of sequence divergence. This raw measurement of divergence provides information about the number of changes that have occurred along the path separating the sequences. The simple count of differences (the Hamming distance) between sequences will often underestimate the number of substitution because of multiple hits (see homoplasy). Trying to estimate the exact number of changes that have occurred is difficult, and usually not necessary. Instead, branch lengths (and path lengths) in phylogenetic analyses are usually expressed in the expected number of changes per site. The path length is the product of the duration of the path in time and the mean rate of substitutions. While their product can be estimated, the rate and time are not identifiable from sequence divergence.
The descriptions of rate matrices on this page accurately reflect the relative magnitude of different substitutions, but these rate matrices are not scaled such that a branch length of 1 yields one expected change. This scaling can be accomplished by multiplying every element of the matrix by the same factor, or simply by scaling the branch lengths. If we use the β to denote the scaling factor, and ν to denote the branch length measured in the expected number of substitutions per site then βν is used in the transition probability formulae below in place of μt. Note that ν is a parameter to be estimated from data, and is referred to as the branch length, while β is simply a number that can be calculated from the rate matrix (it is not a separate free parameter).
The value of β can be found by forcing the expected rate of flux of states to 1. The diagonal entries of the rate-matrix (the Q matrix) represent -1 times the rate of leaving each state. For time-reversible models, we know the equilibrium state frequencies (these are simply the πi parameter value for state i). Thus we can find the expected rate of change by calculating the sum of flux out of each state weighted by the proportion of sites that are expected to be in that class. Setting β to be the reciprocal of this sum will guarantee that scaled process has an expected flux of 1:
For example, in the Jukes–Cantor model, the scaling factor would be 4/(3μ) because the rate of leaving each state is 3μ/4.
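A minimal sketch of this scaling (variable names are chosen for the example): compute β as the reciprocal of the expected flux out of the equilibrium distribution and check the Jukes–Cantor case, where the rate of leaving each state is 3μ/4 and hence β = 4/(3μ).

```python
import numpy as np

def scaling_factor(Q, pi):
    """beta = 1 / (-sum_i pi_i * Q_ii): reciprocal of the expected rate of change."""
    return 1.0 / -np.dot(pi, np.diag(Q))

mu = 2.0
Q_jc = (mu / 4.0) * (np.ones((4, 4)) - 4 * np.eye(4))   # Jukes-Cantor rate matrix
pi_jc = np.full(4, 0.25)                                 # equal equilibrium frequencies

beta = scaling_factor(Q_jc, pi_jc)
print(beta, 4 / (3 * mu))                     # both 0.666..., i.e. beta = 4/(3*mu)
# The scaled matrix beta*Q yields one expected change per unit branch length.
print(-np.dot(pi_jc, np.diag(beta * Q_jc)))   # -> 1.0
```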
Most common models of DNA evolution
JC69 model (Jukes and Cantor 1969)
JC69, the Jukes and Cantor 1969 model, is the simplest substitution model. There are several assumptions. It assumes equal base frequencies and equal mutation rates. The only parameter of this model is therefore \(\mu\), the overall substitution rate. As previously mentioned, this variable becomes a constant when we normalize the mean-rate to 1.
When branch length, \(\nu\), is measured in the expected number of changes per site, the probability of ending in base \(j\) given starting base \(i\) is:
\[P_{ij}(\nu) = \begin{cases}\dfrac{1}{4} + \dfrac{3}{4}e^{-4\nu/3} & \text{if } i = j,\\[6pt] \dfrac{1}{4} - \dfrac{1}{4}e^{-4\nu/3} & \text{if } i \neq j.\end{cases}\]
It is worth noticing that \(\nu\) stands for the sum of any column (or row) of the rate matrix multiplied by time, and thus means the expected number of substitutions in time \(t\) (branch duration) for each particular site (per site) when the rate of substitution equals \(\mu\).
Given the proportion \(p\) of sites that differ between the two sequences, the Jukes–Cantor estimate of the evolutionary distance (in terms of the expected number of changes) between two sequences is given by
\[\hat{d} = -\frac{3}{4}\ln\!\left(1 - \frac{4}{3}p\right).\]
The \(p\) in this formula is frequently referred to as the \(p\)-distance. It is a sufficient statistic for calculating the Jukes–Cantor distance correction, but is not sufficient for the calculation of the evolutionary distance under the more complex models that follow (also note that the \(p\) used in subsequent formulae is not identical to the "\(p\)-distance").
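A minimal sketch of the Jukes–Cantor correction (the two sequences are made up for the example): count the proportion p of differing sites and apply the formula above.

```python
import math

def jukes_cantor_distance(seq1, seq2):
    """Expected substitutions per site under JC69, from the observed p-distance."""
    assert len(seq1) == len(seq2)
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

print(jukes_cantor_distance("ACGTACGTAC", "ACGTACGAAC"))  # p = 0.1 -> ~0.107
```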
K80 model (Kimura 1980)
K80, the Kimura 1980 model, often referred to as Kimura's two parameter model (or the K2P model), distinguishes between transitions (A ↔ G, i.e. from purine to purine, or C ↔ T, i.e. from pyrimidine to pyrimidine) and transversions (from purine to pyrimidine or vice versa). In Kimura's original description of the model the α and β were used to denote the rates of these types of substitutions, but it is now more common to set the rate of transversions to 1 and use κ to denote the transition/transversion rate ratio (as is done below). The K80 model assumes that all of the bases are equally frequent (each with frequency 1/4).
Rate matrix with columns corresponding to , , , and , respectively.
The Kimura two-parameter distance is given by
\[K = -\frac{1}{2}\ln\!\left((1 - 2p - q)\sqrt{1 - 2q}\,\right),\]
where p is the proportion of sites that show transitional differences and
q is the proportion of sites that show transversional differences.
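Similarly, a small sketch of the Kimura two-parameter correction, taking the transition proportion p and transversion proportion q directly as inputs (the numbers used are illustrative only):

```python
import math

def k2p_distance(p, q):
    """Kimura (1980) distance from transition (p) and transversion (q) proportions."""
    return -0.5 * math.log((1 - 2 * p - q) * math.sqrt(1 - 2 * q))

print(k2p_distance(p=0.08, q=0.02))  # ~0.109 expected substitutions per site
```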
K81 model (Kimura 1981)
K81, the Kimura 1981 model, often called Kimura's three parameter model (K3P model) or the Kimura three substitution type (K3ST) model, has distinct rates for transitions and two distinct types of transversions. The two transversion types are those that conserve the weak/strong properties of the nucleotides (i.e., A ↔ T and G ↔ C) and those that conserve the amino/keto properties of the nucleotides (i.e., A ↔ C and G ↔ T). The K81 model assumes that all equilibrium base frequencies are equal (i.e., 1/4 each).
Rate matrix with columns corresponding to , , , and , respectively.
The K81 model is used much less often than the K80 (K2P) model for distance estimation and it is seldom the best-fitting model in maximum likelihood phylogenetics. Despite these facts, the K81 model has continued to be studied in the context of mathematical phylogenetics. One important property is the ability to perform a Hadamard transform assuming the site patterns were generated on a tree with nucleotides evolving under the K81 model.
When used in the context of phylogenetics the Hadamard transform provides an elegant and fully invertible means to calculate expected site pattern frequencies given a set of branch lengths (or vice versa). Unlike many maximum likelihood calculations, the relative values for , , and can vary across branches and the Hadamard transform can even provide evidence that the data do not fit a tree. The Hadamard transform can also be combined with a wide variety of methods to accommodate among-sites rate heterogeneity, using continuous distributions rather than the discrete approximations typically used in maximum likelihood phylogenetics (although one must sacrifice the invertibility of the Hadamard transform to use certain among-sites rate heterogeneity distributions).
F81 model (Felsenstein 1981)
F81, Felsenstein's 1981 model, is an extension of the JC69 model in which base frequencies are allowed to vary from 0.25.
Rate matrix:
When branch length, ν, is measured in the expected number of changes per site then:
HKY85 model (Hasegawa, Kishino and Yano 1985)
HKY85, the Hasegawa, Kishino and Yano 1985 model, can be thought of as combining the extensions made in the Kimura80 and Felsenstein81 models. Namely, it distinguishes between the rate of transitions and transversions (using the κ parameter), and it allows unequal base frequencies. [Felsenstein described a similar (but not equivalent) model in 1984 using a different parameterization; that latter model is referred to as the F84 model.]
Rate matrix
If we express the branch length, ν in terms of the expected number of changes per site then:
and formula for the other combinations of states can be obtained by substituting in the appropriate base frequencies.
T92 model (Tamura 1992)
T92, the Tamura 1992 model, is a mathematical method developed to estimate the number of nucleotide substitutions per site between two DNA sequences, by extending Kimura's (1980) two-parameter method to the case where a G+C content bias exists. This method will be useful when there are strong transition-transversion and G+C-content biases, as in the case of Drosophila mitochondrial DNA.
T92 involves a single, compound base frequency parameter \(\theta\), the G+C content.
As T92 echoes Chargaff's second parity rule — pairing nucleotides do have the same frequency on a single DNA strand, G and C on the one hand, and A and T on the other hand — it follows that the four base frequencies can be expressed as a function of \(\theta\):
\[\pi_G = \pi_C = \frac{\theta}{2} \qquad\text{and}\qquad \pi_A = \pi_T = \frac{1-\theta}{2}.\]
Rate matrix
The evolutionary distance between two DNA sequences according to this model is given by
\[d = -h \ln\!\left(1 - \frac{p}{h} - q\right) - \frac{1}{2}(1 - h)\ln(1 - 2q),\]
where \(h = 2\theta(1-\theta)\) and \(\theta\) is the G+C content (\(\theta = \pi_G + \pi_C\)).
TN93 model (Tamura and Nei 1993)
TN93, the Tamura and Nei 1993 model, distinguishes between the two different types of transition; i.e. (A ↔ G) is allowed to have a different rate to (C ↔ T). Transversions are all assumed to occur at the same rate, but that rate is allowed to be different from both of the rates for transitions.
TN93 also allows unequal base frequencies.
Rate matrix
GTR model (Tavaré 1986)
GTR, the Generalised time-reversible model of Tavaré 1986, is the most general neutral, independent, finite-sites, time-reversible model possible. It was first described in a general form by Simon Tavaré in 1986.
GTR parameters consist of an equilibrium base frequency vector, , giving the frequency at which each base occurs at each site, and the rate matrix
Where
are the transition rate parameters.
Therefore, GTR (for four characters, as is often the case in phylogenetics) requires 6 substitution rate parameters, as well as 4 equilibrium base frequency parameters. However, this is usually eliminated down to 9 parameters plus \(\mu\), the overall number of substitutions per unit time. When measuring time in substitutions (\(\mu = 1\)) only 8 free parameters remain.
In general, to compute the number of parameters, one must count the number of entries above the diagonal in the matrix, i.e. \(\tfrac{n(n-1)}{2}\) for n trait values per site, then add n for the equilibrium base frequencies, and subtract 1 because the overall scale is fixed. One gets
\[\frac{n(n-1)}{2} + n - 1.\]
For example, for an amino acid sequence (there are 20 "standard" amino acids that make up proteins), one would find there are 209 parameters. However, when studying coding regions of the genome, it is more common to work with a codon substitution model (a codon is three bases and codes for one amino acid in a protein). There are \(4^3 = 64\) codons, but the rates for transitions between codons which differ by more than one base are assumed to be zero. Hence, there are parameters.
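A short arithmetic check of the parameter count given above for different alphabet sizes (no phylogenetics library involved):

```python
def gtr_free_parameters(n):
    """Exchangeabilities above the diagonal plus equilibrium frequencies, minus one constraint."""
    return n * (n - 1) // 2 + n - 1

print(gtr_free_parameters(4))    # nucleotides: 9
print(gtr_free_parameters(20))   # amino acids: 209
```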
See also
Molecular evolution
Molecular clock
UPGMA
References
Further reading
External links
DAWG: DNA Assembly With Gaps — free software for simulating sequence evolution
Bioinformatics
Phylogenetics
Computational phylogenetics
Markov models | Models of DNA evolution | [
"Engineering",
"Biology"
] | 3,688 | [
"Genetics techniques",
"Biological engineering",
"Computational phylogenetics",
"Taxonomy (biology)",
"Bioinformatics",
"Phylogenetics"
] |
7,026,278 | https://en.wikipedia.org/wiki/Homology%20modeling | Homology modeling, also known as comparative modeling of protein, refers to constructing an atomic-resolution model of the "target" protein from its amino acid sequence and an experimental three-dimensional structure of a related homologous protein (the "template"). Homology modeling relies on the identification of one or more known protein structures likely to resemble the structure of the query sequence, and on the production of a sequence alignment that maps residues in the query sequence to residues in the template sequence. It has been seen that protein structures are more conserved than protein sequences amongst homologues, but sequences falling below a 20% sequence identity can have very different structure.
Evolutionarily related proteins have similar sequences and naturally occurring homologous proteins have similar protein structure.
It has been shown that three-dimensional protein structure is evolutionarily more conserved than would be expected on the basis of sequence conservation alone.
The sequence alignment and template structure are then used to produce a structural model of the target. Because protein structures are more conserved than DNA sequences, detectable levels of sequence similarity usually imply significant structural similarity.
The quality of the homology model is dependent on the quality of the sequence alignment and template structure. The approach can be complicated by the presence of alignment gaps (commonly called indels) that indicate a structural region present in the target but not in the template, and by structure gaps in the template that arise from poor resolution in the experimental procedure (usually X-ray crystallography) used to solve the structure. Model quality declines with decreasing sequence identity; a typical model has ~1–2 Å root mean square deviation between the matched Cα atoms at 70% sequence identity but only 2–4 Å agreement at 25% sequence identity. However, the errors are significantly higher in the loop regions, where the amino acid sequences of the target and template proteins may be completely different.
Regions of the model that were constructed without a template, usually by loop modeling, are generally much less accurate than the rest of the model. Errors in side chain packing and position also increase with decreasing identity, and variations in these packing configurations have been suggested as a major reason for poor model quality at low identity. Taken together, these various atomic-position errors are significant and impede the use of homology models for purposes that require atomic-resolution data, such as drug design and protein–protein interaction predictions; even the quaternary structure of a protein may be difficult to predict from homology models of its subunit(s). Nevertheless, homology models can be useful in reaching qualitative conclusions about the biochemistry of the query sequence, especially in formulating hypotheses about why certain residues are conserved, which may in turn lead to experiments to test those hypotheses. For example, the spatial arrangement of conserved residues may suggest whether a particular residue is conserved to stabilize the folding, to participate in binding some small molecule, or to foster association with another protein or nucleic acid.
Homology modeling can produce high-quality structural models when the target and template are closely related, which has inspired the formation of a structural genomics consortium dedicated to the production of representative experimental structures for all classes of protein folds. The chief inaccuracies in homology modeling, which worsen with lower sequence identity, derive from errors in the initial sequence alignment and from improper template selection. Like other methods of structure prediction, current practice in homology modeling is assessed in a biennial large-scale experiment known as the Critical Assessment of Techniques for Protein Structure Prediction, or Critical Assessment of Structure Prediction (CASP).
Motive
The method of homology modeling is based on the observation that protein tertiary structure is better conserved than amino acid sequence. Thus, even proteins that have diverged appreciably in sequence but still share detectable similarity will also share common structural properties, particularly the overall fold. Because it is difficult and time-consuming to obtain experimental structures from methods such as X-ray crystallography and protein NMR for every protein of interest, homology modeling can provide useful structural models for generating hypotheses about a protein's function and directing further experimental work.
There are exceptions to the general rule that proteins sharing significant sequence identity will share a fold. For example, a judiciously chosen set of mutations of less than 50% of a protein can cause the protein to adopt a completely different fold. However, such a massive structural rearrangement is unlikely to occur in evolution, especially since the protein is usually under the constraint that it must fold properly and carry out its function in the cell. Consequently, the roughly folded structure of a protein (its "topology") is conserved longer than its amino-acid sequence and much longer than the corresponding DNA sequence; in other words, two proteins may share a similar fold even if their evolutionary relationship is so distant that it cannot be discerned reliably. For comparison, the function of a protein is conserved much less than the protein sequence, since relatively few changes in amino-acid sequence are required to take on a related function.
Steps in model production
The homology modeling procedure can be broken down into four sequential steps: template selection, target-template alignment, model construction, and model assessment. The first two steps are often essentially performed together, as the most common methods of identifying templates rely on the production of sequence alignments; however, these alignments may not be of sufficient quality because database search techniques prioritize speed over alignment quality. These processes can be performed iteratively to improve the quality of the final model, although quality assessments that are not dependent on the true target structure are still under development.
Optimizing the speed and accuracy of these steps for use in large-scale automated structure prediction is a key component of structural genomics initiatives, partly because the resulting volume of data will be too large to process manually and partly because the goal of structural genomics requires providing models of reasonable quality to researchers who are not themselves structure prediction experts.
Template selection and sequence alignment
The critical first step in homology modeling is the identification of the best template structure, if indeed any are available. The simplest method of template identification relies on serial pairwise sequence alignments aided by database search techniques such as FASTA and BLAST. More sensitive methods based on multiple sequence alignment – of which PSI-BLAST is the most common example – iteratively update their position-specific scoring matrix to successively identify more distantly related homologs. This family of methods has been shown to produce a larger number of potential templates and to identify better templates for sequences that have only distant relationships to any solved structure. Protein threading, also known as fold recognition or 3D-1D alignment, can also be used as a search technique for identifying templates to be used in traditional homology modeling methods. Recent CASP experiments indicate that some protein threading methods such as RaptorX are more sensitive than purely sequence(profile)-based methods when only distantly-related templates are available for the proteins under prediction. When performing a BLAST search, a reliable first approach is to identify hits with a sufficiently low E-value, which are considered sufficiently close in evolution to make a reliable homology model. Other factors may tip the balance in marginal cases; for example, the template may have a function similar to that of the query sequence, or it may belong to a homologous operon. However, a template with a poor E-value should generally not be chosen, even if it is the only one available, since it may well have a wrong structure, leading to the production of a misguided model. A better approach is to submit the primary sequence to fold-recognition servers or, better still, consensus meta-servers which improve upon individual fold-recognition servers by identifying similarities (consensus) among independent predictions.
Often several candidate template structures are identified by these approaches. Although some methods can generate hybrid models with better accuracy from multiple templates, most methods rely on a single template. Therefore, choosing the best template from among the candidates is a key step, and can affect the final accuracy of the structure significantly. This choice is guided by several factors, such as the similarity of the query and template sequences, of their functions, and of the predicted query and observed template secondary structures. Perhaps most important are the coverage of the aligned regions, that is, the fraction of the query sequence structure that can be predicted from the template, and the plausibility of the resulting model. Thus, sometimes several homology models are produced for a single query sequence, with the most likely candidate chosen only in the final step.
It is possible to use the sequence alignment generated by the database search technique as the basis for the subsequent model production; however, more sophisticated approaches have also been explored. One proposal generates an ensemble of stochastically defined pairwise alignments between the target sequence and a single identified template as a means of exploring "alignment space" in regions of sequence with low local similarity. Another approach uses "profile-profile" alignments, which first generate a sequence profile of the target and systematically compare it to the sequence profiles of solved structures; the coarse-graining inherent in the profile construction is thought to reduce noise introduced by sequence drift in nonessential regions of the sequence.
Model generation
Given a template and an alignment, the information contained therein must be used to generate a three-dimensional structural model of the target, represented as a set of Cartesian coordinates for each atom in the protein. Three major classes of model generation methods have been proposed.
Fragment assembly
The original method of homology modeling relied on the assembly of a complete model from conserved structural fragments identified in closely related solved structures. For example, a modeling study of serine proteases in mammals identified a sharp distinction between "core" structural regions conserved in all experimental structures in the class, and variable regions typically located in the loops where the majority of the sequence differences were localized. Thus unsolved proteins could be modeled by first constructing the conserved core and then substituting variable regions from other proteins in the set of solved structures. Current implementations of this method differ mainly in the way they deal with regions that are not conserved or that lack a template. The variable regions are often constructed with the help of a protein fragment library.
Segment matching
The segment-matching method divides the target into a series of short segments, each of which is matched to its own template fitted from the Protein Data Bank. Thus, sequence alignment is done over segments rather than over the entire protein. Selection of the template for each segment is based on sequence similarity, comparisons of alpha carbon coordinates, and predicted steric conflicts arising from the van der Waals radii of the divergent atoms between target and template.
Satisfaction of spatial restraints
The most common current homology modeling method takes its inspiration from calculations required to construct a three-dimensional structure from data generated by NMR spectroscopy. One or more target-template alignments are used to construct a set of geometrical criteria that are then converted to probability density functions for each restraint. Restraints applied to the main protein internal coordinates – protein backbone distances and dihedral angles – serve as the basis for a global optimization procedure that originally used conjugate gradient energy minimization to iteratively refine the positions of all heavy atoms in the protein.
This method had been dramatically expanded to apply specifically to loop modeling, which can be extremely difficult due to the high flexibility of loops in proteins in aqueous solution. A more recent expansion applies the spatial-restraint model to electron density maps derived from cryoelectron microscopy studies, which provide low-resolution information that is not usually itself sufficient to generate atomic-resolution structural models. To address the problem of inaccuracies in initial target-template sequence alignment, an iterative procedure has also been introduced to refine the alignment on the basis of the initial structural fit. The most commonly used software in spatial restraint-based modeling is MODELLER and a database called ModBase has been established for reliable models generated with it.
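As an illustration only, the following is a minimal MODELLER-style script in the spirit of the classic basic-modeling tutorials. The alignment file name, template code and sequence code are placeholders, and the exact class names (environ/automodel versus their capitalized forms) vary between MODELLER versions, so this should be read as a sketch rather than a verified recipe.

```python
# Hypothetical sketch of restraint-based comparative modeling with MODELLER.
# 'target-template.ali', '1abcA' and 'target_seq' are placeholders; class names
# follow older MODELLER tutorials and may differ in current releases.
from modeller import *
from modeller.automodel import *

env = environ()                               # MODELLER environment
a = automodel(env,
              alnfile='target-template.ali',  # target-template alignment (PIR format)
              knowns='1abcA',                 # template structure code
              sequence='target_seq')          # target sequence code
a.starting_model = 1
a.ending_model = 5                            # build five candidate models
a.make()                                      # derive spatial restraints and optimize
```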
Loop modeling
Regions of the target sequence that are not aligned to a template are modeled by loop modeling; they are the most susceptible to major modeling errors and occur with higher frequency when the target and template have low sequence identity. The coordinates of unmatched sections determined by loop modeling programs are generally much less accurate than those obtained from simply copying the coordinates of a known structure, particularly if the loop is longer than 10 residues. The first two sidechain dihedral angles (χ1 and χ2) can usually be estimated within 30° for an accurate backbone structure; however, the later dihedral angles found in longer side chains such as lysine and arginine are notoriously difficult to predict. Moreover, small errors in χ1 (and, to a lesser extent, in χ2) can cause relatively large errors in the positions of the atoms at the terminus of side chain; such atoms often have a functional importance, particularly when located near the active site.
Model assessment
A large number of methods have been developed for selecting a native-like structure from a set of models. Scoring functions have been based on both molecular mechanics energy functions (Lazaridis and Karplus 1999; Petrey and Honig 2000; Feig and Brooks 2002; Felts et al. 2002; Lee and Duan 2004), statistical potentials (Sippl 1995; Melo and Feytmans 1998; Samudrala and Moult 1998; Rojnuckarin and Subramaniam 1999; Lu and Skolnick 2001; Wallqvist et al. 2002; Zhou and Zhou 2002), residue environments (Luthy et al. 1992; Eisenberg et al. 1997; Park et al. 1997; Summa et al. 2005), local side-chain and backbone interactions (Fang and Shortle 2005), orientation-dependent properties (Buchete et al. 2004a,b; Hamelryck 2005), packing estimates (Berglund et al. 2004), solvation energy (Petrey and Honig 2000; McConkey et al. 2003; Wallner and Elofsson 2003; Berglund et al. 2004), hydrogen bonding (Kortemme et al. 2003), and geometric properties (Colovos and Yeates 1993; Kleywegt 2000; Lovell et al. 2003; Mihalek et al. 2003). A number of methods combine different potentials into a global score, usually using a linear combination of terms (Kortemme et al. 2003; Tosatto 2005), or with the help of machine learning techniques, such as neural networks (Wallner and Elofsson 2003) and support vector machines (SVM) (Eramian et al. 2006). Comparisons of different global model quality assessment programs can be found in recent papers by Pettitt et al. (2005), Tosatto (2005), and Eramian et al. (2006).
Less work has been reported on the local quality assessment of models. Local scores are important in the context of modeling because they can give an estimate of the reliability of different regions of a predicted structure. This information can be used in turn to determine which regions should be refined, which should be considered for modeling by multiple templates, and which should be predicted ab initio. Information on local model quality could also be used to reduce the combinatorial problem when considering alternative alignments; for example, by scoring different local models separately, fewer models would have to be built (assuming that the interactions between the separate regions are negligible or can be estimated separately).
One of the most widely used local scoring methods is Verify3D (Luthy et al. 1992; Eisenberg et al. 1997), which combines secondary structure, solvent accessibility, and polarity of residue environments. ProsaII (Sippl 1993), which is based on a combination of a pairwise statistical potential and a solvation term, is also applied extensively in model evaluation. Other methods include the Errat program (Colovos and Yeates 1993), which considers distributions of nonbonded atoms according to atom type and distance, and the energy strain method (Maiorov and Abagyan 1998), which uses differences from average residue energies in different environments to indicate which parts of a protein structure might be problematic. Melo and Feytmans (1998) use an atomic pairwise potential and a surface-based solvation potential (both knowledge-based) to evaluate protein structures. Apart from the energy strain method, which is a semiempirical approach based on the ECEPP3 force field (Nemethy et al. 1992), all of the local methods listed above are based on statistical potentials. A conceptually distinct approach is the ProQres method, which was very recently introduced by Wallner and Elofsson (2006). ProQres is based on a neural network that combines structural features to distinguish correct from incorrect regions. ProQres was shown to outperform earlier methodologies based on statistical approaches (Verify3D, ProsaII, and Errat). The data presented in Wallner and Elofsson's study suggests that their machine-learning approach based on structural features is indeed superior to statistics-based methods. However, the knowledge-based methods examined in their work, Verify3D (Luthy et al. 1992; Eisenberg et al. 1997), Prosa (Sippl 1993), and Errat (Colovos and Yeates 1993), are not based on newer statistical potentials.
Benchmarking
Several large-scale benchmarking efforts have been made to assess the relative quality of various current homology modeling methods. Critical Assessment of Structure Prediction (CASP) is a community-wide prediction experiment that runs every two years during the summer months and challenges prediction teams to submit structural models for a number of sequences whose structures have recently been solved experimentally but have not yet been published. Its partner Critical Assessment of Fully Automated Structure Prediction (CAFASP) has run in parallel with CASP but evaluates only models produced via fully automated servers. Continuously running experiments that do not have prediction 'seasons' focus mainly on benchmarking publicly available webservers. LiveBench and EVA run continuously to assess participating servers' performance in prediction of imminently released structures from the PDB. CASP and CAFASP serve mainly as evaluations of the state of the art in modeling, while the continuous assessments seek to evaluate the model quality that would be obtained by a non-expert user employing publicly available tools.
Accuracy
The accuracy of the structures generated by homology modeling is highly dependent on the sequence identity between target and template. Above 50% sequence identity, models tend to be reliable, with only minor errors in side chain packing and rotameric state, and an overall RMSD between the modeled and the experimental structure falling around 1 Å. This error is comparable to the typical resolution of a structure solved by NMR. In the 30–50% identity range, errors can be more severe and are often located in loops. Below 30% identity, serious errors occur, sometimes resulting in the basic fold being mis-predicted. This low-identity region is often referred to as the "twilight zone" within which homology modeling is extremely difficult, and to which it is possibly less suited than fold recognition methods.
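The Cα RMSD figures quoted above are the root mean square of the distances between corresponding atoms after superposition. A minimal sketch, assuming the two coordinate sets are already superposed and matched residue-for-residue (a real comparison would first have to arrange both):

```python
import numpy as np

def ca_rmsd(coords_model, coords_experiment):
    """RMSD (in the coordinates' units, e.g. angstroms) between matched C-alpha atoms.

    Both arrays have shape (n_residues, 3) and are assumed already superposed.
    """
    diff = np.asarray(coords_model) - np.asarray(coords_experiment)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# Toy example with three matched C-alpha positions.
model = [[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.6, 0.0, 0.0]]
exper = [[0.2, 0.1, 0.0], [3.9, -0.2, 0.1], [7.4, 0.1, -0.1]]
print(ca_rmsd(model, exper))  # ~0.24
```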
At high sequence identities, the primary source of error in homology modeling derives from the choice of the template or templates on which the model is based, while lower identities exhibit serious errors in sequence alignment that inhibit the production of high-quality models. It has been suggested that the major impediment to quality model production is inadequacies in sequence alignment, since "optimal" structural alignments between two proteins of known structure can be used as input to current modeling methods to produce quite accurate reproductions of the original experimental structure.
Attempts have been made to improve the accuracy of homology models built with existing methods by subjecting them to molecular dynamics simulation in an effort to improve their RMSD to the experimental structure. However, current force field parameterizations may not be sufficiently accurate for this task, since homology models used as starting structures for molecular dynamics tend to produce slightly worse structures. Slight improvements have been observed in cases where significant restraints were used during the simulation.
Sources of error
The two most common and large-scale sources of error in homology modeling are poor template selection and inaccuracies in target-template sequence alignment. Controlling for these two factors by using a structural alignment, or a sequence alignment produced on the basis of comparing two solved structures, dramatically reduces the errors in final models; these "gold standard" alignments can be used as input to current modeling methods to produce quite accurate reproductions of the original experimental structure. Results from the most recent CASP experiment suggest that "consensus" methods collecting the results of multiple fold recognition and multiple alignment searches increase the likelihood of identifying the correct template; similarly, the use of multiple templates in the model-building step may be worse than the use of the single correct template but better than the use of a single suboptimal one. Alignment errors may be minimized by the use of a multiple alignment even if only one template is used, and by the iterative refinement of local regions of low similarity.
A lesser source of model errors are errors in the template structure. The PDBREPORT database lists several million, mostly very small but occasionally dramatic, errors in experimental (template) structures that have been deposited in the PDB.
Serious local errors can arise in homology models where an insertion or deletion mutation or a gap in a solved structure results in a region of target sequence for which there is no corresponding template. This problem can be minimized by the use of multiple templates, but the method is complicated by the templates' differing local structures around the gap and by the likelihood that a missing region in one experimental structure is also missing in other structures of the same protein family. Missing regions are most common in loops where high local flexibility increases the difficulty of resolving the region by structure-determination methods. Although some guidance is provided even with a single template by the positioning of the ends of the missing region, the longer the gap, the more difficult it is to model. Loops of up to about 9 residues can be modeled with moderate accuracy in some cases if the local alignment is correct. Larger regions are often modeled individually using ab initio structure prediction techniques, although this approach has met with only isolated success.
The rotameric states of side chains and their internal packing arrangement also present difficulties in homology modeling, even in targets for which the backbone structure is relatively easy to predict. This is partly due to the fact that many side chains in crystal structures are not in their "optimal" rotameric state as a result of energetic factors in the hydrophobic core and in the packing of the individual molecules in a protein crystal. One method of addressing this problem requires searching a rotameric library to identify locally low-energy combinations of packing states. It has been suggested that a major reason that homology modeling is so difficult when target-template sequence identity lies below 30% is that such proteins have broadly similar folds but widely divergent side chain packing arrangements.
Utility
Uses of the structural models include protein–protein interaction prediction, protein–protein docking, molecular docking, and functional annotation of genes identified in an organism's genome. Even low-accuracy homology models can be useful for these purposes, because their inaccuracies tend to be located in the loops on the protein surface, which are normally more variable even between closely related proteins. The functional regions of the protein, especially its active site, tend to be more highly conserved and thus more accurately modeled.
Homology models can also be used to identify subtle differences between related proteins that have not all been solved structurally. For example, the method was used to identify cation binding sites on the Na+/K+ ATPase and to propose hypotheses about different ATPases' binding affinity. Used in conjunction with molecular dynamics simulations, homology models can also generate hypotheses about the kinetics and dynamics of a protein, as in studies of the ion selectivity of a potassium channel. Large-scale automated modeling of all identified protein-coding regions in a genome has been attempted for the yeast Saccharomyces cerevisiae, resulting in nearly 1000 quality models for proteins whose structures had not yet been determined at the time of the study, and identifying novel relationships between 236 yeast proteins and other previously solved structures.
See also
Protein structure prediction
Protein structure prediction software
Protein threading
Molecular replacement
References
Bioinformatics
Protein methods
Protein structure | Homology modeling | [
"Chemistry",
"Engineering",
"Biology"
] | 5,014 | [
"Biochemistry methods",
"Biological engineering",
"Protein methods",
"Protein biochemistry",
"Bioinformatics",
"Structural biology",
"Protein structure"
] |
7,026,379 | https://en.wikipedia.org/wiki/1179%20Mally | 1179 Mally, provisional designation , is an asteroid and long-lost minor planet from the central region of the asteroid belt, approximately 13 kilometers in diameter. Discovered by Max Wolf in 1931, the asteroid was lost until its rediscovery in 1986. The discoverer named it after his daughter-in-law, Mally Wolf.
Discovery and rediscovery
Mally was discovered on 19 March 1931, by German astronomer Max Wolf at Heidelberg Observatory in southwest Germany.
Soon after its initial discovery, it became one of the few well-known lost minor planets for over 55 years. In 1986, Mally was rediscovered by astronomers Lutz Schmadel, Richard Martin West and Hans-Emil Schuster, who remeasured the original discovery plates and computed alternative search ephemerides. This allowed them to find the body very near to its predicted position. In addition, historic photographic plates from the Palomar Sky Survey (1956–1958), the UK Schmidt Telescope (Australia), and the ESO Schmidt Telescope (Chile) confirmed the rediscovery.
Orbit and classification
Mally orbits the Sun in the central main-belt at a distance of 2.2–3.1 AU once every 4 years and 3 months (1,548 days). Its orbit has an eccentricity of 0.17 and an inclination of 9° with respect to the ecliptic. The body's observation arc begins with its official discovery observation at Heidelberg in 1931.
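As a rough consistency check of the quoted orbital elements, Kepler's third law (P in years ≈ a^1.5 with a in AU) can be applied to the midpoint of the perihelion and aphelion distances given above; the small discrepancy simply reflects the rounding of those distances:

    # Perihelion and aphelion distances quoted above, in AU; their average
    # approximates the semi-major axis (the quoted values are rounded).
    q, Q = 2.2, 3.1
    a = (q + Q) / 2.0

    # Kepler's third law for a heliocentric orbit: P[years] = a[AU] ** 1.5
    period_years = a ** 1.5
    period_days = period_years * 365.25

    print(f"a = {a:.2f} AU, P = {period_years:.2f} yr ({period_days:.0f} d)")
    # prints roughly a = 2.65 AU, P = 4.31 yr (~1576 d), consistent with the
    # quoted 4 years and 3 months (1,548 days) given the rounding of the inputs.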
Physical characteristics
Diameter and albedo
According to the surveys carried out by the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Mally measures between 11.20 and 16.60 kilometers in diameter, and its surface has an albedo between 0.059 and 0.097.
The Collaborative Asteroid Lightcurve Link assumes an albedo of 0.10 – a compromise value between the brighter stony (0.20) and darker carbonaceous asteroids (0.057) used for bodies with a semi-major axis between 2.6 and 2.7 AU – and calculates a diameter of 10.7 kilometers based on an absolute magnitude of 12.98.
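The CALL figure follows from the standard relation between diameter, geometric albedo p_V and absolute magnitude H, D = (1329 km / √p_V) × 10^(−H/5); a quick check with the values quoted above (the relation is standard, the check itself is only illustrative):

    import math

    def diameter_km(H, albedo):
        """Standard asteroid size relation: D = 1329 / sqrt(p_V) * 10**(-H/5), in km."""
        return 1329.0 / math.sqrt(albedo) * 10 ** (-H / 5.0)

    print(round(diameter_km(12.98, 0.10), 1))   # ~10.7, matching the CALL value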
Rotation period
In September 2013, a rotational lightcurve of Mally was obtained from photometric observations taken at the Palomar Transient Factory in California. The fragmentary lightcurve gave a longer than average rotation period of 46.6 hours with a brightness variation of 0.08 magnitude. However, the obtained result is poorly rated by CALL ().
Naming
This minor planet was named after Mally Wolf, wife of Franz Wolf and the discoverer's daughter-in-law. The official naming citation was published by Paul Herget in The Names of the Minor Planets in 1955 ().
References
External links
(1179) Mally, at AstDyS, University of Pisa
Asteroid Lightcurve Database (LCDB), query form (info )
Dictionary of Minor Planet Names, Google books
Asteroids and comets rotation curves, CdR – Observatoire de Genève, Raoul Behrend
Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center
Discoveries by Max Wolf
Named minor planets
Recovered astronomical objects | 1179 Mally | [
"Astronomy"
] | 665 | [
"Recovered astronomical objects",
"Astronomical objects"
] |
7,026,790 | https://en.wikipedia.org/wiki/Plutonium%20in%20the%20environment | Since the mid-20th century, plutonium in the environment has been primarily produced by human activity. The first plants to produce plutonium for use in Cold War atomic bombs were the Hanford nuclear site in Washington, and the Mayak nuclear plant, in Chelyabinsk Oblast, Russia. Over a period of four decades, "both released more than 200 million curies of radioactive isotopes into the surrounding environment – twice the amount expelled in the Chernobyl disaster in each instance."
Most plutonium isotopes are short-lived on a geological timescale, though it has been argued that traces of the long-lived 244Pu isotope still exist in nature. This isotope has been found in lunar soil, meteorites, and in the Oklo natural reactor. However, one study on plutonium in marine sediments indicates that atomic bomb fallout accounts for 66% of the 239Pu and 59% of the 240Pu in the English Channel. In contrast, nuclear reprocessing contributes the majority of the 238Pu and 241Pu in the Earth's oceans, whereas nuclear weapons testing is responsible for only 6.5% and 16.5% of these isotopes, respectively.
Sources of plutonium
Plutonium production
Richland, Washington was the first city established to support plutonium production at the nearby Hanford nuclear site, to power the American nuclear weapons arsenals. Ozersk, Russia supported plutonium production to power the Soviet nuclear arsenals at the Mayak nuclear plant. These were the first two cities in the world to produce plutonium for use in cold war atomic bombs.
In the 2013 book, Plutopia: Nuclear Families, Atomic Cities, and the Great Soviet and American Plutonium Disasters, Kate Brown explores the health of affected citizens in both the United States and Russia, and the "slow-motion disasters" that still threaten the environments where the plants are located. According to Brown, the plants at Hanford and Mayak released over 200 million curies of radioactive isotopes into the surrounding environment over four decades, which is twice the amount expelled in the Chernobyl disaster in each instance.
Most of the radioactive contamination over the years from Hanford and Mayak was part of normal operations. Unforeseen accidents did occur, but plant management kept this secret, and the pollution continued unabated. Even today, as pollution threats to health and the environment persist, the government conceals information about the associated risks from the public.
Bomb detonations
About 3.5 tons of plutonium have been released into the environment by atomic bomb tests. While this might sound significant, it has only resulted in a very small dose to the majority of the humans on Earth. Overall the health effects of fission products are far greater than the effects of the actinides released by a nuclear bomb detonation. The plutonium from the fuel of the bomb is converted into a high-fired oxide that is carried high into the air. It slowly falls to earth as global fallout and is not soluble, and as a result it is difficult for this plutonium to be incorporated into an organism if ingested. Much of this plutonium is absorbed into sediments of lakes, rivers and oceans. However, about 66% of the plutonium from a bomb explosion is formed by the neutron capture of 238U; this plutonium is not converted by the bomb into a high fired oxide, as it is formed more slowly. This formed plutonium is more soluble and more harmful as fallout.
Some plutonium can be deposited close to the point of detonation. The glassy trinitite formed by the Trinity bomb has been examined to determine what actinides and other radioisotopes it contained. A 2006 paper reports the levels of long-lived radioisotopes in the trinitite. 152Eu and 154Eu were mainly formed by the neutron activation of the europium in the soil, and the level of radioactivity for these isotopes is highest where the neutron dose to the soil was larger. Some of the 60Co was generated by activation of the cobalt in the soil, but some was also generated by the activation of the cobalt in the 100-foot steel tower on which the bomb stood. This 60Co from the tower would have been scattered over the site, reducing the difference in the soil levels. 133Ba and 241Am were created by the neutron activation of barium and plutonium inside the bomb. The barium was present in the form of the nitrate in the chemical explosives used, while the plutonium was the fissile fuel used.
As the 239Pu/240Pu ratio only changed slightly during the Trinity detonation, it has been commented that this isotope ratio for the majority of atomic bombs (in Japan the 239Pu/240Pu ratio in soil is normally in the range 0.17 to 0.19) is very different from that of the bomb dropped upon Nagasaki.
Bomb safety trials
Plutonium has also been released into the environment in safety trials. In these experiments, nuclear bombs have been subjected to simulated accidents or detonated with an abnormal initiation of their chemical explosives. An abnormal implosion will result in a compression of the plutonium pit, which is less uniform and smaller than the designed compression in the device. In these experiments where no or very little nuclear fission occurs, plutonium metal has been scattered around the test sites. While some of these tests have been done underground, other such tests were conducted in open air. A paper on the radioisotopes left on an island by the French nuclear bomb tests of the 20th century has been published by the International Atomic Energy Agency, and a section of this report deals with plutonium contamination resulting from such tests.
Other related trials were conducted at Maralinga, South Australia where both normal bomb detonations and "safety trials" have been conducted. While the activity from the fission products has decayed away almost totally (as of 2006) the plutonium remains active.
Space
Plutonium can also be introduced into the environment via the reentry of artificial satellites containing atomic batteries. There have been several such incidents, the most prominent being the Apollo 13 mission. The Apollo Lunar Surface Experiments Package carried on the Lunar Module re-entered the atmosphere over the South Pacific. Many atomic batteries have been of the Radioisotope thermoelectric generator (RTG) type. The Plutonium-238 used in RTGs has a half-life of 88 years, as opposed to the plutonium-239 used in nuclear weapons and reactors, which has a half-life of 24,100 years. In April 1964 a SNAP-9A failed to achieve orbit and disintegrated, dispersing roughly of plutonium-238 over all continents. Most plutonium fell in the southern hemisphere. An estimated 6300 GBq or 2100 man-Sv of radiation was released and led to NASA's development of solar photovoltaic energy technology.
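The practical difference between the two half-lives quoted above can be illustrated with the usual exponential-decay law, N(t)/N0 = 2^(−t/T½) (a sketch; only the half-life values from the text are used):

    def fraction_remaining(years, half_life_years):
        """Exponential decay: N(t)/N0 = 2 ** (-t / half_life)."""
        return 2 ** (-years / half_life_years)

    for t in (88, 500, 1000):
        pu238 = fraction_remaining(t, 88)      # Pu-238 (RTG fuel), half-life 88 yr
        pu239 = fraction_remaining(t, 24100)   # Pu-239 (weapons/reactors), half-life 24,100 yr
        print(f"after {t:>4} yr: Pu-238 {pu238:.3f}, Pu-239 {pu239:.4f}")
    # After a few centuries most of the Pu-238 has decayed away, while the
    # Pu-239 inventory is still close to its original value.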
Chain reactions do not occur inside RTGs, so a nuclear meltdown is impossible. In fact, some RTGs are designed so that fission does not occur at all; rather, forms of radioactive decay which cannot trigger other radioactive decays are used instead. As a result, the fuel in an RTG is consumed much more slowly and much less power is produced. RTGs are still a potential source of radioactive contamination: if the container holding the fuel leaks, the radioactive material will contaminate the environment. The main concern is that if an accident were to occur during launch or a subsequent passage of a spacecraft close to Earth, harmful material could be released into the atmosphere. However, this event is extremely unlikely with current RTG cask designs.
In order to decrease the risk of the radioactive material being released, the fuel is typically stored in individual modular units with their own heat shielding. They are surrounded by a layer of iridium metal and encased in high-strength graphite blocks. These two materials are corrosion and heat-resistant. Surrounding the graphite blocks is an aeroshell, designed to protect the entire assembly against the heat of reentering the Earth's atmosphere. The plutonium fuel is also stored in a ceramic form that is heat-resistant, decreasing the risk of vaporization and aerosolization. The ceramic is also highly insoluble.
The US Department of Energy has conducted seawater tests and determined that the graphite casing, which was designed to withstand reentry, is stable and no release of plutonium should occur. Subsequent investigations have found no increase in the natural background radiation in the area. The Apollo 13 accident represents an extreme scenario due to the high re-entry velocities of the craft returning from cislunar space. This accident has served to validate the design of later-generation RTGs as highly safe.
Nuclear fuel cycle
Plutonium has been released into the environment in aqueous solution from nuclear reprocessing and uranium enrichment plants. The chemistry of this plutonium is different from that of the metal oxides formed from nuclear bomb detonations.
One example of a site where plutonium entered the soil is Rocky Flats where in the recent past XANES (X-ray spectroscopy) has been used to determine the chemical nature of the plutonium in the soil. The XANES was used to determine the oxidation state of the plutonium, while EXAFS was used to investigate the structure of the plutonium compound present in the soil and concrete.
Chernobyl
Because plutonium oxide is involatile, most of the plutonium in the reactor was not released during the fire. However, the fraction that was released can be measured. V.I. Yoschenko et al. reported that grass and forest fires can make the caesium, strontium and plutonium become mobile in the air again.
Fukushima
The ongoing crisis at this site includes spent fuel pools on the upper floors that are exposed to the elements and contain MOX fuel and plutonium products. The Japanese government taskforce has asked for submissions to the International Research Institute for Nuclear Decommissioning in regard to the ongoing contaminated water issues.
Nuclear crime
There have been 18 incidents concerning theft or loss of highly enriched uranium (HEU) and plutonium confirmed by the IAEA.
One case exists of a German man who attempted to poison his ex-wife with plutonium stolen from WAK (Wiederaufbereitungsanlage Karlsruhe), a small scale reprocessing plant where he worked. He did not steal a large amount of plutonium, just rags used for wiping surfaces and a small amount of liquid waste. The man was sent to prison for his crime. At least two other people were contaminated by the plutonium. Two flats in Rhineland-Palatinate were also contaminated. These were later cleaned at a cost of two million euros.
Environmental chemistry
Overview
Plutonium, like other actinides, readily forms a dioxide plutonyl core (PuO2^2+). In the environment, this plutonyl core readily complexes with carbonate as well as other oxygen moieties (OH−, NO2−, NO3−, and SO4^2−) to form charged complexes which can be readily mobile with low affinities to soil:
PuO2(CO3)1^2−
PuO2(CO3)2^4−
PuO2(CO3)3^6−
PuO2 formed from neutralizing highly acidic nitric acid solutions tends to form polymeric PuO2 which is resistant to complexation. Plutonium also readily shifts valences between the +3, +4, +5 and +6 states. It is common for some fraction of plutonium in solution to exist in all of these states in equilibrium.
Binding to soil
Plutonium is known to bind to soil particles very strongly (see above for an X-ray spectroscopic study of plutonium in soil and concrete). While caesium has very different chemistry to the actinides, it is well known that both caesium and many of the actinides bind strongly to the minerals in soil. Hence it has been possible to use 134Cs labeled soil to study the migration of Pu and Cs in soils. It has been shown that colloidal transport processes control the migration of Cs (and will control the migration of Pu) in the soil at the Waste Isolation Pilot Plant according to R.D. Whicker and S.A. Ibrahim. J.D. Chaplin et al. recently reported advances in the Diffusive gradients in thin films technique, which have provided a method to measure labile bioavailable Plutonium in soils, as well as in freshwater and seawater.
Microbiological chemistry
Mary Neu (at Los Alamos in the USA) has done some work which suggests that bacteria can accumulate plutonium because the iron transport systems used by the bacteria also function as plutonium transport systems.
Biology
Plutonium ingested by or injected into humans is transported by the transferrin-based iron(III) transport system and then stored in the liver in the iron store (ferritin). After an exposure to plutonium it is important to rapidly inject the subject with a chelating agent such as the calcium complex of DTPA. This antidote is useful for a single exposure, such as that which would occur if a glove box worker were to cut his or her hand on a plutonium-contaminated object. The calcium complex has faster metal-binding kinetics than the zinc complex, but if the calcium complex is used for a long time it tends to remove important minerals from the person. The zinc complex is less able to cause these effects.
Plutonium that is inhaled by humans lodges in the lungs and is slowly translocated to the lymph nodes. Inhaled plutonium has been shown to lead to lung cancer in experimental animals.
See also
Actinides in the environment
References
Plutonium
Element toxicology
Radioactive contamination | Plutonium in the environment | [
"Chemistry",
"Technology"
] | 2,813 | [
"Biology and pharmacology of chemical elements",
"Element toxicology",
"Environmental impact of nuclear power",
"Radioactive contamination"
] |
7,026,929 | https://en.wikipedia.org/wiki/Acetoxy%20group | In organic chemistry, the acetoxy group (abbr. AcO or OAc; IUPAC name: acetyloxy), is a functional group with the formula and the structure . As the -oxy suffix implies, it differs from the acetyl group () by the presence of an additional oxygen atom. The name acetoxy is the short form of acetyl-oxy.
Functionality
An acetoxy group may be used as a protection for an alcohol functionality in a synthetic route although the protecting group itself is called an acetyl group.
Alcohol protection
There are several options of introducing an acetoxy functionality in a molecule from an alcohol (in effect protecting the alcohol by acetylation):
Acetyl halide, such as acetyl chloride in the presence of a base like triethylamine
Activated ester form of acetic acid, such as a N-hydroxysuccinimide ester, although this is not advisable due to higher costs and difficulties.
Acetic anhydride in the presence of base with a catalyst such as pyridine with a bit of DMAP added.
An alcohol is not a particularly strong nucleophile and, when present, more powerful nucleophiles like amines will react with the above-mentioned reagents in preference to the alcohol.
Alcohol deprotection
For deprotection (regeneration of the alcohol)
Aqueous base (pH >9)
Aqueous acid (pH <2), may have to be heated
Anhydrous base such as sodium methoxide in methanol. Very useful when a methyl ester of a carboxylic acid is also present in the molecule, as it will not hydrolyze it like an aqueous base would. (Same also holds with an ethoxide in ethanol with ethyl esters)
See also
Acetyl group
Acetylation
References
Functional groups | Acetoxy group | [
"Chemistry"
] | 405 | [
"Functional groups"
] |
10,847,346 | https://en.wikipedia.org/wiki/Decentralized%20object%20location%20and%20routing | In computer science, decentralized object location and routing (DOLR) is a scalable, location-independent routing technology. It uses location-independent names, or aliases, for each node in the network, and it is an example of peer-to-peer networking that uses a structured-overlay system called Tapestry. It was designed to facilitate large internet applications with millions of users physically distributed around the globe and using a variety of wireless and wired interfaces, specifically in situations where a traditional unstructured network of popular Domain Name System servers would fail to perform well.
References
Routing | Decentralized object location and routing | [
"Technology"
] | 120 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
10,848,348 | https://en.wikipedia.org/wiki/Convolutindole%20A | Convolutindole A (2,4,6-tribromo-1,7-dimethoxy-N,N-dimethyltryptamine) is a brominated tryptamine alkaloid that was first identified in 2001 in Amathia convoluta, a marine bryozoan. Bryozoans are aquatic invertebrates that grow in colonies and may resemble corals.
Chemistry
Convolutindole A is the 2,4,6-tribromo-1,7-dimethoxy derivative of DMT, a hallucinogen that occurs naturally in many plants and animals. Convolutindole A is chemically related to 5-bromo-DMT, which also occurs in many marine invertebrates.
Until the discovery of convolutindole A, the 1-methoxyindole moiety was unknown in the marine world. 1-Methoxyindoles, such as lespedamine, were previously only known to occur in plants of the bean and mustard families.
Biological activity
This chemical was tested for its ability to kill parasitic nematodes. It was found to be more effective than levamisole - a synthetic drug used to kill parasitic worms and to treat colon cancer.
References
Narkowicz, C. K.; Blackman, A. J., (June 2001). Abstracts of Papers; 10th International Symposium on Marine Natural Products: Nago, Okinawa, Abstract OR1.
Tryptamine alkaloids
Halogen-containing alkaloids
Bromoarenes
Indole ethers at the benzene ring
Hydroxylamines | Convolutindole A | [
"Chemistry"
] | 337 | [
"Tryptamine alkaloids",
"Halogen-containing alkaloids",
"Hydroxylamines",
"Reducing agents",
"Alkaloids by chemical classification"
] |
10,848,594 | https://en.wikipedia.org/wiki/Jordan%20Phosphate%20Mines | Jordan Phosphate Mines (JPMC) is a mining company based in Amman, Jordan. The company operates 3 mining facilities in Jordan and a chemical manufacturing complex in Aqaba. The company is listed on the Amman Stock Exchange's ASE Weighted Index as "JOPH".
Background
Jordan Phosphate Mines was founded in 1949.
In 1986, JPMC bought the Jordan Fertilizer Company. JPMC already controlled 25% of the fertilizing company before its full acquisition. Jordan Fertilizer was operating a chemical and fertilizer manufacturing complex in Aqaba, which became the property of JPMC.
In 2007, JPMC signed a memorandum of understanding with the Indian Farmers Fertiliser Cooperative (IFFCO), India's largest fertilizer manufacturer. In 2013, the partnership was renewed for one year, committing JPMC to deliver 2 million tonnes of phosphate in a year.
In the first semester of 2014, JPMC recorded a net loss of $9.4 million, mainly due to lower commodity prices and higher fuel costs.
In April 2016, JPMC raised JD82.5 million. In February 2017, JPMC signed a memorandum of understanding with the government of Bangladesh to provide the country with 270,000 metric tonnes of phosphate and phosphoric acid within 3 years for $280 million. In May 2018, IFFCO and Indian Potash Limited (IPL) bought a 37% share in JPMC from the Brunei Investment Agency for $130 million. 60% of JPMC's production was already exported to India by the time of this purchase.
Operations
Mining
According to the company's website, more than 60% of the area of Jordan has phosphate deposits at minable depth. JPMC is the only phosphate-mining company in Jordan. JPMC's three mining facilities are located in:
Russeifa, north of Amman, started in 1935;
Al Hassa and Al-Abiad, south of Amman, started in 1962 and 1979;
Eshidiya, northeast of Aqaba, started in 1989.
Manufacturing
JPMC's al-Aqaba complex produces fertilizer and chemicals, including:
Phosphoric acid - used to make fertilizers, detergents, pharmaceuticals, steel and cola;
Diammonium phosphate - DAP fertilizer;
Sulphuric acid - many uses;
Aluminium fluoride - used as a catalyst in the manufacture of aluminium and magnesium; used as a ceramic glaze.
Partnerships and joint-ventures
JPMC runs the Indo-Jordan Chemicals, co-owned with Southern Petrochemical Industries of India and The Arab Investment Company of Saudi Arabia.
JPMC runs the Nippon Jordan Fertilizer Company, a joint venture with the Arab Potash Company and a consortium of Japanese companies (ZEN-NOH, Mitsubishi Corporation, Mitsubishi Chemical, Asahi Kasei).
JPMC runs the PT Petro Jordan Abadi, a joint venture with the Indonesian company Petrokimia Gresik.
Incidents
In June 2013, the uncle of the King of Jordan, Walid Kurdi, was found guilty of illegally profiting from his position as CEO of JPMC. The court fined him JD284 million. In August 2017, JPMC filed for an arrest warrant with Interpol to extradite Walid Kurdi, who had fled the country and was living as a fugitive.
References
External links
Official website
Mining companies of Jordan
Chemical companies
Phosphate mining
Companies based in Amman
Companies listed on the Amman Stock Exchange
Companies in the ASE Market Capitalization Weighted Index | Jordan Phosphate Mines | [
"Chemistry"
] | 722 | [
"Chemical companies"
] |
10,848,810 | https://en.wikipedia.org/wiki/Provider%20edge%20router | A provider edge router (PE router) is a router between one network service provider's area and areas administered by other network providers. A network provider is usually an Internet service provider as well (or only that).
The term PE router covers equipment capable of a broad range of routing protocols, notably:
Border Gateway Protocol (BGP) (PE to PE or PE to CE communication)
Open Shortest Path First (OSPF) (PE to CE router communication)
Multiprotocol Label Switching (MPLS) (PE to P router communication)
PE routers do not need to be aware of what kind of traffic is coming from the provider's network, as opposed to a P router that functions as a transit within the service provider's network. However, some PE routers also do labelling.
See also
Customer edge router
Provider router
References
Routers (computing)
MPLS networking | Provider edge router | [
"Technology"
] | 190 | [
"Computing stubs",
"Computer network stubs"
] |
10,848,863 | https://en.wikipedia.org/wiki/Customer%20edge%20router | The customer edge router (CE) generally refers to the router at the customer premises that is interconnected with the provider edge router of a service provider's IP/MPLS network.
The CE router may peer with the provider edge router (PE) and exchange routes with the corresponding VRF inside the PE for L3VPN services, or it may be connected to utilise an L2VPN service provided by the provider. The routing could be static or use a dynamic protocol (an interior gateway protocol like OSPF or an exterior gateway protocol like BGP).
The customer edge router can either be owned by the customer or service provider.
Residential broadband
In the case of residential broadband internet services, the service provider, or ISP, will often use MPLS internally to transport the broadband customer's layer 2 traffic over the MPLS core, back to their BNGs. In this case, the customer's router is typically referred to as a CPE, which is usually a layer 3 routing device, although some network engineers may also label it a CE router.
See also
Provider edge router
Provider router
References
Routers (computing)
MPLS networking | Customer edge router | [
"Technology"
] | 248 | [
"Computing stubs",
"Computer network stubs"
] |
10,849,137 | https://en.wikipedia.org/wiki/5-Bromo-DMT | 5-Bromo-DMT (5-bromo-N,N-dimethyltryptamine) is a psychedelic brominated indole alkaloid found in the sponges Smenospongia aurea and Smenospongia echina, as well as in Verongula rigida (0.00142% dry weight) alongside 5,6-Dibromo-DMT (0.35% dry weight) and seven other alkaloids. It is the 5-bromo derivative of DMT, a psychedelic found in many plants and animals.
5-Bromo-DMT has a pEC50 value of 5.51 for the 5-HT2A receptor.
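Since pEC50 is defined as −log10 of the EC50 expressed in mol/L, the quoted value corresponds to an EC50 in the low-micromolar range (a simple unit conversion, not an additional experimental result):

    pEC50 = 5.51
    EC50_molar = 10 ** (-pEC50)                        # EC50 in mol/L
    print(f"EC50 = {EC50_molar * 1e6:.1f} micromolar")  # about 3.1 micromolar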
Animal studies on 5-Bromo-DMT showed that it produces effects suggestive of sedative and antidepressant activity and caused significant reduction of locomotor activity in the rodent FST model.
5-Bromo-DMT was reported to be psychoactive at 20–50 mg via vaporization with mild psychedelic-like activity.
Legality
5-Bromo-DMT is specifically listed as a controlled drug in Singapore.
Related compounds
5-Chloro-αMT
5-Fluoro-AMT
5-Fluoro-DMT
5-Nitro-DMT
Convolutindole A
Desformylflustrabromine
Plakohypaphorine
References
Biological sources of psychoactive drugs
Bromoarenes
Halogen-containing alkaloids
Psychedelic tryptamines
Serotonin receptor agonists
Tryptamine alkaloids | 5-Bromo-DMT | [
"Chemistry"
] | 330 | [
"Halogen-containing alkaloids",
"Tryptamine alkaloids",
"Alkaloids by chemical classification"
] |
10,849,236 | https://en.wikipedia.org/wiki/Antibody-dependent%20enhancement | Antibody-dependent enhancement (ADE), sometimes less precisely called immune enhancement or disease enhancement, is a phenomenon in which binding of a virus to suboptimal antibodies enhances its entry into host cells, followed by its replication. The suboptimal antibodies can result from natural infection or from vaccination. ADE may cause enhanced respiratory disease, but is not limited to respiratory disease. It has been observed in HIV, RSV, and Dengue virus and is monitored for in vaccine development.
Technical description
In ADE, antiviral antibodies promote viral infection of target immune cells by exploiting the phagocytic FcγR or complement pathway. After interaction with a virus, the antibodies bind Fc receptors (FcR) expressed on certain immune cells or complement proteins. FcγRs bind antibodies via their fragment crystallizable region (Fc).
The process of phagocytosis is accompanied by virus degradation, but if the virus is not neutralized (either due to low affinity binding or targeting to a non-neutralizing epitope), antibody binding may result in virus escape and, therefore, more severe infection. Thus, phagocytosis can cause viral replication and the subsequent death of immune cells. Essentially, the virus “deceives” the process of phagocytosis of immune cells and uses the host's antibodies as a Trojan horse.
ADE may occur because of the non-neutralizing characteristic of an antibody, which binds viral epitopes other than those involved in host-cell attachment and entry. It may also happen when antibodies are present at sub-neutralizing concentrations (yielding occupancies on viral epitopes below the threshold for neutralization), or when the strength of antibody-antigen interaction is below a certain threshold. This phenomenon can lead to increased viral infectivity and virulence.
ADE can occur during the development of a primary or secondary viral infection, as well as with a virus challenge after vaccination. It has been observed mainly with positive-strand RNA viruses, including flaviviruses such as dengue, yellow fever, and Zika; alpha- and betacoronaviruses; orthomyxoviruses such as influenza; retroviruses such as HIV; and orthopneumoviruses such as RSV. The viruses that cause it frequently share common features such as antigenic diversity, replication ability, or ability to establish persistence in immune cells.
The mechanism that involves phagocytosis of immune complexes via the FcγRII/CD32 receptor is better understood compared to the complement receptor pathway. Cells that express this receptor are represented by monocytes, macrophages, and some categories of dendritic cells and B-cells. ADE is mainly mediated by IgG antibodies, but IgM and IgA antibodies have also been shown to trigger it.
Coronavirus
COVID-19
Prior to the COVID-19 pandemic, ADE was observed in animal studies of laboratory rodents with vaccines for SARS-CoV, the virus that causes severe acute respiratory syndrome (SARS). No such incidents have been observed with vaccines for COVID-19 in trials with nonhuman primates, in clinical trials with humans, or following the widespread use of approved vaccines.
Influenza
Prior receipt of 2008–09 TIV (Trivalent Inactivated Influenza Vaccine) was associated with an increased risk of medically attended pH1N1 illness during the spring-summer 2009 in Canada. The occurrence of bias (selection, information) or confounding cannot be ruled out. Further experimental and epidemiological assessment is warranted. Possible biological mechanisms and immunoepidemiologic implications are considered.
Natural infection and the attenuated vaccine induce antibodies that enhance the uptake of the homologous virus and of H1N1 virus isolated several years later, demonstrating that a primary influenza A virus infection results in the induction of infection-enhancing antibodies.
ADE was suspected in infections with influenza A virus subtype H7N9, but knowledge is limited.
Dengue
The most widely known ADE example occurs with dengue virus. Dengue is a single-stranded positive-polarity RNA virus of the family Flaviviridae. It causes disease of varying severity in humans, from dengue fever (DF), which is usually self-limited, to dengue hemorrhagic fever and dengue shock syndrome, either of which may be life-threatening. It is estimated that as many as 390 million individuals contract dengue annually.
ADE may follow when a person who has previously been infected with one serotype becomes infected months or years later with a different serotype, producing higher viremia than in first-time infections. Accordingly, while primary (first) infections cause mostly minor disease (dengue fever) in children, re-infection is more likely to be associated with dengue hemorrhagic fever and/or dengue shock syndrome in both children and adults.
Dengue encompasses four antigenically different serotypes (dengue virus 1–4). In 2013 a fifth serotype was reported. Infection induces the production of neutralizing homotypic immunoglobulin G (IgG) antibodies that provide lifelong immunity against the infecting serotype. Infection with dengue virus also produces some degree of cross-protective immunity against the other three serotypes. Neutralizing heterotypic (cross-reactive) IgG antibodies are responsible for this cross-protective immunity, which typically persists for a period of months to a few years. These heterotypic titers decrease over long time periods (4 to 20 years). While heterotypic titers decrease, homotypic IgG antibody titers increase over long time periods. This could be due to the preferential survival of long-lived memory B cells producing homotypic antibodies.
In addition to neutralizing heterotypic antibodies, an infection can also induce heterotypic antibodies that neutralize the virus only partially or not at all. The production of such cross-reactive, but non-neutralizing antibodies could enable severe secondary infections. By binding to but not neutralizing the virus, these antibodies cause it to behave as a "trojan horse", where it is delivered into the wrong compartment of dendritic cells that have ingested the virus for destruction. Once inside the white blood cell, the virus replicates undetected, eventually generating high virus titers and severe disease.
A study conducted by Modhiran et al. attempted to explain how non-neutralizing antibodies down-regulate the immune response in the host cell through the Toll-like receptor signaling pathway. Toll-like receptors are known to recognize extra- and intracellular viral particles and to be a major basis of the cytokines' production. In vitro experiments showed that the inflammatory cytokines and type 1 interferon production were reduced when the ADE-dengue virus complex bound to the Fc receptor of THP-1 cells. This can be explained by both a decrease of Toll-like receptor production and a modification of its signaling pathway. On the one hand, an unknown protein induced by the stimulated Fc receptor reduces Toll-like receptor transcription and translation, which reduces the capacity of the cell to detect viral proteins. On the other hand, many proteins (TRIF, TRAF6, TRAM, TIRAP, IKKα, TAB1, TAB2, NF-κB complex) involved in the Toll-like receptor signaling pathway are down-regulated, which led to a decrease in cytokine production. Two of them, TRIF and TRAF6, are respectively down-regulated by 2 proteins SARM and TANK up-regulated by the stimulated Fc receptors.
One example occurred in Cuba, lasting from 1977 to 1979. The infecting serotype was dengue virus-1. This epidemic was followed by outbreaks in 1981 and 1997. In those outbreaks, dengue virus-2 was the infecting serotype. 205 cases of dengue hemorrhagic fever and dengue shock syndrome occurred during the 1997 outbreak, all in people older than 15 years. All but three of these cases were demonstrated to have been previously infected by dengue virus-1 during the first outbreak. Furthermore, people with secondary infections with dengue virus-2 in 1997 had a 3-4 fold increased probability of developing severe disease compared with those with secondary infections with dengue virus-2 in 1981. This scenario can be explained by the presence of sufficient neutralizing heterotypic IgG antibodies in 1981, whose titers had decreased by 1997 to the point where they no longer provided significant cross-protective immunity.
HIV-1
ADE of infection has also been reported in HIV. As with dengue virus, non-neutralizing levels of antibodies have been found to enhance the viral infection through interactions of the complement system and receptors. The increase in infection has been reported to be over 350-fold, which is comparable to ADE in other viruses like dengue virus. ADE in HIV can be complement-mediated or Fc receptor-mediated. Complement in the presence of HIV-1-positive sera has been found to enhance the infection of the MT-2 T-cell line. Fc receptor-mediated enhancement was reported when sera from HIV-1-positive guinea pigs enhanced the infection of peripheral blood mononuclear cells without the presence of any complement. Complement component receptors CR2, CR3 and CR4 have been found to mediate this complement-mediated enhancement of infection. The infection of HIV-1 leads to activation of complement. Fragments of these complement proteins can assist viruses with infection by facilitating viral interactions with host cells that express complement receptors. The deposition of complement on the virus brings the gp120 protein close to CD4 molecules on the surface of the cells, thus leading to facilitated viral entry. Viruses pre-exposed to non-neutralizing levels of complement have also been found to show enhanced infection of interdigitating dendritic cells. Opsonized viruses have not only shown enhanced entry but also favorable signaling cascades for HIV replication in interdigitating dendritic cells.
HIV-1 has also shown enhancement of infection in HT-29 cells when the viruses were pre-opsonized with complement components C3 and C9 in seminal fluid. This enhanced rate of infection was almost 2 times greater than infection of HT-29 cells with the virus alone. Subramanian et al. reported that almost 72% of serum samples from 39 HIV-positive individuals contained complement known to enhance the infection. They also suggested that serum containing neutralizing antibodies or antibody-dependent cellular cytotoxicity-mediating antibodies can also contain infection-enhancing antibodies. The balance between the neutralizing antibodies and infection-enhancing antibodies changes as the disease progresses. During advanced stages of the disease, the proportion of infection-enhancing antibodies is generally higher than that of neutralizing antibodies. Increases in viral protein synthesis and RNA production have been reported to occur during the complement-mediated enhancement of infection. Cells that are challenged with non-neutralizing levels of complement have been found to have accelerated release of reverse transcriptase and viral progeny. The interaction of anti-HIV antibodies with viruses exposed to non-neutralizing levels of complement also aids in binding of the virus to erythrocytes, which can lead to more efficient delivery of viruses to immune-compromised organs.
ADE in HIV has raised questions about the risk of infections to volunteers who have taken sub-neutralizing levels of vaccine just like any other viruses that exhibit ADE. Gilbert et al., in 2005 reported that there was no ADE of infection when they used the rgp120 vaccine in phase 1 and 2 trials. It has been emphasized that much research needs to be done in the field of the immune response to HIV-1, information from these studies can be used to produce a more effective vaccine.
Mechanism
Interaction of a virus with antibodies must prevent the virus from attaching to the host cell entry receptors. However, instead of preventing infection of the host cell, this process can facilitate viral infection of immune cells, causing ADE. After binding the virus, the antibody interacts with Fc or complement receptors expressed on certain immune cells. These receptors promote virus-antibody internalization by the immune cells, which should be followed by the virus destruction. However, the virus might escape the antibody complex and start its replication cycle inside the immune cell avoiding the degradation.
This happens if the virus is bound to a low-affinity antibody.
Different virus serotypes
There are several possibilities to explain the phenomenon of enhancing intracellular virus survival:
1) Antibodies against a virus of one serotype bind to a virus of a different serotype. The binding is meant to prevent the virus from attaching to the host cell, but the virus-antibody complex also binds to the Fc-region antibody receptor (FcγR) on the immune cell. The cell internalizes the virus for programmed destruction, but the virus avoids it and starts its replication cycle instead.
2) Antibodies against a virus of one serotype bind to a virus of a different serotype, activating the classical pathway of the complement system. The complement cascade binds the C1q complex to the virus surface protein via the antibodies, and C1q in turn binds the C1q receptor found on cells, bringing the virus and the cell close enough for a specific virus receptor to bind the virus, beginning infection. This mechanism has been shown for Ebola virus in vitro and some flaviviruses in vivo.
Conclusion
When an antibody to a virus is unable to neutralize the virus, it forms sub-neutralizing virus-antibody complexes. Upon phagocytosis by macrophages or other immune cells, the complex may release the virus due to poor binding with the antibody. This happens during acidification and eventual fusion of the phagosome with lysosomes. The escaped virus begins its replication cycle within the cell, triggering ADE.
See also
Original antigenic sin
Vaccine adverse event
Other ways in which antibodies can (unusually) make an infection worse instead of better
Blocking antibody, which can be either good or bad, depending on circumstances
Hook effect, most relevant to in vitro tests but known to have some in vivo relevances
References
Immune system | Antibody-dependent enhancement | [
"Biology"
] | 2,914 | [
"Immune system",
"Organ systems"
] |
10,849,414 | https://en.wikipedia.org/wiki/Lever%20rule | In chemistry, the lever rule is a formula used to determine the mole fraction (xi) or the mass fraction (wi) of each phase of a binary equilibrium phase diagram. It can be used to determine the fraction of liquid and solid phases for a given binary composition and temperature that is between the liquidus and solidus line.
In an alloy or a mixture with two phases, α and β, which themselves contain two elements, A and B, the lever rule states that the mass fraction of the α phase is
w^α = (w_B^β − w_B) / (w_B^β − w_B^α)
where
w_B^α is the mass fraction of element B in the α phase
w_B^β is the mass fraction of element B in the β phase
w_B is the mass fraction of element B in the entire alloy or mixture
all at some fixed temperature or pressure.
Derivation
Suppose an alloy at an equilibrium temperature T consists of a mass fraction w_B of element B. Suppose also that at temperature T the alloy consists of two phases, α and β, for which the α phase consists of w_B^α, and the β phase consists of w_B^β. Let the mass of the α phase in the alloy be m^α, so that the mass of the β phase is m − m^α, where m is the total mass of the alloy.
By definition, then, the mass of element B in the α phase is m^α w_B^α, while the mass of element B in the β phase is (m − m^α) w_B^β. Together these two quantities sum to the total mass of element B in the alloy, which is given by m w_B. Therefore,
m^α w_B^α + (m − m^α) w_B^β = m w_B
By rearranging, one finds that
w^α = m^α / m = (w_B^β − w_B) / (w_B^β − w_B^α)
This final fraction is the mass fraction of the α phase in the alloy.
Calculations
Binary phase diagrams
Before any calculations can be made, a tie line is drawn on the phase diagram to determine the mass fraction of each element; on the phase diagram to the right it is line segment LS. This tie line is drawn horizontally at the composition's temperature from one phase to another (here the liquid to the solid). The mass fraction of element B at the liquidus is given by wBl (represented as wl in this diagram) and the mass fraction of element B at the solidus is given by wBs (represented as ws in this diagram). The mass fraction of solid and liquid can then be calculated using the following lever rule equations:
mass fraction of solid = (wB − wBl) / (wBs − wBl)
mass fraction of liquid = (wBs − wB) / (wBs − wBl)
where wB is the mass fraction of element B for the given composition (represented as wo in this diagram).
The numerator of each equation is the length of the opposite lever arm: that is, if you want the mass fraction of solid, take the difference between the liquid composition and the overall composition. The denominator is the overall length of the lever arm, i.e. the difference between the solid and liquid compositions. To see why, visualise the case where wo approaches wl: the fraction of liquid then approaches one.
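The two lever rule equations are straightforward to apply in code; in the following sketch the tie-line and overall compositions are invented purely for illustration:

    def lever_rule(w_overall, w_liquidus, w_solidus):
        """Return (mass fraction of liquid, mass fraction of solid) for an overall
        composition lying on a tie line between the liquidus and solidus
        compositions (all values are mass fractions of element B)."""
        span = w_solidus - w_liquidus
        frac_solid = (w_overall - w_liquidus) / span
        frac_liquid = (w_solidus - w_overall) / span
        return frac_liquid, frac_solid

    # Hypothetical tie line: liquid at 20 wt% B, solid at 50 wt% B, overall 30 wt% B
    liq, sol = lever_rule(0.30, 0.20, 0.50)
    print(f"liquid {liq:.2f}, solid {sol:.2f}")   # liquid 0.67, solid 0.33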
Eutectic phase diagrams
There is now more than one two-phase region. The tie line is drawn from the solid alpha phase to the liquid, and by dropping vertical lines down at these points the composition of each phase is read directly off the graph, that is, the mass fraction of the element plotted on the x-axis. The same equations can be used to find the mass fraction of alloy in each of the phases, i.e. wl is the mass fraction of the whole sample in the liquid phase.
References
Metallurgy
Phase transitions
Materials science
Charts
Diagrams | Lever rule | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 670 | [
"Physical phenomena",
"Phase transitions",
"Applied and interdisciplinary physics",
"Metallurgy",
"Phases of matter",
"Critical phenomena",
"Materials science",
"nan",
"Statistical mechanics",
"Matter"
] |
10,849,824 | https://en.wikipedia.org/wiki/Label%20switching | Label switching is a technique of network relaying to overcome the problems perceived by traditional IP-table switching (also known as traditional layer 3 hop-by-hop routing). Here, the switching of network packets occurs at a lower level, namely the data link layer rather than the traditional network layer.
Each packet is assigned a label number and the switching takes place after examination of the label assigned to each packet. The switching is much faster than IP-routing. New technologies such as Multiprotocol Label Switching (MPLS) use label switching. The established ATM protocol also uses label switching at its core.
According to RFC 2475 (An Architecture for Differentiated Services, December 1998):
"Examples of the label switching (or virtual circuit) model include Frame Relay, ATM, and MPLS. In this model, path forwarding state and traffic management or quality of service (QoS) state is established for traffic streams on each hop along a network path. Traffic aggregates of varying granularity are associated with a label-switched path at an ingress node, and packets/cells within each label-switched path are marked with a forwarding label that is used to look up the next-hop node, the per-hop forwarding behavior, and the replacement label at each hop. This model permits finer granularity resource allocation to traffic streams, since label values are not globally significant but are only significant on a single link; therefore resources can be reserved for the aggregate of packets/cells received on a link with a particular label, and the label switching semantics govern the next-hop selection, allowing a traffic stream to follow a specially engineered path through the network."
A related topic is multilayer switching, which discusses silicon-based wire-speed routing devices that examine not only network-layer packet information but also layer 4 (transport) and layer-7 (application) information.
References
See also
Virtual circuit
Provider edge router
Computer networking | Label switching | [
"Technology",
"Engineering"
] | 390 | [
"Computer networking",
"Computer engineering",
"Computer network stubs",
"Computer science",
"Computing stubs"
] |
10,850,248 | https://en.wikipedia.org/wiki/Site%20and%20services | Site and services is an approach to bringing shelter within the economic reach of the poor.
History
The strategy was developed in recognition of the fact that the vast majority of low-income families in the world build their own shelter, which lacks basic hygiene, access and electricity. The approach first appeared on a large scale in Madras (now Chennai) in 1972 when the World Bank engaged Christopher Charles Benninger to advise the Madras Metropolitan Development Authority (MMDA) on their housing sector investments. The approach links the user group's ability to pay with land prices and the costs of rudimentary and upgradable infrastructure. The fundamental idea is to market plots with essential infrastructure at market prices, to avoid the resale of subsidized housing, directed at low-income groups. The first major scheme planned by Benninger, at Arambakkum in Chennai, created about 7,000 shelter units, within the paying capacity of the urban poor. Within five years the MMDA created more than 20,000 units and the approach became a major strategy of the World Bank to tackle a variety of shelter problems globally.
Dzivarasekwa and Kuwadzana are two suburbs of Harare in Zimbabwe set up on the site and services model.
References
Urban design
Urban planning | Site and services | [
"Engineering"
] | 255 | [
"Urban planning",
"Architecture"
] |
10,850,913 | https://en.wikipedia.org/wiki/Pugmark | Pugmark is the term used to refer to the footprint of most animals (especially megafauna). "Pug" means foot in Hindi (Sanskrit पद् "pad"; Greek πούς "poús"). Every individual animal species has a distinct pugmark and as such this is used for identification.
Pugmark tracking is a technique that has been used by wildlife conservationists to track animals and identify the distribution of species in areas where they operate. For some species, such as tigers, pugmark tracking is now considered to be an unreliable method of determining an area's total animal population, leading to the rise in the use of alternative techniques to count populations, such as photographic capture.
Field data collection
Indian forester Saroj Raj Choudhury developed the technique of the ‘pugmark census’ in 1966 to track tigers. It involves collecting pugmark tracings and plaster casts from the field and analyzing these to determine the number, track dimensions and spatial distribution of key species.
Technique
In order to obtain good pug impressions, PIPs (pug impression pads) are laid along roads, animal tracks and footpaths. Field data for each pugmark are then collected in specially devised census forms. The plaster casts and tracings, along with field information, are together analysed with a map of the area to remove repetitions and overlaps in pug evidence collected for the same tiger.
The final result is claimed to indicate (a) the total numbers of male, female and cub tigers and leopards, (b) their pugmark dimensions with stride where available, (c) the names of locations where the pugmarks of each tiger have been traced, showing the gross movement areas, (d) the interrelationship among different tigers, linking each male to females and the latter to cubs tracked in the movement area, and finally (e) a spatial distribution map.
The technique was used for over three decades in India, until the 1990s, when it was found to be an inaccurate way of measuring tiger populations.
Benefits as a data collection method
The above approach to pugmark tracking was developed and refined over the three decades after it was first implemented at the all-India level in 1972. Compared to any other method of data collection on a population of large carnivores, pugmark tracking is considered quick, involving about 10 days of ground preparation, 6 days of rigorous data collection, and about two to four weeks of data analysis. It is very cost-effective, and all money spent in the process goes to local tribal people who act as assistants, as they possess the skill to track animals in Indian jungles. It results in data which show which forest beat possesses how many large carnivores, of what sex/age and of which type. This brings a sense of responsibility among the guards, as none of the animals is ‘virtually’ generated through statistical interpretation. Like any study technique, pugmark tracking also calls for sincerity so that the structure and spatial distribution of the population of large carnivores are truly reflected.
See also
Animal track
Spoor
Singh, L. A. K. (2000): Tracking Tigers: Guidelines for Estimating Wild Tiger Population Using the Pugmark Technique. (Revised Edition). WWF Tiger Conservation Programme, New Delhi.
References
External links
Pugmark-based Population Monitoring Protocol for the Tiger & other Large Felids
Strengthening The Monitoring System for Tigers
Comments on Monitoring Tiger Status and Habita
WWF-India's National Nature Camping Programme - Corbett Report
WWF Bhutan story
How many tigers are there in Ranthambore National Park?
Colour photo of a pugmark
Scale of a pugmark
Improved approach to tiger counting through pugmarks
Anatomical terminology
Ethology
Footprints
Field research | Pugmark | [
"Biology"
] | 771 | [
"Behavioural sciences",
"Ethology",
"Behavior"
] |
10,851,027 | https://en.wikipedia.org/wiki/Statistical%20geography | Statistical geography is the study and practice of collecting, analysing and presenting data that has a geographic or areal dimension, such as census or demographics data. It uses techniques from spatial analysis, but also encompasses geographical activities such as the defining and naming of geographical regions for statistical purposes. For example, for the purposes of statistical geography, the Australian Bureau of Statistics uses the Australian Standard Geographical Classification, a hierarchical regionalisation that divides Australia up into states and territories, then statistical divisions, statistical subdivisions, statistical local areas, and finally census collection districts.
Background
Geographers study how and why elements differ from place to place, as well as how spatial patterns change through time. Geographers begin with the question 'Where?', exploring how features are distributed on a physical or cultural landscape, observing spatial patterns and the variation of phenomena. Contemporary geographical analysis has shifted to 'Why?', determining why a specific spatial pattern exists, what spatial or ecological processes may have affected a pattern, and why such processes operate. Only by approaching the 'why?' questions can social scientists begin to appreciate the mechanisms of change, which are infinite in their complexity.
Role of statistics in geography
Statistical techniques and procedures are applied in all fields of academic research; wherever data are collected and summarized or wherever any numerical information is analyzed or research is conducted, statistics are needed for sound analysis and interpretation of results.
Geographers use statistics in numerous ways:
To describe and summarize spatial data.
To make generalizations concerning complex spatial patterns.
To estimate the probability of outcomes for an event at a given location.
To use samples of geographic data to infer characteristics for a larger set of geographic data (population).
To determine if the magnitude or frequency of some phenomenon differs from one location to another.
To learn whether an actual spatial pattern matches some expected pattern.
Spatial data and descriptive statistics
There are several potential difficulties associated with the analysis of spatial data, among these are boundary delineation, modifiable areal units, and the level of spatial aggregation or scale. In each of these cases, the absolute descriptive statistics of an area - the mean, median, mode, standard deviation, and variation - are changed through the manipulation of these spatial problems.
Boundary delineation
The location of a study area boundary and the positioning of internal boundaries affect various descriptive statistics. With respect to measures such as the mean or standard deviation, the study area size alone may have large implications. Consider a study of per capita income within a city: if confined to the inner city, income levels are likely to be lower because of a less affluent population; if expanded to include the suburbs or surrounding communities, income levels will be higher because of the influence of more affluent homeowner populations. Because of this problem, absolute descriptive statistics such as the mean, standard deviation and variance should be evaluated comparatively only in relation to a particular study area. The same applies to the determination of internal boundaries, as these statistics may only have valid interpretations for the area and subarea configuration over which they are calculated.
Modifiable areal units
In many cases the subdivision of spatial data has already been determined; this is evident in demographic datasets, where the available information is grouped into counties or municipalities. For this type of data, analysts must use the same county or municipal boundaries delineated in the collected data for their subsequent analysis. When alternative boundaries are possible, an analyst must take into account that any new subdivision model may produce different results.
Spatial aggregation/scale problem
Socio-economic data may be available at a variety of scales, for example municipalities, regional districts, census tracts, enumeration districts, or the provincial/state level. When these data are aggregated at different scales, the resulting descriptive statistics may exhibit variations, either in a systematic, predictable way or in a more uncertain fashion. For example, a distinct reduction in manufacturing productivity for a country (such as the USA) over a certain period is a general pattern, but individual states may experience these effects quite differently. The result of this aggregation is that the standard deviation of the data in question is changed by the variability among states.
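The effect of aggregation scale on descriptive statistics can be seen with a toy calculation. The sketch below uses invented productivity values and simply compares the mean and standard deviation of the same data before and after grouping the units into two coarser regions:

```python
import statistics

# Invented productivity values for eight fine-scale units (e.g. census tracts).
fine_units = [98, 102, 95, 105, 60, 64, 58, 66]

# The same data aggregated into two coarser regions (e.g. states) by averaging.
region_a = statistics.mean(fine_units[:4])   # 100.0
region_b = statistics.mean(fine_units[4:])   # 62.0
coarse_units = [region_a, region_b]

print("fine scale:   mean=%.1f stdev=%.1f" %
      (statistics.mean(fine_units), statistics.stdev(fine_units)))
print("coarse scale: mean=%.1f stdev=%.1f" %
      (statistics.mean(coarse_units), statistics.stdev(coarse_units)))
# The mean is unchanged (81.0) but the standard deviation differs markedly,
# showing that aggregation scale alone can change descriptive statistics.
```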
Descriptive spatial statistics
For summarizing point pattern analysis, a set of descriptive spatial statistics has been developed that are areal equivalents to nonspatial measures. Since geographers are particularly concerned with the analysis of locational data, these descriptive spatial statistics (geostatistics) are often applied to summarize point patterns and to describe the degree of spatial variability of some phenomena.
Spatial measures of central tendency
An example here is the idea of a center of population, of which a particular example is the mean center of U.S. population. Several different ways of defining a center are available:
Mean center: The mean is an important measure of central tendency; when extended to a set of points located on a Cartesian coordinate system, it yields the average location, also called the centroid or mean center (see the sketch following this list).
The weighted mean center is analogous to frequencies in the calculation of grouped statistics, such as the weighted mean. A point may represent a retail outlet, while its frequency will represent the volume of sales within the particular store.
Median center or Euclidean center, as used, for example, for the median center of United States population. This measure is related to the Manhattan distance.
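Both the mean center and the weighted mean center reduce to coordinate averages, as the minimal sketch below illustrates (the coordinates and sales-volume weights are invented purely for illustration):

```python
# Illustrative sketch of the mean center and weighted mean center of a point set.
# Coordinates (x, y) and weights (e.g. retail sales volumes) are invented values.

points = [(2.0, 3.0), (4.0, 7.0), (6.0, 2.0), (8.0, 6.0)]
weights = [10, 40, 20, 30]   # e.g. sales volume at each retail outlet


def mean_center(pts):
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)


def weighted_mean_center(pts, w):
    total = sum(w)
    return (sum(wi * x for (x, _), wi in zip(pts, w)) / total,
            sum(wi * y for (_, y), wi in zip(pts, w)) / total)


print("mean center:         ", mean_center(points))                      # (5.0, 4.5)
print("weighted mean center:", weighted_mean_center(points, weights))    # (5.4, 5.3)
```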
Spatial measures of dispersion
Standard distance: Just as the standard deviation indicates how closely the values in a data set are clustered around the mean, so standard distance in a spatial distribution indicates how closely the points are clustered around the mean centre (a computational sketch follows after this list).
Relative distance
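As a companion to the dispersion measures above, the following minimal sketch computes the standard distance of a small invented point set as the root-mean-square distance of the points from their mean centre:

```python
import math

# Illustrative sketch: standard distance as the root-mean-square distance of
# points from their mean centre. Coordinates are invented for illustration.

points = [(2.0, 3.0), (4.0, 7.0), (6.0, 2.0), (8.0, 6.0)]

n = len(points)
cx = sum(x for x, _ in points) / n
cy = sum(y for _, y in points) / n

standard_distance = math.sqrt(
    sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / n
)

print(f"mean centre: ({cx}, {cy}), standard distance: {standard_distance:.2f}")
```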
Topology
The motivating insight behind topology is that some geometric problems depend not on the exact shape of the objects involved, but rather on the "way they are connected together". One of the first papers in topology was the demonstration, by Leonhard Euler, that it was impossible to find a route through the town of Königsberg (now Kaliningrad) that would cross each of its seven bridges exactly once. This result did not depend on the lengths of the bridges, nor on their distance from one another, but only on connectivity properties: which bridges are connected to which islands or riverbanks. This problem, the Seven Bridges of Königsberg, is now a famous problem in introductory mathematics, and led to the branch of mathematics known as graph theory.
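Euler's argument rests only on how many bridges meet each land mass. The sketch below encodes the classical seven-bridge layout as a multigraph and checks the degree parity that decides whether such a walk can exist:

```python
from collections import Counter

# The seven bridges of Königsberg as edges of a multigraph over four land
# masses: A (north bank), B (south bank), C (Kneiphof island), D (east island).
bridges = [("A", "C"), ("A", "C"), ("B", "C"), ("B", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

# Count how many bridges touch each land mass (its degree in the multigraph).
degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd_nodes = [node for node, d in degree.items() if d % 2 == 1]

# A walk crossing every edge exactly once exists only if the graph is connected
# and has 0 or 2 vertices of odd degree; Königsberg has 4.
print(dict(degree))    # {'A': 3, 'C': 5, 'B': 3, 'D': 3}
print(len(odd_nodes))  # 4 -> no walk crossing each bridge exactly once
```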
Topology rules
Topology rules are particularly important within GIS, and are used for a variety of correction and analytical procedures. The primary shapes in GIS are the point, line, and polygon, each of which implies different spatial characteristics; for instance, the only shape which has a distinguishable inside and outside is the polygon. Principles of connectivity associated with topology lead to applications in hydrology, urban planning, and logistics, as well as other fields; as such, topological analyses offer unique modelling capabilities, defining the vector nature of topological features and correcting spatial data errors from digitizing.
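The distinguishable inside and outside of a polygon is what makes operations such as point-in-polygon queries possible in a GIS. The following is a minimal sketch of the standard ray-casting test with invented coordinates; production GIS software uses more robust, topology-aware routines:

```python
def point_in_polygon(px, py, polygon):
    """Ray-casting test: count crossings of a ray from (px, py) to the right.

    `polygon` is a list of (x, y) vertices in order; an odd number of edge
    crossings means the point lies inside. This simple version ignores the
    degenerate cases (point exactly on an edge) that real GIS code handles.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        crosses = (y1 > py) != (y2 > py)
        if crosses and px < x1 + (py - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside


# Invented example: a unit-square "parcel" polygon.
parcel = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_polygon(0.5, 0.5, parcel))  # True  (inside)
print(point_in_polygon(1.5, 0.5, parcel))  # False (outside)
```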
National examples
United Kingdom
Due to the devolved nature of the United Kingdom, responsibility for managing statistical geographies often falls to the National Statistical Institute with jurisdiction for that devolved administration. For England and Wales this is the Office for National Statistics, for Scotland National Records of Scotland and for Northern Ireland the Northern Ireland Statistics and Research Agency.
England and Wales
The lowest form of statistical geography in England and Wales is the Output Area. These are small geographies of approximately 300 people and 100 households for which Census data is published. By containing roughly the same number of people and households it is possible to compare statistics for any two Output Areas in the country, and know that this is being done in a consistent way (unlike comparing statistics for Administrative geographies).
The Output Areas form the smallest part of a hierarchy that consists of Output Areas, Lower Layer Super Output Areas and Middle Layer Super Output Areas.
England and Wales also have a statistical geography designed specifically for the publication of workplace statistics. This is because Output Areas are built around residential populations and make analysing workplace statistics difficult. Workplace Zones have been released as part of the 2011 Census.
Scotland
Like England and Wales, the lowest level of statistical geography in Scotland is the Output Area. Scottish OAs are smaller than those for England and Wales because smaller thresholds are applied, but the methodology for their creation is broadly similar to that used by ONS.
The higher levels are again similar to England and Wales but operate as Data Zones and Intermediate Zones rather than Lower and Middle Layer Super Output Areas.
There are no Workplace Zones for Scotland.
See also
Geostatistics
Neighborhood effect averaging problem
Quantitative revolution
Spatial analysis
References
Applied statistics
Spatial analysis | Statistical geography | [
"Physics",
"Mathematics"
] | 1,737 | [
"Applied mathematics",
"Spatial analysis",
"Space",
"Spacetime",
"Applied statistics"
] |
10,851,309 | https://en.wikipedia.org/wiki/Acidity%20function | An acidity function is a measure of the acidity of a medium or solvent system, usually expressed in terms of its ability to donate protons to (or accept protons from) a solute (Brønsted acidity). The pH scale is by far the most commonly used acidity function, and is ideal for dilute aqueous solutions. Other acidity functions have been proposed for different environments, most notably the Hammett acidity function, H0, for superacid media and its modified version H− for superbasic media. The term acidity function is also used for measurements made on basic systems, and the term basicity function is uncommon.
Hammett-type acidity functions are defined in terms of a buffered medium containing a weak base B and its conjugate acid BH+:
H0 = pKa + log10([B]/[BH+])
where pKa is the dissociation constant of BH+. They were originally measured by using nitroanilines as weak bases or acid-base indicators and by measuring the concentrations of the protonated and unprotonated forms with UV-visible spectroscopy. Other spectroscopic methods, such as NMR, may also be used. The function H− is defined similarly for strong bases:
H− = pKa + log10([B−]/[BH])
Here BH is a weak acid used as an acid-base indicator, and B− is its conjugate base.
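Because H0 is obtained from a measured indicator ratio and the indicator's pKa, the calculation itself is short. The sketch below applies the defining relation above; the pKa and concentration values are invented purely for illustration, not data for any specific indicator:

```python
import math


def hammett_h0(pKa_BHplus, conc_B, conc_BHplus):
    """H0 = pKa(BH+) + log10([B]/[BH+]) for a weak-base indicator B."""
    return pKa_BHplus + math.log10(conc_B / conc_BHplus)


# Invented example: an indicator with pKa(BH+) = 0.99 that is 90% protonated.
print(hammett_h0(pKa_BHplus=0.99, conc_B=0.10, conc_BHplus=0.90))  # ~0.04
```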
Comparison of acidity functions with aqueous acidity
In dilute aqueous solution, the predominant acid species is the hydrated hydrogen ion H3O+ (or more accurately [H(OH2)n]+). In this case H0 and H− are equivalent to pH values determined by the buffer equation or Henderson-Hasselbalch equation.
However, an H0 value of −21 (a 25% solution of SbF5 in HSO3F) does not imply a hydrogen ion concentration of 10^21 mol/dm3: such a "solution" would have a density more than a hundred times greater than a neutron star. Rather, H0 = −21 implies that the reactivity (protonating power) of the solvated hydrogen ions is 10^21 times greater than the reactivity of the hydrated hydrogen ions in an aqueous solution of pH 0. The actual reactive species are different in the two cases, but both can be considered to be sources of H+, i.e. Brønsted acids. The hydrogen ion H+ never exists on its own in a condensed phase, as it is always solvated to a certain extent. The high negative value of H0 in SbF5/HSO3F mixtures indicates that the solvation of the hydrogen ion is much weaker in this solvent system than in water. Another way of expressing the same phenomenon is to say that SbF5·FSO3H is a much stronger proton donor than H3O+.
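The comparison in the preceding paragraph is simply a difference of exponents, as the short check below illustrates:

```python
# Protonating power scales as 10**(-H); comparing H0 = -21 with pH 0 gives the
# factor quoted above.
ratio = 10 ** (0 - (-21))
print(f"{ratio:.0e}")  # 1e+21, i.e. 10**21 times more protonating power
```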
References
Acids
Chemical properties
Solvents | Acidity function | [
"Chemistry"
] | 598 | [
"Acids",
"nan"
] |
10,851,420 | https://en.wikipedia.org/wiki/AGILE%20%28satellite%29 | AGILE (Italian: Astro-Rivelatore Gamma a Immagini Leggero) was an X-ray and gamma ray astronomical satellite of the Italian Space Agency (ASI). Launched in 2007, it de-orbited in February 2024.
Objectives
AGILE's mission was to observe gamma-ray sources in the universe.
AGILE is an Italian high-energy astrophysics mission dedicated to the observation of the gamma-ray Universe. Its innovative instrumentation is unprecedentedly light (100 kg) and the most compact ever operated for high-energy astrophysics (roughly a cube about 60 cm on a side), with excellent detection and imaging capability.
Satellite data are collected by the ASI Broglio Space Centre in Malindi (Kenya), quickly transferred to the Satellite Operations Centre in Fucino, and then preprocessed, stored and analyzed at the ASI Science Data Center (ASDC) in Frascati. In parallel, the pre-processed data are transferred to INAF/OAS Bologna for fast science-alert generation, assuring a very rapid response to gamma-ray detections through special quick-look analysis programs and coordinated ground-based and space observations.
Key scientific objectives of the AGILE Mission include the study of:
Active Galactic Nuclei
Gamma-Ray Bursts
X-ray and gamma galactic sources
Non-identified gamma sources
Diffuse galactic gamma emissions
Diffuse extragalactic gamma emissions
Fundamental physics
Instrumentation
AGILE's instrumentation includes a Gamma Ray Imaging Detector (GRID) sensitive in the 30 MeV – 50 GeV energy range, a SuperAGILE (SA) hard X-ray monitor sensitive in the 18–60 keV energy range, a Mini-Calorimeter (MCAL) non-imaging gamma-ray scintillation detector sensitive in the 350 keV – 100 MeV energy range, and an Anti-coincidence System (AC), based on a plastic scintillator, to assist with suppressing unwanted background events.
The SuperAGILE SA is an instrument based on a set of four silicon strip detectors, each equipped with one-dimensional coded mask. The SA is designed to detect X-ray signals from known sources and burst-like signals. It provides long-term monitoring of flux and spectral features. MCAL can also effectively detect high-energy radiation bursts in its energy band.
Launch and operations
AGILE was successfully launched on 23 April 2007 from the Indian base of Sriharikota and was inserted into an equatorial orbit with a low particle background. It was the first PSLV flight to carry a foreign payload as its primary payload. Later that day, ASI made contact with AGILE; its signals were acquired by the ground station at the Broglio Space Centre near Malindi, Kenya, and it was placed in a Sun-pointing mode.
Results
During its operations AGILE surveyed the gamma-ray sky and detected many galactic and extragalactic sources: AGILE discovered gamma-ray emission from the microquasar Cygnus X-3, detected many bright blazars, discovered several new gamma-ray pulsars, surveyed the Galactic plane with simultaneous hard X-ray/gamma-ray capability, discovered emission up to 100 MeV from Terrestrial Gamma-Ray Flashes.
Some transient events detected by AGILE are associated with positions not consistent with any known source (gamma-ray bursts) and have cosmological origins. Others are due to solar flares, while some are due to events in the Earth's atmosphere (terrestrial gamma-ray flashes).
The main results of the AGILE satellite are:
Discovery of variable gamma-ray emission from the Crab Nebula: AGILE discovered that the archetypal source of gamma-ray astrophysics is not constant, detecting very rapid and intense gamma-ray flares from the inner Nebula driven by plasma instabilities. Theoretical particle-acceleration models were challenged and must be drastically revised, with consequences and broad applications in plasma-physics experiments and theoretical studies of particle acceleration. For the discovery of gamma-ray flares from the Crab Nebula, the 2012 Bruno Rossi Prize of the American Astronomical Society was awarded to Marco Tavani and his team.
Resolving the problem of the origin of cosmic rays: First direct evidence of proton/ion gamma-ray emission, via pion decay below 200 MeV, in SNR W44; combined gamma-ray and TeV emission from SNR IC 443 and W28.
Discovery of gamma-ray emission from the black hole system Cygnus X-3: Discovery of extreme particle acceleration preceding relativistic jet plasmoid ejections from the black hole candidate Cyg X-3. Repeatedly detected by AGILE since 2008. First comprehensive survey of all Galactic microquasars by Super-AGILE and AGILE-GRID (Tavani et al., Nature, 462, 620, 2009).
Discovery of TGF emission up to very high energies (100 MeV): Discovery of gamma-ray emission up to 100 MeV from terrestrial flashes associated with intense thunderstorms. Evidence for accelerating potentials larger than 100 MeV. Theoretical models of acceleration in lightning discharges to be drastically revised. Significant impacts for atmospheric physics and climate studies.
The supermassive black hole 3C 454.3: A very active and variable blazar since 2007. AGILE was the first to announce gamma-ray super-flares in 2009 and 2010; in November 2010 it was the brightest gamma-ray source ever observed, almost 7 times more luminous than the Vela pulsar.
Soft gamma-ray pulsars (PSR B1509-58 and others): The first post-EGRET gamma-ray pulsar was discovered with AGILE's first light in 2007, unveiling a class of "soft" gamma-ray pulsars barely detectable below 200 MeV, such as PSR B1509-58, and providing theoretical constraints on "photon splitting" in pulsar magnetospheres.
Unveiling relativistic particle winds: Nebular gamma-ray emission near the Vela pulsar imaged by AGILE with high resolution. Clear evidence for different accelerated populations of particles.
The brightest massive black hole of the BL Lac class: Detection of the strongest gamma-ray flare from a blazar of the BL Lac class, S5 0716+714 (believed to be driven by black-hole rotation). First theoretical determination of a system near the maximal limit of energy to be extracted from a rotating black hole.
Gamma-ray flaring of the massive black hole Markarian 421: First multifrequency campaign in 2008 including X-ray (Super-AGILE), gamma-ray and TeV observations of the flaring blazar Mrk 421.
A key aspect of the AGILE data flow is the fastest gamma-ray alert monitoring system in the world. The overall gamma-ray alert monitoring system of AGILE is composed of two independent pipelines that process the data with different levels of data quality. The INAF/OAS Bologna pipeline processes the data in the fastest possible way, generating alerts within 0.5–1 hour of the time of the last GRID event acquired in orbit. The ASDC pipeline is more accurate because all events are considered during the analysis, but its alerts are generated 3–3.5 hours later.
References
External links
X-ray telescopes
Gamma-ray telescopes
Space telescopes
Satellites of Italy
Satellites orbiting Earth
Spacecraft launched in 2007
Italian Space Agency | AGILE (satellite) | [
"Astronomy"
] | 1,495 | [
"Space telescopes"
] |
10,853,181 | https://en.wikipedia.org/wiki/Roseophilin | Roseophilin is an antibiotic isolated from Streptomyces griseoviridis shown to have antitumor activity. The chemical structure can be considered in terms of two components, a macrotricyclic segment and a heterocyclic side-chain. Several laboratory syntheses of roseophilin (e.g., those of Trost, Fürstner, Salamone) are based upon the Paal-Knorr synthesis, and two others are based on the Nazarov cyclization reaction (those of Tius, Frontier). The compound is related to the prodiginines.
References
Antibiotics
Pyrroles
Furans
Chloroarenes
Phenol ethers | Roseophilin | [
"Biology"
] | 152 | [
"Antibiotics",
"Biocides",
"Biotechnology products"
] |
10,853,262 | https://en.wikipedia.org/wiki/Ostomy%20system | An ostomy pouching system is a prosthetic medical device that provides a means for the collection of waste from a surgically diverted biological system (colon, ileum, bladder) and the creation of a stoma. Pouching systems are most commonly associated with colostomies, ileostomies, and urostomies.
Pouching systems usually consist of a collection pouch and a barrier on the skin, and connect with the stoma itself, which is the part of the body that has been diverted to the skin. The system may be a one-piece system consisting only of a bag or, in some instances, a two-piece system, in which a device placed on the skin carries a collection pouch that is attached mechanically or with an adhesive in an airtight seal.
The system used varies between individuals and is often based on the medical reason, personal preference and lifestyle.
Uses
Ostomy pouching systems collect waste that is output from a stoma. The pouching system allows the stoma to drain into a sealed collection pouch, while protecting the surrounding skin from contamination. They are used to maintain independence, so that a wearer can continue to lead an active lifestyle that can include all forms of sports and recreation.
Surface barriers
Ostomy barriers sit on the skin and separate the ostomy pouch from the internal conduit. They are not always present. These barriers, also called flanges, wafers, or baseplates, are manufactured using pectin or similar organic material and are available in a wide variety of sizes to accommodate a person's particular anatomy.
The internal opening must be the correct size to accommodate the individual's stoma while protecting the skin from contact with waste. The methods for sizing this opening vary depending on the type of wafer/baseplate; some pre-cut sizes are available, some users customize the opening using scissors. Manufacturers have recently introduced moldable wafers that can be shaped by hand without the need for scissors.
Skin adhesion for modern wafers/baseplates/flanges is optimized on all five of the parameters required of an adhesive:
absorption
tack and adhesion
flexibility
erosion resistance
ease of removal.
In addition, barriers with an adhesive border can provide additional security that the system stays in place. Using a barrier film spray before applying a new flange will improve adhesion, soothe irritated skin and protect the skin from irritation.
A barrier may last between one and many days before it needs to be replaced; this is highly dependent on the individual's lifestyle, ostomy type, and anatomy.
Pouches
The method of attachment to the barrier varies between manufacturers and includes permanent (one-piece), press-on/click ("Tupperware"-type), turning locking rings and "sticky" adhesive mounts. The two-piece arrangement allows pouches to be exchanged without removing the wafer; for example, some people prefer to temporarily switch to a "mini-pouch" for swimming, intimate and other short-term activities. Mini-pouches are suitable for minimal usage only.
Pouches can be divided into two basic types: open-end (drainable) and closed-end (disposable).
Open-end pouches have a resealable end that can be opened to drain the contents of the pouch into a toilet. The end is sealed with either a Velcro-type closure or a simple clip.
Closed-end pouches can be removed and replaced with a new pouch once the bag is full or the pouch can be emptied and rinsed. The flange or wafer does not need to be replaced.
The use of open-end vs. closed-end pouches is dependent on the frequency in which an individual needs to empty the contents, as well as economics.
Gas is created during digestion, and an airtight pouch will collect this and inflate. To prevent this some pouches are available with special charcoal filtered vents that will allow the gas to escape, and prevent ballooning at night. Some odor can be expelled through the charcoal filter especially if sufficient deodorant is not used in the pouch.
Pouch covers help disguise the plastic pouch when it is exposed while reaching or during other physical activity. These are usually made of cloth and can be decorative or plain to blend in with clothing; various sources stock sizes for most manufacturers' pouches. Flexible elastic pouch belts are available for extreme physical activity, but some of these require the pouch to be worn sideways, so it does not fill properly and the tight fit causes pancaking of the effluent.
Routine care
People with colostomies must wear an ostomy pouching system to collect intestinal waste. Ordinarily the pouch must be emptied or changed a couple of times a day depending on the frequency of activity; in general the further from the anus (i.e., the further 'up' the intestinal tract) the ostomy is located the greater the output and more frequent the need to empty or change the pouch.
People with colostomies who have ostomies of the sigmoid colon or descending colon may have the option of irrigation, which allows for the person to not wear a pouch, but rather just a gauze cap over the stoma, and to schedule irrigation for times that are convenient. To irrigate, a catheter is placed inside the stoma, and flushed with water, which allows the feces to come out of the body into an irrigation sleeve. Most colostomates irrigate once a day or every other day, though this depends on the person, their food intake, and their health.
Impact
Ostomy systems often take some time for a person to adjust to, including requiring time to learn how to use them and change the pouch, as well as psychologically adjust. The time taken to adjust may last for more than a year.
Because of embarrassment or stigma associated with an ostomy system, a person who has an ostomy system can experience social isolation, depression, and change in sexual function as well as physical complications such as weight change. In various online ostomy groups and ostomy societies, ostomates share their experiences and help each other. One of the largest is MeetAnOstoMate, a community where people with similar experiences share information, ask questions, and receive support.
See also
Elise Sørensen
References
External links
United Ostomy Associations of America
Gastroenterology
Medical equipment
Prosthetics
Incontinence
Danish inventions | Ostomy system | [
"Biology"
] | 1,339 | [
"Incontinence",
"Excretion",
"Medical equipment",
"Medical technology"
] |
10,853,453 | https://en.wikipedia.org/wiki/Immuron | Immuron is a biotechnology company based in Melbourne, Australia. In 2008, the company changed its name to Immuron Limited, having previously operated as Anadis Limited.
Immuron is focused on antigen-primed and dairy-derived health products. Its proprietary technologies allow for rapid development of polyclonal antibody and other protein-based solutions for a range of diseases. The company specialises in nutraceutical, pharmaceutical and therapeutic technology products for conditions such as oral and GI mucositis, avian influenza, E. coli travellers' diarrhoea (TD) and anthrax containment.
In 2005, Anadis signed an agreement with Quebec's Baralex Inc. and Valeo Pharma Inc. for the distribution of Travelan, a product made by Anadis for the Canadian market.
External links
Official website
References
Pharmaceutical companies of Australia
Companies listed on the Australian Securities Exchange | Immuron | [
"Biology"
] | 186 | [
"Biotechnology stubs"
] |
10,853,528 | https://en.wikipedia.org/wiki/United%20States%20foreign%20policy%20in%20the%20Middle%20East | United States foreign policy in the Middle East has its roots in the early 19th-century Tripolitan War that occurred shortly after the 1776 establishment of the United States as an independent sovereign state, but became much more expansive in the aftermath of World War II. With the goal of preventing the Soviet Union from gaining influence in the region during the Cold War, American foreign policy saw the deliverance of extensive support in various forms to anti-communist and anti-Soviet regimes; among the top priorities for the U.S. with regards to this goal was its support for the State of Israel against its Soviet-backed neighbouring Arab countries during the peak of the Arab–Israeli conflict. The U.S. also came to replace the United Kingdom as the main security patron for Saudi Arabia as well as the other Arab states of the Persian Gulf in the 1960s and 1970s in order to ensure, among other goals, a stable flow of oil from the Persian Gulf. , the U.S. has diplomatic relations with every country in the Middle East except for Iran, with whom relations were severed after the 1979 Islamic Revolution, and Syria, with whom relations were suspended in 2012 following the outbreak of the Syrian Civil War.
American influence in the Greater Middle East has diminished in recent years, most significantly since the Arab Spring, yet it is still substantial. Currently stated priorities of the U.S. government in the Middle East include resolving the Israeli–Palestinian conflict and limiting the spread of weapons of mass destruction among regional states, particularly Iran.
History
The United States' relationship with the Middle East before World War I was limited, although commercial ties existed even in the early 19th century. The U.S. engaged in a military conflict with Ottoman Tripolitania from 1801 to 1805, during the Tripolitan War, over tribute payments that President Thomas Jefferson refused to make. President Andrew Jackson established formal ties with the Sultan of Muscat and Oman in 1833. (The Sultan saw the U.S. as a potential balance to Britain's overwhelming regional influence.) Commercial relations opened between the U.S. and Persia in 1857, after Britain persuaded the Persian government not to ratify a similar agreement in 1851.
After defeating it in World War I, Britain and France took control of most of the former Ottoman Empire. They held mandates from the League of Nations. The United States refused to take any mandates in the region and was "popular and respected throughout the Middle East". Indeed, "Americans were seen as good people, untainted by the selfishness and duplicity associated with the Europeans." American Christian missionaries brought modern medicine and set up educational institutions all over the Middle East as an adjunct to their religious proselytizing. Moreover, the United States had provided the Middle East with highly skilled petroleum engineers. Thus, there were some connections made between the United States and the Middle East before the Second World War. Other examples of cooperation between the U.S. and the Middle East are the Red Line Agreement signed in 1928 and the Anglo-American Petroleum Agreement signed in 1944. Both of these agreements were legally binding and reflected an American interest in control of Middle Eastern energy resources, mainly oil, and moreover reflected an American "security imperative to prevent the (re)emergence of a powerful regional rival". The Red Line Agreement had been "part of a network of agreements made in the 1920s to restrict the supply of petroleum and ensure that the major [mostly American] companies ... could control oil prices on world markets". The Red Line agreement governed the development of Middle East oil for the next two decades. The Anglo-American Petroleum Agreement of 1944 was based on negotiations between the United States and Britain over controlling Middle Eastern oil. Below is shown what the American President Franklin D. Roosevelt had in mind for a British Ambassador in 1944:
Persian oil ... is yours. We share the oil of Iraq and Kuwait. As for Saudi Arabian oil, it's ours.
On August 8, 1944, the Anglo-American Petroleum Agreement was signed, dividing Middle Eastern oil between the United States and Britain. Consequently, political scholar Fred H. Lawson remarks, that by mid-1944, U.S. officials had buttressed their country's position on the peninsula by concluding an Anglo-American Petroleum Agreement that protected "all valid concession contracts and lawfully acquired rights" belonging to the signatories and established a principle of "equal opportunity" in those areas where no concession had yet been assigned. Furthermore, political scholar Irvine Anderson summarizes American interests in the Middle East in the late 19th century and the early 20th century noting that, "the most significant event of the period was the transition of the United States from the position of net exporter to one of net importer of petroleum."
By the end of the Second World War, Washington had come to consider the Middle East region as "the most strategically important area of the world." and "one of the greatest material prizes in world history," argues Noam Chomsky. For that reason, it was not until around the period of World War II that America became directly involved in the Middle East region. At this time the region was going through great social, economic, and political changes and as a result, internally the Middle East was in turmoil. Politically, the Middle East was experiencing an upsurge in the popularity of nationalistic politics and an increase in the number of nationalistic political groups across the region, which was causing great trouble for the English and French colonial powers.
Historian Jack Watson explains that "Europeans could not hold these lands indefinitely in the face of Arab nationalism". Watson then continues, stating that "by the end of 1946 Palestine was the last remaining mandate, but it posed a major problem". In truth, this nationalistic political trend clashed with American interests in the Middle East, which were, as Middle East scholar Louise Fawcett argues, "about the Soviet Union, access to oil and the project for a Jewish state in Palestine". Hence, Arabist Ambassador Raymond Hare described the Second World War as "the great divide" in the United States' relationship with the Middle East, because these three interests would later serve as the backdrop and reasoning for a great deal of American intervention in the Middle East and thus also come to be the cause of several future conflicts between the United States and the Middle East.
As of 2024, the United States has approximately 45,000 troops in the region, including approximately 2,500 troops stationed in Iraq, 900 troops stationed in Syria, and others stationed in Bahrain, Djibouti, Jordan, Kuwait, Qatar, and the United Arab Emirates. About 15,000 of these troops were deployed to the region as part of a temporary surge after October 7, 2023; until then the United States retained about 30,000 troops there. The troops are a fraction of the number the U.S. deployed in 2010, when it had more than 100,000 troops in Iraq, about 70,000 in Afghanistan and many more in neighboring countries. After 2015, the U.S. military presence in Iraq declined sharply, and all U.S. troops were withdrawn from Afghanistan in 2021.
Israel
Israel is designated by the United States as a major non-NATO ally. Israel–United States relations are an essential factor in the United States foreign policy in the Middle East. Congress has placed significant importance on the maintenance of a close relationship with Israel. Analysts maintain that Israel is a strategic ally for the United States, and that relations with the former will strengthen the latter's influence in the Middle East. Former US senator Jesse Helms argued that the military foothold offered by Israel in the region alone justified the expense of American military aid. He referred to Israel as "America's aircraft carrier in the Middle East".
Formation of Israel (1948)
In 1947, the Truman administration, under domestic political pressure, pushed for a resolution of the Arab–Israeli conflict, and in May 1948 the new state of Israel came into existence. This process was not without fighting and loss of life. Nevertheless, "the first state to extend diplomatic recognition to Israel was the United States; the Soviet Union and several Western nations quickly followed suit. No Arab state, however, recognized Israel." The United States denounced the Arab invasion of former Mandatory Palestine that took place shortly after the Israeli Declaration of Independence.
Israel-Hamas War (2023)
Following the Hamas-led attack on Israel on October 7, 2023, and the subsequent Israel–Hamas war, the Biden administration requested roughly $14 billion from Congress to provide military aid for Israel. Congress later approved a bill on February 13, 2024; the legislation included roughly $19.3 billion to support military operations ($14.1bn), air defense ($4bn), and the Iron Beam defense system ($1.2bn). The legislation also included $9.2 billion in humanitarian assistance for civilians in Gaza and the West Bank, along with those caught in warzones across the globe.
As a result of its ongoing support of Israel in the face of a humanitarian crisis in Gaza, the United States and President Joe Biden have faced scrutiny and backlash from NGOs such as Human Rights Watch, Doctors Without Borders, and the Center for Constitutional Rights. The CCR joined a lawsuit brought by Defense for Children International - Palestine against Biden's administration for allegedly "failing in his duty under international and US laws to prevent Israel committing genocide in Gaza." The case was dismissed by the United States District Court for the Northern District of California on January 31, 2024, as a non-justiciable political question; the dismissal was affirmed on appeal by the United States Court of Appeals for the Ninth Circuit on July 15, 2024.
Syrian coup d'état (1949)
Syria became an independent republic in 1946, but the March 1949 Syrian coup d'état, led by Army Chief of Staff Husni al-Za'im, ended the initial period of civilian rule. Za'im met at least six times with CIA operatives in the months prior to the coup to discuss his plan to seize power. Za'im requested American funding or personnel, but it is not known whether this assistance was provided. Once in power, Za'im made several key decisions that benefitted the United States. He approved the Trans-Arabian Pipeline (TAPLINE), an American project designed to transport Saudi Arabian oil to Mediterranean ports. Construction of TAPLINE had been delayed due to Syrian intransigence. Za'im also improved relations with two American allies in the region: Israel and Turkey. He signed an armistice with Israel, formally ending the 1948 Arab–Israeli War and he renounced Syrian claims to Hatay Province, a major source of dispute between Syria and Turkey. Za'im also cracked down on local communists. However, Za'im's regime was short-lived. He was overthrown in August, just four and a half months after seizing power.
Mosaddeq and the Shah of Iran (1953)
Opposed to foreign intervention in Iran and a keen nationalist, Mohammed Mosaddeq became the prime minister of Iran in 1951. Thus, when Mosaddeq was elected he chose to nationalize the Iranian oil industry, where previously British holdings had generated great profits for Britain through the Anglo-Iranian Oil Company. Furthermore, prior to the nationalization of Iranian oil, Mosaddeq had also cut all diplomatic ties with Britain. The Shah of Iran, Mohammad Reza Pahlavi was opposed to the nationalization of Iranian oil as he feared this would result in an oil embargo, which would destroy Iran's economy and thus, the Shah was very concerned with the effect of Mosaddeq's policies on Iran. Equally worried were workers in the Iranian oil industry, when they experienced the economic effect of the sanctions on Iranian oil exports which Mosaddeq's policies had resulted in, and riots were happening across Iran.
Thus, Mohammad Reza Pahlavi asked Mosaddeq to resign, as was the Shah's constitutional right, but Mosaddeq refused, which resulted in national uprisings. The Shah, fearing for his personal security, fled the country but nominated General Fazlollah Zahedi as the new Prime Minister. Although General Fazlollah Zahedi was a nationalist, he did not agree with the Mosaddeq's lenient attitude towards the communist Tudeh party, which the United States had also become increasingly concerned with, fearing Soviet influence spreading in the Middle East. Therefore, in late 1952, the British government asked the U.S. administration for help with the removal of Mohammed Mosaddeq. President Harry S. Truman thought Mossadeq was a valuable bulwark against Soviet influence. However, Truman left office in January 1953, and the new administration of Dwight Eisenhower shared British concern over Mossadeq. Allen Dulles, the director of the CIA, approved one million dollars on April 4, 1953, to be used "in any way that would bring about the fall of Mossadegh" Consequently, after a failed attempt on August 15, "on August 19, 1953, General Fazlollah Zahedi succeeded [with the help of the United States and Britain] and Mossadegh was overthrown. The CIA covertly funneled five million dollars to General Zahedi's regime on August 21, 1953."
This CIA operation, often referred to as Operation Ajax and led by CIA officer Kermit Roosevelt Jr., ensured the return of the Shah on August 22, 1953.
Suez Crisis (1956)
Although accepting large sums of military aid from the United States in 1954, by 1956 Egyptian leader Nasser had grown tired of the American influence in the country. The involvement that the U.S. would take in Egyptian business and politics in return for aid, Nasser thought, "smacked of colonialism." Indeed, as political scholar B.M. Bleckman argued in 1978, "Nasser had ambivalent feelings toward the United States. From 1952 to 1954 he was on close terms with U.S. officials and was viewed in Washington as a promising moderate Arab leader. The conclusion of an arms deal with the USSR in 1955, however, had cooled the relationship between Cairo and Washington considerably, and the Dulles-Eisenhower decision to withdraw the offer to finance the Aswan High Dam in mid-1956 was a further blow to the chances of maintaining friendly ties. Eisenhower's stand against the British, French and Israeli attack on Egypt in October 1956 created a momentary sense of gratitude on the part of Nasser, but the subsequent development of the Eisenhower Doctrine, so clearly aimed at 'containing' Nasserism, undermined what little goodwill existed toward the United States in Cairo." "The Suez Crisis of 1956 marked the demise of British power and its gradual replacement by the USA as the dominant power in the Middle East." The Eisenhower Doctrine became a manifestation of this process. "The general objective of the Eisenhower Doctrine, like that of the Truman Doctrine formulated ten years earlier, was the containment of Soviet expansion." Furthermore, when the Doctrine was finalized on March 9, 1957, it "essentially gave the president the latitude to intervene militarily in the Middle East ... without having to resort to Congress." Indeed, as Middle East scholar Irene L. Gendzier explains, with the Eisenhower Doctrine the United States emerged "as the uncontested Western power ... in the Middle East."
Eisenhower Doctrine
In response to the power vacuum in the Middle East following the Suez Crisis, the Eisenhower administration developed a new policy designed to stabilize the region against Soviet threats or internal turmoil. Given the collapse of British prestige and the rise of Soviet interest in the region, the president informed Congress on January 5, 1957, that it was essential for the U.S. to accept new responsibilities for the security of the Middle East. Under the policy, known as the Eisenhower Doctrine, any Middle Eastern country could request American economic assistance or aid from U.S. military forces if it was being threatened by armed aggression. Though Eisenhower found it difficult to convince leading Arab states or Israel to endorse the doctrine, he applied the new doctrine by dispensing economic aid to shore up the Kingdom of Jordan, encouraging Syria's neighbors to consider military operations against it, and sending U.S. troops into Lebanon to prevent a radical revolution from sweeping over that country. The troops sent to Lebanon never saw any fighting, but the deployment marked the only time during Eisenhower's presidency when U.S. troops were sent abroad into a potential combat situation.
Though U.S. aid helped Lebanon and Jordan avoid revolution, the Eisenhower doctrine enhanced Nasser's prestige as the preeminent Arab nationalist. Partly as a result of the bungled U.S. intervention in Syria, Nasser established the short-lived United Arab Republic, a political union between Egypt and Syria. The U.S. also lost a sympathetic Middle Eastern government due to the 1958 Iraqi coup d'état, which saw King Faisal II replaced by General Abd al-Karim Qasim as the leader of Iraq.
Jordan
Meanwhile, in Jordan nationalistic anti-government rioting broke out, and the United States decided to send a battalion of marines to nearby Lebanon, prepared to intervene in Jordan later that year. Douglas Little argues that Washington's decision to use the military resulted from a determination to support a beleaguered, conservative pro-Western regime in Lebanon, repel Nasser's pan-Arabism, and limit Soviet influence in the oil-rich region. However, Little concludes that the unnecessary American action brought negative long-term consequences, notably the undermining of Lebanon's fragile, multi-ethnic political coalition and the alienation of Arab nationalism throughout the region. To keep the pro-American King Hussein of Jordan in power, the CIA sent millions of dollars a year in subsidies. In the mid-1950s the U.S. supported allies in Lebanon, Iraq, Turkey, and Saudi Arabia and sent fleets to be near Syria. However, 1958 became a difficult year for U.S. foreign policy: Syria and Egypt were merged into the "United Arab Republic", anti-American and anti-government revolts broke out in Lebanon, causing the Lebanese president Chamoun to ask America for help, and the very pro-American King Faisal II of Iraq was overthrown by a group of nationalistic military officers. It was quite "commonly believed that [Nasser] ... stirred up the unrest in Lebanon and, perhaps, had helped to plan the Iraqi revolution."
Six-Day War (1967) and Black September (1970)
In June 1967 Israel fought with Egypt, Jordan, and Syria in the Six-Day War. As a result of the war, Israel captured the West Bank, Golan Heights, and the Sinai Peninsula. The U.S. supported Israel with weapons and continued to support Israel financially throughout the 1970s. On September 17, 1970, with U.S. and Israeli help, Jordanian troops attacked PLO guerrilla camps, while Jordan's U.S.-supplied air force dropped napalm from above. The U.S. deployed the aircraft carrier Independence and six destroyers off the coast of Lebanon and readied troops in Turkey to support the assault.
The American interventions in the years before the Iranian revolution have all proven to be based in part on economic considerations, but more so have been influenced and led by the international Cold War context.
Iran–Iraq War (1980–1988)
On 22 September 1980, Saddam Hussein's Iraq attacked Ayatollah Khomeini's Iran, starting with the bombing of 10 military airfields.
Support for Iraq
Ted Koppel's ABC News broadcast of July 1992 pointed to US cooperation with Iraq through the provision of money, armaments, dual-use technology and, if necessary, emergency action plans against Iran. According to CIA files that have since been revealed, the United States supported Hussein's Iraq even to the point of being aware of Iraqi use of chemical weapons. This violated the 1925 Geneva Protocol, which Iraq did not approve. Moreover, the US Defense Intelligence Agency provided Iraq with satellite positions of Iranian troops to help it keep track of the enemy. The American position in the war was one of "secretly but unambiguously" pro-Iraq support.
A few scholars have argued that the US gave a "green light" to Hussein's attack on Iran. Yet, in light of now-available US and Iraqi papers, the "green light" hypothesis is "more a myth than reality": the US did not encourage the war to begin, and Hussein's attack was launched independently of the US.
U.S. government support for Iraq was not a secret and was frequently discussed in open sessions of the Senate and House of Representatives. On June 9, 1992, Ted Koppel reported on ABC's Nightline that the "Reagan/Bush administrations permitted—and frequently encouraged—the flow of money, agricultural credits, dual-use technology, chemicals, and weapons to Iraq."
American views toward Iraq were not enthusiastically supportive in its conflict with Iran, and activity in assistance was largely to prevent an Iranian victory. This was encapsulated by Henry Kissinger when he remarked, "It's a pity they both can't lose."
Support for Iran
US-Iran relations changed drastically after the Iranian revolution of 1979, which marked the fall of the Shah, the end of his closeness with the Western world, and the takeover of Khomeini with a return to Islamic law. In 1979 the US Embassy in Tehran was seized by protesters, and American civilians were taken hostage. In 1980, the US changed policy to allow Israel to sell American armament to Iran during the war. The deal between the US and Israel was coordinated by the State Department Counselor, McFarlane, with US Secretary of State Alexander Haig Jr. and Israeli Prime Minister Menachem Begin agreeing to a weapons-supply period of 6 to 18 months. This support for Iran was first explained as a way to secure the return of the American hostages. Yet the hostages were released before the US supply of weapons to Iran, and the arms provision lasted longer than the established period. This later became known as the Iran-Contra affair, publicly revealed in November 1986: the US supplied weapons to Iran through Israel, and the profits went to finance the Contra rebels, opponents of Nicaragua's Sandinista government.
Kuwait and the Gulf War (1991)
The Gulf War in 1991 involved a coalition of 35 countries, led by the United States, against Iraq after it invaded Kuwait. Iraq had been an ally of the Soviet Union during the Cold War, resulting in limited relations with the US. After Iraq threatened to invade Kuwait, the US said it would protect its allies in the region against an Iraqi invasion. After the invasion in 1990, economic sanctions were implemented when the US requested a meeting of the United Nations Security Council, which adopted Resolution 660. The US rejected the proposal of the Iraqi army to leave Kuwait if a solution for Palestine were found. Military means were employed by the US in 1991, as Resolution 678 allowed, and a coalition was created, with 73% of the armed forces being American. The United States armed forces led many attacks on the Iraqi army in several battles, through air strikes and land battles.
Saudi Arabia
Saudi Arabia and the United States are strategic allies, but relations with the U.S. became strained following the September 11 attacks.
US foreign policy toward Saudi Arabia began with the Quincy Agreement of 1945, in which the US agreed to provide Saudi Arabia with military security in exchange for secure access to supplies of oil. Military aid was provided to Saudi Arabia during the Gulf War, when almost 500,000 soldiers were sent to protect Saudi Arabia from Iraq.
In March 2015, President Barack Obama declared that he had authorized U.S. forces to provide logistical and intelligence support to the Saudis in their military intervention in Yemen, establishing a "Joint Planning Cell" with Saudi Arabia. A report by Human Rights Watch stated that US-made bombs were being used in attacks indiscriminately targeting civilians and violating the laws of war.
During his election campaign, Biden had pledged to make Saudi Arabia "a pariah". The Biden Administration emphasized its human rights policy as the key arbiter of the U.S. relationship with Saudi Arabia. Diplomatic relations hit a new low after a February 2021 U.S. intelligence report accused the crown prince of being directly involved in the assassination of Khashoggi. During Russia's invasion of Ukraine, Saudi Arabia defied U.S. efforts to isolate Vladimir Putin and instead strengthened relations with Russia by coordinating to reduce oil output of OPEC countries in October 2022. This event triggered a strong backlash in the United States, with relations sinking to an "all-time low" and tensions exacerbating further. American officials have criticized Saudi Arabia for actively enabling Russians to bypass US-EU sanctions and for undermining Western efforts to isolate Vladimir Putin. Saudi Arabia has also defied the United States' China containment policy. In December 2022, Saudi Arabia hosted Chinese leader Xi Jinping for a series of summits to sign a "comprehensive strategic partnership agreement" which elevated Sino-Arab relations.
US- Saudi Arabia Arm deal
Both countries have an interest in fighting terrorism and are allies. In 2017, an agreement aiming to provide Saudi Arabia with $115 billion of weapons, including tanks, combat ships and missile-defence systems, was announced by President Donald Trump. By 2018, the Saudi government had purchased over $14.5 billion of weapons from the US. Also in 2018, the Saudi-led coalition fighting terrorism in Yemen bombed a school bus, killing 40 children with a bomb provided by the United States. Many criticized the United States' support for the Saudi intervention in Yemen, which contributed to the killing of 10,000 children. In December 2018, senators voted to end American assistance to Saudi Arabia's war in Yemen.
The lack of US support for the Saudi-led coalition's intervention in Yemen strained the relationship between the two countries, causing Saudi Arabia to refuse the US request to increase oil production.
Afghanistan & Pakistan
Iraqi conflict
Libya (2011–present)
Yemen
20th century
The US established diplomatic relations with Yemen in 1947, when it became a member of the United Nations. The Yemen Arab Republic was created in 1962 and recognized by the US the same year; in 1967, the US recognized the People's Democratic Republic of Yemen. US policies in Yemen in the 20th century supported unification and were largely concentrated on humanitarian aid and some military operations. In the 1990s, the US developed a $42 million program in Yemen subsidizing agriculture, education and health; in return, the Yemeni government cooperated with US oil companies. The US-Yemen relationship deteriorated when the two took different sides during the Kuwait crisis.
21st century
Al-Qaeda's terrorist attacks in the United States transformed US policy in Yemen. The US has engaged in many military actions against the terrorist group, alongside humanitarian aid and cooperation with other actors, and after the attacks the Yemeni government improved its cooperation with the US in dismantling the group.
Over the last decades, the US has responded to Yemen's humanitarian crisis caused by the war. Reported US funding in the country has increased over the past decade, from $115m in 2012 to almost a billion dollars in 2019, funding sectors such as food security, health, education and protection. But the blockade of access to the country by the Saudi-led coalition, which has received support from the United States, prevents humanitarian aid from being fully delivered.
Military policies in Yemen have expanded since the replacement of the previous president, Ali Abdullah Saleh, by Abdrabbuh Mansur Hadi, who was far more cooperative in fighting terrorism in Yemen. They are characterized by the training of the Yemeni military by US forces, the supply of weapons, and air strikes. The US also concluded an agreement with Saudi Arabia in 2015 committing the US to supplying weapons to Saudi Arabia for counterterrorist actions in Yemen.
Syria (2011–present)
2011 saw several anti-government protests arise in many Arab countries, a movement known as the Arab Spring. In Syria, demonstrators opposed the Assad government; the protests were put down, fomenting a civil war.
US involvement in the Syrian civil war started under the Obama presidency, with the involvement of US troops in 2015. US troop involvement continued under the Trump presidency, although Trump stated on several occasions that he did not want "boots on the ground" in Syria for much longer and asked the army to withdraw altogether, which never happened. The US continued to lead an alliance of up to 74 countries fighting the ISIS terrorist organization, alongside peacekeeping missions and the patrolling of oilfields. The situation became more complicated in 2019, after Turkey struck an agreement with Russia, whose army also became directly involved. The US and the Western coalition took part in multiple fights, mostly on the side of the Kurdish-led YPG and SDF forces, causing tensions with Turkey, which never stopped fighting Kurds in Syria. The Trump presidency did not make things easier for US troops deployed in Syria, moving from showing little interest, to showing interest in the oilfields located in Syria's north-eastern province, to finally claiming a victory that did not really happen. The situation remains far from clear for the US army in Syria, whose presence has continued under the Biden presidency, with the focus of military operations and airstrikes shifting towards the east to better fight Iran-supported militias.
Turkey
Coup attempt (2016)
On 15 July 2016, a coup d'état was attempted in Turkey by a faction within the Turkish Armed Forces against state institutions, including, but not limited to the government and President Recep Tayyip Erdoğan.
The Turkish government accused the coup leaders of being linked to the Gülen movement, which is designated as a terrorist organization by the Republic of Turkey and led by Fethullah Gülen, a Turkish businessman and cleric who lives in Pennsylvania, United States. Erdoğan accused Gülen of being behind the coup—a claim that Gülen denied—and accused the United States of harboring him. President Recep Tayyip Erdoğan also accused the head of United States Central Command, General Joseph Votel, of "siding with coup plotters" (after Votel accused the Turkish government of arresting the Pentagon's contacts in Turkey).
Bilateral relations in the Greater Middle East
American allies
States
Israel (see Israel–United States relations) (Major non-NATO ally)
Saudi Arabia (see Saudi Arabia–United States relations)
Turkey (see Turkey–United States relations) (NATO member state)
Qatar (see Qatar–United States relations) (Major non-NATO ally)
Bahrain (see Bahrain–United States relations) (Major non-NATO ally)
Kuwait (see Kuwait–United States relations) (Major non-NATO ally)
United Arab Emirates (see United Arab Emirates–United States relations)
Jordan (see Jordan–United States relations) (Major non-NATO ally)
Egypt (see Egypt–United States relations) (Major non-NATO ally)
Cyprus (see Cyprus–United States relations)
Autonomous region
Iraqi Kurdistan (see Iraqi Kurdistan–United States relations)
Factions and organizations
People's Mujahedin of Iran
National Council of Resistance of Iran
Pahlavi Royal Family (led by Reza Pahlavi)
Syrian Democratic Forces
Ex-allies
Imperial State of Iran (see Iranian Islamic Revolution, 1953 Iranian coup d'état)
Free Syrian Army (see Timber Sycamore, American-led intervention in the Syrian Civil War)
Islamic Republic of Afghanistan (see 2021 Taliban offensive)
Hostile relations with America
States
Iran (see Iran–United States relations after 1979, United States sanctions against Iran)
Syria (see Syria–United States relations)
Pakistan (see Pakistan–United States relations, Pakistani Taliban, Insurgency in Khyber Pakhtunkhwa)
Turkey (see Turkey–United States relations)
Iraq (see Iraq–United States relations)
Afghanistan (see Afghanistan–United States relations, International relations with the Taliban)
Organizations
Islamic Revolutionary Guard Corps
Popular Mobilization Forces
Popular Resistance Committees
Kata'ib Hezbollah
Asa'ib Ahl al-Haq
Criticism
The U.S. has been accused by some U.N. officials of condoning actions by Israel against Palestinians.
See also
2023 American–Middle East conflict
British foreign policy in the Middle East
Arab lobby in the United States
Dual containment
Foreign relations of the Arab League
Gulf War
United States–Middle East economic relations
Middle Eastern foreign policy of the Barack Obama administration
Mission Accomplished
Foreign interventions by the United States
Books
The Israel Lobby and U.S. Foreign Policy
Notes
References
Further reading
Baxter, Kylie, and Shahram Akbarzadeh. US foreign policy in the Middle East: The roots of anti-Americanism (Routledge, 2012)
Bunch, Clea. "Reagan and the Middle East." in Andrew L. Johns, ed., A Companion to Ronald Reagan (2015) pp: 453–468. online
Cramer, Jane K., and A. Trevor Thrall, eds. Why Did the United States Invade Iraq? (Routledge, 2013)
Fawcett, Louise, ed. International relations of the Middle East (3rd ed. Oxford UP, 2016) full text online
Freedman, Lawrence. A Choice of Enemies: America Confronts the Middle East (Public Affairs, 2009) excerpt
Gause III, F. Gregory. "“Hegemony” Compared: Great Britain and the United States in the Middle East." Security Studies 28.3 (2019): 565–587.
Hemmer, Christopher. Which lessons matter?: American foreign policy decision making in the Middle East, 1979-1987 (SUNY Press, 2012)
Jacobs, Matthew F. Imagining the Middle East: The Building of an American Foreign Policy, 1918-1967 (2011)
Kelley, Stephen A. "Getting to War: American Security Policy in the Persian Gulf, 1969-1991." (Naval Postgraduate School Monterey United States, 2020) online.
Laqueur, Walter. The Struggle for the Middle East: The Soviet Union and the Middle East 1958-70 (1972) online
Lesch, David W. and Mark L. Haas, eds. The Middle East and the United States: History, Politics, and Ideologies (6th ed, 2018) excerpt
Little, Douglas. "His finest hour? Eisenhower, Lebanon, and the 1958 Middle East crisis." Diplomatic History 20.1 (1996): 27–54. online
O'Sullivan, Christopher D. FDR and the End of Empire: The Origins of American Power in the Middle East (2012)
Petersen, Tore. Anglo-American Policy toward the Persian Gulf, 1978–1985: Power, Influence and Restraint (Sussex Academic Press, 2015)
Pillar, Paul R. Intelligence and US Foreign Policy: Iraq, 9/11, and Misguided Reform (Columbia UP, 2014) 432p
Pollack, Kenneth. Unthinkable: Iran, the bomb, and American strategy (2014)
Wahlrab, Amentahru, and Michael J. McNeal, eds. US approaches to the Arab uprisings: International relations and democracy promotion (Bloomsbury, 2017).
Wight, David M. Oil Money: Middle East Petrodollars and the Transformation of US Empire, 1967-1988 (Cornell University Press, 2021) online review
External links
US State Department Bureau of Near Eastern Affairs
Middle East – U.S. Relations from the Dean Peter Krogh Foreign Affairs Digital Archives
Establishment of U.S. Consuls and Colonies in the Levant – Shapell Manuscript Foundation
Foreign relations of the United States
History of the foreign relations of the United States
History of West Asia
20th-century military history of the United States
United States foreign policy
United States–Middle Eastern relations
Petroleum politics
Near East
International relations
Foreign policy | United States foreign policy in the Middle East | [
"Chemistry"
] | 7,334 | [
"Petroleum",
"Petroleum politics"
] |
10,853,884 | https://en.wikipedia.org/wiki/Kabissa | Kabissa – Space for Change in Africa is a volunteer-led non-governmental organization that promotes Information and Communication Technology (ICT) and Civil Society Organizations (CSO) for positive change in Africa. Kabissa members are active throughout Africa, working on a range of tasks including Advocacy and Policy, Arts, Culture, Conflict Resolution, Humanitarian Services, Economic Development, Poverty Reduction, Education, Environment, Gender, Governance, Health, Human Rights, Democracy, Media, Journalism, Microfinance, Technology, Training, Capacity Building and density of space.
Kabissa headquarters are on Bainbridge Island, Washington, although the organization operates mostly online, with international contributors. The founder of the organization is Tobias Eigen who led Kabissa together with Kimberly Lowery from 2002 to 2007.
Introduction
Kabissa, meaning complete in Swahili, helps African civil society organizations put Internet and Communications Technology (ICT) to work for the benefit of their communities. Founded in 1999 by Tobias Eigen, Kabissa initially provided domain hosting services, then capacity-building through a custom training curriculum and manual, and is currently dedicated to connecting people and organizations for Africa via the social media platform.
Membership
Anyone interested in Africa can create a free account, subscribe to newsletters and participate in groups. Nearly everyone in the Kabissa network is involved in organizations working on the continent that are listed in the Kabissa Organization Directory and displayed on the Kabissa Map.
Kabissa's member organizations are varied in nature and thus are an indicator of overall African civil society sector. These members range from newly established localized organizations working in human rights and social justice to large, well-established organizations involved in environmental work. Currently, Kabissa's member organizations categorize themselves into the following focus areas:
Advocacy and Policy
Arts and Culture
Conflict Resolution
Direct Social and Humanitarian Services
Economic Development and Poverty Reduction
Education
Environment
Gender
Governance
Health
Human Rights and Democracy
Media and Journalism
Microfinance
Technology
Training and Capacity Building
Youth
Kabissa Board of Directors
Current
John Githongo, Kenya
Neema Mgana, Tanzania
Tobias Eigen, Germany/USA
George Scharffenberger, USA
Jeff Thindwa, Malawi
Former
Firoze Manji, Kenya
Kimberly Lowery, USA
Peter Eigen, Germany
Daniel Ritchie, USA
Affiliations
Aid for Africa Foundation
Global Washington
Kabissa's Charter
Kabissa operates under the following charter:
Mission
Kabissa’s mission is to help African civil society organizations put Internet and Communications Technology (ICT) to work for the benefit of the people they serve.
Vision
Kabissa’s vision is for a socially, economically, politically, and environmentally vibrant Africa, supported by a strong network of effective civil society organizations.
Principles
Kabissa seeks to adhere to the following principles in its operations and governance:
To work in close cooperation with partner organizations that can provide local expertise, support, and resources wherever possible
To make its operations transparent to the Kabissa community and the general public
To employ the services of companies that share Kabissa’s vision whenever possible. In all cases, the organizations will show professional integrity and provide the best value, so that Kabissa can pass on high-quality, affordable services to the Kabissa community
To avoid any source of income derived from activities which indisputably conflict with our vision
To be a highly efficient organization, keeping overhead costs to a minimum
To develop, use, and promote software and content that is freely available under open source licensing agreements
To embrace a diversity of perspectives in our member community, our staff, and our board
History
Kabissa was founded in 1999 by Tobias Eigen with the idea that Internet and Communications Technology (ICT) could revolutionize the work of African civil society. Building on the years of consulting experience Tobias Eigen had with African civil society, Kabissa began by providing African organizations with accessible, affordable, and secure internet services.
During the next three years Kabissa showed strong growth and gained increasing recognition. In June 2002 Kabissa won the ICT Stories Competition, an initiative of infoDev and the International Institute of Communication and Development (IICD) which sought to capture the learning process that accompanies the introduction and implementation of ICTs for development. In September 2002 Kabissa added a part-time Program Manager, Kim Lowery, to its staff. By November 2002 Kabissa was awarded its first major grant from the German Agency for Technical Cooperation (GTZ) for the pilot phase of Kabissa’s Time To Get Online training initiative. They went on to set up an office at Dupont Circle in Washington DC where for the next five years three employees and dozens of interns and volunteers worked on its programs with funding from major foundations including the Ford Foundation, Open Society Institute Information Program, the Hurford Foundation, National Endowment for Democracy, Yahoo Employee Foundation, and Lonely Planet Foundation (now Planet Wheeler Foundation). They also trained hundreds of activists and development practitioners in end user and training of trainers workshops and distributed thousands of copies of the Time To Get Online manual. In partnership with Tanmia in Morocco, the Time To Get Online manual and training program was localized into French and Arabic.
From April 2005 through March 2008, Kabissa administered the PanAfrican Localisation Project, which was funded by the International Development Research Centre of Canada.
In 2007, Kabissa followed its founder, Tobias Eigen, to Bainbridge Island, WA, and became a volunteer organization with no employees. In 2009, Kabissa announced a new focus on social media in Africa. At the same time, Kabissa streamlined its internet services and shut down the server hosting websites for its member organizations.
As of May 2010, Kabissa had 1504 member organizations representing over 50 African countries, and included internationally renowned human rights groups, charities, development organizations and orphanages.
References
Information and communication technologies in Africa
Non-profit technology | Kabissa | [
"Technology"
] | 1,200 | [
"Information technology",
"Non-profit technology"
] |
10,853,931 | https://en.wikipedia.org/wiki/Affinity%20space | An affinity space is a place where learning happens. According to James Paul Gee, affinity spaces are locations where groups of people are drawn together because of a shared, strong interest or engagement in a common activity. Often but not always occurring online, affinity spaces encourage the sharing of knowledge or participation in a specific area, and informal learning is a common outcome. In his coining of the term, Gee takes the notion of participatory cultures, and reframes it to the idea of "space". To Gee, what is happening in these online cultures is not merely a "culture" – and far different from a "community". In Gee's view, the word "community" conjures up images of belongingness and membership (p. 70). Instead, he has defined these worlds as "spaces" – a term that allows for the "robust characterization of the ebbs and flows and differing levels of involvement and participation exhibited by members"
According to Gee (2004), "An affinity space is a place or set of places where people affiliate with others based primarily on shared activities, interests, and goals, not shared race, class culture, ethnicity, or gender" (p. 67).
Gee (2004) refers to affinity spaces and states, "Learners 'apprentice' themselves to a group of people who share a certain set of practices (e.g. learning to cook in a family, learning to play video games with a guild, learning to assemble circuit boards in a workplace, learning to splice genes in a biology lab), pick up these practices through joint action with more advanced peers, and advance their abilities to engage and work with others in carrying out such practices" (p. 70).
What Gee (2004) tries to explain about Affinity Spaces is not an attempt to label a group of people. By affinity space, he means a space where people can interact and share a lot with each other. The people who are interacting in a space might find themselves sharing a community with some others in that space, while other people might view their interactions in the space differently. Gee (2004) adds, "In any case, creating spaces within which diverse sorts of people can interact is a leitmotif of the modern world" (p. 71).
Hallmarks of Affinity Spaces
Gee described twelve hallmarks of what he terms "nurturing" affinity spaces:
The affinity in these spaces is to the endeavor, not other people. People from all ages, ethnicities, educational levels, and cultures play/create together – often anonymously or using alter-identities.
Not segregated by age; there is no assumption that older or more senior participants are the only ones with something to teach.
Not segregated by experience; newbies, masters, and everyone else share a common space.
Everyone can, if they so wish, produce and not just consume. The idea that creation can come not only from a space's designers, but also from its users, is a hallmark of these spaces. Users – not just site designers – can help create, shape, and reshape the site and its content. Suggestions are welcome and encouraged, and site designers often use the suggestions of users to reform site designs and configurations.
Content within the space is not fixed, but is transformed by interaction.
Both intensive and extensive knowledge are encouraged. Extensive knowledge is seen as broad, less specialized knowledge about many aspects of the space. Intensive knowledge is in-depth knowledge about certain aspects of the space.
Individual and distributed knowledge are valued.
Dispersed knowledge is encouraged.
Tacit knowledge is encouraged and honored. Members do not have to lead or design; those who wish to “just play” are valued as much as those who wish to contribute more substantially to the site.
Many forms and routes to participation are available.
Different routes to status are inherent in the game.
Leadership is porous and leaders are resources.
Educational uses
Because members of an affinity space are interested in a common practice/belief/activity, they have common ground and motivation together. Gee says that because of this common interest, affinity spaces are able to bridge barriers of age, race, socio-economic status, and educational level, and thus allow each user to participate as he/she chooses, and both experts and novices are equally legitimate participants in the affinity space. While not everyone in affinity spaces is an expert, they are not places where the "blind are leading the blind." Many spaces have unwritten rules that while sharing information, you must share only what you know, provide sources to back up what you say, and in general, leave feedback and comments only in areas you know.
Examples
Online fan fiction sites are examples of affinity spaces. While the goal of the sites is usually to share and read other people's fan fiction creations, informal learning takes place as people have their work read and commented on by "'beta readers.'" It is up to the author then to decide what to do with this informal feedback; often, it is used to revise and edit the work, and at the same time, it may aid the author in pinpointing his or her own overall writing flaws.
Other examples come from "snark sites" or "rant communities." The goal of these sites is typically to make fun of particular problems, such as poorly written fan fiction, or digital image editing mistakes. As community members criticize other people's work, they reach new levels of sophistication in their evaluations, creating extended vocabularies of terms and categorizing mistakes. In Benjamin Bloom's taxonomy, evaluation is at the top of higher order thinking skills. Since either authors or their friends and fans are likely to come to the defense of works being criticized, rhetoric and logic are two areas where much active learning takes place.
Notes
References
Bensen, S. "I don't know if that'd be English or not": Third space theory and literacy instruction. In "Journal of Adolescent and Adult Literacy", 53 (7), (pp. 555–563).
Gee, James Paul. Situated Language and Learning: A Critique of Traditional Schooling. New York: Routledge, 2004. , .
Gee, James Paul Semiotic Social Spaces and Affinity Spaces: From The Age of Mythology to Today's Schools. In D. Barton & K. Tusting (Eds.), Beyond communities of practice: Language, power and social context (pp. 214–232). Cambridge: Cambridge University Press, 2005.
Gee, James Paul & Elisabeth Hayes. Public Pedagogy through Video Games: Design, Resources & Affinity Spaces. Game Based Learning. Retrieved from https://web.archive.org/web/20100820191022/http://www.gamebasedlearning.org.uk/content/view/59. 2009.
Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York University Press, 2006. .
Educational environment
Game design | Affinity space | [
"Engineering"
] | 1,428 | [
"Design",
"Game design"
] |
10,854,000 | https://en.wikipedia.org/wiki/Statistical%20study%20of%20energy%20data | Energy statistics refers to collecting, compiling, analyzing and disseminating data on commodities such as coal, crude oil, natural gas, electricity, or renewable energy sources (biomass, geothermal, wind or solar energy), when they are used for the energy they contain. Energy is the capability of some substances, resulting from their physico-chemical properties, to do work or produce heat. Some energy commodities, called fuels, release their energy content as heat when they burn. This heat could be used to run an internal or external combustion engine.
The need to have statistics on energy commodities became obvious during the 1973 oil crisis that brought a tenfold increase in petroleum prices. Before the crisis, having accurate data on global energy supply and demand was not deemed critical. Another concern of energy statistics today is the huge gap in energy use between developed and developing countries. As the gap narrows (see picture), the pressure on energy supply increases tremendously.
The data on energy and electricity come from three principal sources:
Energy industry
Other industries ("self-producers")
Consumers
The flows of and trade in energy commodities are measured both in physical units (e.g., metric tons), and, when energy balances are calculated, in energy units (e.g., terajoules or tons of oil equivalent). What makes energy statistics specific and different from other fields of economic statistics is the fact that energy commodities undergo a greater number of transformations (flows) than other commodities. In these transformations energy is conserved, as defined by and within the limitations of the first and second laws of thermodynamics.
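As a minimal illustration in Python of how quantities reported in physical units are converted into common energy units when a balance is compiled, the sketch below uses the definition 1 tonne of oil equivalent = 41.868 GJ; the commodities and calorific values shown are assumed example figures, not data from any particular yearbook.
TOE_IN_TJ = 0.041868   # 1 tonne of oil equivalent = 41.868 GJ = 0.041868 TJ

# Assumed net calorific values, in TJ per thousand tonnes (illustrative only).
calorific_value_tj_per_kt = {
    "crude_oil": 41.87,
    "hard_coal": 25.8,
}

def to_terajoules(commodity, kilotonnes):
    """Convert a physical quantity (thousand tonnes) to terajoules."""
    return kilotonnes * calorific_value_tj_per_kt[commodity]

def to_toe(terajoules):
    """Express an energy quantity in tonnes of oil equivalent."""
    return terajoules / TOE_IN_TJ

coal_tj = to_terajoules("hard_coal", 120.0)          # 120 kt of hard coal
print(round(coal_tj, 1), "TJ =", round(to_toe(coal_tj)), "toe")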
See also
Energy system
World energy resources and consumption
External links
Statistical Energy Database Review: Enerdata Yearbook 2012
International Energy Agency: Statistics
United Nations: Energy Statistics
The Oslo Group on Energy Statistics
DOE Energy Information Administration
Year of Energy 2009
European Energy Statistics & Key Indicators
Publications
Energy Statistics Yearbook 2004, United Nations, 2006
Energy Balances and Electricity Profiles 2004, United Nations, 2006
Statistical data sets
Energy measurement
Applied statistics | Statistical study of energy data | [
"Mathematics"
] | 408 | [
"Applied mathematics",
"Applied statistics"
] |
10,854,098 | https://en.wikipedia.org/wiki/Bioconjugation | Bioconjugation is a chemical strategy to form a stable covalent link between two molecules, at least one of which is a biomolecule. Methods to conjugate biomolecules are applied in various fields, including medicine, diagnostics, biocatalysis and materials. Synthetically modified biomolecules can have diverse functionalities, such as tracking cellular events, revealing enzyme function, determining protein biodistribution, imaging specific biomarkers, and delivering drugs to targeted cells.
Bioconjugation is a crucial strategy that links these modified biomolecules with different substrates. Besides applications in biomedical research, bioconjugation has recently also gained importance in nanotechnology such as bioconjugated quantum dots.
The most common types of bioconjugation include coupling of a small molecule (such as biotin or a fluorescent dye) to a protein. Antibody-drug conjugates such as Brentuximab vedotin and Gemtuzumab ozogamicin are examples falling into this category. Other less common molecules used in bioconjugation are oligosaccharides, nucleic acids, synthetic polymers such as polyethylene glycol, and carbon nanotubes. Protein-protein conjugations, such as the coupling of an antibody to an enzyme, or the linkage of protein complexes, is also facilitated via bioconjugations.
Common Bioconjugation Reactions
Synthesis of bioconjugates involves a variety of challenges, ranging from the simple and nonspecific use of a fluorescent dye marker to the complex design of antibody drug conjugates. Various bioconjugation reactions have been developed to chemically modify proteins. Common types of bioconjugation reactions on proteins are coupling of lysine, cysteine, and tyrosine amino acid residues, as well as modification of tryptophan residues and of the N- and C- terminus.
However, these reactions often lack chemoselectivity and efficiency, because they depend on the presence of native amino acids, which are present in large quantities that hinder selectivity. There is an increasing need for chemical strategies that can effectively attach synthetic molecules site specifically to proteins. One strategy is to first install a unique functional group onto a protein, and then a bioorthogonal reaction is used to couple a biomolecule with this unique functional group. The bioorthogonal reactions targeting non-native functional groups are widely used in bioconjugation chemistry. Some important reactions are modification of ketone and aldehydes, Staudinger ligation with organic azides, copper-catalyzed Huisgen cycloaddition of azides, and strain promoted Huisgen cycloaddition of azides.
On Natural Amino Acids
Reactions of lysines
The nucleophilic lysine residue is a commonly targeted site in protein bioconjugation, typically through amine-reactive N-hydroxysuccinimidyl (NHS) esters. To obtain an optimal number of deprotonated lysine residues, the pH of the aqueous solution must be below the pKa of the lysine ammonium group, which is around 10.5, so the typical pH of the reaction is between 8 and 9. The common reagent for the coupling reaction is the NHS-ester (shown in the first reaction below in Figure 1), which reacts with nucleophilic lysine through a lysine acylation mechanism. Other similar reagents are isocyanates and isothiocyanates that undergo a similar mechanism (shown in the second and third reactions in Figure 1 below). Benzoyl fluorides (shown in the last reaction below in Figure 1), which allow for lysine modification of proteins under mild conditions (low temperature, physiological pH), were recently proposed as an alternative to classically used lysine-specific reagents.
Reactions of cysteines
Because free cysteine rarely occurs on protein surfaces, it is an excellent choice for chemoselective modification. Under basic conditions, the cysteine residues will be deprotonated to generate a thiolate nucleophile, which will react with soft electrophiles, such as maleimides and iodoacetamides (shown in the first two reactions in Figure 2 below). As a result, a carbon-sulfur bond is formed. Another modification of cysteine residues involves the formation of a disulfide bond (shown in the third reaction in Figure 2). The reduced cysteine residues react with exogenous disulfides, generating a new disulfide bond on the protein. An excess of disulfides is often used to drive the reaction, such as 2-thiopyridone and 3-carboxy-4-nitrothiophenol. Electron-deficient alkynes were demonstrated to selectively react with cysteine residues of proteins in the presence of other nucleophilic amino acid residues. Depending on the alkyne substitution, these reactions can produce either cleavable (when alkynone derivatives are used), or hydrolytically stable bioconjugates (when 3-arylpropiolonitriles are used; the last reaction below in Figure 2).
Reactions of tyrosines
Tyrosine residues are relatively unreactive; therefore they have not been popular targets for bioconjugation. Recent development has shown that tyrosine can be modified through electrophilic aromatic substitution (EAS) reactions, and it is selective for the aromatic carbon adjacent to the phenolic hydroxyl group. This becomes particularly useful in the case that cysteine residues cannot be targeted. Specifically, diazonium effectively couples with tyrosine residues (diazonium salt shown as reagent in the first reaction in Figure 3 below), and an electron-withdrawing substituent in the 4-position of the diazonium salt can effectively increase the efficiency of the reaction. Cyclic diazodicarboxyamide derivatives like 4-Phenyl-1,2,4-triazole-3,5-dione (PTAD) were reported for selective bioconjugation on tyrosine residues (the second reaction in Figure 3 below). A three-component Mannich-type reaction with aldehydes and anilines (the last reaction in Figure 3) was also described to be relatively tyrosine-selective under mild optimised reaction conditions.
Reactions of N- and C- termini
Since natural amino acid residues are usually present in large quantities, it is often difficult to modify one single site. Strategies targeting the termini of proteins have been developed, because they greatly enhance the site selectivity of protein modification. One N-terminal modification involves the functionalization of the terminal amino acid. The oxidation of N-terminal serine and threonine residues is able to generate an N-terminal aldehyde, which can undergo further bioorthogonal reactions (shown in the first reaction in Figure 4). Another type of modification involves the condensation of N-terminal cysteine with an aldehyde, generating a thiazolidine that is stable at high pH (second reaction in Figure 4). Using pyridoxal phosphate (PLP), several N-terminal amino acids can undergo transamination to yield an N-terminal aldehyde, such as glycine and aspartic acid (third reaction in Figure 4).
An example of C-termini modification is the native chemical ligation (NCL), which is the coupling between a C-terminal thioester and a N-terminal cysteine (Figure 5).
Bioorthogonal Reactions: On Unique Functional Groups
Modification of ketones and aldehydes
A ketone or aldehyde can be attached to a protein through the oxidation of N-terminal serine residues or transamination with PLP. Additionally, they can be introduced by incorporating unnatural amino acids via the Tirrell method or Schultz method. They will then selectively condense with an alkoxyamine and a hydrazine, producing oxime and hydrazone derivatives (shown in the first and second reactions, respectively, in Figure 6). This reaction is highly chemoselective in terms of protein bioconjugation, but the reaction rate is slow. The mechanistic studies show that the rate determining step is the dehydration of tetrahedral intermediate, so a mild acidic solution is often employed to accelerate the dehydration step.
The introduction of nucleophilic catalyst can significantly enhance reaction rate (shown in Figure 7). For example, using aniline as a nucleophilic catalyst, a less populated protonated carbonyl becomes a highly populated protonated Schiff base. In other words, it generates a high concentration of reactive electrophile. The oxime ligation can then occur readily, and it has been reported that the rate increased up to 400 times under mild acidic condition. The key of this catalyst is that it can generate a reactive electrophile without competing with desired product.
Recent developments that exploit proximal functional groups have enabled hydrazone condensations to operate at 20 M⁻¹s⁻¹ at neutral pH while oxime condensations have been discovered which proceed at 500–10,000 M⁻¹s⁻¹ at neutral pH without added catalysts.
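To give a rough sense of what such second-order rate constants mean in practice, for a bimolecular ligation A + B → product run with equal reactant concentrations C0, the half-life is 1/(k·C0). The short Python sketch below is only an illustration; the 100 µM concentration is an assumed example value, not taken from the text.
def second_order_half_life(k, c0):
    """Half-life in seconds of A + B -> P at equal initial concentrations c0 (in M)."""
    return 1.0 / (k * c0)

c0 = 100e-6   # assumed 100 micromolar reactant concentrations (illustrative)
for label, k in [("hydrazone condensation, k = 20 /M/s", 20.0),
                 ("fast oxime condensation, k = 10000 /M/s", 10000.0)]:
    print(f"{label}: t1/2 = {second_order_half_life(k, c0):.0f} s")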
Staudinger ligation with azides
The Staudinger ligation of azides and phosphine has been used extensively in field of chemical biology. Because it is able to form a stable amide bond in living cells and animals, it has been applied to modification of cell membrane, in vivo imaging, and other bioconjugation studies.
Contrasting with the classic Staudinger reaction, Staudinger ligation is a second order reaction in which the rate-limiting step is the formation of phosphazide (specific reaction mechanism shown in Figure 9). The triphenylphosphine first reacts with the azide to yield an azaylide through a four-membered ring transition state, and then an intramolecular reaction leads to the iminophosphorane intermediate, which will then give the amide-linkage under hydrolysis.
Huisgen cyclization of azides
Copper catalyzed Huisgen cyclization of azides
Azides have become a popular target for chemoselective protein modification, because they are small in size and have a favorable thermodynamic reaction potential. One such azide reaction is the [3+2] cycloaddition with an alkyne, but the reaction requires high temperatures and often gives mixtures of regioisomers.
An improved reaction developed by chemist Karl Barry Sharpless involves a copper(I) catalyst, which couples azides with terminal alkynes to give only 1,4-substituted 1,2,3-triazoles in high yields (shown below in Figure 11). Mechanistic studies suggest a stepwise reaction. The Cu(I) first couples with the acetylene, and then reacts with the azide to generate a six-membered intermediate. The process is so robust that it occurs at pH values ranging from 4 to 12, and copper(II) sulfate is often used as a catalyst in the presence of a reducing agent.
Strain promoted Huisgen cyclization of azides
Even though Staudinger ligation is a suitable bioconjugation method for living cells without major toxicity, the phosphine's sensitivity to air oxidation and its poor solubility in water significantly hinder its efficiency. The copper(I)-catalyzed azide-alkyne coupling has reasonable reaction rate and efficiency under physiological conditions, but copper poses significant toxicity and sometimes interferes with protein functions in living cells. In 2004, chemist Carolyn R. Bertozzi's lab developed a metal-free [3+2] cycloaddition using strained cyclooctyne and azide. Cyclooctyne, which is the smallest stable cycloalkyne, can couple with azide through [3+2] cycloaddition, leading to two regioisomeric triazoles (Figure 12). The reaction occurs readily at room temperature and therefore can be used to effectively modify living cells without negative effects. It has also been reported that the installation of fluorine substituents on a cyclic alkyne can greatly accelerate the reaction rate.
Transition Metal-Mediated Bioconjugation Reactions
Transition metal-based bioconjugation had been challenging due to the nature of biological conditions – aqueous solution, room temperature, mild pH, and low substrate concentrations – which are generally challenging for organometallic reactions. However, recently, besides copper-catalyzed [3 + 2] azide alkyne cycloaddition reaction, more and more diverse transition metal-mediated chemical transformations have been applied for bioconjugation reactions, introducing olefin metathesis, alkylation, C–H arylation, C–C, C–S, and C–N cross-coupling reactions.
Alkylation
On Natural Amino Acids
Rh-catalyzed Trp and Cys alkylation
Using in situ generated RhII-carbenoid by activation of vinyl-substituted diazo compounds with Rh2(OAc)4, tryptophans and cysteines were shown to be selectively alkylated in aqueous media.
However, this method is limited to surface tryptophans and cysteines possibly because of steric constraints.
Ir-catalyzed Lys and N-terminus (reductive) alkylation
Imines formed from the condensation of aldehydes with lysines or the N-terminus can be reduced efficiently by a water-stable [Cp*Ir(bipy)(H2O)]SO4 complex in the presence of formate ions (serving as the hydride source). The reaction happens readily under physiologically relevant conditions and results in high conversion for various aromatic aldehydes.
Pd-catalyzed Tyr O-alkylation
By using a pre-formed electrophilic π-allylpalladium(II) reagent derived from allylic acetate or carbamate precursors, selective allylic alkylation of tyrosines can be achieved in aqueous solution at room temperature and in the presence of cysteines.
Au-catalyzed Cys alkylation
Cysteine-containing peptides have been shown to undergo 1,2-addition to allenes in the presence of gold(I) and/or silver(I) salts, producing hydroxyl substituted vinyl thioethers. The reaction with peptides proceeds with high yields and is selective for cysteines over other nucleophilic residues.
However, the reactivity towards proteins is much decreased, potentially due to the coordination of gold to the protein backbone.
Arylation
On Natural Amino Acids
Trp arylation
Multiple methods have been reported to achieve tryptophan C–H arylation, where diverse electrophiles such as aryl halides and aryl boronic acids (an example shown below) have been used to transfer the aryl groups.
However, current tryptophan C–H arylation reaction conditions remain relatively harsh, requiring organic solvents, low pH and/or high temperatures.
Cys arylation
Free thiols have been considered unfavorable for Pd-mediated reactions due to Pd-catalyst decomposition. However, PdII oxidative addition complexes (OACs) supported by dialkylbiaryl phosphine ligands have been shown to work efficiently for cysteine S-arylation.
The first example is the use of a PdII OAC with RuPhos: the PdII complex resulting from the oxidative addition of aryl halides or trifluoromethanesulfonates and using RuPhos as the ligand could chemoselectively modify cysteines in various buffers with 5% organic co-solvent at neutral pH. This method has been shown to modify peptides and proteins, achieve peptide macrocyclization (by using a bis-palladium reagent and peptides with two unprotected cysteines) and synthesize antibody-drug conjugates (ADCs). Changing the ligand to sSPhos makes the PdII complex sufficiently water-soluble to achieve cysteine S-arylation under cosolvent-free aqueous conditions.
There are other applications of this method where the PdII complexes were generated as PdII-peptide OACs by introducing 4-halophenylalanine into peptides during SPPS to achieve peptide-peptide or peptide-protein ligation.
As an alternative to direct oxidative addition to the peptide, the Pd OACs can also be transferred to the protein through an amine-selective acylation reaction via an NHS ester. The latter has been applied to selectively label surface lysine residues of a protein (forming PdII-protein OACs) and oligonucleotides (forming PdII-oligonucleotide OACs), which could then be linked to cysteine-containing peptides or proteins.
Another example of protein-protein cross-coupling is achieved through converting cysteine residues into an electrophilic S-aryl–Pd–X OAC by utilizing an intramolecular oxidative addition strategy.
Lys arylation
Similar to cysteine, lysine N-arylation could be achieved through Pd OACs with different dialkylbiaryl phosphine ligands. Due to weaker nucleophilicity and slower reductive elimination rate compared to cysteine, the selection of supporting ligands is shown to be critical. The bulky BrettPhos and t-BuBrettPhos ligands in conjunction with mildly basic sodium phenoxide have been used as the strategy to functionalize lysines on peptide substrates. The reaction happens in mild conditions and is selective over most other nucleophilic amino acid residues.
On Unnatural Amino Acids
Pd-mediated Sonogashira, Heck, and Suzuki-Miyaura cross-coupling reactions have been applied widely to modify peptides and proteins, where diverse Pd reagents have been developed for the application in aqueous solutions. Those reactions require the protein or peptide substrate bearing unnatural functional groups such as alkyne, aryl halides, and aryl boronic acids, which can be achieved through genetic code expansion or post-translational modifications.
Examples of Applied Bioconjugation Techniques
Growth Factors
Bioconjugation of TGF-β to iron oxide nanoparticles and its activation through magnetic hyperthermia in vitro has been reported. This was done by using 1-(3-dimethylaminopropyl)ethylcarbodiimide combined with N-Hydroxysuccinimide to form primary amide bonds with the free primary amines on the growth factor. Carbon nanotubes have been successfully used in conjunction with bioconjugation to link TGF-β followed by an activation with near-infrared light. Typically, these reactions have involved the use of a crosslinker, but some of these add molecular space between the compound of interest and the base material and in turn cause higher degrees of non-specific binding and unwanted reactivity.
See also
Immunofluorescence
Biomolecular engineering
Biotinylation
SpyTag/SpyCatcher
In situ cyclization of proteins
Unnatural amino acids
Bioconjugate Chemistry journal
References
Biochemistry
Chemical bonding | Bioconjugation | [
"Physics",
"Chemistry",
"Materials_science",
"Biology"
] | 4,096 | [
"Biochemistry",
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
10,854,684 | https://en.wikipedia.org/wiki/Karnaugh%20map | A Karnaugh map (KM or K-map) is a diagram that can be used to simplify a Boolean algebra expression. Maurice Karnaugh introduced it in 1953 as a refinement of Edward W. Veitch's 1952 Veitch chart, which itself was a rediscovery of Allan Marquand's 1881 logical diagram (aka. Marquand diagram). It is also useful for understanding logic circuits. Karnaugh maps are also known as Marquand–Veitch diagrams, Svoboda charts (albeit only rarely) and Karnaugh–Veitch maps (KV maps).
Definition
A Karnaugh map reduces the need for extensive calculations by taking advantage of humans' pattern-recognition capability. It also permits the rapid identification and elimination of potential race conditions.
The required Boolean results are transferred from a truth table onto a two-dimensional grid where, in Karnaugh maps, the cells are ordered in Gray code, and each cell position represents one combination of input conditions. Cells are also known as minterms, while each cell value represents the corresponding output value of the Boolean function. Optimal groups of 1s or 0s are identified, which represent the terms of a canonical form of the logic in the original truth table. These terms can be used to write a minimal Boolean expression representing the required logic.
Karnaugh maps are used to simplify real-world logic requirements so that they can be implemented using the minimal number of logic gates. A sum-of-products expression (SOP) can always be implemented using AND gates feeding into an OR gate, and a product-of-sums expression (POS) leads to OR gates feeding an AND gate. The POS expression gives a complement of the function (if F is the function, then its complement will be F'). Karnaugh maps can also be used to simplify logic expressions in software design. Boolean conditions, as used for example in conditional statements, can get very complicated, which makes the code difficult to read and to maintain. Once minimised, canonical sum-of-products and product-of-sums expressions can be implemented directly using AND and OR logic operators.
Example
Karnaugh maps are used to facilitate the simplification of Boolean algebra functions. For example, consider the Boolean function described by the following truth table.
row  A B C D   f(A, B, C, D)
 0   0 0 0 0   0
 1   0 0 0 1   0
 2   0 0 1 0   0
 3   0 0 1 1   0
 4   0 1 0 0   0
 5   0 1 0 1   0
 6   0 1 1 0   1
 7   0 1 1 1   0
 8   1 0 0 0   1
 9   1 0 0 1   1
10   1 0 1 0   1
11   1 0 1 1   1
12   1 1 0 0   1
13   1 1 0 1   1
14   1 1 1 0   1
15   1 1 1 1   0
Following are two different notations describing the same function in unsimplified Boolean algebra, using the Boolean variables A, B, C, D and their inverses.
f(A, B, C, D) = ∑ m(6, 8, 9, 10, 11, 12, 13, 14), where the m_i are the minterms to map (i.e., rows that have output 1 in the truth table).
f(A, B, C, D) = ∏ M(0, 1, 2, 3, 4, 5, 7, 15), where the M_i are the maxterms to map (i.e., rows that have output 0 in the truth table).
Construction
In the example above, the four input variables can be combined in 16 different ways, so the truth table has 16 rows, and the Karnaugh map has 16 positions. The Karnaugh map is therefore arranged in a 4 × 4 grid.
The row and column indices (shown across the top and down the left side of the Karnaugh map) are ordered in Gray code rather than binary numerical order. Gray code ensures that only one variable changes between each pair of adjacent cells. Each cell of the completed Karnaugh map contains a binary digit representing the function's output for that combination of inputs.
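As a minimal illustration in Python (an illustrative sketch, not a required part of the construction), the 2-bit Gray code used for the row and column indices can be generated with the standard i XOR (i >> 1) formula, and checked so that adjacent codes, including the wrap-around pair, differ in exactly one bit:
def gray_codes(n_bits):
    # The i-th Gray code is i XOR (i >> 1).
    return [i ^ (i >> 1) for i in range(2 ** n_bits)]

codes = gray_codes(2)
print([format(c, "02b") for c in codes])   # ['00', '01', '11', '10']

# Adjacent codes (with wrap-around) differ in exactly one bit.
for a, b in zip(codes, codes[1:] + codes[:1]):
    assert bin(a ^ b).count("1") == 1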
Grouping
After the Karnaugh map has been constructed, it is used to find one of the simplest possible forms — a canonical form — for the information in the truth table. Adjacent 1s in the Karnaugh map represent opportunities to simplify the expression. The minterms ('minimal terms') for the final expression are found by encircling groups of 1s in the map. Minterm groups must be rectangular and must have an area that is a power of two (i.e., 1, 2, 4, 8...). Minterm rectangles should be as large as possible without containing any 0s. Groups may overlap in order to make each one larger. The optimal groupings in the example below are marked by the green, red and blue lines, and the red and green groups overlap. The red group is a 2 × 2 square, the green group is a 4 × 1 rectangle, and the overlap area is indicated in brown.
The cells are often denoted by a shorthand which describes the logical value of the inputs that the cell covers. For example, AD would mean a cell which covers the 2x2 area where A and D are true, i.e. the cells numbered 13, 9, 15, 11 in the diagram above. On the other hand, AD' would mean the cells where A is true and D is false (that is, D' is true).
The grid is toroidally connected, which means that rectangular groups can wrap across the edges (see picture). Cells on the extreme right are actually 'adjacent' to those on the far left, in the sense that the corresponding input values only differ by one bit; similarly, so are those at the very top and those at the bottom. Therefore, AD' can be a valid term—it includes cells 12 and 8 at the top, and wraps to the bottom to include cells 10 and 14—as is B'D', which includes the four corners.
Solution
Once the Karnaugh map has been constructed and the adjacent 1s linked by rectangular and square boxes, the algebraic minterms can be found by examining which variables stay the same within each box.
For the red grouping:
A is the same and is equal to 1 throughout the box, therefore it should be included in the algebraic representation of the red minterm.
B does not maintain the same state (it shifts from 1 to 0), and should therefore be excluded.
C does not change. It is always 0, so its complement, NOT-C, should be included. Thus, C' should be included.
D changes, so it is excluded.
Thus the first minterm in the Boolean sum-of-products expression is AC'.
For the green grouping, A and B maintain the same state, while C and D change. B is 0 and has to be negated before it can be included. The second term is therefore AB'. Note that it is acceptable that the green grouping overlaps with the red one.
In the same way, the blue grouping gives the term BCD'.
The solutions of each grouping are combined: the normal form of the circuit is AC' + AB' + BCD'.
Thus the Karnaugh map has guided a simplification of
f(A, B, C, D) = A'BCD' + AB'C'D' + AB'C'D + AB'CD' + AB'CD + ABC'D' + ABC'D + ABCD'
             = AC' + AB' + BCD'
It would also have been possible to derive this simplification by carefully applying the axioms of Boolean algebra, but the time it takes to do that grows exponentially with the number of terms.
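For this four-variable example, the result can also be confirmed by exhaustively enumerating the 16 input combinations. The following Python sketch is given purely for illustration; it checks that the minimal sum-of-products expression read off the map reproduces the original minterm list:
minterms = {6, 8, 9, 10, 11, 12, 13, 14}   # rows whose truth-table output is 1

def f_from_minterms(a, b, c, d):
    # A is the most significant bit of the row index.
    return int((a << 3 | b << 2 | c << 1 | d) in minterms)

def f_simplified(a, b, c, d):
    # Minimal sum of products read off the Karnaugh map: AC' + AB' + BCD'
    return int((a and not c) or (a and not b) or (b and c and not d))

for row in range(16):
    a, b, c, d = (row >> 3) & 1, (row >> 2) & 1, (row >> 1) & 1, row & 1
    assert f_from_minterms(a, b, c, d) == f_simplified(a, b, c, d)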
Inverse
The inverse of a function is solved in the same way by grouping the 0s instead.
The three terms to cover the inverse are all shown with grey boxes with different colored borders:
A'B'
A'C'
BCD
This yields the inverse:
f'(A, B, C, D) = A'B' + A'C' + BCD
Through the use of De Morgan's laws, the product of sums can be determined:
f(A, B, C, D) = (A + B)(A + C)(B' + C' + D')
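The relationship between the three forms can likewise be verified by brute force. A short Python check, given for illustration only, confirms that the grouped inverse is the complement of the function and that De Morgan's laws give an equivalent product of sums:
from itertools import product

def f_sop(a, b, c, d):       # AC' + AB' + BCD'
    return (a and not c) or (a and not b) or (b and c and not d)

def f_inverse(a, b, c, d):   # A'B' + A'C' + BCD, obtained by grouping the 0s
    return (not a and not b) or (not a and not c) or (b and c and d)

def f_pos(a, b, c, d):       # (A + B)(A + C)(B' + C' + D'), from De Morgan's laws
    return (a or b) and (a or c) and (not b or not c or not d)

for x in product((0, 1), repeat=4):
    assert bool(f_sop(*x)) == (not f_inverse(*x)) == bool(f_pos(*x))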
Don't cares
Karnaugh maps also allow easier minimizations of functions whose truth tables include "don't care" conditions. A "don't care" condition is a combination of inputs for which the designer doesn't care what the output is. Therefore, "don't care" conditions can either be included in or excluded from any rectangular group, whichever makes it larger. They are usually indicated on the map with a dash or X.
The example on the right is the same as the example above but with the value of f(1,1,1,1) replaced by a "don't care". This allows the red term to expand all the way down and, thus, removes the green term completely.
This yields the new minimum equation:
f(A, B, C, D) = A + BCD'
Note that the first term is just A, not AC'. In this case, the don't care has dropped a term (the green rectangle); simplified another (the red one); and removed the race hazard (removing the yellow term as shown in the following section on race hazards).
The inverse case is simplified as follows:
f'(A, B, C, D) = A'B' + A'C' + A'D
Through the use of De Morgan's laws, the product of sums can be determined:
f(A, B, C, D) = (A + B)(A + C)(A + D')
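Because the row (1,1,1,1) is unconstrained, the simplified expression only has to match the specification on the fifteen specified rows. A small Python check, included for illustration:
required_ones = {6, 8, 9, 10, 11, 12, 13, 14}   # row 15 is a "don't care"

def f_spec(row):
    return int(row in required_ones)

def f_dont_care(a, b, c, d):
    # Simplified with the help of the don't care: A + BCD'
    return int(a or (b and c and not d))

for row in range(16):
    if row == 15:
        continue                      # the don't-care row may take either value
    a, b, c, d = (row >> 3) & 1, (row >> 2) & 1, (row >> 1) & 1, row & 1
    assert f_spec(row) == f_dont_care(a, b, c, d)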
Race hazards
Elimination
Karnaugh maps are useful for detecting and eliminating race conditions. Race hazards are very easy to spot using a Karnaugh map, because a race condition may exist when moving between any pair of adjacent, but disjoint, regions circumscribed on the map. However, because of the nature of Gray coding, adjacent has a special definition explained above – we're in fact moving on a torus, rather than a rectangle, wrapping around the top, bottom, and the sides.
In the example above, a potential race condition exists when C is 1 and D is 0, A is 1, and B changes from 1 to 0 (moving from the blue state to the green state). For this case, the output is defined to remain unchanged at 1, but because this transition is not covered by a specific term in the equation, a potential for a glitch (a momentary transition of the output to 0) exists.
There is a second potential glitch in the same example that is more difficult to spot: when D is 0 and A and B are both 1, with C changing from 1 to 0 (moving from the blue state to the red state). In this case the glitch wraps around from the top of the map to the bottom.
Whether glitches will actually occur depends on the physical nature of the implementation, and whether we need to worry about it depends on the application. In clocked logic, it is enough that the logic settles on the desired value in time to meet the timing deadline. In our example, we are not considering clocked logic.
In our case, an additional term of AD' would eliminate the potential race hazard, bridging between the green and blue output states or blue and red output states: this is shown as the yellow region (which wraps around from the bottom to the top of the right half) in the adjacent diagram.
The term is redundant in terms of the static logic of the system, but such redundant, or consensus terms, are often needed to assure race-free dynamic performance.
Similarly, an additional term of A'D must be added to the inverse to eliminate another potential race hazard. Applying De Morgan's laws creates another product of sums expression for f, but with a new factor of (A + D').
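That the consensus term changes nothing in the static logic can be confirmed directly. The following Python sketch is an illustration only:
from itertools import product

def f(a, b, c, d):
    # Minimal cover: AC' + AB' + BCD'
    return (a and not c) or (a and not b) or (b and c and not d)

def f_hazard_free(a, b, c, d):
    # Same cover plus the redundant consensus term AD'
    return f(a, b, c, d) or (a and not d)

for x in product((0, 1), repeat=4):
    assert bool(f(*x)) == bool(f_hazard_free(*x))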
2-variable map examples
The following are all the possible 2-variable, 2 × 2 Karnaugh maps. Listed with each are the minterms as a function of ∑m() and the race-hazard-free (see previous section) minimum equation. A minterm is defined as an expression that gives the most minimal form of expression of the mapped variables. All possible horizontal and vertical interconnected blocks can be formed. These blocks must be of the size of the powers of 2 (1, 2, 4, 8, 16, 32, ...). These expressions create a minimal logical mapping of the minimal logic variable expressions for the binary expressions to be mapped. Here are all the blocks with one field.
A block can be continued across the bottom, top, left, or right of the chart. That can even wrap beyond the edge of the chart for variable minimization. This is because each logic variable corresponds to each vertical column and horizontal row. A visualization of the k-map can be considered cylindrical. The fields at edges on the left and right are adjacent, and the top and bottom are adjacent. K-Maps for four variables must be depicted as a donut or torus shape. The four corners of the square drawn by the k-map are adjacent. Still more complex maps are needed for 5 variables and more.
Related graphical methods
Related graphical minimization methods include:
Marquand diagram (1881) by Allan Marquand (1853–1924)
Veitch chart (1952) by Edward W. Veitch (1924–2013)
Svoboda chart (1956) by Antonín Svoboda (1907–1980)
Mahoney map (M-map, designation numbers, 1963) by Matthew V. Mahoney (a reflection-symmetrical extension of Karnaugh maps for larger numbers of inputs)
Reduced Karnaugh map (RKM) techniques (from 1969) like infrequent variables, map-entered variables (MEV), variable-entered map (VEM) or variable-entered Karnaugh map (VEKM) by G. W. Schultz, Thomas E. Osborne, Christopher R. Clare, J. Robert Burgoon, Larry L. Dornhoff, William I. Fletcher, Ali M. Rushdi and others (several successive Karnaugh map extensions based on variable inputs for a larger numbers of inputs)
Minterm-ring map (MRM, 1990) by Thomas R. McCalla (a three-dimensional extension of Karnaugh maps for larger numbers of inputs)
See also
Algebraic normal form (ANF)
Binary decision diagram (BDD), a data structure that is a compressed representation of a Boolean function
Espresso heuristic logic minimizer
List of Boolean algebra topics
Logic optimization
Punnett square (1905), a similar diagram in biology
Quine–McCluskey algorithm
Reed–Muller expansion
Venn diagram (1880)
Zhegalkin polynomial
Notes
References
Further reading
External links
Detect Overlapping Rectangles, by Herbert Glarner.
Using Karnaugh maps in practical applications, Circuit design project to control traffic lights.
K-Map Tutorial for 2,3,4 and 5 variables
POCKET–PC BOOLEAN FUNCTION SIMPLIFICATION, Ledion Bitincka — George E. Antoniou
K-Map troubleshoot
Boolean algebra
Diagrams
Electronics optimization
Logic in computer science | Karnaugh map | [
"Mathematics"
] | 2,844 | [
"Boolean algebra",
"Fields of abstract algebra",
"Logic in computer science",
"Mathematical logic"
] |
10,854,775 | https://en.wikipedia.org/wiki/Environmental%20data%20rescue | Environmental data rescue is a collection of processes, including photography and scanning, that stores historical and modern environmental data in a usable format. The data is then analyzed and used in scientific models. Historical weather information helps meteorologists and climatologists understand past trends in weather changes, which helps them forecast and predict future weather.
One method takes digital photographs of environmental datum stored on paper medium and then ships the images to a facility where they are entered into a database.
Throughout the world, some estimate that 700,000 pieces of data are lost every day due to inks fading, paper deteriorating, magnetic tape print-through, etc. By a rough estimate, 100 billion parameter values still exist on paper, microfiche, microfilm, and magnetic tape in a format unusable by computers and scientists alike, and need to be digitized. These data are stored on a variety of media, from paper and microfiche to older magnetic tapes that are deteriorating.
Once data is digitized, it can be used to help a large range of people, from farmers to engineers, and in scientific pursuits such as climate studies. Historical environmental data are also used as a basis for "disease vectorization", where the areal spread of airborne diseases is correlated with historical weather conditions so that in future outbreaks health care teams can predict the direction and rate of spread of the disease and remedial actions can begin before the disease reaches the vulnerable population. Historic data are also used in designing structures such as bridges and buildings, and to help the 1.8 billion subsistence farmers throughout the world better plan crops, alleviating starvation.
The National Climatic Data Center, within the National Oceanic & Atmospheric Administration, is the current collection point for this data. The International Environmental Data Rescue Organization, a 501(c)(3) non-profit organization, has also participated in the rescue and digitization of one million historic weather observations in Africa and South America.
See also
Citizen science
Data archiving
References
Environmental science | Environmental data rescue | [
"Environmental_science"
] | 398 | [
"nan"
] |
10,855,498 | https://en.wikipedia.org/wiki/Biocultural%20anthropology | Biocultural anthropology can be defined in numerous ways. It is the scientific exploration of the relationships between human biology and culture. "Instead of looking for the underlying biological roots of human behavior, biocultural anthropology attempts to understand how culture affects our biological capacities and limitations."
History
Physical anthropologists throughout the first half of the 20th century viewed this relationship from a racial perspective; that is, from the assumption that typological human biological differences lead to cultural differences. After World War II the emphasis began to shift toward an effort to explore the role culture plays in shaping human biology. This shift led to the development of dual inheritance theory in the 1960s; in relation to, and following, that development, the term biocultural evolution was introduced and first used in the 1970s.
Key research
Biocultural approaches to human biology have been utilized since at least 1958, when the American biological anthropologist Frank B. Livingstone contributed early research explaining the linkages among population growth, subsistence strategy, and the distribution of the sickle cell gene in Liberia.
Human adaptability research in the 1960s focused on two biocultural approaches to fatigue: functional differentiation of skeletal muscles associated with various movements, and human adaptability to modern living involving different work types.
"What's Cultural about Biocultural Research," Written by William W. Dressler, connects the cultural perspective of biocultural anthropology to "cultural consonance" which is defined as "a model to assess the approximation of an individuals behavior compared to the guiding awareness of his or her culture. This research has been used to examine outcomes in blood pressure, depressive symptoms, body composition, and dietary habits.
Dr. Romendro Khongsdier's approach to the study of human variation and evolution.
"Building a New Biocultural Synthesis" by Alan H. Goodman and Thomas L. Leatherman.
"New Directions in Biocultural Anthropology" edited by Molly Zuckerman and Debra Martin uses various case studies from around the world to understand how biocultural anthropology can be used to understand the relationship between biology and culture in both past and present populations.
Contemporary biocultural anthropology
Biocultural methods focus on the interactions between humans and their environment to understand human biological adaptation and variation. Contemporary biocultural anthropologists view culture as having several key roles in human biological variation:
Culture is a major human adaptation, permitting individuals and populations to adapt to widely varying local ecologies.
Characteristic human biological or biobehavioral features, such as a large frontal cortex and intensive parenting compared to other primates, are viewed in part as an adaptation to the complex social relations created by culture.
Culture shapes the political economy, thereby influencing what resources are available to individuals to feed and shelter themselves, protect themselves from disease, and otherwise maintain their health.
Culture shapes the way people think about the world, altering their biology by influencing their behavior (e.g., food choice) or more directly through psychosomatic effects (e.g., the biological effects of psychological stress).
While biocultural anthropologists are found in many academic anthropology departments, usually as a minority of the faculty, certain departments have placed considerable emphasis on the "biocultural synthesis". Historically, this has included Emory University, the University of Alabama, UMass Amherst (especially in biocultural bioarchaeology), and the University of Washington, each of which built Ph.D. programs around biocultural anthropology; Binghamton University, which has an M.S. program in biomedical anthropology; Oregon State University, University of Kentucky and others. Paul Baker, an anthropologist at Penn State whose work focused upon human adaptation to environmental variations, is credited with having popularized the concept of "biocultural" anthropology as a distinct subcategory of anthropology in general. Khongsdier argues that biocultural anthropology is the future of anthropology because it serves as a guiding force towards greater integration of the subdisciplines.
Reception and criticism
Modern anthropologists, both biological and cultural, have criticized the biocultural synthesis, generally as part of a broader critique of "four-field holism" in U.S. anthropology (see anthropology main article). Typically such criticisms rest on the belief that biocultural anthropology imposes holism upon the biological and cultural subfields without adding value, or even destructively. For instance, contributors in the edited volume Unwrapping the Sacred Bundle: Reflections on the Disciplining of Anthropology argued that the biocultural synthesis, and anthropological holism more generally, are artifacts from 19th century social evolutionary thought that inappropriately impose scientific positivism upon cultural anthropology.
Some departments of anthropology have fully split, usually dividing scientific from humanistic anthropologists, such as Stanford's highly publicized 1998 division into departments of "Cultural and Social Anthropology" and "Anthropological Sciences". Underscoring the continuing controversy, this split is now being reversed over the objections of some faculty. Other departments, such as at Harvard, have distinct biological and sociocultural anthropology "wings" not designed to foster cross subdisciplinary interchange.
Biocultural research has been shown to present a few challenges to the researcher: "In general we are much more experienced in measuring the biological than the cultural. It is also difficult to precisely define what is meant by constructs such as socioeconomic status, poverty, rural, and urban. Operationalizing key variables so that they can be measured in ways that are ethnographically valid as well as replicable. Defining and measuring multiple causal pathways."
See also
Biocultural evolution
Cultural neuroscience
Evolutionary anthropology
Sociocultural anthropology
References
External links
Essays by Prof. Jack Kelso
Anthropology
Sociobiology | Biocultural anthropology | [
"Biology"
] | 1,134 | [
"Behavioural sciences",
"Behavior",
"Sociobiology"
] |
10,855,624 | https://en.wikipedia.org/wiki/Amihan%20%28mythology%29 | Amihan is a genderless
deity that is depicted as a bird in the Philippine mythology. According to the Tagalog folklore, Amihan is the first creature to inhabit the universe, along with the gods called Bathala and Aman Sinaya. In the legend, Amihan is described as a bird who saved the first human beings, Malakas and Maganda, from a bamboo plant.
Amihan is also depicted with Habagat, which explains the wind patterns in the country. In one legend, they are depicted as children of the supreme deity Bathala. They are allowed by their father to play in turns, each for half a year, since having the two play together causes destruction in the land. Amihan is depicted as the gentler sister while Habagat is depicted as the more active brother. In another legend, Amihan is depicted as a giant who is at war with another giant, Habagat.
References
Sky and weather deities
Bird deities
Creator deities
Tagalog deities
Legendary birds
Androgynous and hermaphroditic deities | Amihan (mythology) | [
"Physics"
] | 212 | [
"Weather",
"Sky and weather deities",
"Physical phenomena"
] |
10,855,775 | https://en.wikipedia.org/wiki/Trap%20crop | A trap crop is a plant that attracts agricultural pests, usually insects, away from nearby target crops. This form of companion planting can save a target crop from decimation by pests without the use of artificial pesticides. A trap crop is used for attracting the insect and pests away from a target crop field. Many trap crops have successfully diverted pests from focal crops in small scale greenhouse, garden and field experiments; a small portion of these plants have been shown to reduce pest damage at larger commercial scales. A common explanation for reported trap cropping failures, is that attractive trap plants only protect nearby plants if the insects do not move back into the target crop. In a review of 100 trap cropping examples in 2006, only 10 trap crops were classified as successful at a commercial scale, and in all successful cases, trap cropping was supplemented with management practices that specifically limited insect dispersal from the trap crop back into the target crop.
Examples
Examples of trap crops include:
Alfalfa planted in strips among cotton, to draw away lygus bugs, while castor beans surround the field, or tobacco planted in strips among it, to protect from the budworm Heliothis.
Rose enthusiasts often plant Pelargonium geraniums among their rosebushes because Japanese beetles are drawn to the geraniums, which are toxic to them.
Chervil is used by gardeners to protect vegetable plants from slugs.
Rye, sesbania, and sicklepod are used to protect soybeans from corn seeding maggots, stink bugs, and velvet green caterpillars, respectively.
Mustard and alfalfa planted near strawberries to attract lygus bugs, a method pioneered by Jim Cochran.
Blue Hubbard squash is planted near cucurbit crops to attract squash vine borer, squash bugs, and both spotted and striped Cucumber beetle.
In push-pull agricultural pest management, napier grass or signal grass (Brachiaria brizantha) are used as trap crops to attract stemboring moths such as Chilo partellus.
Trap crops can be planted around the circumference of the field to be protected, which is assumed to act as a barrier to entry by pests, or they can be interspersed among the main crop, for example being planted every ninth row. Planting trap crops in rows helps facilitate supplemental management practices that prevent insect pest dispersal back into the main field, such as driving a vehicle above the trap crop row to vacuum insect pests off it, or applying targeted insecticides only to the trap crop. Even if pesticides are used to control insects on the trap crop, total pesticide use is greatly reduced in this scenario compared with conventional agricultural pesticide applications, because the pesticides are only deployed on a small portion of the farm (the trap crop). Other strategies that prevent dispersal of insect pests back into the main crop include cutting the trap plants, applying predators or parasitoids to the trap plant that eat the pest, and planting a high ratio of trap plants to other plants.
Trap crops, when used on an industrial scale, are generally planted at a key time in the pest's life-cycle, and then destroyed before that life-cycle finishes and the pest might have transferred from the trap plants to the main crop.
Mechanism
Recent studies on host-plant finding have shown that flying pests are far less successful if their host-plants are surrounded by any other plant, or even "decoy-plants" made of green plastic, cardboard or any other green material. The host-plant finding process occurs in three phases.
The first phase is stimulation by odours characteristic of the host-plant. This induces the insect to try to land on the plant it seeks. But insects avoid landing on brown (bare) soil. So if only the host-plant is present, the insects will quasi-systematically find it by landing on the only green thing around. This is called an "appropriate landing". When it does an "inappropriate landing", it flies off to any other nearby patch of green. It eventually leaves the area if there are too many "inappropriate" landings.
The second phase of host-plant finding is for the insect to make short flights from leaf to leaf to assess the plant's overall suitability. The number of leaf-to-leaf flights varies according to the insect species and to the host-plant stimulus received from each leaf. But the insect must accumulate sufficient stimuli from the host-plant to lay eggs; so it must make a certain number of consecutive "appropriate" landings. Hence if it makes an "inappropriate landing", the assessment of that plant is negative and the insect must start the process anew.
Thus, a clover ground cover was shown to have the same disruptive effect on eight pest species from four insect orders. An experiment showed that 36% of cabbage root flies laid eggs beside cabbages growing in bare soil (which resulted in no crop), compared with only 7% beside cabbages growing in clover (which allowed a good crop). Moreover, simple decoys made of green cardboard disrupted appropriate landings just as well as the clover.
See also
Tropaeolum
References
Biological pest control
Chemical ecology | Trap crop | [
"Chemistry",
"Biology"
] | 1,064 | [
"Biochemistry",
"Chemical ecology"
] |
10,856,211 | https://en.wikipedia.org/wiki/Silicon%20Wadi | Silicon Wadi (, ) is a region in Israel that serves as one of the global centres for advanced technology. It spans the Israeli coastal plain, and is cited as among the reasons why the country has become known as the world's "start-up nation" (see science and technology in Israel). The highest concentrations of high-tech industry in the region can be found around Tel Aviv, including small clusters around the cities of Raʽanana, Petah Tikva, Herzliya, Netanya, Rehovot, and Ness Ziona. Additional clusters of high-tech industry can be found in Haifa and Caesarea. More recent high-tech establishments have been raised in cities such as Jerusalem and Beersheba, in towns such as Yokneam Illit, and in Airport City. Israel has the third highest number of startups by region and the highest rate of startups per capita in the world.
Etymology
The term "Silicon Wadi" is a pun derived from the name of the similarly high-tech region in the United States known as Silicon Valley, which is located in California. The word "wadi" derives from the Arabic "واد", meaning 'valley'.
History
Israeli high-tech firms originally began to form in the 1960s. In 1961 ECI Telecom was founded, followed in 1962 by Tadiran and Elron Electronic Industries, regarded by many as the "Fairchild of Israel". The number of internationally successful firms grew slowly, with only one or two new successful firms each year until the early 1990s. Motorola was the first U.S. corporation to set up an R&D unit in Israel, in 1964. The center initially developed wireless products including remote irrigation systems and later developed leading chips such as the 68030. Following the 1967 French arms embargo, Israel was forced to develop a domestic military industry, focusing on developing a technological edge over its neighbors. Some of these military firms started to seek and develop civilian applications of military technology. In the 1970s more commercial innovations began, many of which were based on military R&D, including: Scitex digital printing systems, which were based on fast rotation drums from fast-rotation electronic warfare systems, and Elscint, which developed innovative medical imaging and became a leading force in its market.
High-tech firms continued to struggle with marketing throughout this period, and many products, such as a mini-computer developed in the 1970s by Elbit, were never successfully commercialised. In the 1970s, Intel and IBM both opened offices in Israel: IBM in 1972 and Intel in 1974.
Role in the global software market
Slowly, the international computing industry shifted the emphasis from hardware (in which Israel had no comparative advantage) to software products (in which human capital plays a larger role). The country became one of the first nations to compete in global software markets. By the 1980s a diverse set of software firms had developed. Each found niches which were not dominated by U.S. firms and between 1984 and 1991 "pure" software exports increased from $5 million to $110 million. Many of the important ideas here were developed by graduates of Mamram, the Israeli computer corps, established by the IDF in the 1960s.
During the 1980s and early 1990s several successful software companies emerged from Israel, including: Amdocs (established in 1982 as Aurec Information), Cimatron (established in 1982), Magic Software Enterprises (established in 1983), Comverse (established in 1983 as Efrat Future Technologies), Aladdin Knowledge Systems (established in 1985), NICE Systems (established in 1986), Mercury Interactive (established in 1989) and Check Point Software Technologies (established in 1993).
The 1990s saw the real takeoff of high-tech industries in Israel, with international media attention increasing awareness of innovation in the country. Growth increased, whilst new immigrants from the Soviet Union expanded the available high-tech workforce. Many of these immigrants were highly skilled and educated, which strengthened Israeli entrepreneurship, research centers and universities. Peace agreements, including the 1993 Oslo Peace Accord, improved the investment environment, and Silicon Wadi began to develop into a noticeable high-tech cluster.
Dot-com boom
In 1998, Mirabilis, an Israeli company that developed the ICQ instant messaging program, which revolutionized communication over the Internet, was purchased by America Online (AOL) for $407 million in cash, 18 months after it was founded and despite having no revenues. The free service attracted a user base of 15 million in that period and by 2001, ICQ had over 100 million users worldwide.
The success of Mirabilis triggered the dot-com boom in Israel; thousands of start-up companies were established between 1998 and 2001, while venture capital raised by Israeli companies reached $1,851 million in 1999, peaking at $3,701 million in 2000. Over fifty Israeli companies had initial public offerings on NASDAQ and other international stock markets during that period.
Silicon Wadi today
The government assists industrial growth by providing low-rate loans from its development budget. The main limitations experienced by the industry are the scarcity of domestic raw materials, limited energy sources, and the restricted size of the domestic market. One certain advantage is that Israeli university graduates are likely to become IT entrepreneurs or join startups at about twice the rate of U.S. university graduates, who are also attracted to traditional corporate executive positions, according to Charles A. Holloway, co-director of the Center for Entrepreneurial Studies and a professor at the Stanford Graduate School of Business of Stanford University. ICQ, for instance, was one of the world's most famous Israeli software products, developed by four young entrepreneurs. IBM has its IBM Content Discovery Engineering Team in Jerusalem, which is part of a number of IBM R&D Labs in Israel.
Tel Aviv is a global innovation hub: multiple international companies, including Volkswagen, Hyundai, Visa and Citi, have built their centers of innovation in the Tel Aviv region. In 2023 Tel Aviv University launched an aggregation center for innovation, and Sampo, Jaguar and Amazon have also launched or are launching centers. Investment in Israeli startups in 2023 was $7 billion.
Israel has the third highest number of startups by region and the highest rate of startups per capita in the world.
The RAD Group, founded in 1981 by brothers Yehuda and Zohar Zisapel, has been "the most fertile ground" for the creation of Israeli entrepreneurs, having produced 56 "serial entrepreneurs" who established more than one start-up each. RAD Group "graduates" were responsible for the establishment of a total of 111 significant high-tech initiatives.
The Israeli Quantum Computing Center was the first quantum computing center to have several different quantum computers able to hold different qubit modalities, which opened in June 2024 at Tel Aviv University.
Around 30 quantum startups are active in Israel as of 2024 according to Aviv Zeevi.
As of the middle of 2021, 29 unicorns, companies worth more than $1 billion, had been founded by Israelis. The scaleup of Israeli startups has led to Israel being dubbed "the scaleup nation". Counting unicorns founded by Israelis regardless of where they are headquartered, the total rises to 71.
Location
Due to the small size of Israel, the concentration of high-tech firms across much of the country is enough for it to be recognised as one large cluster. Most activity is located in the densely populated areas of metropolitan Tel Aviv, Haifa (Matam), and Jerusalem (Technology Park, Malha, Har Hotzvim and JVP Media Quarter in Talpiot), and the Startup Village Ecosystem in the Yokneam area, although secondary areas with additional activity include the corridor to Beersheba, including Kiryat Gat, and the Western Galilee. In all, this is an area no larger than 6000 square kilometers, half of the extended Silicon Valley's geographical coverage.
Economy
In 2006, more than 3,000 start-ups were created in Israel, a number second only to the U.S. Newsweek has named Tel Aviv one of the world's top ten "Hot High-Tech Cities" and, in 1998, one of the ten technologically most influential cities in the world. In 2012, the city was also named one of the best places for high-tech startup companies, placed second only behind its California counterpart.
A cluster of software companies, who are monetizing "free" software downloads by adware or altering user's systems, has been dubbed Download Valley.
Israeli venture capital industry
The origins of the now thriving venture capital industry in Israel can be traced to a $100 million government initiative in 1993 named the Yozma program ("Initiative" in Hebrew), which offered attractive tax incentives to any foreign venture-capital investments in Israel and offered to double any investment with funds from the government. As a result, between 1991 and 2000, Israel's annual venture-capital outlays, nearly all private, rose nearly 60-fold, from $58 million to $3.3 billion; companies launched by Israeli venture funds rose from 100 to 800; and Israel's information-technology revenues rose from $1.6 billion to $12.5 billion. By 1999, Israel ranked second only to the United States in invested private-equity capital as a share of GDP. And it led the world in the share of its growth attributable to high-tech ventures: 70 percent.
Israel's thriving venture capital industry has played an important role in financing and funding Silicon Wadi. The financial crisis of 2007–08 affected the availability of venture capital locally. In 2009, there were 63 mergers and acquisitions in the Israeli market worth a total of $2.54 billion; 7% below 2008 levels ($2.74 billion), when 82 Israeli companies were merged or acquired, and 33% lower than 2007 proceeds ($3.79 billion), when 87 Israeli companies were merged or acquired. Numerous high-tech Israeli companies have been acquired by global multinational corporations for their profit-driven technologies and their reliable, high-quality corporate personnel. The March 2019 acquisition of the Israeli company Mellanox by Nvidia Corporation for $6.9 billion is a definite contender for the largest M&A deal of 2019. Generally, Israeli startups are becoming so attractive that U.S. companies tend to acquire them more than anyone else: they accounted for half of all transactions in 2018. Thus, Israel eventually became a "net seller".
Israel's venture capital industry has about 70 active venture capital funds, of which 14 international VCs with Israeli offices. Additionally, there are some 220 international funds, including Polaris Venture Partners, Accel Partners and Greylock Partners, that do not have branches in Israel, but actively invest in Israel through an in-house specialist.
In 2009, the life sciences sector led the market with $272 million or 24% of total capital raised, followed by the software sector with $258 million or 23%, the communications sector with $219 million or 20%, and the Internet sector with 13% of capital raised in 2009.
Multinational technology companies operating in Israel
As of 2010, more than 35,000 Israeli personnel were employed in various research and development centers operated by multinational corporations with a presence across Israel. In recent years, East Asian multinational corporations and investors, especially from Mainland China, have actively invested and opened up offices in Israel, including Chinese technology giants such as Alibaba, Baidu, Tencent and Kuang-Chi. Around 60 foreign R&D centers are engaged in a diverse range of activities including biotechnology, chemicals, industrial machinery, communication equipment, scientific instruments, medical devices, flash memory storage equipment, computer hardware components, software, semiconductors and internet.
Global ranking
The following global region rankings rank the Tel Aviv area, based on a 2024 study by the Dutch research firm Dealroom.
See also
Download Valley
Economy of Israel
Israel Innovation Authority
List of Israeli companies quoted on the Nasdaq
List of multinationals with research and development centres in Israel
List of technology centers
Made in JLM
Science and technology in Israel
Start-up Nation: The Story of Israel's Economic Miracle
Startup Village, Yokneam
Yozma
References
External links
Israel’s Silicon Wadi: The forces behind cluster formation — by Catherine de Fontenay and Erran Carmel, June 2002
Wireless Valley, Silicon Wadi and Digital Island – Helsinki, Tel Aviv and Dublin and the ICT global production network — by Stephen Roper and Seamus Grimes, March 2005. (Subsequently, published in Geoforum)
Entrepreneurship Models of the Countries that Leverage Silicon Valley by Mustafa Ergen
Economy of Israel
Silicon Wadi, Israel
Information technology places
Regions of Israel
Science and technology in Israel | Silicon Wadi | [
"Technology"
] | 2,606 | [
"Information technology",
"Information technology places"
] |
10,856,249 | https://en.wikipedia.org/wiki/Druglikeness | Druglikeness is a qualitative concept used in drug design for how "druglike" a substance is with respect to factors like bioavailability. It is estimated from the molecular structure before the substance is even synthesized and tested. A druglike molecule has properties such as:
Solubility in both water and fat, as an orally administered drug needs to pass through the intestinal lining after it is consumed, be carried in aqueous blood and penetrate the lipid-based cell membrane to reach the inside of a cell. A model compound for the lipophilic cellular membrane is 1-octanol (a lipophilic medium-chain fatty alcohol), so the logarithm of the octanol-water partition coefficient, known as LogP, is used to predict the solubility of a potential oral drug. This coefficient can be experimentally measured or predicted computationally, in which case it is sometimes called "cLogP". As the lipophilicity of ionizable compounds is strongly dependent on pH, the distribution coefficient logD, or a logP vs pH curve, may be used instead (the defining formulas are sketched after this list).
Potency at the biological target. High potency (high value of pIC50) is a desirable attribute in drug candidates, as it reduces the risk of non-specific, off-target pharmacology at a given concentration. When associated with low clearance, high potency also allows for low total dose, which lowers the risk of idiosyncratic drug reactions.
Ligand efficiency and lipophilic efficiency.
Molecular weight: The smaller the better, because diffusion is directly affected. The great majority of drugs on the market have molecular weights between 200 and 600 daltons, and particularly <500; they belong to the group of small molecules.
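For the partition and distribution coefficients referenced in the solubility item above, the definitional forms can be written out explicitly. The following is a minimal sketch; the reduction of logD to logP is stated for a monoprotic acid and assumes only the neutral species partitions into octanol, which is an illustrative simplification rather than a general rule.

```latex
% Octanol-water partition coefficient (neutral species only)
\log P = \log_{10} \frac{[\text{solute}]_{\text{octanol}}}{[\text{solute}]_{\text{water}}^{\text{neutral}}}

% pH-dependent distribution coefficient (neutral and ionized species in water)
\log D_{\mathrm{pH}} = \log_{10} \frac{[\text{solute}]_{\text{octanol}}}
  {[\text{solute}]_{\text{water}}^{\text{neutral}} + [\text{solute}]_{\text{water}}^{\text{ionized}}}

% For a monoprotic acid this reduces to (illustrative special case):
\log D_{\mathrm{pH}} = \log P - \log_{10}\!\left(1 + 10^{\,\mathrm{pH} - \mathrm{p}K_a}\right)
```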
A traditional method to evaluate druglikeness is to check compliance with Lipinski's rule of five, which covers the numbers of hydrophilic groups, molecular weight and hydrophobicity.
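As a concrete illustration of such a rule-based check, the sketch below screens a molecule against the four Lipinski criteria (molecular weight ≤ 500 daltons, logP ≤ 5, no more than 5 hydrogen-bond donors, no more than 10 hydrogen-bond acceptors). It assumes the open-source RDKit toolkit and a SMILES string as input; the function name, the thresholds written as literals, and the example molecule are illustrative choices, not part of the rule's original formulation.

```python
# Minimal Lipinski rule-of-five screen (sketch; assumes the RDKit toolkit is installed).
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def lipinski_violations(smiles: str) -> int:
    """Return how many of the four Lipinski criteria a molecule violates (0-4)."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    violations = 0
    if Descriptors.MolWt(mol) > 500:        # molecular weight, in daltons
        violations += 1
    if Descriptors.MolLogP(mol) > 5:        # computed (Crippen) logP
        violations += 1
    if Lipinski.NumHDonors(mol) > 5:        # OH/NH hydrogen-bond donors
        violations += 1
    if Lipinski.NumHAcceptors(mol) > 10:    # N/O hydrogen-bond acceptors
        violations += 1
    return violations

# Example: aspirin, which passes all four criteria (0 violations).
print(lipinski_violations("CC(=O)OC1=CC=CC=C1C(=O)O"))
```

A compound with more than one violation is conventionally flagged as unlikely to be orally bioavailable, though, as noted below, such rules are heuristics rather than hard cut-offs.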
Since the drug is transported in aqueous media like blood and intracellular fluid, it has to be sufficiently water-soluble in the absolute sense (i.e. must have a minimum chemical solubility in order to be effective). Solubility in water can be estimated from the number of hydrogen bond donors vs. alkyl sidechains in the molecule. Low water solubility translates to slow absorption and action. Too many hydrogen bond donors, on the other hand, lead to low fat solubility, so that the drug cannot penetrate the cell membrane to reach the inside of the cell.
Based on one definition, a drug-like molecule has a logarithm of partition coefficient (log P) between −0.4 and 5.6, molecular weight 160–480 g/mol, molar refractivity of 40–130, which is related to the volume and molecular weight of the molecule and has 20–70 atoms.
Substructures with known toxic, mutagenic or teratogenic properties affect the usefulness of a designed molecule. However, several poisons have good druglikeness. Natural toxins are used in pharmacological research to find out their mechanism of action, and whether it could be exploited for beneficial purposes. Alkylnitro compounds tend to be irritants, and Michael acceptors, such as enones, are alkylating agents and thus potentially mutagenic and carcinogenic.
Druglikeness indices are inherently limited tools. Druglikeness can be estimated for any molecule, and does not evaluate the actual specific effect that the drug achieves (biological activity). Simple rules are not always accurate and may unnecessarily limit the chemical space to search: many best-selling drugs have features that cause them to score low on various druglikeness indices. Furthermore, first-pass metabolism, which is biochemically selective, can destroy the pharmacological activity of a compound despite good druglikeness.
Druglikeness is not relevant for most biologics, since they are usually proteins that need to be injected, because proteins are digested if eaten.
See also
Lipinski's rule of five (RO5)
Fragment-based lead discovery (FBLD)
References
External links
OSIRIS Property Explorer: Prediction of druglikeness
molinspiration free drug-likeness and bioactivity calculator
Drug discovery
Medicinal chemistry | Druglikeness | [
"Chemistry",
"Biology"
] | 902 | [
"Life sciences industry",
"Drug discovery",
"nan",
"Medicinal chemistry",
"Biochemistry"
] |
10,856,257 | https://en.wikipedia.org/wiki/Fluorogenic | Fluorogenic describes a property of chemical compounds which are initially not fluorescent, but become fluorescent through a chemical reaction, typically through an intermolecular covalent reaction which binds the now fluorescent compound to a target molecule. IUPAC uses a broader definition of fluorogenic, wherein an enhancement of fluorescence via a chemical reaction is not required, however in contrast to the IUPAC definition common use of fluorogenic does not refer to non-reaction effects like the enhancement of fluorescence from a fluorophore being in different solvents. Fluorogenic labeling reagents are often used in analytical chemistry procedures, particularly in HPLC or CE to derivative target compounds (e.g. labeling the primary amines of polypeptides), thereby allowing enhanced sensitivity through fluorescence based detection.
Examples
OPA
Fluorescamine
FQ
NBD-F
6-AQC
Epicocconone
CBQCA
See also
Colorogenic
References
Fluorescence
Analytical chemistry | Fluorogenic | [
"Chemistry"
] | 200 | [
"Luminescence",
"Fluorescence",
"nan",
"Analytical chemistry stubs"
] |
10,856,401 | https://en.wikipedia.org/wiki/Habitat-selection%20hypothesis | Habitat selection hypothesis is one of several hypotheses that attempt to explain the mechanisms of brood parasite host selection in cuckoos. Cuckoos are not the only brood parasites, however the behavior is more rare in other groups of birds, including ducks, weavers, and cowbirds.
Brood parasites and their favored host species are known to coevolve, which means both are likely to possess specific adaptations and counteradaptations. An example of such an evolutionary arms race between a brood parasite and its host is the phenomenon of egg rejection and its counteradaptation, egg mimicry. Cuckoo eggs have been found in the nests of over 100 different species, of which 11 have been identified as primary host species and a similar number as secondary. Egg patterns and coloring differ greatly between these host species, and the cuckoo eggs vary accordingly. Thus it is important for a female cuckoo to deposit her eggs in a nest of the same species as her foster parents, because if she were to select a different host species, that would likely entail a higher risk of egg rejection.
According to the habitat selection hypothesis, host selection occurs through the means of habitat imprinting in early post-natal development. A female cuckoo retains recognition of certain stimuli, like vegetation, from experience with her natal habitat. Habitats might be defined as dry or wet, shrubby or forested, lakeside, etc. This process has been termed natal habitat preference induction (NHPI) and has been found in many species across different taxa, such as insects (Hopkins’ host selection principle), fish, amphibians, mammals and birds of course. This imprinting of the habitat type in which the female cuckoo was reared may cause her to subsequently return to this habitat type in order to lay eggs and therefore increases the likelihood of encountering the suitable host species, as most host species are known to be habitat specific. Thus, habitat selection is thought to allow for specific host selection by the female cuckoo. In some cases an individual may choose a different habitat from their original imprint based on the reproductive success of conspecific individuals in the vicinity.
Alternative Hypotheses
There are 5 hypotheses for host selection in cuckoos: inherited preference, host imprinting, natal philopatry (returning to their own birthplace to lay eggs), nest site choice (preference based on egg and nest similarity), and the hypothesis described above, habitat selection. Although the preponderance of evidence seems to be in favor of the habitat selection hypothesis, some evidence for natal philopatry has been observed in cuckoos and the majority of cuckoo eggs are found in nests and among eggs matching their foster species, which supports the nest site choice hypothesis, but does not invalidate any of the other hypotheses. It could also be the case that there is more than one mechanism of host selection at play here. In their 1997 study, Teuschl et al. suggest the possibility of a hierarchical decision process consisting of 3 steps: 1) upon returning from their spring migration the female cuckoos go back to the approximate location of their birthplace, which should increase the likelihood of them finding a familiar habitat, 2) choosing a suitable habitat based on habitat imprinting, 3) choosing a suitable nest within that habitat.
References
Behavioral ecology | Habitat-selection hypothesis | [
"Biology"
] | 668 | [
"Behavioural sciences",
"Ethology",
"Behavior",
"Behavioral ecology"
] |
10,856,772 | https://en.wikipedia.org/wiki/Immunization%20during%20pregnancy | Immunization during pregnancy is the administration of a vaccine to a pregnant individual. This may be done either to protect the individual from disease or to induce an antibody response, such that the antibodies cross the placenta and provide passive immunity to the infant after birth. In many countries, including the US, Canada, UK, Australia and New Zealand, vaccination against influenza, COVID-19 and whooping cough is routinely offered during pregnancy.
Other vaccines may be offered during pregnancy where travel-related or occupational exposure to disease-causing organisms warrant this. However, certain vaccines are contra-indicated in pregnancy. These include vaccines that include live attenuated organisms, such as the MMR and BCG vaccines, since there is a potential risk that these could infect the fetus.
Tetanus and whooping cough vaccination in pregnancy
Newborns are at increased risk of infection, particularly before they receive their first infant vaccinations. For this reason, certain vaccinations are offered during pregnancy in order to induce an antibody response, resulting in the passage of antibody across the placenta and into the fetus: this confers passive immunity on the newborn. As early as 1879, it was noted that infants born following smallpox vaccination in pregnancy were themselves protected against smallpox. However, the original smallpox vaccination was never widely used during pregnancy because, as a live vaccine, its use is contraindicated.
Tetanus is a bacterial infection caused by Clostridium tetani. Newborns can be infected via their unhealed umbilical stump, particularly when the umbilical cord is cut with a non-sterile instrument, and suffer a generalised infection. The tetanus toxoid vaccine was first licensed for use in 1938 and, during the 1960s, it was noted that tetanus vaccination in pregnancy could prevent neonatal tetanus. Subsequent trials showed that vaccination of pregnant women reduces infant deaths from tetanus by 94%. In 1988, the World Health Assembly passed a resolution to use maternal vaccination to eliminate neonatal tetanus by the year 2000. Although neonatal tetanus has not yet been eliminated, by 2017 there were an estimated 31,000 annual infant deaths from tetanus, down from 787,000 in 1987.
Whooping cough, or pertussis, is a contagious respiratory disease caused by the bacteria Bordetella pertussis. It is fatal in an estimated 0.5% of infants in the USA. The first vaccine against whooping cough was developed in the 1930s, and in the 1940s a study found that vaccination in pregnancy protected infants against developing whooping cough.
The tetanus and whooping cough vaccinations are generally administered in combination during pregnancy, for example as the DTaP vaccine (which also protects against diphtheria) or the 4-in-1 vaccine (which also protects against diphtheria and polio).
Influenza vaccination in pregnancy
Influenza is a respiratory infection caused by influenza viruses. Pregnant women are disproportionately affected by influenza: in the 1918 pandemic, mortality rates as high as 27% were reported in this population and in the 1957 pandemic, nearly 20% of deaths in pregnancy were attributed to influenza. In the 2009 pandemic, even with medical advances, pregnant women accounted for a disproportionately high percentage of deaths.
The influenza vaccine was first used in the US military from 1938, and then in the civilian population from the 1940s. Given the increased risk of influenza during pregnancy, public health bodies in the USA recommended that pregnant women should be prioritised for influenza vaccination from the 1960s, with the CDC endorsing the recommendation from 1997. However, it was not until 2005 that a randomised clinical trial formally demonstrated the efficacy of influenza vaccination in pregnancy.
Following the 2009 pandemic, both Australia and the UK added influenza vaccination to the recommended schedule for pregnant women.
COVID-19 vaccination in pregnancy
COVID-19 is a respiratory infection caused by the SARS-CoV2 virus. Before COVID-19 vaccines were available, pregnant women who caught the disease were at increased risk of needing intensive care, invasive ventilation or ECMO, but not at increased risk of death. Infection significantly increased the risk of preterm birth, stillbirth and pre-eclampsia.
COVID-19 vaccination during pregnancy is safe and is associated with a lower risk of stillbirth, premature birth and admission of the newborn to intensive care. Vaccination can prevent COVID-19 infection during pregnancy, although these immunity benefits are not passed on to the child.
mRNA COVID-19 vaccines were first rolled out in December 2020. At this time, in recognition of the risks posed by COVID-19 disease in pregnancy, the US and Israel offered the vaccines to all pregnant women shortly afterwards, and the first safety and effectiveness data therefore came from these vaccines and these nations.
Rubella vaccination to prevent fetal disease
Rubella, or German measles, is an infection caused by the rubella virus. In childhood, it usually causes a mild disease but infection in pregnancy can result in fetal infection, or congenital rubella syndrome, which causes neonatal deaths, deafness, blindness and intellectual disabilities. The first rubella vaccine was licensed for use in 1969, with its development largely spurred by the heavy burden of congenital rubella experienced in the 1960s.
Because the rubella vaccine is a live attenuated vaccine, there is a theoretical risk that it could cause fetal infection, although this has never been seen to occur. Therefore, rubella vaccination is usually avoided during pregnancy. Rather, vaccination is offered to children to reduce the prevalence of rubella virus in circulation and/or to adolescent girls, to boost their immunity before they are likely to conceive.
References
Vaccination
Infectious diseases
Obstetrics
Health issues in pregnancy
COVID-19 | Immunization during pregnancy | [
"Biology"
] | 1,232 | [
"Vaccination"
] |
10,857,059 | https://en.wikipedia.org/wiki/Classification%20of%20mental%20disorders | The classification of mental disorders, also known as psychiatric nosology or psychiatric taxonomy, is central to the practice of psychiatry and other mental health professions.
The two most widely used psychiatric classification systems are chapter V of the International Classification of Diseases, 10th edition (ICD-10), produced by the World Health Organization (WHO); and the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5), produced by the American Psychiatric Association (APA).
Both systems list disorders thought to be distinct types, and in recent revisions the two systems have deliberately converged their codes so that their manuals are often broadly comparable, though differences remain. Both classifications employ operational definitions.
Other classification schemes, used more locally, include the Chinese Classification of Mental Disorders.
Manuals of limited use, by practitioners with alternative theoretical persuasions, include the Psychodynamic Diagnostic Manual.
Definitions
In the scientific and academic literature on the definition or categorization of mental disorders, one extreme argues that it is entirely a matter of value judgments (including of what is normal) while another proposes that it is or could be entirely objective and scientific (including by reference to statistical norms); other views argue that the concept refers to a "fuzzy prototype" that can never be precisely defined, or that the definition will always involve a mixture of scientific facts (e.g. that a natural or evolved function is not working properly) and value judgments (e.g. that it is harmful or undesired). Lay concepts of mental disorder vary considerably across different cultures and countries, and may refer to different sorts of individual and social problems.
The WHO and national surveys report that there is no single consensus on the definition of mental disorder, and that the phrasing used depends on the social, cultural, economic and legal context in different societies. The WHO reports that there is intense debate about which conditions should be included under the concept of mental disorder; a broad definition can cover mental illness, intellectual disability, personality disorder and substance dependence, but inclusion varies by country and is reported to be a complex and debated issue. There may be a criterion that a condition should not be expected to occur as part of a person's usual culture or religion. However, despite the term "mental", there is not necessarily a clear distinction drawn between mental (dys)functioning and brain (dys)functioning, or indeed between the brain and the rest of the body.
Most international clinical documents avoid the term "mental illness", preferring the term "mental disorder". However, some use "mental illness" as the main overarching term to encompass mental disorders. Some consumer/survivor movement organizations oppose use of the term "mental illness" on the grounds that it supports the dominance of a medical model. The term "serious mental impairment" (SMI) is sometimes used to refer to more severe and long-lasting disorders while "mental health problems" may be used as a broader term, or to refer only to milder or more transient issues. Confusion often surrounds the ways and contexts in which these terms are used.
Mental disorders are generally classified separately to neurological disorders, learning disabilities or intellectual disabilities.
ICD-10
The International Classification of Diseases (ICD) is an international standard diagnostic classification for a wide variety of health conditions. The ICD-10 states that mental disorder is "not an exact term", although is generally used "...to imply the existence of a clinically recognisable set of symptoms or behaviours associated in most cases with distress and with interference with personal functions." Chapter V focuses on "mental and behavioural disorders" and consists of 10 main groups:
F00 – F09: Organic, including symptomatic, mental disorders
F10 – F19: Mental and behavioural disorders due to use of psychoactive substances
F20 – F29: Schizophrenia, schizotypal and delusional disorders
F30 – F39: Mood [affective] disorders
F40 – F48: Neurotic, stress-related and somatoform disorders
F50 – F59: Behavioural syndromes associated with physiological disturbances and physical factors
F60 – F69: Disorders of personality and behaviour in adult persons
F70 – F79: Mental retardation
F80 – F89: Disorders of psychological development
F90 – F98: Behavioural and emotional disorders with onset usually occurring in childhood and adolescence
In addition, there is a group, F99, for "unspecified mental disorders" (a minimal code-prefix lookup over these blocks is sketched below).
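As an illustration of how this block structure can be navigated programmatically, the following Python sketch maps an ICD-10 "F" code to its Chapter V group using the two digits after the letter. The table mirrors the groups listed above; the function name and the simple string parsing are illustrative assumptions, not part of the ICD specification itself.

```python
# Minimal lookup of an ICD-10 Chapter V block from an "F" code (illustrative sketch).
ICD10_CHAPTER_V_BLOCKS = [
    (0, 9,   "Organic, including symptomatic, mental disorders"),
    (10, 19, "Mental and behavioural disorders due to use of psychoactive substances"),
    (20, 29, "Schizophrenia, schizotypal and delusional disorders"),
    (30, 39, "Mood [affective] disorders"),
    (40, 48, "Neurotic, stress-related and somatoform disorders"),
    (50, 59, "Behavioural syndromes associated with physiological disturbances and physical factors"),
    (60, 69, "Disorders of personality and behaviour in adult persons"),
    (70, 79, "Mental retardation"),
    (80, 89, "Disorders of psychological development"),
    (90, 98, "Behavioural and emotional disorders with onset usually occurring in childhood and adolescence"),
    (99, 99, "Unspecified mental disorders"),
]

def chapter_v_block(code: str) -> str:
    """Return the Chapter V block name for an ICD-10 code such as 'F32.1'."""
    code = code.strip().upper()
    if not code.startswith("F"):
        raise ValueError("Not a Chapter V (mental and behavioural disorders) code")
    number = int(code[1:3])  # the two digits after 'F' identify the block
    for low, high, name in ICD10_CHAPTER_V_BLOCKS:
        if low <= number <= high:
            return name
    raise ValueError(f"No Chapter V block defined for {code}")

# Example: 'F32.1' (a moderate depressive episode) falls in the mood disorders block.
print(chapter_v_block("F32.1"))
```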
Within each group there are more specific subcategories. The WHO has revised ICD-10 to produce the latest version, ICD-11, which was adopted by the 72nd World Health Assembly in 2019 and came into effect on 1 January 2022.
DSM-IV
The DSM-IV was originally published in 1994 and listed more than 250 mental disorders. It was produced by the American Psychiatric Association and it characterizes mental disorder as "a clinically significant behavioral or psychological syndrome or pattern that occurs in an individual,...is associated with present distress...or disability...or with a significantly increased risk of suffering" but that "...no definition adequately specifies precise boundaries for the concept of 'mental disorder'...different situations call for different definitions" (APA, 1994 and 2000). The DSM also states that "there is no assumption that each category of mental disorder is a completely discrete entity with absolute boundaries dividing it from other mental disorders or no mental disorders."
The DSM-IV-TR (Text Revision, 2000) consisted of five axes (domains) on which disorder could be assessed. The five axes were:
Axis I: Clinical Disorders (all mental disorders except Personality Disorders and Mental Retardation)
Axis II: Personality Disorders and Mental Retardation
Axis III: General Medical Conditions (must be connected to a Mental Disorder)
Axis IV: Psychosocial and Environmental Problems (for example limited social support network)
Axis V: Global Assessment of Functioning (Psychological, social and job-related functions are evaluated on a continuum between mental health and extreme mental disorder)
The axis classification system was removed in the DSM-5 and is now mostly of historical significance.
The main categories of disorder in the DSM are:
Other schemes
The Chinese Society of Psychiatry's Chinese Classification of Mental Disorders (currently CCMD-3)
The Latin American Guide for Psychiatric Diagnosis (GLDP).
The Research Domain Criteria (RDoC), a framework being developed by the National Institute of Mental Health
The Hierarchical Taxonomy of Psychopathology (HiTOP), developed by the HiTOP consortium, a group of psychologists and psychiatrists who had a record of scientific contributions to classification of psychopathology.
Childhood diagnosis
Child and adolescent psychiatry sometimes uses specific manuals in addition to the DSM and ICD. The Diagnostic Classification of Mental Health and Developmental Disorders of Infancy and Early Childhood (DC:0-3) was first published in 1994 by Zero to Three to classify mental health and developmental disorders in the first four years of life. It has been published in 9 languages. The Research Diagnostic criteria-Preschool Age (RDC-PA) was developed between 2000 and 2002 by a task force of independent investigators with the goal of developing clearly specified diagnostic criteria to facilitate research on psychopathology in this age group. The French Classification of Child and Adolescent Mental Disorders (CFTMEA), operational since 1983, is the classification of reference for French child psychiatrists.
Usage
The ICD and DSM classification schemes have achieved widespread acceptance in psychiatry. A survey of 205 psychiatrists, from 66 countries across all continents, found that ICD-10 was more frequently used and more valued in clinical practice and training, while the DSM-IV was more frequently used in clinical practice in the United States and Canada, and was more valued for research, with accessibility to either being limited, and usage by other mental health professionals, policy makers, patients and families less clear. A primary care (e.g. general or family physician) version of the mental disorder section of ICD-10 has been developed (ICD-10-PHC), which has also been used quite extensively internationally. A survey of journal articles indexed in various biomedical databases between 1980 and 2005 indicated that 15,743 referred to the DSM and 3,106 to the ICD.
In Japan, most university hospitals use either the ICD or DSM. The ICD appears to be somewhat more used for research or academic purposes, while both are used equally for clinical purposes. Other traditional psychiatric schemes may also be used.
Types of classification schemes
Categorical schemes
The classification schemes in common usage are based on separate (but possibly overlapping) categories of disorder, in schemes sometimes termed "neo-Kraepelinian" (after the psychiatrist Kraepelin), which are intended to be atheoretical with regard to etiology (causation). These classification schemes have achieved some widespread acceptance in psychiatry and other fields, and have generally been found to have improved inter-rater reliability, although routine clinical usage is less clear. Questions of validity and utility have been raised, both scientifically and in terms of social, economic and political factors—notably over the inclusion of certain controversial categories, the influence of the pharmaceutical industry, or the stigmatizing effect of being categorized or labelled.
Non-categorical schemes
Some approaches to classification do not use categories with single cut-offs separating the ill from the healthy or the abnormal from the normal (a practice sometimes termed "threshold psychiatry" or "dichotomous classification").
Classification may instead be based on broader underlying "spectra", where each spectrum links together a range of related categorical diagnoses and nonthreshold symptom patterns.
Some approaches go further and propose continuously varying dimensions that are not grouped into spectra or categories; each individual simply has a profile of scores across different dimensions. DSM-5 planning committees are currently seeking to establish a research basis for a hybrid dimensional classification of personality disorders. However, the problem with entirely dimensional classifications is they are said to be of limited practical value in clinical practice where yes/no decisions often need to be made, for example whether a person requires treatment, and moreover the rest of medicine is firmly committed to categories, which are assumed to reflect discrete disease entities. While the Psychodynamic Diagnostic Manual has an emphasis on dimensionality and the context of mental problems, it has been structured largely as an adjunct to the categories of the DSM. Moreover, the dimensional approach has been criticized for its reliance on independent dimensions, whereas all systems of behavioral regulation show strong inter-dependence, feedback and contingent relationships.
Descriptive vs Somatic
Descriptive classifications are based almost exclusively on either descriptions of behavior as reported by various observers, such as parents, teachers, and medical personnel; or symptoms as reported by individuals themselves. As such, they are quite subjective, not amenable to verification by third parties, and not readily transferable across chronologic and/or cultural barriers.
Somatic nosology, on the other hand, is based almost exclusively on the objective histologic and chemical abnormalities which are characteristic of various diseases and can be identified by appropriately trained pathologists. While not all pathologists will agree in all cases, the degree of uniformity allowed is orders of magnitude greater than that enabled by the constantly changing classification embraced by the DSM system. Some models, such as the Functional Ensemble of Temperament, suggest unifying in one taxonomy the nosology of somatic, biologically based individual differences in healthy people (temperament) and their deviations in the form of mental disorders.
Cultural differences
Classification schemes may not apply to all cultures. The DSM is based on predominantly American research studies and has been said to have a decidedly American outlook, meaning that differing disorders or concepts of illness from other cultures (including personalistic rather than naturalistic explanations) may be neglected or misrepresented, while Western cultural phenomena may be taken as universal. Culture-bound syndromes are those hypothesized to be specific to certain cultures (typically taken to mean non-Western or non-mainstream cultures); while some are listed in an appendix of the DSM-IV they are not detailed and there remain open questions about the relationship between Western and non-Western diagnostic categories and sociocultural factors, which are addressed from different directions by, for example, cross-cultural psychiatry or anthropology.
Historical development
Antiquity
In Ancient Greece, Hippocrates and his followers are generally credited with the first classification system for mental illnesses, including mania, melancholia, paranoia, phobias and Scythian disease (transvestism). They held that they were due to different kinds of imbalance in four humors.
Middle ages to Renaissance
The Persian physicians 'Ali ibn al-'Abbas al-Majusi and Najib ad-Din Samarqandi elaborated upon Hippocrates' system of classification. Avicenna (980−1037 CE) in the Canon of Medicine listed a number of mental disorders, including "passive male homosexuality".
Laws generally distinguished between "idiots" and "lunatics".
Thomas Sydenham (1624–1689), the "English Hippocrates", emphasized careful clinical observation and diagnosis and developed the concept of a syndrome, a group of associated symptoms having a common course, which would later influence psychiatric classification.
18th century
Evolution in the scientific concepts of psychopathology (literally referring to diseases of the mind) took hold in the late 18th and 19th centuries following the Renaissance and Enlightenment. Individual behaviors that had long been recognized came to be grouped into syndromes.
Boissier de Sauvages developed an extremely extensive psychiatric classification in the mid-18th century, influenced by the medical nosology of Thomas Sydenham and the biological taxonomy of Carl Linnaeus. It was only part of his classification of 2400 medical diseases. These were divided into 10 "classes", one of which comprised the bulk of the mental diseases, divided into four "orders" and 23 "genera". One genus, melancholia, was subdivided into 14 "species".
William Cullen advanced an influential medical nosology which included four classes of neuroses: coma, adynamias, spasms, and vesanias. The vesanias included amentia, melancholia, mania, and oneirodynia.
Towards the end of the 18th century and into the 19th, Pinel, influenced by Cullen's scheme, developed his own, again employing the terminology of genera and species. His simplified revision of this reduced all mental illnesses to four basic types. He argued that mental disorders are not separate entities but stem from a single disease that he called "mental alienation".
Attempts were made to merge the ancient concept of delirium with that of insanity, the latter sometimes described as delirium without fever.
On the other hand, Pinel had started a trend for diagnosing forms of insanity 'without delirium' (meaning hallucinations or delusions) – a concept of partial insanity. Attempts were made to distinguish this from total insanity by criteria such as intensity, content or generalization of delusions.
19th century
Pinel's successor, Esquirol, extended Pinel's categories to five. Both made a clear distinction between insanity (including mania and dementia) and mental retardation (including idiocy and imbecility). Esquirol developed a concept of monomania—a periodic delusional fixation or undesirable disposition on one theme—that became a broad and common diagnosis and a part of popular culture for much of the 19th century. The diagnosis of "moral insanity", coined by James Prichard, also became popular; those with the condition did not seem delusional or intellectually impaired but seemed to have disordered emotions or behavior.
The botanical taxonomic approach was abandoned in the 19th century, in favor of an anatomical-clinical approach that became increasingly descriptive. There was a focus on identifying the particular psychological faculty involved in particular forms of insanity, including through phrenology, although some argued for a more central "unitary" cause. French and German psychiatric nosology was in the ascendency. The term "psychiatry" ("Psychiatrie") was coined by German physician Johann Christian Reil in 1808, from the Greek "ψυχή" (psychē: "soul or mind") and "ιατρός" (iatros: "healer or doctor"). The term "alienation" took on a psychiatric meaning in France, later adopted into medical English. The terms psychosis and neurosis came into use, the former viewed psychologically and the latter neurologically.
In the second half of the century, Karl Kahlbaum and Ewald Hecker developed a descriptive categorization of syndromes, employing terms such as dysthymia, cyclothymia, catatonia, paranoia and hebephrenia. Wilhelm Griesinger (1817–1869) advanced a unitary scheme based on a concept of brain pathology. The French psychiatrists Jules Baillarger and Jean-Pierre Falret described, respectively, "folie à double forme" and "la folie circulaire"—alternating mania and depression.
The concept of adolescent insanity or developmental insanity was advanced by Scottish Asylum Superintendent and Lecturer in Mental Diseases Thomas Clouston in 1873, describing a psychotic condition that generally affected those aged 18–24 years, particularly males, and in 30% of cases progressed to "a secondary dementia".
The concept of hysteria (wandering womb) had long been used, perhaps since ancient Egyptian times, and was later adopted by Freud. Descriptions of a specific syndrome now known as somatization disorder were first developed by the French physician Paul Briquet in 1859.
An American physician, Beard, described "neurasthenia" in 1869. The German neurologist Westphal coined the terms "obsessional neurosis" (now termed obsessive-compulsive disorder) and agoraphobia. Alienists created a whole new series of diagnoses that highlighted single, impulsive behaviors, such as kleptomania, dipsomania, pyromania, and nymphomania. The diagnosis of drapetomania was also developed in the Southern United States to explain the perceived irrationality of black slaves trying to escape what was thought to be a suitable role.
The scientific study of homosexuality began in the 19th century, informally viewed either as natural or as a disorder. Kraepelin included it as a disorder in his Compendium der Psychiatrie that he published in successive editions from 1883.
In the late 19th century, Koch referred to "psychopathic inferiority" as a new term for moral insanity. In the 20th century the term became known as "psychopathy" or "sociopathy", related specifically to antisocial behavior. Related studies led to the DSM-III category of antisocial personality disorder.
20th century
Influenced by the approach of Kahlbaum and others, and developing his concepts in publications spanning the turn of the century, German psychiatrist Emil Kraepelin advanced a new system. He grouped together a number of existing diagnoses that appeared to all have a deteriorating course over time—such as catatonia, hebephrenia and dementia paranoides—under another existing term "dementia praecox" (meaning "early senility", later renamed schizophrenia). Another set of diagnoses that appeared to have a periodic course and better outcome were grouped together under the category of manic-depressive insanity (mood disorder). He also proposed a third category of psychosis, called paranoia, involving delusions but not the more general deficits and poor course attributed to dementia praecox. In all he proposed 15 categories, also including psychogenic neurosis, psychopathic personality, and syndromes of defective mental development (mental retardation). He eventually included homosexuality in the category of "mental conditions of constitutional origin".
The neuroses were later split into anxiety disorders and other disorders.
Freud wrote extensively on hysteria and also coined the term, "anxiety neurosis", which appeared in DSM-I and DSM-II. Checklist criteria for this led to studies that were to define panic disorder for DSM-III.
Early 20th century schemes in Europe and the United States reflected a brain disease (or degeneration) model that had emerged during the 19th century, as well as some ideas from Darwin's theory of evolution and/or Freud's psychoanalytic theories.
Psychoanalytic theory did not rest on classification of distinct disorders, but pursued analyses of unconscious conflicts and their manifestations within an individual's life. It dealt with neurosis, psychosis, and perversion. The concept of borderline personality disorder and other personality disorder diagnoses were later formalized from such psychoanalytic theories, though such ego psychology-based lines of development diverged substantially from the paths taken elsewhere within psychoanalysis.
The philosopher and psychiatrist Karl Jaspers made influential use of a "biographical method" and suggested ways to diagnose based on the form rather than content of beliefs or perceptions. In regard to classification in general he prophetically remarked that: "When we design a diagnostic schema, we can only do so if we forego something at the outset … and in the face of facts we have to draw the line where none exists... A classification therefore has only provisional value. It is a fiction which will discharge its function if it proves to be the most apt for the time".
Adolph Meyer advanced a mixed biosocial scheme that emphasized the reactions and adaptations of the whole organism to life experiences.
In 1945, William C. Menninger advanced a classification scheme for the US army, called Medical 203, synthesizing ideas of the time into five major groups. This system was adopted by the Veterans Administration in the United States and strongly influenced the DSM.
The term stress, having emerged from endocrinology work in the 1930s, was popularized with an increasingly broad biopsychosocial meaning, and was increasingly linked to mental disorders. The diagnosis of post-traumatic stress disorder was later created.
Mental disorders were first included in the sixth revision of the International Classification of Diseases (ICD-6) in 1949. Three years later, in 1952, the American Psychiatric Association created its own classification system, DSM-I.
The Feighner Criteria group described fourteen major psychiatric disorders for which careful research studies were available, including homosexuality. These developed as the Research Diagnostic Criteria, adopted and further developed by the DSM-III.
The DSM and ICD developed, partly in sync, in the context of mainstream psychiatric research and theory. Debates continued and developed about the definition of mental illness, the medical model, categorical vs dimensional approaches, and whether and how to include suffering and impairment criteria. There is some attempt to construct novel schemes, for example from an attachment perspective where patterns of symptoms are construed as evidence of specific patterns of disrupted attachment, coupled with specific types of subsequent trauma.
21st century
The ICD-11 and DSM-5 are being developed at the start of the 21st century. Any radical new developments in classification are said to be more likely to be introduced by the APA than by the WHO, mainly because the former only has to persuade its own board of trustees whereas the latter has to persuade the representatives of over 200 countries at a formal revision conference. In addition, while the DSM is a bestselling publication that makes huge profits for APA, the WHO incurs major expense in determining international consensus for revisions to the ICD. Although there is an ongoing attempt to reduce trivial or accidental differences between the DSM and ICD, it is thought that the APA and the WHO are likely to continue to produce new versions of their manuals and, in some respects, to compete with one another.
Criticism
There is some ongoing scientific doubt concerning the construct validity and reliability of psychiatric diagnostic categories and criteria even though they have been increasingly standardized to improve inter-rater agreement in controlled research. In the United States, there have been calls and endorsements for a congressional hearing to explore the nature and extent of harm potentially caused by this "minimally investigated enterprise".
Other specific criticisms of the current schemes include: attempts to demonstrate natural boundaries between related syndromes, or between a common syndrome and normality, have failed; inappropriateness of statistical (factor-analytic) arguments and lack of functionality considerations in the analysis of a structure of behavioral pathology; the disorders of current classification are probably surface phenomena that can have many different interacting causes, yet "the mere fact that a diagnostic concept is listed in an official nomenclature and provided with a precise operational definition tends to encourage us to assume that it is a "quasi-disease entity" that can be invoked to explain the patient's symptoms"; and that the diagnostic manuals have led to an unintended decline in careful evaluation of each individual person's experiences and social context.
Psychodynamic schemes have traditionally given the latter phenomenological aspect more consideration, but in psychoanalytic terms that have long been criticized on numerous grounds.
Some have argued that reliance on operational definition demands that intuitive concepts, such as depression, need to be operationally defined before they become amenable to scientific investigation. However, John Stuart Mill pointed out the dangers of believing that anything that could be given a name must refer to a thing and Stephen Jay Gould and others have criticized psychologists for doing just that. One critic states that "Instead of replacing 'metaphysical' terms such as 'desire' and 'purpose', they used it to legitimize them by giving them operational definitions. Thus in psychology, as in economics, the initial, quite radical operationalist ideas eventually came to serve as little more than a 'reassurance fetish' (Koch 1992, 275) for mainstream methodological practice." According to Tadafumi Kato, since the era of Kraepelin, psychiatrists have been trying to differentiate mental disorders by using clinical interviews. Kato argues there has been little progress over the last century and that only modest improvements are possible in this way; he suggests that only neurobiological studies using modern technology could form the basis for a new classification.
According to Heinz Katsching, expert committees have combined phenomenological criteria in variable ways into categories of mental disorders, repeatedly defined and redefined over the last half century. The diagnostic categories are termed "disorders" and yet, despite not being validated by biological criteria as most medical diseases are, are framed as medical diseases identified by medical diagnoses. He describes them as top-down classification systems similar to the botanic classifications of plants in the 17th and 18th centuries, when experts decided a priori which visible aspects of plants were relevant. Katsching notes that while psychopathological phenomena are certainly observed and experienced, the conceptual basis of psychiatric diagnostic categories is questioned from various ideological perspectives.
Psychiatrist Joel Paris argues that psychiatry is sometimes susceptible to diagnostic fads. Some have been based on theory (overdiagnosis of schizophrenia), some based on etiological (causation) concepts (overdiagnosis of post-traumatic stress disorder), and some based on the development of treatments. Paris points out that psychiatrists like to diagnose conditions they can treat, and gives examples of what he sees as prescribing patterns paralleling diagnostic trends, for example an increase in bipolar diagnosis once lithium came into use, and similar scenarios with the use of electroconvulsive therapy, neuroleptics, tricyclic antidepressants, and SSRIs. He notes that there was a time when every patient seemed to have "latent schizophrenia" and another time when everything in psychiatry seemed to be "masked depression", and he fears that the boundaries of the bipolar spectrum concept, including in application to children, are similarly expanding. Allen Frances has suggested fad diagnostic trends regarding autism and Attention deficit hyperactivity disorder.
Since the 1980s, psychologist Paula Caplan has had concerns about psychiatric diagnosis, and people being arbitrarily "slapped with a psychiatric label". Caplan says psychiatric diagnosis is unregulated, so doctors are not required to spend much time understanding patients' situations or to seek another doctor's opinion. The criteria for allocating psychiatric labels are contained in the Diagnostic and Statistical Manual of Mental Disorders, which can "lead a therapist to focus on narrow checklists of symptoms, with little consideration for what is causing the patient's suffering". So, according to Caplan, getting a psychiatric diagnosis and label often hinders recovery.
The DSM and ICD approach remains under attack both because of the implied causality model and because some researchers believe it is better to aim at underlying brain differences, which can precede symptoms by many years.
See also
Abnormal psychology
Diagnosis
Diagnostic classification and rating scales used in psychiatry
Medical classification
DSM-IV codes
Structured Clinical Interview for DSM-IV (SCID)
Nosology
Operationalism
Psychopathology
Relational disorder (proposed DSM-5 new diagnosis)
References
External links
Dalal PK, Sivakumar T. (2009) Moving towards ICD-11 and DSM-V: Concept and evolution of psychiatric classification. Indian Journal of Psychiatry, Volume 51, Issue 4, Page 310–319.
Classification of mental disorders
Mental disorders | Classification of mental disorders | [
"Biology"
] | 6,016 | [
"Mental disorders",
"Behavior",
"Human behavior"
] |
10,857,373 | https://en.wikipedia.org/wiki/Valley%20exit%20jet | A valley exit jet is a strong, down-valley, elevated air current that emerges above the intersection of the valley and its adjacent plain. These winds frequently reach a maximum of at a height of above the ground. Surface winds below the jet may sway vegetation but are significantly weaker.
The presence of these strong nighttime down-valley air flows has been documented at the mouth of many Alpine valleys that merge with basins, such as the Inn Valley of Austria, where the jet is strong enough to be heard at the ground. In the United States, exit jet signatures have been observed at the North Fork Gunnison River at Paonia, Colorado; the exit of South Boulder Creek south of Boulder, Colorado; Albuquerque, New Mexico at the mouth of Tijeras Canyon; and the mouth of Spanish Fork Canyon in Utah.
Theory
Exit jets are likely to be found in valley regions that exhibit diurnal mountain wind systems, such as those of the dry mountain ranges of the US. These diurnal wind systems are driven by horizontal pressure gradients. Due to the abrupt transition over a short distance between the valley high pressure and the basin low pressure, the gradients are strongest near the valley exit, producing a jet.
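As a rough, order-of-magnitude illustration of this pressure-gradient mechanism (a minimal sketch; the density, pressure difference and distance below are assumed values, not measurements from any of the valleys mentioned):

```python
# Rough estimate of the down-valley acceleration produced by a horizontal
# pressure difference between the valley exit and the adjacent basin.
# All numbers below are illustrative assumptions, not observations.

rho = 1.1          # near-surface air density, kg/m^3 (assumed)
delta_p = 100.0    # valley-to-basin pressure difference, Pa (~1 hPa, assumed)
delta_x = 10_000.0 # horizontal distance over which it acts, m (assumed)

# Horizontal momentum equation, ignoring friction and Coriolis terms:
# a = (1/rho) * dP/dx
accel = delta_p / (rho * delta_x)            # m/s^2

# Speed an air parcel would reach if accelerated from rest over the same
# distance, using v^2 = 2*a*x (again ignoring friction).
v = (2.0 * accel * delta_x) ** 0.5

print(f"acceleration ~ {accel*1000:.1f} mm/s^2, exit speed ~ {v:.1f} m/s")
```

With these assumed values the parcel reaches a speed of order 10 m/s, comparable to observed exit-jet maxima.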
Other meteorological factors acting to increase exit wind speeds are the acceleration of winds originating inside the valley as they travel to lower elevations downvalley, and the process of cold valley air sinking and ejecting into the plain. Deep valleys that terminate abruptly at a plain are more impacted by these factors than are those that gradually become shallower as downvalley distance increases.
Impacts
Valley exit jets can play a major role in the mitigation of air pollution:
Airflow emerging into the basin is cleaner due to lower aerosol content
Vertical mixing resulting from directional shear and from the convergence of the jet with basin scale flows reduces ozone and other pollutants.
Surface eddies created near canyon mouths inhibit the transport of pollution.
Methods of examining exit jets include remote sensing and direct observation. SODAR and Doppler LIDAR have been used in numerous studies to identify, quantify and relate the jets to atmospheric transport of hazardous materials. Detailed profiles of winds at canyon exits can be directly observed and calculated using a single or double theodolite and tethersondes.
The identification and measurement of valley exit jets can also significantly aid fire control (fires often ride valley jets) and the development of wind energy.
References
Atmospheric dynamics
Mountain meteorology
Boundary layer meteorology | Valley exit jet | [
"Chemistry"
] | 498 | [
"Atmospheric dynamics",
"Fluid dynamics"
] |
10,857,685 | https://en.wikipedia.org/wiki/Media-independent%20handover | Media Independent Handover (MIH) is a standard being developed by IEEE 802.21 to enable the handover of IP sessions from one layer 2 access technology to another, to achieve mobility of end user devices.
Importance
The importance of MIH derives from the fact that a diverse range of broadband wireless access technologies is available and in course of development, including GSM, UMTS, CDMA2000, WiMAX, Mobile-Fi and WPANs. Multimode wireless devices that incorporate more than one of these wireless interfaces require the ability to switch among them during the course of an IP session, and devices such as laptops with Ethernet and wireless interfaces need to switch similarly between wired and wireless access.
Handover may be required, e.g. because a mobile device experiences a degradation in the radio signal, or because an access point experiences a heavy traffic load.
Functionality
The key functionality provided by MIH is communication among the various wireless layers and between them and the IP layer. The required messages are relayed by the Media Independent Handover Function, MIHF, that is located in the protocol stack between the layer 2 wireless technologies and IP at layer 3. MIH may communicate with various IP protocols including Session Initiation Protocol (SIP) for signaling, Mobile IP for mobility management, and DiffServ and IntServ for quality of service (QoS).
When a session is handed off from one access point to another access point using the same technology, the handover can usually be performed within that wireless technology itself without involving MIHF or IP. For instance a VoIP call from a Wi-Fi handset to a Wi-Fi access point can be handed over to another Wi-Fi access point within the same network, e.g. a corporate network, using Wi-Fi standards such as 802.11f and 802.11r. However, if the handover is from a Wi-Fi access point in a corporate network to a public Wi-Fi hotspot, then MIH is required, since the two access points cannot communicate with each other at the link layer, and are, in general, on different IP subnets.
When a session is handed off from one wireless technology to another, MIH may assist the handover process by exchanging messages among the Internet access technologies and IP. Messages are of three types (a simplified sketch of these message types follows the list):
• Event notifications are passed from a lower layer in the protocol stack to a higher layer or between the MIHF of one device to the MIHF of another device. For instance “wireless link quality is degrading” is an event notification that is passed from the wireless layer to the MIHF layer.
• Commands are passed down the protocol stack or between the MIHF of one device to the MIHF of another device. For instance “Initiate Handover” is a command in which the access point MIHF provides the mobile device MIHF with a list of alternative access points that it could use.
• Information Service is of three types. A higher layer may request information from a lower layer, e.g. the MIHF may request performance information, such as delay from the wireless layer. A lower layer may request information from a higher layer, e.g. the MIHF may request to know the ISP Name from the IP layer. One MIHF may request information from another MIHF, e.g. the availability of location-based services.
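The sketch below models the three message types in a simplified way. All class and field names are hypothetical illustrations; the IEEE 802.21 standard defines its own primitives, identifiers and encodings.

```python
# Hypothetical, simplified model of the three MIHF message types described
# above. Names and fields are illustrative only, not taken from IEEE 802.21.
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class EventNotification:
    """Passed up the stack or between MIHFs, e.g. 'link quality degrading'."""
    source_layer: str            # e.g. "802.16 wireless layer"
    event: str                   # e.g. "LINK_GOING_DOWN"
    details: Dict[str, Any] = field(default_factory=dict)


@dataclass
class Command:
    """Passed down the stack or between MIHFs, e.g. 'initiate handover'."""
    target: str                  # e.g. "mobile-device MIHF"
    command: str                 # e.g. "INITIATE_HANDOVER"
    parameters: Dict[str, Any] = field(default_factory=dict)


@dataclass
class InformationRequest:
    """Query between layers or between MIHFs, e.g. ask for link delay."""
    requester: str
    responder: str
    item: str                    # e.g. "link_delay_ms"


# Example: the access-point MIHF tells a mobile device to start a handover,
# offering a list of candidate access points.
cmd = Command(
    target="mobile-device MIHF",
    command="INITIATE_HANDOVER",
    parameters={"candidates": ["wimax-ap-17", "wifi-ap-3"]},
)
print(cmd)
```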
Implementation
The MIH function, MIHF, is implemented:
• in mobile devices that have more than one wireless/wired interface;
• in access points that have at least one wireless interface;
• in core network equipment that may have no wireless interface.
Mobile devices and access points clearly need to implement MIHF in order to communicate in a standard way between each other and between the wireless and IP layers. This allows them to make their own local decisions as to whether and how to handover a session. The reason for MIHF in core network equipment with no wireless interface is to enable the design of “handover servers” which can make centralized decisions about the handover of sessions among multiple access points and multiple access technologies. Such servers allow a wireless network operator to balance the traffic load so as to alleviate congestion on specific access points, and deliver sufficient QoS to all users.
Quality of service
Short-lived sessions such as accessing a single web page typically do not require handover or QoS. Longer duration sessions, which may well require handover, such as VoIP, audio/video streaming (including live TV and VoD), and VPNs, typically have QoS requirements including delay, delay variation and packet loss.
It is important that QoS is maintained, not just before and after a handover, but also during the handover, and this can be achieved by using MIH to plan ahead. Before a handover is required, the MIHFs communicate to identify which access points using which wireless technologies are within range and what QoS is available from them. MIH can also be used to pre-authenticate the mobile device with alternative potential access points and to reserve capacity prior to handover. For instance WiMAX allows resources to be reserved for a session before they are actually allocated to that session. When a handover becomes necessary, much of the ground-work is therefore already in place and the session can be handed over with minimal delay and packet loss. Incoming packets to the mobile device that are delivered to the old access point after the handover can be forwarded via the new access point, thus further reducing packet loss.
QoS is handled differently by each technology, including both the wireless access technologies and also IP, which has two QoS approaches, DiffServ and IntServ. Some technologies divide traffic into "Service Classes", e.g. streaming, while others allow users to specify quantitative "QoS Parameters", e.g. transfer delay. WiFi, Mobile-Fi and DiffServ use the service class approach, and although they do not have exactly the same service classes, it is possible to make a correspondence among them. WiMAX and IntServ use the QoS parameter approach, and UMTS uses both approaches. Again, correspondences among parameters can be made [1].
MIH can be used to exchange information about service class and QoS parameter availability from one wireless technology to another and to the IP layer. One source of such information is performance measurements made by the wireless layer, e.g. 802.11k for WiFi and 802.16f for WiMAX.
Example MIH Scenario
To illustrate the operation of MIH, let us take an example of a real-time gaming application, using DiffServ at the IP layer, being handed over from Mobile-Fi to WiMAX. The application is currently using the Assured Forwarding Class 1, AF1, DiffServ service, and the Class 2 Real-Time Interactive Mobile-Fi service.
Since the MIH standard is not yet finalized, this example is illustrative of the type of functionality that may be provided, as opposed to a firm guarantee of what will become available. The standard specifies the MIH messages; the use of those messages in any particular application is implementation dependent. The example below is for illustrative purposes only, and a short code sketch follows the numbered steps.
1. The mobile device notices a degradation in the Mobile-Fi wireless signal strength and uses the MIH Event Notification Service to inform the MIHF layer in the mobile device. This information is passed to the MIHF in the access point.
2. The access point uses the MIH Command Service to tell the mobile device to initiate handover and includes a list of potential access points.
3. The mobile device MIHF passes this list to its various wireless layers and, using the MIH Information Service, requests them to determine the signal strength of each access point and report back to the MIHF.
4. The MIHF in the mobile device determines that the best signal strength comes from a WiMAX access point, and passes that information to its IP layer, using the Event Notification Service.
5. DiffServ at the IP layer in the mobile device uses the Information Service to request performance information from the WiMAX access point. This request is passed through the mobile device MIHF, via the WiMAX access point MIHF, to the WiMAX access point wireless layer.
6. The WiMAX layer in the access point uses IEEE 802.16f to obtain the performance information and reports back that it can schedule the session using its Unsolicited Grant Service, UGS, with a link delay of 5 ms, or on its Real-Time Polling Service with a link delay of 18 ms.
7. DiffServ selects the WiMAX UGS, and uses the MIH Command Service to tell the mobile device to commit to handover. It may also use Mobile IP if a change in the mobile device IP address is required.
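The following toy walk-through condenses the seven steps into runnable pseudo-code. The 5 ms and 18 ms delays come from the example above; every other name and number is an assumption made purely for illustration.

```python
# Toy walk-through of the seven steps above. Thresholds, signal values and
# identifiers are assumptions; only the 5 ms / 18 ms figures come from the
# example text.

def choose_wimax_service(candidates):
    """Pick the candidate WiMAX scheduling service with the lowest link delay."""
    return min(candidates, key=lambda c: c["delay_ms"])

# Steps 1-4: signal degradation is reported, candidates are scanned and the
# WiMAX access point is found to have the best signal strength.
signal_report = {"mobile-fi": -92, "wimax-ap": -61}      # dBm, assumed values
best_access_point = max(signal_report, key=signal_report.get)

# Steps 5-6: DiffServ asks the chosen access point for performance figures.
wimax_offers = [
    {"service": "UGS", "delay_ms": 5},
    {"service": "rtPS", "delay_ms": 18},
]

# Step 7: commit to the service that best meets the real-time gaming requirement.
chosen = choose_wimax_service(wimax_offers)
print(f"hand over to {best_access_point} using {chosen['service']} "
      f"({chosen['delay_ms']} ms link delay)")
```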
Related Standards
Another standard that can be used for handover from one wireless technology to another is UMA, Unlicensed Mobile Access, which provides handover between WiFi and GSM/GPRS/UMTS. It was originally developed by an independent industry consortium and was incorporated into the 3GPP standards in 2005 under the name GAN (Generic Access Network).
Another standard of interest is 802.11u which provides roaming between 802.11 networks and other networks, so that services from one network can be accessed when the user is subscribed to services from another network. However it does not provide handover of IP sessions in progress.
See also
Load balancing (computing)
Multihoming
Vertical handover
Access Network Discovery and Selection Function
References
David J Wright; Maintaining QoS During Handover Among Multiple Wireless Access Technologies, International Conference on Mobile Commerce, Toronto, July 2007.
Ok Sik Yang; Seong Gon Choi; Jun Kyun Choi; Jung Soo Park; Hyoung Jun Kim; A handover framework for seamless service support between wired and wireless networks, Advanced Communication Technology, 2006. ICACT 2006. The 8th International Conference, Volume 3, 20-22 Feb. 2006, 6 pp.
Al Mosawi, T.; Wisely, D.; Aghvami, H.; A Novel Micro Mobility Solution Based on Media Independent Handover and SIP, Vehicular Technology Conference, 2006. VTC-2006 Fall.2006 IEEE 64th, Sept. 2006 Page(s):1 - 5
Yoon Young An; Byung Ho Yae; Kang Won Lee; You Ze Cho; Woo Young Jung; Reduction of Handover Latency Using MIH Services in MIPv6, Advanced Information Networking and Applications, 2006. AINA 2006. 20th International Conference on, Volume 2, 18-20 April 2006 Page(s):229 - 234
External links
http://www.intel.com/technology/magazine/communications/mobility-on-wireless-0905.htm provides an Intel perspective on MIH.
http://tools.ietf.org/html/draft-hancock-mipshop-gist-for-mih-00 provides a related Internet Draft.
http://www1.cs.columbia.edu/~dutta/research/wpmc-final.pdf describes a lab experiment that demonstrates the effectiveness of some of the MIH functionality.
Mobile telecommunications standards
Networking standards
Wireless networking | Media-independent handover | [
"Technology",
"Engineering"
] | 2,348 | [
"Computer standards",
"Wireless networking",
"Computer networks engineering",
"Mobile telecommunications standards",
"Mobile telecommunications",
"Networking standards"
] |
3,010,311 | https://en.wikipedia.org/wiki/Particle%20shower | In particle physics, a shower is a cascade of secondary particles produced as the result of a high-energy particle interacting with dense matter. The incoming particle interacts, producing multiple new particles with lesser energy; each of these then interacts, in the same way, a process that continues until many thousands, millions, or even billions of low-energy particles are produced. These are then stopped in the matter and absorbed.
Types
There are two basic types of showers. Electromagnetic showers are produced by a particle that interacts primarily or exclusively via the electromagnetic force, usually a photon or electron. Hadronic showers are produced by hadrons (i.e. nucleons and other particles made of quarks), and proceed mostly via the strong nuclear force.
Electromagnetic showers
An electromagnetic shower begins when a high-energy electron, positron or photon enters a material. At high energies (above a few MeV), at which the photoelectric effect and Compton scattering are insignificant, photons interact with matter primarily via pair production — that is, they convert into an electron-positron pair, interacting with an atomic nucleus or electron in order to conserve momentum. High-energy electrons and positrons primarily emit photons, a process called bremsstrahlung. These two processes (pair production and bremsstrahlung) continue, leading to a cascade of particles of decreasing energy until photons fall below the pair production threshold, and energy losses of electrons other than bremsstrahlung start to dominate.
The characteristic amount of matter traversed for these related interactions is called the radiation length X0. X0 is both the mean distance over which a high-energy electron loses all but 1/e of its energy by bremsstrahlung and 7/9 of the mean free path for pair production by a high-energy photon. The length of the cascade scales with X0; the "shower depth" is approximately determined by the relation

x ≈ X0 ln(E0/Ec) / ln 2

where X0 is the radiation length of the matter, E0 is the energy of the incoming particle, and Ec is the critical energy (the critical energy can be defined as the energy at which the bremsstrahlung and ionization loss rates are equal; rough analytic estimates of Ec in terms of the atomic number of the material exist). The shower depth increases logarithmically with the energy, while the lateral spread of the shower is mainly due to the multiple scattering of the electrons. Up to the shower maximum the shower is contained in a cylinder with radius < 1 radiation length. Beyond that point electrons are increasingly affected by multiple scattering, and the lateral size scales with the Molière radius R_M. The propagation of the photons in the shower causes deviations from Molière-radius scaling. However, roughly 95% of the shower is contained laterally in a cylinder with radius 2 R_M.
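As a rough worked example of the relation above (a minimal sketch; the material constants for lead are approximate textbook values and the 10 GeV electron energy is an arbitrary choice):

```python
import math

# Rough estimate of electromagnetic shower depth using x ~ X0*ln(E0/Ec)/ln 2,
# and the 2*R_M rule of thumb for lateral containment quoted above.
# The material constants below are approximate values for lead (assumptions).
X0_cm = 0.56        # radiation length of lead, cm (approximate)
Ec_MeV = 7.4        # critical energy of lead, MeV (approximate)
RM_cm = 1.6         # Moliere radius of lead, cm (approximate)

E0_MeV = 10_000.0   # 10 GeV incident electron (example)

depth_cm = X0_cm * math.log(E0_MeV / Ec_MeV) / math.log(2.0)
lateral_cm = 2.0 * RM_cm   # radius containing roughly 95% of the shower

print(f"shower depth ~ {depth_cm:.1f} cm of lead, "
      f"95% lateral containment within ~ {lateral_cm:.1f} cm radius")
```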
The mean longitudinal profile of the energy deposition in electromagnetic cascades is reasonably well described by a gamma distribution:

dE/dt = E0 b (bt)^(a−1) e^(−bt) / Γ(a)

where t = x/X0 is the depth in radiation lengths, E0 is the initial energy, and a and b are parameters to be fitted with Monte Carlo or experimental data.
Hadronic showers
The physical processes that cause the propagation of a hadron shower are considerably different from the processes in electromagnetic showers. About half of the incident hadron energy is passed on to additional secondaries. The remainder is consumed in multiparticle production of slow pions and in other processes. The phenomena which determine the development of hadronic showers are hadron production, nuclear deexcitation, and pion and muon decays. Neutral pions amount, on average, to 1/3 of the produced pions, and their energy is dissipated in the form of electromagnetic showers. Another important characteristic of the hadronic shower is that it takes longer to develop than the electromagnetic one. This can be seen by comparing the number of particles present versus depth for pion- and electron-initiated showers. The longitudinal development of hadronic showers scales with the nuclear interaction length λ (roughly 35 g cm⁻² · A^(1/3) for a material of mass number A).
The lateral shower development does not scale with λ.
Theoretical analysis
A simple model for the cascade theory of electronic showers can be formulated as a set of integro-partial differential equations. Let Π (E,x) dE and Γ(E,x) dE be the number of particles and photons with energy between E and E+dE respectively (here x is the distance along the material). Similarly let γ(E,E')dE' be the probability per unit path length for a photon of energy E to produce an electron with energy between E' and E'+dE'. Finally let π(E,E')dE' be the probability per unit path length for an electron of energy E to emit a photon with energy between E' and E'+dE'. The set of integro-differential equations which govern Π and Γ are given by
Explicit expressions for γ and π are available in the literature, with separate approximations applying at low and at higher energies.
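A full solution of these cascade equations is beyond the scope of this article, but the multiplication process they describe can be illustrated with a deliberately simplified, Heitler-style toy model (an assumption-laden sketch, not the actual cascade theory):

```python
# Heitler-style toy cascade: every splitting length each particle turns into
# two particles of half its energy, until the energy drops below the critical
# energy Ec. This illustrates the multiplication process only; it does not
# solve the integro-differential cascade equations discussed above.

def toy_shower(E0, Ec):
    """Return the particle count per generation for a Heitler-style cascade."""
    counts = []
    n, energy = 1, E0
    while energy > Ec:
        counts.append(n)
        n *= 2           # each particle produces two secondaries
        energy /= 2.0    # which share its energy equally
    return counts

generations = toy_shower(E0=10_000.0, Ec=10.0)   # energies in MeV, assumed
print(f"{len(generations)} generations, "
      f"up to {generations[-1]} particles before absorption")
```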
Examples
Cosmic rays hit Earth's atmosphere on a regular basis, and they produce showers as they proceed through the atmosphere. It was from these air showers that the first muons and pions were detected experimentally, and they are used today by a number of experiments as a means of observing ultra-high-energy cosmic rays. Some experiments, like Fly's Eye, have observed the visible atmospheric fluorescence produced at the peak intensity of the shower; others, like the Haverah Park experiment, have detected the remains of a shower by sampling the energy deposited over a large area on the ground.
In particle detectors built at high-energy particle accelerators, a device called a calorimeter records the energy of particles by causing them to produce a shower and then measuring the energy deposited as a result. Many large modern detectors have both an electromagnetic calorimeter and a hadronic calorimeter, with each designed specially to produce that particular kind of shower and measure the energy of the associated type of particle.
See also
Air shower (physics), an extensive (many kilometres wide) cascade of ionized particles and electromagnetic radiation produced in the atmosphere when a primary cosmic ray (i.e. one of extraterrestrial origin) enters our atmosphere.
Telescope Array Project
MAGIC Cherenkov Telescope
High Altitude Water Cherenkov Experiment
Pierre Auger Observatory
ATLAS experiment calorimeters
CMS experiment, electromagnetic and hadronic calorimeters
Collision cascade, a set of collisions between atoms in a solid
References
"Passage of particles through matter", from
Experimental particle physics | Particle shower | [
"Physics"
] | 1,278 | [
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
3,010,458 | https://en.wikipedia.org/wiki/Rent%27s%20rule | Rent's rule pertains to the organization of computing logic, specifically the relationship between the number of external signal connections to a logic block (i.e., the number of "pins") with the number of logic gates in the logic block, and has been applied to circuits ranging from small digital circuits to mainframe computers. Put simply, it states that there is a simple power law relationship between these two values (pins and gates).
E. F. Rent's discovery and first publications
In the 1960s, E. F. Rent, an IBM employee, found a remarkable trend between the number of pins (terminals, T) at the boundaries of integrated circuit designs at IBM and the number of internal components (g), such as logic gates or standard cells. On a log–log plot, these datapoints were on a straight line, implying a power-law relation T = t g^p, where t and p are constants (p < 1.0, and generally 0.5 < p < 0.8).
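A quick numerical illustration of this power law (the values of t and p below are assumed, plausible constants, not figures from Rent's memoranda):

```python
# Evaluate T = t * g**p for a range of block counts. The constants are
# assumptions chosen only to show how terminal counts grow sub-linearly.
t, p = 4.0, 0.6

for g in (1, 10, 100, 1000, 10000):
    T = t * g ** p
    print(f"g = {g:6d} logic blocks  ->  T ~ {T:8.1f} terminals")
```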
Rent's findings in IBM-internal memoranda were published in the IBM Journal of Research and Development in 2005, but the relation was described in 1971 by Landman and Russo. They performed a hierarchical circuit partitioning in such a way that at each hierarchical level (top-down) the fewest interconnections had to be cut to partition the circuit (in more or less equal parts). At each partitioning step, they noted the number of terminals and the number of components in each partition and then partitioned the sub-partitions further. They found the power-law rule applied to the resulting T versus g plot and named it "Rent's rule".
Rent's rule is an empirical result based on observations of existing designs, and therefore it is less applicable to the analysis of non-traditional circuit architectures. However, it provides a useful framework with which to compare similar architectures.
Theoretical basis
Christie and Stroobandt later derived Rent's rule theoretically for homogeneous systems and pointed out that the amount of optimization achieved in placement is reflected by the parameter p, the "Rent exponent", which also depends on the circuit topology. In particular, smaller values of p correspond to a greater fraction of short interconnects. The constant t in Rent's rule can be viewed as the average number of terminals required by a single logic block, since T = t when g = 1.
Special cases and applications
A random arrangement of logic blocks typically has a Rent exponent close to 1. Larger values are impossible, since the maximal number of terminals for any region containing g logic components in a homogeneous system is given by T = t g. Lower bounds on p depend on the interconnection topology, since it is generally impossible to make all wires short. This lower bound is often called the "intrinsic Rent exponent", a notion first introduced by Hagen et al. It can be used to characterize optimal placements and also measure the interconnection complexity of a circuit. Higher (intrinsic) Rent exponent values correspond to a higher topological complexity. One extreme example (p = 0) is a long chain of logic blocks, while a clique has p = 1. In realistic 2D circuits, p ranges from 0.5 for highly-regular circuits (such as SRAM) to 0.75 for random logic.
System performance analysis tools such as BACPAC typically use Rent's rule to calculate expected wiring lengths and wiring demands.
Rent's rule has been shown to apply among the regions of the brain of Drosophila fruit fly, using synapses instead of gates, and neurons which extend both inside and outside the region as pins.
Estimating Rent's exponent
To estimate Rent's exponent, one can use top-down partitioning, as used in min-cut placement. For every partition, count the number of terminals connected to the partition and compare it to the number of logic blocks in the partition. Rent's exponent can then be found by fitting these datapoints on a log–log plot, resulting in an exponent p'. For optimally partitioned circuits p' = p, but this is no longer the case for practical (heuristic) partitioning approaches. For partitioning-based placement algorithms, the fitted exponent likewise depends on the quality of the partitioning heuristic.
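A minimal sketch of the fitting step (the (g, T) datapoints below are invented solely to show the mechanics of the log–log fit):

```python
import numpy as np

# Least-squares estimate of the Rent exponent from (g, T) pairs gathered
# during recursive partitioning, by fitting a straight line on a log-log
# plot: log T = log t + p * log g. The data points are made up for the example.
g = np.array([16, 64, 256, 1024, 4096])      # blocks per partition
T = np.array([22, 55, 140, 350, 900])        # terminals per partition

p_fit, log_t = np.polyfit(np.log(g), np.log(T), 1)
print(f"estimated Rent exponent p ~ {p_fit:.2f}, t ~ {np.exp(log_t):.1f}")
```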
Region II of Rent's rule
Landman and Russo found a deviation of Rent's rule near the "far end", i.e., for partitions with a large number of blocks, which is known as "Region II" of Rent's Rule. A similar deviation also exists for small partitions and has been found by Stroobandt, who called it "Region III".
Rentian wirelength estimation
Another IBM employee, Donath, discovered that Rent's rule can be used to estimate the average wirelength and the wirelength distribution in VLSI chips.
This motivated the System Level Interconnect Prediction workshop, founded in 1999, and an entire community working on wirelength prediction (see a survey by Stroobandt). The resulting wirelength estimates have been improved significantly since then and are now used for "technology exploration".
The use of Rent's rule allows such estimates to be made a priori (i.e., before actual placement) and thus to predict the properties of future technologies (clock frequencies, number of routing layers needed, area, power) based on limited information about future circuits and technologies.
A comprehensive overview of work based on Rent's rule has been published by Stroobandt.
See also
Electronic design automation
Integrated circuit design
Network architecture
Network on a chip
References
Gate arrays
Electronic design automation
Computer architecture statements | Rent's rule | [
"Technology",
"Engineering"
] | 1,139 | [
"Computer engineering",
"Gate arrays"
] |
3,010,519 | https://en.wikipedia.org/wiki/Bead%20test | The bead test is a traditional part of qualitative inorganic analysis to test for the presence of certain metals. The oldest one is the borax bead test or blister test. It was introduced by Berzelius in 1812. Since then other salts were used as fluxing agents, such as sodium carbonate or sodium fluoride. The most important one after borax is microcosmic salt, which is the basis of the microcosmic salt bead test.
Borax bead
A small loop is made in the end of a platinum wire and heated in a Bunsen burner flame until red hot. A stick of another inert substance, such as magnesia (MgO), may also be used.
It is then dipped into powdered borax and held in the hottest part of the flame where it swells up as it loses its water of crystallization and then shrinks, forming a colourless, transparent glass-like bead (a mixture of sodium metaborate and boric anhydride).
The bead is allowed to cool and then wetted and dipped into the sample to be tested such that only a tiny amount of the substance adheres to the bead. If too much substance is used, the bead will become dark and opaque. The bead and adhering substance is then heated in the lower, reducing, part of the flame, allowed to cool, and the colour observed. It is then heated in the upper, oxidizing, part of the flame, allowed to cool, and the colour observed again.
Characteristic coloured beads are produced with salts of copper, iron, chromium, manganese, cobalt and nickel. After the test, the bead is removed by heating it to fusion point, and plunging it into a vessel of water.
See also
Flame test
References
Chemical tests
| Bead test | [
"Chemistry"
] | 395 | [
"Chemical tests"
] |
3,010,589 | https://en.wikipedia.org/wiki/Flory%E2%80%93Huggins%20solution%20theory | Flory–Huggins solution theory is a lattice model of the thermodynamics of polymer solutions which takes account of the great dissimilarity in molecular sizes in adapting the usual expression for the entropy of mixing. The result is an equation for the Gibbs free energy change for mixing a polymer with a solvent. Although it makes simplifying assumptions, it generates useful results for interpreting experiments.
Theory
The thermodynamic equation for the Gibbs energy change accompanying mixing at constant temperature and (external) pressure is

ΔG_mix = ΔH_mix − T ΔS_mix

A change, denoted by Δ, is the value of a variable for a solution or mixture minus the values for the pure components considered separately. The objective is to find explicit formulas for ΔH_mix and ΔS_mix, the enthalpy and entropy increments associated with the mixing process.

The result obtained by Flory and Huggins is

ΔG_mix = RT [ n1 ln φ1 + n2 ln φ2 + n1 φ2 χ12 ]

The right-hand side is a function of the number of moles n1 and volume fraction φ1 of solvent (component 1), the number of moles n2 and volume fraction φ2 of polymer (component 2), with the introduction of a parameter χ12 to take account of the energy of interdispersing polymer and solvent molecules. R is the gas constant and T is the absolute temperature. The volume fraction is analogous to the mole fraction, but is weighted to take account of the relative sizes of the molecules. For a small solute, the mole fractions would appear instead, and this modification is the innovation due to Flory and Huggins. In the most general case the mixing parameter, χ12, is a free energy parameter, thus including an entropic component.
Derivation
We first calculate the entropy of mixing, the increase in the uncertainty about the locations of the molecules when they are interspersed. In the pure condensed phases – solvent and polymer – everywhere we look we find a molecule. Of course, any notion of "finding" a molecule in a given location is a thought experiment since we can't actually examine spatial locations the size of molecules. The expression for the entropy of mixing of small molecules in terms of mole fractions is no longer reasonable when the solute is a macromolecular chain. We take account of this dissymmetry in molecular sizes by assuming that individual polymer segments and individual solvent molecules occupy sites on a lattice. Each site is occupied by exactly one molecule of the solvent or by one monomer of the polymer chain, so the total number of sites is

N = N1 + x N2

where N1 is the number of solvent molecules and N2 is the number of polymer molecules, each of which has x segments.
For a random walk on a lattice we can calculate the entropy change (the increase in spatial uncertainty) as a result of mixing solute and solvent:

ΔS_mix = −k [ N1 ln(N1/N) + N2 ln(x N2/N) ]

where k is the Boltzmann constant. Define the lattice volume fractions φ1 and φ2

φ1 = N1/N   and   φ2 = x N2/N

These are also the probabilities that a given lattice site, chosen at random, is occupied by a solvent molecule or a polymer segment, respectively. Thus

ΔS_mix = −k [ N1 ln φ1 + N2 ln φ2 ]

For a small solute whose molecules occupy just one lattice site, x equals one, the volume fractions reduce to molecular or mole fractions, and we recover the usual entropy of mixing.
In addition to the entropic effect, we can expect an enthalpy change. There are three molecular interactions to consider: solvent-solvent w11, monomer-monomer w22 (not the covalent bonding, but between different chain sections), and monomer-solvent w12. Each of the last occurs at the expense of the average of the other two, so the energy increment per monomer-solvent contact is

Δw = w12 − (w11 + w22)/2

The total number of such contacts is

x N2 z φ1

where z is the coordination number, the number of nearest neighbors for a lattice site, each one occupied either by one chain segment or a solvent molecule. That is, x N2 is the total number of polymer segments (monomers) in the solution, so x N2 z is the number of nearest-neighbor sites to all the polymer segments. Multiplying by the probability φ1 that any such site is occupied by a solvent molecule, we obtain the total number of polymer-solvent molecular interactions. An approximation following mean field theory is made by following this procedure, thereby reducing the complex problem of many interactions to a simpler problem of one interaction.

The enthalpy change is equal to the energy change per polymer monomer-solvent interaction multiplied by the number of such interactions

ΔH_mix = x N2 z Δw φ1

The polymer-solvent interaction parameter chi is defined as

χ12 = z Δw / (k T)

It depends on the nature of both the solvent and the solute, and is the only material-specific parameter in the model. The enthalpy change becomes

ΔH_mix = k T χ12 N1 φ2
Assembling terms, the total free energy change is

ΔG_mix = RT [ n1 ln φ1 + n2 ln φ2 + n1 φ2 χ12 ]

where we have converted the expression from numbers of molecules N1 and N2 to numbers of moles n1 and n2 by transferring the Avogadro constant NA to the gas constant R = k NA.
The value of the interaction parameter can be estimated from the Hildebrand solubility parameters δa and δb

χ12 = V_seg (δa − δb)² / (RT)

where V_seg is the actual volume of a polymer segment.
In the most general case the interaction, and the ensuing mixing parameter χ12, is a free energy parameter, thus including an entropic component. This means that aside from the regular mixing entropy there is another entropic contribution from the interaction between solvent and monomer. This contribution is sometimes very important in order to make quantitative predictions of thermodynamic properties.
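As a numerical illustration of the mixing free energy expression derived above (a minimal sketch; the chain length, χ value and amounts of solvent and polymer below are assumed purely for the example):

```python
import math

# Evaluate Delta_G_mix = R*T*(n1*ln(phi1) + n2*ln(phi2) + n1*phi2*chi)
# for an assumed solvent/polymer pair. All numbers are illustrative.
R = 8.314            # gas constant, J/(mol K)
T = 298.0            # temperature, K
x = 1000             # segments per polymer chain (assumed)
chi = 0.40           # interaction parameter (assumed)

n1 = 1.0             # moles of solvent
n2 = 0.001           # moles of polymer

phi1 = n1 / (n1 + x * n2)        # volume fraction of solvent
phi2 = x * n2 / (n1 + x * n2)    # volume fraction of polymer

dG_mix = R * T * (n1 * math.log(phi1) + n2 * math.log(phi2) + n1 * phi2 * chi)
print(f"phi2 = {phi2:.2f}, Delta_G_mix = {dG_mix:.1f} J")
```

With these assumed values the mixing free energy is negative, so mixing is favorable despite the weakly repulsive χ.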
More advanced solution theories exist, such as the Flory–Krigbaum theory.
Liquid-liquid phase separation
Polymers can separate out from the solvent, and do so in a characteristic way. The Flory–Huggins free energy per unit volume, for a polymer with N monomers, can be written in a simple dimensionless form

f = (φ/N) ln φ + (1 − φ) ln(1 − φ) + χ φ(1 − φ)

for φ the volume fraction of monomers, and χ the interaction parameter. The osmotic pressure (in reduced units) is

Π = φ/N − ln(1 − φ) − φ − χ φ².
The polymer solution is stable with respect to small fluctuations when the second derivative of this free energy is positive. This second derivative is

∂²f/∂φ² = 1/(Nφ) + 1/(1 − φ) − 2χ

and the solution first becomes unstable when this and the third derivative

∂³f/∂φ³ = −1/(Nφ²) + 1/(1 − φ)²

are both equal to zero. A little algebra then shows that the polymer solution first becomes unstable at a critical point at

φ_c = 1/(1 + √N) ≈ N^(−1/2),   χ_c = 1/2 + 1/√N + 1/(2N) ≈ 1/2 + N^(−1/2)

This means that for all values of χ between zero and roughly 1/2 the monomer-solvent effective interaction is weakly repulsive, but this is too weak to cause liquid/liquid separation. However, when χ exceeds χ_c, there is separation into two coexisting phases, one richer in polymer but poorer in solvent than the other.

The unusual feature of the liquid/liquid phase separation is that it is highly asymmetric: the volume fraction of monomers at the critical point is approximately N^(−1/2), which is very small for large polymers. The amount of polymer in the solvent-rich/polymer-poor coexisting phase is extremely small for long polymers. The solvent-rich phase is close to pure solvent. This is peculiar to polymers; a mixture of small molecules can be approximated using the Flory–Huggins expression with N = 1, in which case φ_c = 1/2 and χ_c = 2, and both coexisting phases are far from pure.
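A short sketch evaluating these critical-point formulas for a few chain lengths (the N values are arbitrary examples):

```python
import math

# Critical point of Flory-Huggins liquid/liquid phase separation, from
# setting the second and third derivatives of the free energy to zero:
#   phi_c = 1/(1 + sqrt(N)),  chi_c = 1/2 + 1/sqrt(N) + 1/(2N)
# The large-N limits are the ~N**-0.5 expressions quoted in the text.
for N in (1, 100, 10_000):
    phi_c = 1.0 / (1.0 + math.sqrt(N))
    chi_c = 0.5 + 1.0 / math.sqrt(N) + 1.0 / (2.0 * N)
    print(f"N = {N:6d}:  phi_c = {phi_c:.4f},  chi_c = {chi_c:.4f}")
```

For N = 1 this reproduces φ_c = 1/2 and χ_c = 2; for long chains φ_c becomes very small, illustrating the asymmetry described above.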
Polymer blends
Synthetic polymers rarely consist of chains of uniform length in solvent. The Flory–Huggins free energy density can be generalized to an N-component mixture of polymers with lengths N_i by

f = Σ_i (φ_i/N_i) ln φ_i + Σ_{i<j} χ_ij φ_i φ_j

For a binary polymer blend, where one species consists of N_A monomers and the other N_B monomers, this simplifies to

f = (φ/N_A) ln φ + ((1 − φ)/N_B) ln(1 − φ) + χ φ(1 − φ)

As in the case for dilute polymer solutions, the first two terms on the right-hand side represent the entropy of mixing. For large polymers, N_A ≫ 1 and N_B ≫ 1, these terms are negligibly small. This implies that for a stable mixture to exist χ < 0, so for polymers A and B to blend their segments must attract one another.
Limitations
Flory–Huggins theory tends to agree well with experiments in the semi-dilute concentration regime and can be used to fit data for even more complicated blends with higher concentrations. The theory qualitatively predicts phase separation, the tendency for high molecular weight species to be immiscible, the interaction-temperature dependence and other features commonly observed in polymer mixtures. However, unmodified Flory–Huggins theory fails to predict the lower critical solution temperature observed in some polymer blends and the lack of dependence of the critical temperature on chain length. Additionally, it can be shown that for a binary blend of polymer species with equal chain lengths the critical concentration should be φ_c = 1/2; however, polymer blends have been observed where this parameter is highly asymmetric. In certain blends, mixing entropy can dominate over monomer interaction. By adopting the mean-field approximation, the complex dependence of the χ parameter on temperature, blend composition, and chain length was discarded. Specifically, interactions beyond the nearest neighbor may be highly relevant to the behavior of the blend, and the distribution of polymer segments is not necessarily uniform, so certain lattice sites may experience interaction energies disparate from those approximated by mean-field theory.
One well-studied effect on interaction energies neglected by unmodified Flory–Huggins theory is chain correlation. In dilute polymer mixtures, where chains are well separated, intramolecular forces between monomers of the polymer chain dominate and drive demixing, leading to regions where the polymer concentration is high. As the polymer concentration increases, chains tend to overlap and the effect becomes less important. In fact, the demarcation between dilute and semi-dilute solutions is commonly defined by the concentration where polymers begin to overlap, which can be estimated as

c* ≈ 3m / (4π R_g³)

Here, m is the mass of a single polymer chain, and R_g is the chain's radius of gyration.
Footnotes
"Thermodynamics of High Polymer Solutions", Paul J. Flory Journal of Chemical Physics, August 1941, Volume 9, Issue 8, p. 660 Abstract. Flory suggested that Huggins' name ought to be first since he had published several months earlier: Flory, P.J., "Thermodynamics of high polymer solutions", J. Chem. Phys. 10:51-61 (1942) Citation Classic No. 18, May 6, 1985
"Solutions of Long Chain Compounds", Maurice L. Huggins Journal of Chemical Physics, May 1941 Volume 9, Issue 5, p. 440 Abstract
We are ignoring the free volume due to molecular disorder in liquids and amorphous solids as compared to crystals. This, and the assumption that monomers and solute molecules are really the same size, are the main geometric approximations in this model.
For a real synthetic polymer, there is a statistical distribution of chain lengths, so would be an average.
The enthalpy is the internal energy corrected for any pressure-volume work at constant (external) pressure. We are not making any distinction here. This allows the Helmholtz free energy, which is the natural form of free energy arising from the Flory–Huggins lattice theory, to be approximated by the Gibbs free energy.
In fact, two of the sites adjacent to a polymer segment are occupied by other polymer segments since it is part of a chain; and one more, making three, for branching sites, but only one for terminals.
References
External links
"Conformations, Solutions and Molecular Weight" (book chapter), Chapter 3 of Book Title: Polymer Science and Technology; by Joel R. Fried; 2nd Edition, 2003
Polymer chemistry
Solutions
Thermodynamic free energy
Statistical mechanics | Flory–Huggins solution theory | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,220 | [
"Thermodynamic properties",
"Physical quantities",
"Statistical mechanics",
"Materials science",
"Homogeneous chemical mixtures",
"Energy (physics)",
"Thermodynamic free energy",
"Polymer chemistry",
"Solutions",
"Wikipedia categories named after physical quantities"
] |
3,010,704 | https://en.wikipedia.org/wiki/Precooled%20jet%20engine | A precooled jet engine is a concept that enables jet engines with turbomachinery, as opposed to ramjets, to be used at high speeds. Precooling avoids some or all of the performance degradation of the engine compressor (by preventing rotating stall, choking or reduced flow), as well as that of the complete gas generator (by maintaining a significant combustor temperature rise within a fixed turbine temperature limit), which would otherwise prevent flight with high ram temperatures.
History
Robert P. Carmichael in 1955 devised several engine cycles that used liquid hydrogen to precool the inlet air to the engine before using it as fuel.
Interest in precooled engines emerged in the UK in 1982, when Alan Bond created a precooled air breathing rocket engine design he called SATAN. The idea was developed as part of the HOTOL SSTO spaceplane project, and became the Rolls-Royce RB545. In 1989, after the HOTOL project was discontinued, some of the RB545 engineers created a company, Reaction Engines Ltd, to develop the idea into the SABRE engine, and the associated Skylon spaceplane.
In 1987, N. Tanatsugu published "Analytical Study of Space Plane Powered by Air-Turbo Ramjet with Intake Air Cooler", part of Japan's ISAS (now JAXA) study into an Air-Turbo Ramjet (ATR, later ATREX after the addition of an expander cycle) intended to power the first stage of a TSTO spaceplane. ATREX was superseded by the Precooled Turbojet (PCTJ) and Hypersonic Turbojet studies. A liquid nitrogen precooled, hydrogen burning test engine was flown at Mach 2 at Taiki Aerospace Research Field in September 2010.
Design
For higher flight speeds, precooling may feature a cryogenic fuel-cooled heat exchanger before the air enters the compressor. After gaining heat and vapourising in the heat exchanger, the fuel (e.g. H2) burns in the combustor. Precooling using a heat exchanger has not been used in flight, but is predicted to have significantly high thrust and efficiency at speeds up to Mach 5.5. Precooled jet engine cycles were analyzed by Robert P. Carmichael in 1955. Pre-cooled engines avoid the need for an air condenser because, unlike liquid air cycle engines (LACE), pre-cooled engines cool the air without liquefying it.
For lower flight speeds precooling can be done with mass injection, known as WIPCC (water injection precompressor cooling). This method has been used for short duration (due to limited coolant capacity) increases to an aircraft's normal maximum speed. "Operation Skyburner", which gained a world speed record with a McDonnell Douglas F-4 Phantom II, and the Mikoyan Ye-266 (MiG-25) both used a water/alcohol spray to cool the air ahead of the compressor.
Precooling (as well as combustion chamber water injection) is used at the lowest flight speeds, i.e. during take off, to increase thrust at high ambient temperatures.
Characteristics
One main advantage of pre-cooling is that (as predicted by the ideal gas law) for a given overall pressure ratio, there is a significant reduction in compressor delivery temperature (T3), which delays reaching the T3 limit until a higher Mach number. Consequently, sea-level conditions (corrected flow) can be maintained after the pre-cooler over a very wide range of flight speeds, thus maximizing net thrust even at high speeds. The compressor and ducting after the inlet are subject to much lower and more consistent temperatures, and hence may be made of light alloys. This reduces the weight of the engine, which further improves the thrust/weight ratio.
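As a rough illustration of this effect, the following sketch applies the isentropic compression relation to hypothetical inlet temperatures; the pressure ratio, ram temperature and post-precooler temperature are assumed values chosen for illustration only, not figures from any specific engine.

```python
# Illustrative sketch: effect of precooling on compressor delivery temperature (T3),
# assuming ideal-gas isentropic compression of air. All numbers are assumptions.

GAMMA = 1.4   # ratio of specific heats for air (assumed constant)

def t3(inlet_temperature_k, overall_pressure_ratio, gamma=GAMMA):
    """Isentropic compressor delivery temperature for a given inlet stagnation temperature."""
    return inlet_temperature_k * overall_pressure_ratio ** ((gamma - 1.0) / gamma)

opr = 30.0           # overall pressure ratio (assumed)
ram_t = 1250.0       # stagnation (ram) temperature at high Mach number, K (assumed)
precooled_t = 300.0  # air temperature after a deep precooler, K (assumed)

print(f"T3 without precooling: {t3(ram_t, opr):.0f} K")       # far beyond typical T3 limits
print(f"T3 with precooling:    {t3(precooled_t, opr):.0f} K")  # close to sea-level static values
```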
Hydrogen is a suitable fuel because it is liquid at deeply cryogenic temperatures, and over its useful range has a very high total specific heat capacity, including the latent heat of vapourisation, higher than water. However, the low density of liquid hydrogen has negative effects on the rest of the vehicle, and the vehicle physically becomes very large, although the weight on the undercarriage and wing loading may remain low.
Hydrogen causes structural weakening in many materials, known as hydrogen embrittlement.
The weight of the precooler adds to the weight of the engine, thereby reducing its thrust to weight ratio. Passing the intake air through the precooler adds to the inlet drag, thereby reducing the engine net thrust, and so reducing the thrust to weight ratio.
Depending on the amount of cooling required, despite its high thermal capacity, more hydrogen may be needed to cool the air than can be burnt with the cooled air. In some cases, part of the excess hydrogen can be burnt in a ramjet with uncooled air to reduce this inefficiency.
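The imbalance can be illustrated with a simple heat balance across the precooler; the specific heats, temperature changes and stoichiometric ratio below are generic assumed values, not data from any particular engine.

```python
# Illustrative heat balance (assumed values): hydrogen flow needed to precool the air
# versus the hydrogen that can be burnt in that air at stoichiometric conditions.

CP_AIR = 1.005                       # kJ/(kg*K), assumed mean specific heat of air
CP_H2 = 14.3                         # kJ/(kg*K), assumed mean specific heat of hydrogen
STOICH_H2_PER_KG_AIR = 1.0 / 34.3    # approximate stoichiometric H2:air mass ratio (assumed)

air_temperature_drop = 1000.0   # K, assumed cooling of the intake air
h2_temperature_rise = 700.0     # K, assumed warming of the hydrogen coolant

# Energy balance across the precooler: m_air * cp_air * dT_air = m_H2 * cp_H2 * dT_H2
h2_needed_per_kg_air = (CP_AIR * air_temperature_drop) / (CP_H2 * h2_temperature_rise)

print(f"H2 required for cooling: {h2_needed_per_kg_air:.3f} kg per kg air")
print(f"H2 burnable (stoich.):   {STOICH_H2_PER_KG_AIR:.3f} kg per kg air")
# With these assumptions the coolant flow exceeds the burnable flow several times over,
# which is the inefficiency that burning the excess hydrogen in a ramjet aims to reduce.
```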
Unlike a LACE engine, a precooled engine does not need to liquefy the oxygen, so the amount of cooling is reduced: there is no need to remove the latent heat of vaporisation of the oxygen, and a smaller total temperature drop is required. This in turn reduces the amount of hydrogen used as a heat sink but unable to be burnt. In addition, a condenser is not required, giving a weight saving.
See also
Air turborocket
Compressor map
Hydrogen-cooled turbo generator
Intercooler
References
Gas turbines
Jet engines
Hydrogen technologies | Precooled jet engine | [
"Technology"
] | 1,115 | [
"Jet engines",
"Engines",
"Gas turbines"
] |
3,010,833 | https://en.wikipedia.org/wiki/Negawatt%20market | A negawatt-hour is a unit of energy saved as a direct result of energy conservation measures, such as reducing the use of heat or electricity. The concept was developed after Amory Lovins authored an article published in the March 21, 1985 issue of Public Utilities Fortnightly arguing that utility companies will sell less electricity and more efficiency by marketing 'negawatts'. In Lovins' opinion, utility customers don't want kilowatt-hours of electricity; they want energy services such as hot showers, cold beer, lit rooms, and spinning shafts, which can come more cheaply if electricity is used more efficiently. Lovins credited the term to a typo in a document by the Colorado Public Utilities Commission in which the word "megawatt" was misspelled.
Negawatts are intended to be a form of encouragement to motivate consumers to conserve energy. Lovins considers the concept of conservation a change in behavior based on the attitude 'Do Less to Use Less.' He makes a distinction between conservation and efficiency by defining efficiency as "the application of technologies and best practices to eliminate waste based on the attitude, 'Do the same or more with less.'"
The cost of negawatt power can be calculated using cost-effectiveness analysis (CEA). For energy-efficiency investments, a CEA calculation produces the value of saved energy, or negawatts, in $/kWh. Such a valuation allows the price of negawatts to be compared with the price of energy such as electricity from the grid or the cheapest renewable alternative.
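One common way to express such a valuation is the cost of conserved energy: the annualized investment divided by the annual energy saved. The sketch below is a minimal example with hypothetical inputs; the article does not prescribe a specific formula.

```python
# Minimal sketch of a cost-effectiveness calculation for an efficiency ("negawatt")
# investment. All inputs are hypothetical.

def capital_recovery_factor(discount_rate, lifetime_years):
    """Annualizes an up-front investment over its lifetime at a given discount rate."""
    return discount_rate / (1.0 - (1.0 + discount_rate) ** -lifetime_years)

investment_usd = 1200.0     # up-front cost of the efficiency measure (assumed)
annual_kwh_saved = 1500.0   # electricity saved per year (assumed)
lifetime_years = 10         # assumed measure lifetime
discount_rate = 0.05        # assumed discount rate

negawatt_cost_per_kwh = (investment_usd
                         * capital_recovery_factor(discount_rate, lifetime_years)
                         / annual_kwh_saved)

grid_price_per_kwh = 0.15   # assumed retail electricity price, $/kWh
print(f"Cost of saved energy: ${negawatt_cost_per_kwh:.3f}/kWh")
print(f"Grid electricity:     ${grid_price_per_kwh:.2f}/kWh")
```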
Lovins explains that many companies are already enjoying the financial and other rewards that come from saving electricity. Yet progress in converting to electricity saving technologies has been slowed by the indifference or outright opposition of some utilities. A second obstacle to efficiency is that many electricity-using devices are purchased by people who won't be paying their running costs and thus have little incentive to consider efficiency. Lovins also believes that many customers "don't know what the best efficiency buys are, where to get them, or how to shop for them".
In 2003 in France, under the guidance of Thierry Salomon, 14 scientists wrote "Le manifeste Négawatt." The megawatt/negawatt pairing is reminiscent of the broader concept of the ecological footprint; following this line of thought toward compatibility and comparability suggests a second conceptual frame, in which impacts are expressed in a setting where units or numbers cannot be directly compared (see paradigm shift). See also the association négaWatt.
Market
Lovins has advocated a "negawatt revolution", arguing that utility customers don't want kilowatt-hours (kWh) of electricity; they want energy services such as hot showers, cold beer, lit rooms, and spinning shafts, which can come more cheaply if electricity is used more efficiently. Lovins defines the negawatt market as a way to reduce the gap between the cost of making and saving electricity.
The negawatt market can be thought of as a secondary market in which electricity is reallocated from areas of less use to areas of greater use; it is secondary because it reallocates electricity from one consumer to another within the already existing energy market. Some feel that establishing a viable market may require legislation and cooperation between primary producers, distributors, traders and consumers. This proposal would give the market legislative regulation while still allowing it to set prices and allocate resources on its own.
A negawatt market would allow "demand side resources" to participate in wholesale energy markets. Such arrangements are commonly referred to as demand response, which can be defined as enrolling large users of energy in programs to lower their usage in return for compensation, helping take pressure off the grid. This market would help take pressure off the grid because electricity could be treated as a commodity, just like copper or sowbellies, and therefore traded to areas that need it more than others. Like any commodity, negawatts would have to be tradable across time and space for the market to be effective, which would also allow for an international trading system. To create a market for negawatts, energy companies would need to put more focus on energy efficiency. This shift in focus would require a new business structure suited to the negawatt market, which has not yet been developed. Market possibilities are being implemented internationally, implying that one day an official negawatt market will become a reality.
Implementation
Government implementation
Negawatt power is being implemented in many states in the U.S. and is emerging as an international strategy to reduce energy consumption. "Test negawatt auctions began in 1999 in Connecticut and Georgia and more than a dozen utility exchanges were in existence" in 2000. In an effort to move toward energy efficiency, New York has created programs "supported through Energy $mart, which is run by the New York State Energy Research and Development Authority (NYSERDA), with money from a small surcharge on utility bills." Negawatt power is implemented in California as well as Texas. "Some Texas congressmen and energy companies are trying to help California avert blackouts and utility price shocks this summer with 'negawatts'."
On January 1, 2009, the states of South Australia and Victoria (Australia) became the first in Australia to offer "householders energy efficiency incentives programs delivered via local electricity retailers."
The British transmission system operator incentivizes off-peak use.
Private implementation
The negawatt market is being used by governments and companies. Aluminum manufacturers in the Pacific Northwest shut down their power plants and sold the unused energy because selling the negawatts was more profitable for the company than selling the aluminum product. This was possible because "The smelters hold power contracts with the federal Bonneville Power Administration that contain clauses allowing them to market the electricity."
The Associated Electric company in rural Missouri is implementing the usage and spreading the knowledge of negawatts by performing energy audits at their customer's homes to show them where they could be saving electricity. Rebates are being given to help customers pay for more energy-efficient, Energy Star appliances. Keith Hartner, the CEO of Associated Electric Cooperative Inc., feels that negawatts are generating savings for their customers and for the company as well: “The goal of this program is to save money not only at the generator but also at the meter for the members.”
Individual households can practice negawatts through using energy-efficient lighting and Energy Star appliances as well as simply reducing standby power. The resulting savings sometimes offset the investment in new high-efficiency light bulbs and appliances. These efficiencies can offset a portion of the cost of a residential photovoltaic system: negawatts reduce the overall size of the photovoltaic system required, because the household consumes less electricity. This results in a faster payback period for the photovoltaic system. The City of San Diego has created a negawatts initiative called "Reduce then Produce" to promote this idea.
Advantages
Cost
If a consumer conserves a substantial amount of energy, they may qualify for a tax deduction. According to the Negawatt Power Solutions Group, a "building that achieved a 50% energy cost reduction may be eligible for tax deduction up to $1.80 per square foot."
Local deregulation
Some conservatives claim that the negawatt market could help nations or states deregulate their electricity systems. This would allow a nation or a state to experiment with "electricity deregulation," in which demand reductions could be purchased with a minimum of disruption to businesses, workers and the economy. California could achieve the goal of deregulation by allowing a deficit area to purchase an emergency supply from anywhere within the West, in which "the ultimate purpose of deregulation was to allow competition in the electricity market and consumer choice of electricity providers."
Drawbacks
Difficulty in creating a negawatt market
Currently, there is no way to precisely measure the amount of energy saved in negawatts; it can only be theoretically determined based on the consumer's history of energy use. Without the visualization of the energy use, it is difficult to conceptualize negawatts because the consumer cannot see a precise value of the amount of saved energy. Smart meters are becoming a more developed technology to measure energy usage, but consumers are calling on state regulators to move cautiously on smart meters, citing complaints in some states that the meters are raising electric bills rather than lowering them.
Some municipally owned utilities and cooperatives argue that negawatt power lets consumers treat electricity "as a property right rather than a service [...giving them] legal entitlement to power [that they] don't consume"; that is, consumers would treat electricity as property, not a service. Some people, including Joe Nipper, senior vice president of the American Public Power Association, oppose the idea that people would receive money for power that they did not even spend.
Electricity price caps may also need to be implemented in order for the emerging negawatts market to function correctly.
Expense of efficiency
Saving energy by the negawatt and creating a negawatt market can present several drawbacks for manufacturers and electricity providers. Manufacturers are less inclined to make energy-efficient devices that meet a specific standard, such as Energy Star's, because of the increased time and cost and the minimal profit involved. Electricity providers, meanwhile, may not want customers to use less energy because of the loss of revenue. Some even argue that producing energy-efficient products, such as light bulbs, actually stimulates more demand, "resulting in more energy being purchased for conversion into light."
Customers may also be less inclined to buy products that are more energy efficient because of the increase in cost and time spent. Even when the information is known, and despite the overall long-term cost-saving potential, the price of energy is often too low for individuals to justify the initial cost of energy efficiency measures. Not only are energy-efficient devices more expensive, but consumers are poorly informed about the savings on offer. Even when they can do the sums, the transaction costs are high: it is a time-consuming chore for someone to identify the best energy-saving equipment, buy it and get it installed.
The technology used to measure the amount of energy that a consumer uses and saves, known as smart meters, grid systems, or energy dashboards, require time for the consumer to understand. Some argue that people need to have access to simple yet effective information systems to help users understand their energy without having to become technology experts.
See also
Energy and the environment
Energy hierarchy
Energy park
Energy Star
Environmental issues with energy
Hydrogen economy
International Renewable Energy Agency
Leadership in Energy and Environmental Design (LEED)
List of energy storage projects
Renewable Energy
Renewable Energy and Energy Efficiency Partnership
Smart grid
Sustainable Energy for All initiative
References
Works cited
Airlie, C. (2010, December 7). UK plans payment for 'negawatt' to curb power use. Retrieved from https://www.bloomberg.com/news/2010-12-07/u-k-plans-payment-for-negawatt-to-curb-power-use-update1-.html
Bartram, L., Rodgers, J., & Muise, K. (2010). Chasing the Negawatt: Visualization for Sustainable Living. IEEE Computer Graphics & Applications, 30(3), 8–14. Retrieved from Military & Government Collection database.
Fickett, A, Gellings, C, & Lovins, A. (1990, September). Efficient use of electricity. Scientific American Retrieved December 2010, from http://www.nature.com/scientificamerican/journal/v263/n3/pdf/scientificamerican0990-64.pdf.
Fotopoulos, T. (2007). Is degrowth compatible with a market economy?. The International Journal of Inclusive Democracy, 3(1), Retrieved from https://web.archive.org/web/20110524125322/http://www.inclusivedemocracy.org/journal/vol3/vol3_no1_Takis_degrowth_PRINTABLE.htm
Gulyas, C. (2008, May 8). Negawatts are creating a market for energy saving. Retrieved from http://cleantechnica.com/2008/05/08/negawatts-are-creating-a-market-for-energy-savings/
H.R. 6--109th Congress: Energy Policy Act of 2005. (2005). In GovTrack.us (database of federal legislation). Retrieved November 2010, 2010, from http://www.govtrack.us/congress/bill.xpd?bill=h109-6
Knickerbocker, B. (2001, May 29). Saving energy by the 'negawatt'. Christian Science Monitor, p. 2. Retrieved from Academic Search Complete database.
Kolbert, Elizabeth. (2007). "Mr. Green: Environmentalism's most optimistic guru." The New Yorker 1-22.
Landers, Jim. (2001). Legislators push for bill to allow sale of "negawatts' to California. Dallas Morning News, The (TX), Retrieved from Newspaper Source Plus database.
Lovins, Amory. (1989). The negawatt revolution-solving the co2 problem. Retrieved from http://www.ccnr.org/amory.html
Lovins, A, & Browning, W. (1992, July). Negawatts for buildings. Urban Land, Retrieved from http://sustainca.org/files/BGPrUSACO-RMI_3_.pdf
McCarty, J. (2008, April). Negawatts. Rural Missouri, Retrieved from http://www.ruralmissouri.org/08pages/08AprilWatts9.html
Peters, Joey. "Consumers Wary of Smart Meters." (2010).http://www.stateline.org/live/details/story?contentId=500546
Weinberg, CJ. (2001). Keeping the lights on-sustainable scenarios for the future. Renewable Energy World, Retrieved from https://web.archive.org/web/20080720164514/http://www.cleanenergystates.org/CaseStudies/Weinberg.pdf
Alliance for Clean Energy, New York. (2008). Energy Efficiency. http://www.aceny.org/clean-technologies/energy-efficiency.cfm
"Energy Conservation: Not Such a Bright Idea." The Economist 10 Aug. 2010. Web. 9 Dec. 2010. <http://www.economist.com/node/16886228>
Energy Matters. (2008, December 31). Energy efficiency focus for Australia in 2009. Retrieved from http://www.energymatters.com.au/index.php?main_page=news_article&article_id=265
"Generating 'negawatts'". (2010, May). Retrieved from http://sri.dexia-am.com/LibrarySRI/ResearchPaper_Utilities_EnergyEfficiency_2010_UK.pdf
NegaWatt Power Solutions Group (2009). Incentives''. Retrieved from https://web.archive.org/web/20110711110118/http://gonegawatts.com/incentives.php
Presentation on European Green Paper on Energy Efficiency p. 12
The negawatts project: changing the paradigm of family energy consumption. (2010, August 6). Retrieved from https://web.archive.org/web/20110930033655/http://www.mitportugal.org/research-highlights/the-negawatts-project-changing-the-paradigm-of-family-energy-consumption.html
(2008). The elusive negawatt. Economist, 387(8579), 78. Retrieved from MasterFILE Premier database.
(2008, March). http://www.loe.org/shows/segments.htm?programID=08-P13-00013&segmentID=4. "From Megawatts to Negawatts"
http://www.lao.ca.gov/ballot/2005/050129.htm
External links
http://www.negawatt.org/english-presentation-p149.html (English translation of French official website)
Energy conservation
Energy economics
Units of power | Negawatt market | [
"Physics",
"Mathematics",
"Environmental_science"
] | 3,513 | [
"Physical quantities",
"Quantity",
"Energy economics",
"Power (physics)",
"Units of power",
"Environmental social science",
"Units of measurement"
] |
3,010,849 | https://en.wikipedia.org/wiki/Advergame | An advergame (portmanteau of "advertisement" and "video game") is a form of advertising in video games, in which the video game is developed by or in close collaboration with a corporate entity for purposes of advertising a brand-name product. While other video games may use in-game advertising (such as an advertisement on a virtual billboard or branding on an in-game object), an advergame is differentiated by the Interactive Advertising Bureau as a "game specifically designed around [the] product or service being advertised". An advergame is considered a type of advertainment.
Advergames are utilized to capture the consumer's attention more effectively than regular advertisements because of the medium and its interactivity. If the player is positive towards the game, they will likely have positive feelings for the product advertised as well. Advergames are commonly targeted to minors, who tend to be more responsive to persuasive messages that can be embedded in such games. Concerns have been raised by parents and advocates for children that such advergames can influence children's habits, particularly food-based products.
History
Advergames appeared early in the history of the video game industry. One of the first known attempts was a polo sport game tied into the clothing brand Polo, which Carol Shaw had been developing for the Atari 2600 around 1978, but which had been cancelled prior to release. The first known released advergame was Tapper, a 1984 arcade game. The game had originally been sponsored by brewer Anheuser-Busch and predominantly featured the brand's logo, with gameplay based on serving beer. Its release was targeted at bars and other establishments for adults, but the game proved popular, and a non-branded version, Root Beer Tapper, was released for general arcades, with beer replaced by root beer.
Numerous advergames were developed through the 1980s and 1990s for home video game consoles and personal computers, but with the introduction of wide-spread availability of the Internet, browser games became a popular route for advergames. Such games were cheaper to produce compared to previous advergames as well as to other traditional advertising routes such as television advertising. A Kaiser Family Foundation report in 2006 found that 73% of 96 food product companies had established dedicated sections of their websites with advergames that were targeted at children, with many of these offering multiple advergames.
The term "advergames" was coined by Anthony Giallourakis in 1999. The Internet domain www.advergames.com was purchased that year by Giallourakis and several years later (in 2008), a free web portal showcasing a selection of the browser based advergames was launched.
Advergames moved into mobile games by around 2014, due to the proliferation of mobile devices and their common use by children.
Roblox, an online game platform and creation system, has been commonly used for advergames for its popularity among younger players and allowance for easy development. The Swedish game studio The Gang was founded to create advergames on Roblox for brands. Companies have used Roblox for new marketing methods within advergames, such as Walmart, which has sold real products through the platform, and IKEA, which has hired employees for paid work in its virtual store The Co-Worker Game.
Other examples
Other examples of advergames that have achieved widespread awareness include:
Yo! Noid, a 1990 platformer released for the Nintendo Entertainment System to advertise Domino's Pizza. Its cult fanbase eventually led to a fan-made 2017 sequel, Yo! Noid 2: Enter the Void, which later received a deluxe edition called Yo! Noid 2: Game Of A Year Edition.
Chex Quest, a non-violent first-person shooter developed for personal computers in 1996 for the Chex cereal brand. A total conversion of Doom, it is considered one of the few advergames that was enjoyable to play.
Pepsiman released for the PlayStation in 1999, was developed by KID only for release in Japan. It focused the player on avoiding obstacles to save dehydrated people by bringing them a can of Pepsi.
America's Army, released for personal computers in 2002, was developed by the United States Army as a recruiting tool for teenaged players.
Sneak King, PocketBike Racer, and Big Bumpin', available at Burger King restaurants in the United States and Canada in 2006. The most successful, Sneak King, sold more than 2 million copies that year.
Citroën C4 Robot, released in 2008, promoted the Citroën C4 and was published by Citroën exclusively for Turkey.
Legal concerns
Protection for children
Because video games generally draw significant interest from minors, there are ethical and legal concerns around advergames. Whereas adults generally can recognize and resist persuasive advertising in games, younger children may not recognize that an advergame is a form of advertising and can be drawn in by statements made by the game, especially in a more complex advergame where the advertising is more incorporated or subtle.
One key market area of concern was food product-based advergames. The increased use of browser and mobile advergames in the mid-2000s led to concerns that such games would contribute to an increase in the childhood obesity rate. In particular, many food-based advergames promote less nutritious products like snack foods. However, research has shown that the influence of advergames is not limited to foods with poor nutrition: a study using advergames designed around healthy food choices led the monitored children to select a healthier snack when presented with a variety of choices.
False advertising
Advergames can run afoul of laws related to truth in advertising. Making false claims in advergames, even in language not intended to be advertising, can result in penalties and fines from national or regional consumer protection agencies. In a notable case, the Gatorade company, a subsidiary of PepsiCo, had published a free mobile game, Bolt!, which featured Usain Bolt and challenged the player to "keep your performance high by avoiding water". The state of California asserted this claim was false, as Gatorade had been shown to be more harmful to the human body than water, and, with the game targeted at youth, sent the wrong message. The state sued Gatorade, and the case was ultimately settled with Gatorade paying a fine to the state, part of which the state used to promote health-conscious water-drinking habits for children.
National regulations and oversight
In the United States, attempts have been made by the United States Congress to give the Federal Trade Commission (FTC) authority to oversee online advertising aimed at children, including advergames, but had been challenged by lobbies representing the food industry and effectively shut down such attempts. Nevertheless, the propagation of online games and advertising aimed at children led to passage of the Children's Online Privacy Protection Act (COPPA) in 2000, which set strict standards for what type of private information websites could collect from minors, with the FTC overseeing any such fines.
In the United Kingdom, advergames regulation was brought into coverage by the CAP Code or the Code of Non-broadcast Advertising, Sales Promotion and Direct Marketing in 2016. The CAP code, updated frequently, provides specific guidance on what advergames (among other types of advertising) can and cannot do, with specific attention to how such games may influence children. CAP code violations are monitored and penalized by the Advertising Standards Authority.
Other concerns
Messaging in advergames may backfire and damage the reputation of the brand the game promotes. A notable example came from Intel, which in 2004 published The Intel IT Manager Game, a browser-based game that attempted to give insight into the job of an information technology manager, including simulating the hiring of new employees. However, the game by design only allowed male employees to be hired, and Intel received criticism for discounting female hires. The game was taken offline and later replaced with a new version that had equal gender representation. While the situation could be compared to a similar problem around the game Fable, which also forced players to use male avatars, the real-world setting of Intel's game was seen as a more serious flaw, and, at the time, gender representation in the information technology industry was a serious concern.
References
Video game terminology | Advergame | [
"Technology"
] | 1,710 | [
"Computing terminology",
"Video game terminology"
] |
3,011,128 | https://en.wikipedia.org/wiki/Charismatic%20megafauna | Charismatic megafauna are animal species that are large—in the category that they represent—with symbolic value or widespread popular appeal, and are often used by environmental activists to gain public support for environmentalist goals. In this definition, animals such as penguins or bald eagles are megafauna because they are among the largest animals within the local animal community, and they disproportionately affect their environment. The vast majority of charismatic megafauna species are threatened and endangered by issues such as overhunting, poaching, black market trade, climate change, habitat destruction, and invasive species. In a 2018 study, the top twenty most charismatic megafauna included the tiger, lion, and elephant.
Use in conservation
Charismatic species are often used as flagship species in conservation programs because they are thought to appeal more strongly to public feeling. However, being charismatic does not protect a species against extinction; all of the 10 most charismatic species are currently endangered, and only the giant panda shows demographic growth from an extremely small population.
Beginning early in the 20th century, efforts to reintroduce extirpated charismatic megafauna to ecosystems have been an interest of a number of private and non-government conservation organizations. Species have been reintroduced from captive breeding programs in zoos, such as the wisent (the European bison) to Poland's Białowieża Forest.
These and other reintroductions of charismatic megafauna, such as Przewalski's horse to Mongolia, have been to areas of limited, and often patchy, range compared to the historic ranges of the respective species.
Environmental activists and proponents of ecotourism seek to use the leverage provided by charismatic and well-known species to achieve more subtle and far-reaching goals in species and biodiversity conservation. By directing public attention to the diminishing numbers of giant panda due to habitat loss, for example, conservation groups can raise support for the protection of the panda and for the entire ecosystem of which it is a part.
Taxonomic bias
Charismatic megafauna may be subject to taxonomic inflation, in that taxonomists will declare a subspecies to be a species because of the advocacy benefits of a unique species, rather than because of new scientific evidence. The public's preference to identify with species sold through the ecotourism industry may be a factor for creating taxonomic inflation. In the public perception, ecotourism may be about seeing species, and the number of unique species increases the perceived biodiversity and tourism value of an area. A correlation may exist between the taxonomic bias in biodiversity datasets and the charisma of terrestrial megafauna, with the more charismatic species being largely over-reported. However, reports that charismatic megafauna are more engaging to the public than other species have been questioned.
See also
Bambi effect
References
Further reading
Conservation biology
Wildlife
Megafauna | Charismatic megafauna | [
"Biology"
] | 580 | [
"Animals",
"Conservation biology",
"Wildlife"
] |
3,011,131 | https://en.wikipedia.org/wiki/Sound%20baffle | A sound baffle is a construction or device which reduces the strength (level) of airborne sound. Sound baffles are a fundamental tool of noise mitigation, the practice of minimizing noise pollution or reverberation. An important type of sound baffle is the noise barrier constructed along highways to reduce sound levels in the vicinity of properties. Sound baffles are also applied to walls and ceilings in building interiors to absorb sound energy and thus lessen reverberation.
Highway noise barriers
The technology for accurate prediction of the effects of noise barrier design using a computer model to analyze roadway noise has been available since the early 1970s. The earliest published scientific design of a noise barrier may have occurred in Santa Clara County, California in 1970 for a section of the Foothill Expressway in Los Altos, California. The county used a computer model to predict the effects of sound propagation from roadways, with variables consisting of vehicle speed, ratio of trucks to automobiles, road surface type, roadway geometrics, micro-meteorology and the design of proposed soundwalls.
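The article does not reproduce the county's model, but the flavor of such calculations can be suggested with a standard textbook approximation: barrier attenuation estimated from the Fresnel number via a fit to Maekawa's empirical chart. The geometry and frequencies below are hypothetical, and this is not the model used in the 1970 study.

```python
# Hedged sketch: approximate insertion loss of a thin noise barrier using a common
# fit to Maekawa's empirical chart. Illustration only; not the 1970 county model.

import math

def fresnel_number(path_difference_m, frequency_hz, speed_of_sound=343.0):
    """N = 2*delta/lambda, where delta is the extra path length over the barrier."""
    wavelength = speed_of_sound / frequency_hz
    return 2.0 * path_difference_m / wavelength

def barrier_attenuation_db(n):
    """Approximate attenuation in dB for Fresnel number N >= 0 (Maekawa chart fit)."""
    return 10.0 * math.log10(3.0 + 20.0 * n)

path_difference = 0.5   # metres of extra path over the barrier (hypothetical geometry)
for frequency in (125, 500, 2000):  # Hz
    n = fresnel_number(path_difference, frequency)
    print(f"{frequency:>5} Hz: N = {n:.2f}, attenuation ~ {barrier_attenuation_db(n):.1f} dB")
```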
Interior sound baffle design
Since the early 1900s, scientists have been aware of the utility of certain types of interior coatings or baffles for improving the acoustics of concert halls, theaters, conference rooms and other spaces where sound quality is important. By the mid-1950s, Bolt, Beranek and Newman and a few other U.S. research organizations were developing technology to address these design challenges. The field draws on several disciplines, including acoustical science, computer modeling, architecture and materials science. Sound baffles are also used in speaker cabinets to absorb energy from the pressure created by the speakers, thus reducing cabinet resonance.
In 1973, Pearl P. Randolph, a school bus driver in Virginia, won a new school bus in a national contest held by Wayne Corporation for the suggestion that sound baffles be installed in the ceiling of school buses. In 1981, they were first made mandatory by the state of California.
Vehicle exhaust sound baffles
Baffles are also found in the exhaust pipes of vehicles, particularly motorcycles.
See also
Noise pollution
Noise health effects
References
Ceilings
Noise pollution
Noise control | Sound baffle | [
"Engineering"
] | 443 | [
"Structural engineering",
"Ceilings"
] |
3,011,353 | https://en.wikipedia.org/wiki/Preferential%20entailment | Preferential entailment is a non-monotonic logic based on selecting only models that are considered the most plausible. The plausibility of models is expressed by an ordering among models called a preference relation, hence the name preference entailment.
Formally, given a propositional formula A and an ordering ≤ over propositional models, preferential entailment selects only the models of A that are minimal according to ≤. This selection leads to a non-monotonic inference relation: a formula B is preferentially entailed by A if and only if all minimal models of A according to ≤ are also models of B.
Circumscription can be seen as the particular case of preferential entailment when the ordering is based on containment of the sets of variables assigned to true (in the propositional case) or containment of the extensions of predicates (in the first-order logic case).
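As an illustration, the definition can be turned directly into a brute-force check over truth assignments. The sketch below enumerates models, keeps the minimal ones under a given preference order, and tests entailment; the subset-of-true-variables order shown corresponds to the propositional circumscription case. The example formulas are chosen only for illustration.

```python
# Illustrative brute-force implementation of preferential entailment over
# propositional models; the subset order mimics (propositional) circumscription.

from itertools import product

def models(formula, variables):
    """All truth assignments (as dicts) over `variables` that satisfy `formula`."""
    assignments = (dict(zip(variables, values))
                   for values in product([False, True], repeat=len(variables)))
    return [m for m in assignments if formula(m)]

def minimal_models(formula, variables, strictly_preferred):
    """Models of `formula` to which no other model is strictly preferred."""
    ms = models(formula, variables)
    return [m for m in ms if not any(strictly_preferred(n, m) for n in ms if n != m)]

def preferentially_entails(premise, conclusion, variables, strictly_preferred):
    """True iff every minimal model of the premise is a model of the conclusion."""
    return all(conclusion(m) for m in minimal_models(premise, variables, strictly_preferred))

def subset_preferred(n, m):
    """Circumscription-style order: n is preferred to m iff n's true atoms are a proper subset of m's."""
    return {v for v, b in n.items() if b} < {v for v, b in m.items() if b}

# Example: "p or q" preferentially entails "not (p and q)" under this order,
# because the minimal models make exactly one of p, q true; classically it does not.
variables = ["p", "q"]
premise = lambda m: m["p"] or m["q"]
conclusion = lambda m: not (m["p"] and m["q"])
print(preferentially_entails(premise, conclusion, variables, subset_preferred))  # True
```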
See also
References
Logic in computer science
Knowledge representation
Non-classical logic | Preferential entailment | [
"Mathematics"
] | 183 | [
"Mathematical logic",
"Logic in computer science"
] |
3,011,538 | https://en.wikipedia.org/wiki/Entropy%20of%20vaporization | In thermodynamics, the entropy of vaporization is the increase in entropy upon vaporization of a liquid. This is always positive, since the degree of disorder increases in the transition from a liquid in a relatively small volume to a vapor or gas occupying a much larger space. At standard pressure the value is denoted ΔS°vap and is normally expressed in joules per mole-kelvin, J/(mol·K).

For a phase transition such as vaporization or fusion (melting), both phases may coexist in equilibrium at constant temperature and pressure, in which case the difference in Gibbs free energy is equal to zero:

ΔG_vap = ΔH_vap − T ΔS_vap = 0,

where ΔH_vap is the heat or enthalpy of vaporization. Since this is a thermodynamic equation, the symbol T refers to the absolute thermodynamic temperature, measured in kelvins (K). The entropy of vaporization is then equal to the heat of vaporization divided by the boiling point:

ΔS_vap = ΔH_vap / T_b.

According to Trouton's rule, the entropy of vaporization (at standard pressure) of most liquids has similar values. The typical value is variously given as 85 J/(mol·K), 88 J/(mol·K) and 90 J/(mol·K). Hydrogen-bonded liquids have somewhat higher values of ΔS_vap.
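A worked example using standard handbook figures (quoted here as assumptions): water, a strongly hydrogen-bonded liquid, has an enthalpy of vaporization of about 40.7 kJ/mol at its normal boiling point of 373.15 K.

```python
# Worked example: molar entropy of vaporization of water at its normal boiling point.
# The enthalpy value is a standard handbook figure, used here as an assumption.

enthalpy_of_vaporization = 40660.0   # J/mol for water at 100 °C (assumed handbook value)
boiling_point = 373.15               # K, normal boiling point of water

entropy_of_vaporization = enthalpy_of_vaporization / boiling_point
print(f"dS_vap(water) ~ {entropy_of_vaporization:.1f} J/(mol*K)")   # about 109 J/(mol*K)
# This lies above the Trouton's-rule range of roughly 85-90 J/(mol*K),
# as expected for a hydrogen-bonded liquid.
```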
See also
Entropy of fusion
References
Thermodynamic entropy
Thermodynamic properties | Entropy of vaporization | [
"Physics",
"Chemistry",
"Mathematics"
] | 283 | [
"Thermodynamics stubs",
"Statistical mechanics stubs",
"Thermodynamic properties",
"Physical quantities",
"Quantity",
"Thermodynamic entropy",
"Entropy",
"Thermodynamics",
"Statistical mechanics",
"Physical chemistry stubs"
] |
3,011,543 | https://en.wikipedia.org/wiki/Anton%20Felkel | Anton Felkel (26 April 1740, Kamenz, Silesia – c. 1800, possibly in Lisbon, Portugal) was an Austrian mathematician who worked on the determination of prime numbers.
Work
In 1776 and 1777, Felkel published a table giving complete decompositions of all integers not divisible by 2, 3, and 5, from 1 to 408,000. Felkel had planned to extend his table to 10 million. A reconstruction of his table is found on the LOCOMAT site.
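To give a sense of what such a table contains, the sketch below generates factorizations of integers not divisible by 2, 3, or 5 up to a small limit; it is a modern illustration only and says nothing about how Felkel actually computed his table.

```python
# Illustrative sketch: entries of the kind found in Felkel's table, i.e. the prime
# factorizations of integers not divisible by 2, 3, or 5. Trial division for brevity.

def factorize(n):
    """Return the prime factorization of n as a list of (prime, exponent) pairs."""
    factors, d = [], 2
    while d * d <= n:
        if n % d == 0:
            exponent = 0
            while n % d == 0:
                n //= d
                exponent += 1
            factors.append((d, exponent))
        d += 1
    if n > 1:
        factors.append((n, 1))
    return factors

LIMIT = 100   # Felkel's published table ran to 408,000; a small limit keeps the demo short
for n in range(7, LIMIT + 1):
    if n % 2 and n % 3 and n % 5:   # skip multiples of 2, 3 and 5, as Felkel's table did
        print(n, factorize(n))
```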
Publications
Tafel aller einfachen Factoren der durch 2, 3, 5 nicht theilbaren Zahlen von 1 bis 10 000 000. Vienna: 1776;
I. Theil enthaltend die Factoren von 1 bis 144000 (also published in Latin)
Pars II. exhibens factores numerorum ab 144001 usque 336000
Pars III. exhibens factores numerorum ab 336001 usque 408000
Wahre Beschaffenheit des Donners: Eine ganz neue Entdeckung durch einen Liebhaber der Naturkunde. Wien: v. Ghelen, 1780;
Neueröffnetes Geheimniss der Parallellinien enthaltend verschiedene wichtige Zusätze zur Proportion und Körperlehre von Anton Felkel; nebst einer dreyfachen vorläufigen Nachricht von den dazu dienenden neuerfundenen mechanischen Kunstgriffen etc. Wien; von Ghelenschen Buchhandlung, 1781;
References
"Number Theory for the Millenium", University of Illinois at Urbana-Champaign
18th-century Austrian mathematicians
Number theorists
Scientists from Vienna
1740 births
Year of death missing | Anton Felkel | [
"Mathematics"
] | 383 | [
"Number theorists",
"Number theory"
] |